The real question is whether NSFW AI is intelligent at all, and what "intelligent" even means here. In terms of raw processing, NSFW AI models resemble other large-scale AI systems, running on several billion parameters. OpenAI's 175-billion-parameter GPT-3 can generate human-like text responses (which could be very NSFW if prompted that way), and many NSFW AI models borrow from GPT-family architectures. While this vast training data combined with complex algorithms presents as intelligence, the AI does not "think" in any human way; it reproduces patterns found in the extensive data it was trained on.
By usability, we mean that the system can perform specific tasks reliably, not that it understands what you want from it. This capability comes from deep learning, which uses large neural networks to learn categories of patterns from data. Efficient? Yes. Reasoning? No, as a 2021 Stanford University AI Index report pointed out. The system is effective: the report notes that such models reach roughly 90% accuracy at detecting whether a piece of content is inappropriate, short of human-level performance but good enough to be useful. The models work by matching patterns learned during training; they do not understand context.
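To make the pattern-matching point concrete, here is a minimal sketch of the kind of classifier such systems are built on: a bag-of-words model trained on a handful of invented toy examples (the texts and labels below are illustrative assumptions, not real moderation data). It flags content purely by statistical word patterns, with no grasp of intent or context.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data (invented for illustration): 1 = inappropriate, 0 = safe.
texts = [
    "explicit adult content here",
    "graphic nudity and adult material",
    "a review of the new laptop",
    "recipe for a vegetable soup",
]
labels = [1, 1, 0, 0]

# The model learns word-frequency patterns correlated with the labels;
# it has no notion of artistic merit, intent, or context.
clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# A museum description containing the word "nudity" triggers the same
# learned pattern as explicit text, which is exactly the limitation at issue.
print(clf.predict(["museum exhibit featuring classical nudity"]))
```

Production systems use far larger neural networks, but the principle is the same: a statistical mapping from input features to a label, learned from examples.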
Consider NSFW AI in action. Platforms such as OnlyFans use NSFW AI to police user content, with millions of uploads scanned every day to keep them in line with platform guidelines. That AI is extremely fast and can process enormous volumes of data, but speed does not equal intelligence in the human sense. The efficient work an AI does is not a conclusion or reasoning reached through independent judgment; it is the application of pre-defined parameters learned from the data the model was trained on.
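A hedged sketch of how such a moderation pipeline typically operates: each upload receives a score from a trained classifier, and fixed thresholds decide whether it is auto-removed, queued for human review, or allowed. The scores and thresholds below are assumptions for illustration, not any platform's actual system; the point is that the "decision" is nothing more than a comparison against pre-set parameters.

```python
from dataclasses import dataclass

# Thresholds are illustrative assumptions, not any platform's real values.
REMOVE_THRESHOLD = 0.95   # near-certain violations are removed automatically
REVIEW_THRESHOLD = 0.60   # borderline scores go to human moderators

@dataclass
class Upload:
    upload_id: str
    nsfw_score: float  # probability from a trained classifier (stubbed here)

def moderate(upload: Upload) -> str:
    """Route an upload based on its model score: no reasoning, just comparison."""
    if upload.nsfw_score >= REMOVE_THRESHOLD:
        return "removed"
    if upload.nsfw_score >= REVIEW_THRESHOLD:
        return "human_review"
    return "allowed"

for u in [Upload("a1", 0.98), Upload("b2", 0.72), Upload("c3", 0.10)]:
    print(u.upload_id, moderate(u))
```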
As Elon Musk famously said, "AI doesn't have to be evil — it just has to be better than us," a reminder that consciousness is often wrongly conflated with intelligence, blurring the line between functioning and understanding. NSFW AI might seem context-aware, but it works only by detecting patterns in its inputs and outputs; it cannot think beyond what its algorithms were set up to do.
One problem with this kind of AI for NSFW content is that it cannot understand nuance. In 2018, for example, Facebook's automated content moderation system erroneously flagged posts of artwork that happened to contain nudity, the kind of failure that shows how the AI stumbles on anything requiring context. The technology can catch certain visual cues, but without human assistance it cannot grasp the nuance that separates pornography from artistic expression. That lack of context shows that while NSFW AI can be very effective, it is far from "intelligent" in the human sense.
An NSFW AI can be excellent at processing large amounts of content quickly, but it lacks the emotional intelligence, contextual understanding, and ethical reasoning that would make it truly intelligent.