Stability AI’s text-to-image model Stable Diffusion can generate images from text, following the example of MidJourney and DALL-E. However, it is the only one of these AIs that can generate pornographic content without censorship. Needless to say, Internet users are having a field day… here is everything you need to know.
DALL-E and MidJourney have been delighting Internet users for several months now by creating works of art from their ideas. All users have to do is type a few words, and these AIs generate magnificent drawings.
Unfortunately for lovers of art and poetry, even text-to-image AI is no exception to Rule 34: pornography exists on every conceivable subject.
The text-to-image generation model Stable Diffusion from artificial intelligence company Stability AI has just been launched, but people are already using it to create pornographic images.
What is Stable Diffusion?
As its name suggests, Stable Diffusion is a freemium text-to-image generator which creates stunning, detailed images from prompts. This latent diffusion model was developed by Stability AI and officially launched on August 22, 2022.
Stable Diffusion has taken the internet by storm since its initial release. Unlike DALL-E and Midjourney, it is open source, which means that you are free to use, modify or distribute its code legally. It’s an unofficial invitation to developers to improve the model. As of September 2023, Stable Diffusion had over 10 million users.
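Because the code and weights are public, anyone can generate an image with a few lines of Python. Here is a minimal sketch using the Hugging Face diffusers library and one of the publicly released checkpoints; the model ID and settings are illustrative, not the only way to run it.

```python
# Minimal text-to-image sketch using the open-source weights via Hugging Face diffusers.
# The checkpoint name and parameters are illustrative; check the official repos for current values.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",  # publicly released checkpoint from August 2022
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires an Nvidia GPU

image = pipe(
    "an oil painting of a castle floating in the clouds, highly detailed",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("castle.png")
```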
Stable Diffusion XL
Launched in July 2023, Stable Diffusion XL or SDXL is the latest version of Stable Diffusion. As expected, it features significant advances in terms of AI-driven image generation.
SDXL adds more nuance, understands short prompts better and reproduces human anatomy more faithfully.
Since August 22, 2022, Stable Diffusion has been available as open access. Its beta interface, DreamStudio, allows you to create images using Stability AI’s cloud servers.
According to Stability AI CEO Emad Mostaque, the aim of this public API is to “expand users’ creativity and enable them to live new experiences”. Other features to be added to DreamStudio include the ability to use your local GPU directly, or to add animations.
When you sign up for DreamStudio, you’ll receive 25 free credits. This is enough to try out 7 different prompts and generate around 30 images with the default settings. After that, you’ll have to pay around 1 euro for 100 generations and 100 euros for 10,000 generations. The beta version is available at this address.
You’ll find instructions for using DreamStudio on Reddit at this address. As an alternative to DreamStudio, HuggingFace also offers a rudimentary web interface for Stable Diffusion.
Keep in mind that you can’t generate pornographic content if you’re using DreamStudio. To create this type of image, you need to run the AI model locally on your own GPU. The complete code is available on GitHub at this address.
System requirements for running Stable Diffusion locally
To run Stable Diffusion locally, you need to download the model, as well as an Nvidia graphics card with more than 4 GB of VRAM.
As for AMD graphics cards, they are not officially supported, but can be used with a few tricks. However, Apple M1 chips will soon be supported.
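As a rough illustration, the snippet below checks whether a compatible GPU is present and how much VRAM it offers before you launch a generation. It assumes PyTorch is installed, and the 4 GB threshold is indicative only.

```python
# Quick hardware check before running Stable Diffusion locally.
# The 4 GB threshold is indicative; actual memory needs depend on image size and precision.
import torch

if not torch.cuda.is_available():
    print("No CUDA-capable Nvidia GPU detected: generation will be extremely slow on CPU.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024 ** 3
    print(f"GPU: {props.name} with {vram_gb:.1f} GB of VRAM")
    if vram_gb <= 4:
        # Half precision and attention slicing are the usual levers on small cards.
        print("Low VRAM: use fp16 weights, enable attention slicing, or reduce the image size.")
```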
In addition, if you’re not feeling very inspired, you can use an automatic prompt generator at this address. To join the official Stable Diffusion community on Discord, head to this address.
Stable Diffusion: AI text-to-image without censorship
Like DALL-E Mini (CrAIyon) or MidJourney, Stable Diffusion uses neural networks to create realistic images from simple text entered by Internet users.
However, the creators of MidJourney and DALL-E Mini have implemented limitations. Queries containing violent or sexual words are automatically censored.
Stability AI’s Stable Diffusion model, on the other hand, has no restrictions. Users can download the model and modify it as they wish to generate any content. Unsurprisingly, many use it to automatically create pornographic content.
In order to develop its AI model, Stability AI received the help of more than 15,000 beta testers. In July 2022, it finally opened up access to its tool for researchers.
Since Monday August 22, 2022, Stable Diffusion has been open to all. However, the model had leaked on the web, particularly on 4Chan, well before its official release.
Despite the company’s ban on generating pornographic content, a number of mischievous Internet users were busy creating images as saucy as possible…
Stability AI bans porn creation… in vain
Since the beginning of August 2022, the Stable Diffusion forum has been flooded with pornographic images. Hentai drawings, photos of naked celebrities and imaginary pornographic scenes created by the AI flood the site.
Yet Stability AI strongly opposes this type of content. The beta version of Stable Diffusion and the DreamStudio web application prohibit pornographic or erotic content. On Twitter, the company asked users not to “generate anything you’d be ashamed to show your mother”.
The company points out that content filters are implemented on the platform. It therefore applies the same form of censorship as MidJourney or DALL-E Mini. In fact, the company asked users to create adult content only on their own GPU once the model is released.
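DreamStudio’s server-side filters are not public, but the open-source release ships a comparable mechanism: the diffusers pipeline includes a safety checker that blanks out images it judges NSFW. The sketch below illustrates that bundled checker; it is not Stability AI’s actual server code.

```python
# Sketch of the NSFW filtering shipped with the open-source pipeline (not DreamStudio's own code).
# When the bundled safety checker flags an image, it is replaced with a black placeholder.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

result = pipe("a portrait of a medieval queen, oil painting")
for image, flagged in zip(result.images, result.nsfw_content_detected):
    if flagged:
        print("Image flagged as NSFW and blacked out by the safety checker.")
    else:
        image.save("portrait.png")
```

Because this checker is just another component of the open code, anyone running the model locally can simply remove it, which is exactly how the filters end up being bypassed.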
However, anyone can copy the model and run it on their own PC, and some Internet users are quick to create obscene images. Nevertheless, users of Stable Diffusion must respect the conditions of use of the model’s license: the CreativeML OpenRAIL-M license, whose terms are the same as those of the DALL-E Mini open-access version.
In particular, this license prohibits “misappropriated, malevolent or malicious use”. It is also forbidden to “generate images that people are likely to find objectionable, disturbing or offensive, or content propagating historical or current stereotypes”.
Reddit, Discord: where to see images created by Stable Diffusion?
The difference between the Stable Diffusion model and other text-to-image AIs is that it is available as open access. This means that anyone can download the model and run it on their own machine, at home or in a research laboratory.
There’s no need to go through the company’s cloud servers. Yet the filters and censorship rules only apply on those servers. Four subreddits dedicated to licentious content have already been created:
- r/unstablediffusion
- r/PornDiffusion
- r/HentaiDiffusion
- r/stablediffusionnsfw
In total, these groups have around 2,000 members. Stable Diffusion’s main subreddit has 8,000 fans.
A Discord server was also opened by Ashley22, the moderator of the r/Singularity, r/UnstableDiffusion, r/PornDiffusion and r/HentaiDiffusion subreddits.
LAION: a training dataset “riddled with pornographic images”
To obtain the desired drawings, users write long descriptions which they share and refine with one another. For example, a drawing was created from the text “oil painting of a realistic nude white princess exposed symmetrical breasts and realistic thighs exposed with charming detailed eyes, sky, color page, tankoban, 4K, tone mapping, doll, akihiko yoshida, james dean, andrei riabovitchev, marc simonetti, yoshitaka amano, long hair, curly”.
These texts are then fed to the AI so that it creates an image. Stable Diffusion was trained using 4,000 Nvidia A100 GPUs on a dataset named “LAION-Aesthetics”. LAION is the acronym for Large-scale Artificial Intelligence Open Network, a non-profit organization dedicated to AI.
The LAION-5B open-source dataset weighs 250 terabytes and contains 5.6 billion images collected on the Internet. Its predecessor, LAION-400M, was known to contain aberrant content. A 2021 study revealed that it contained “numerous disturbing and explicit texts and images of rape, pornography, malignant stereotypes, racist insults, and other extremely problematic content”.
The Google Research team also trained its Imagen text-to-image model on LAION-400M. The researchers preferred to avoid giving public access to their model for fear that it would produce hurtful representations and stereotypes.
To remedy the problem, Stability AI reduced a two-billion-image subset of LAION-5B to 120 million images by training a model to predict the score from 1 to 10 that people would give an image. Only the best-rated images were retained for the LAION-Aesthetics dataset. The aim was to eliminate pornographic images.
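A hedged sketch of this aesthetic filtering is shown below; predict_aesthetic_score and the 6.0 cut-off are hypothetical stand-ins for LAION’s learned aesthetics predictor and whatever threshold was actually used.

```python
# Illustrative sketch of aesthetic filtering as described above.
# predict_aesthetic_score and AESTHETIC_THRESHOLD are hypothetical stand-ins, not LAION's real code.
from typing import Iterable, List

AESTHETIC_THRESHOLD = 6.0  # assumed cut-off on the 1-10 scale

def predict_aesthetic_score(image_path: str) -> float:
    """Placeholder for a model trained to predict the 1-10 rating people would give an image."""
    raise NotImplementedError("plug in a trained aesthetics predictor here")

def filter_dataset(image_paths: Iterable[str]) -> List[str]:
    """Keep only the images whose predicted aesthetic score clears the threshold."""
    return [path for path in image_paths if predict_aesthetic_score(path) >= AESTHETIC_THRESHOLD]
```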
The danger of DeepFakes
While AI-generated hentai images are not particularly dangerous, Stable Diffusion can be hijacked to generate much more problematic content: deepfakes. Internet users can use this AI to create fake nude photos of celebrities. All they have to do is provide it with a photo of a celebrity and let it imagine their naked body.
AI-generated deepfakes have posed a problem for researchers and engineers for several years. And the new text-to-image models in the tradition of Stable Diffusion are clearly not going to help matters.
Unlike DALL-E or MidJourney, Stable Diffusion can be used to create fake celebrity photos because the LAION dataset on which it is trained contains numerous photos of stars.
Such photos can damage a star’s reputation, even if they are fake. AI-generated images are not yet realistic enough to be mistaken for real snapshots, but they could quickly become so…