Stable Diffusion May "Memorize" Some Images Used for Training, Researchers Find

The results of this research raise concerns about the privacy of AI image synthesis models.

On Monday, a team of AI researchers from Google, DeepMind, UC Berkeley, Princeton, and ETH Zurich published a research paper detailing a new adversarial attack capable of extracting a portion of the training images from AI image synthesis models that use latent diffusion, such as Stable Diffusion.

The research shows that image synthesis models can retain information from their training data and that, as a result, such data doesn't remain confidential.

The researchers were able to extract over a thousand images, ranging from personal photographs to trademarked logos, from state-of-the-art models. To better understand how different modeling and data decisions affect privacy, they also trained hundreds of diffusion models in various settings. The results showed that these models retain individual training images and can emit them during generation.

This makes models like Stable Diffusion significantly less private than previous generative models such as GANs.

The results of this research are, however, not as straightforward as they may seem. As noted by Ars Technica, to uncover instances of memorization in the Stable Diffusion model, the researchers generated 175 million images for testing.

Out of 350,000 images tested because they were judged most likely to be memorized, only 94 direct matches and 109 perceptual near-matches were traced back to the 160 million-image dataset used to train Stable Diffusion. This translates to a direct-match rate of approximately 0.03% in this particular scenario.
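
As a back-of-envelope check (my own illustration, using only the figures quoted above; the paper's actual methodology is more nuanced), the match rates work out as follows:

```python
# Back-of-envelope calculation from the counts reported in the article.
direct_matches = 94          # near-exact matches to training images
near_matches = 109           # perceptual near-matches
candidates_tested = 350_000  # high-probability-of-memorization images tested

direct_rate = direct_matches / candidates_tested
combined_rate = (direct_matches + near_matches) / candidates_tested

print(f"direct-match rate:   {direct_rate:.4%}")    # 0.0269%
print(f"combined match rate: {combined_rate:.4%}")  # 0.0580%
```

The roughly 0.03% figure cited above corresponds to the direct matches alone; counting perceptual near-matches as well roughly doubles it, but it remains a tiny fraction of the candidates tested.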

Additionally, the researchers point out that the concept of "memorization" in this context is approximate, as the AI model is not capable of producing exact byte-for-byte replicas of the training images.
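
To illustrate the distinction between a byte-for-byte match and a perceptual near-match, here is a minimal sketch (my own illustration; the paper uses a more careful patch-based distance measure) comparing two images as flat pixel arrays:

```python
import math

def l2_distance(a, b):
    """Normalized Euclidean distance between two equal-length pixel arrays (values 0-255)."""
    assert len(a) == len(b)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b))) / (255 * math.sqrt(len(a)))

original  = [10, 200, 34, 90]   # toy "training image" pixels
generated = [12, 198, 35, 91]   # close, but not byte-identical

print(original == generated)                     # byte-for-byte match? False
print(l2_distance(original, generated) < 0.05)   # near-match under a small threshold? True
```

Under such a definition, a generated image can count as "memorized" when it sits within a small distance of a training image, even though no two pixels need be exactly equal.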

This is because the 160 million-image dataset used to train Stable Diffusion is many times larger than the roughly 2GB model itself, so the model simply cannot store most of its training data. As a result, any instances of memorization present in the model are limited, rare, and difficult to extract by chance.
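
A rough capacity argument (my own illustration, using the figures above) makes this concrete:

```python
# Rough capacity argument from the article's figures; illustrative only.
model_size_bytes = 2 * 1024**3   # ~2 GB model checkpoint
training_images = 160_000_000    # images in the training dataset

bytes_per_image = model_size_bytes / training_images
print(f"{bytes_per_image:.1f} bytes of model capacity per training image")  # 13.4 bytes
```

At roughly 13 bytes of capacity per training image, far less than even a heavily compressed thumbnail, wholesale memorization of the dataset is impossible; only a small number of images (for example, those duplicated many times in the training data) can be retained in any recoverable form.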

Still, the results of this research raise concerns about the privacy of AI image synthesis models and, according to the researchers, suggest that new advances in privacy-preserving training methods may be necessary to mitigate these vulnerabilities.

You can find the research paper here.
