This new data poisoning tool lets artists fight back against generative AI | MIT Technology Review

Description

Summary drafted by a large language model.

Melissa Heikkilä reports on Nightshade, a new data poisoning tool developed by researchers at the University of Chicago that lets artists add invisible changes to the pixels of their art before uploading it online. If the poisoned images are scraped into an AI training set, the resulting model can break in chaotic and unpredictable ways. The tool is intended to tip the power balance back towards artists and away from AI companies that use their work without consent or compensation. Nightshade exploits a security vulnerability in generative AI models that are trained on vast amounts of data scraped from the internet: the more poisoned images end up in a model's training dataset, the more damage the technique causes.
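The article does not describe Nightshade's internals, which optimize perturbations against specific text-to-image models. As a purely illustrative sketch of the general idea, the toy Python snippet below adds a small, bounded change to every pixel of an image so that the file an AI scraper ingests differs from the original while remaining visually similar; the function name, file names, and epsilon value are hypothetical and this is not Nightshade's actual algorithm.

# Toy illustration only: a bounded, hard-to-see pixel perturbation,
# not Nightshade's model-targeted poisoning method.
import numpy as np
from PIL import Image

def add_bounded_perturbation(in_path: str, out_path: str, epsilon: int = 4) -> None:
    """Shift every pixel channel by at most +/- epsilon and save the result."""
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)

# Hypothetical usage with placeholder file names.
add_bounded_perturbation("artwork.png", "artwork_shaded.png")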

