This new data poisoning tool lets artists fight back against generative AI | MIT Technology Review
Date: 2023-10-23
Description
Summary drafted by a large language model.
Melissa Heikkilä reports on Nightshade, a new data poisoning tool developed by researchers at the University of Chicago that lets artists add invisible changes to their art before uploading it online. If those altered images are scraped into an AI training set, the resulting model can break in chaotic and unpredictable ways. The tool is intended to tip the balance of power back towards artists and away from AI companies that use their work without consent or compensation. Nightshade exploits a security vulnerability in generative AI models that stems from their being trained on vast amounts of data scraped from the internet: the more poisoned images end up in a model's training set, the more damage the technique causes.
Read the article here
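The mechanism the summary describes, changes invisible to a human viewer that nonetheless corrupt a model's training signal, can be sketched in a few lines. The toy Python below is not Nightshade's algorithm (which, per the article, produces targeted poisoning rather than random noise); it only illustrates the outer workflow of bounding a per-pixel perturbation so an image looks unchanged before upload. The function name, the epsilon bound, and the use of random noise as a stand-in are all illustrative assumptions, and the sketch assumes Pillow and NumPy are installed.

import numpy as np
from PIL import Image

def perturb_image(in_path: str, out_path: str, epsilon: float = 2.0) -> None:
    # Load the artwork as an RGB pixel array (float for safe arithmetic).
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.float32)
    # Placeholder perturbation: a real poisoning tool would optimize this
    # against a target model's feature extractor; random noise merely shows
    # the "imperceptibly small change" constraint.
    rng = np.random.default_rng(seed=0)
    noise = rng.uniform(-epsilon, epsilon, size=img.shape)
    # Keep every pixel within at most epsilon of its original value
    # (in 0-255 intensity units), so the edit stays invisible.
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path, format="PNG")

if __name__ == "__main__":
    perturb_image("artwork.png", "artwork_protected.png")

In the actual tool, the noise term would be replaced by a perturbation crafted to mislead what a model learns about a concept, which is why the damage compounds as more poisoned images enter a training set.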