AI Poison Pill App Nightshade Received 250K Downloads in Five Days


Shortly after its January release, Nightshade, a data-poisoning tool built to combat AI copyright infringement, exceeded the expectations of its developers at the University of Chicago's computer science department, reaching 250,000 downloads in five days. Nightshade lets artists prevent AI models from training on their artwork without permission.

The Bureau of Labor Statistics reports that more than 2.67 million artists work in the United States, but social media responses indicate that downloads have taken place across the globe. According to one of the developers, cloud mirror links were set up to keep the University of Chicago's web servers from being overloaded.

The project's leader, Ben Zhao, a computer science professor at the University of Chicago, told VentureBeat that "the response is simply beyond anything we imagined."

"Nightshade seeks to 'poison' generative AI image models by altering artworks posted to the web, or 'shading' them on a pixel level, so that they appear to a machine learning algorithm to contain entirely different content — a purse instead of a cow," the researchers explained. After training on multiple "shaded" photos taken from the web, the goal is for AI models to generate erroneous images based on human input. 

Zhao, along with colleagues Shawn Shan, Wenxin Ding, Josephine Passananti, and Heather Zheng, "developed and released the tool to 'increase the cost of training on unlicensed data, such that licensing images from their creators becomes a viable alternative,'" VentureBeat reports, citing the Nightshade project page.

AI companies themselves offer opt-out requests that purport to stop unauthorised scraping; however, TechCrunch notes that "those motivated by profit over privacy can easily disregard such measures."

Zhao and his colleagues do not intend to dismantle Big AI, but they want to ensure that tech giants pay for licensed work, as any business operating in the open must, or else risk legal repercussions. According to Zhao, the fact that AI businesses deploy web-crawling spiders that collect data algorithmically, often undetected, has basically turned into a permit to steal.

Nightshade shows that these models are vulnerable and that there are ways to attack them, Zhao said. He added that this means content creators have harder-hitting options than writing to Congress or complaining via email or social media.

Glaze, another of the team's tools that guards against AI infringement, has reportedly been downloaded 2.2 million times since its April 2023 release, according to VentureBeat. By altering pixels, Glaze makes it more difficult for AI to "learn" an artist's distinctive style.

New Data ‘Poisoning’ Tool Empowers Artists to Combat AI Scraping


AI scraping, the practice by which AI companies gather data from online sources without the owners' consent to train their models, is a significant issue surrounding generative AI.

Scraping can be particularly damaging to visual artists, whose works are used to train text-to-image models that then generate new art. But now there may be a way to push back.

Nightshade is a new tool developed by University of Chicago researchers that gives artists the option to "poison" their digital artwork to stop developers from using it to train AI systems.

According to the MIT Technology Review, which received an exclusive preview of the research, artists can use Nightshade to make pixel-level changes to their artwork that are invisible to the human eye but that trigger "chaotic" and "unpredictable" failures in generative AI models.

By corrupting the model's learning, this prompt-specific attack makes generative AI models produce useless outputs that confuse one subject for another.

For instance, the model might come to "understand" that a dog is actually a cat, leading to misleading visuals that do not correspond to the text prompt. The research paper also claims that fewer than 100 poisoned samples are enough to corrupt a Stable Diffusion prompt.
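
To make the prompt-specific idea concrete, here is a hypothetical sketch, not the authors' code, of how a poison set for a single concept could be assembled: "shaded" images whose features read as the poison concept (say, cat) are paired with captions naming the anchor concept under attack (dog). The directory name, file layout, and caption template are illustrative assumptions.

# Hypothetical illustration of assembling a prompt-specific poison set,
# not the Nightshade pipeline. Paths and the caption template are assumptions.
from dataclasses import dataclass
from pathlib import Path

@dataclass
class TrainingPair:
    image_path: Path  # a shaded image whose features read as the poison concept (cat)
    caption: str      # a caption naming the anchor concept under attack (dog)

def build_poison_set(shaded_dir: Path, anchor: str = "dog", n_samples: int = 100) -> list[TrainingPair]:
    """Pair shaded images with captions for the anchor concept.

    The paper reports that on the order of 100 such pairs can corrupt a single
    Stable Diffusion prompt once they end up in the training data.
    """
    images = sorted(shaded_dir.glob("*.png"))[:n_samples]
    return [TrainingPair(path, f"a photo of a {anchor}") for path in images]

if __name__ == "__main__":
    poison = build_poison_set(Path("shaded_images"), anchor="dog")
    print(f"Built {len(poison)} poisoned caption/image pairs")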

The poisoned data is difficult to remove from a model because the AI company would have to find and delete each poisoned sample individually. Nightshade not only has the potential to deter AI firms from collecting data without authorisation, it also encourages users to exercise caution when relying on any of these generative AI models.

Other efforts have been made to address the use of artists' work without their permission. Some image-generating models, such as Getty Images' generator and Adobe Firefly, train only on images the artist has approved or that are open source, and they have compensation programs in place.