
Growing Concerns Regarding The Dark Side Of A.I.

 


In recent instances on the anonymous message board 4chan, troubling trends have emerged as users leverage advanced A.I. tools for malicious purposes. Rather than being limited to harmless experimentation, some individuals have taken advantage of these tools to create harassing and racist content. This ominous side of artificial intelligence prompts a critical examination of its ethical implications in the digital sphere. 

One disturbing case involved the manipulation of images of a doctor who testified at a Louisiana parole board meeting. Online trolls used A.I. to doctor screenshots from the doctor's testimony, creating fake nude images that were then shared on 4chan, a platform notorious for fostering harassment and spreading hateful content. 

Daniel Siegel, a Columbia University graduate student researching A.I. exploitation, noted that this incident is part of a broader pattern on 4chan. Users have been using various A.I.-powered tools, such as audio editors and image generators, to spread offensive content about individuals who appear before the parole board. 

While these manipulated images and audio haven't spread widely beyond 4chan, experts warn that this could be a glimpse into the future of online harassment. Callum Hood, head of research at the Center for Countering Digital Hate, emphasises that fringe platforms like 4chan often serve as early indicators of how new technologies, such as A.I., might be used to amplify extreme ideas. 

The Center for Countering Digital Hate has identified several problems arising from the misuse of A.I. tools on 4chan. These issues include the creation and dissemination of offensive content targeting specific individuals. 

To address these concerns, regulators and technology companies are actively exploring ways to mitigate the misuse of A.I. technologies. However, the challenge lies in staying ahead of nefarious internet users who quickly adopt new technologies to propagate their ideologies, often extending their tactics to more mainstream online platforms. 

A.I. and Explicit Content 

A.I. image generators like DALL-E and Midjourney were designed for creative image-making, but similar tools are now being used to generate fake pornography. Exploited by online hate campaigns, they allow explicit content to be created by manipulating existing images of real people. 

The absence of federal laws addressing this issue leaves authorities, like the Louisiana parole board, uncertain about how to respond. Illinois has taken a lead by expanding revenge pornography laws to cover A.I.-generated content, allowing targets to pursue legal action. California, Virginia, and New York have also passed laws against the creation or distribution of A.I.-generated pornography without consent. 

As concerns grow, legal frameworks must adapt swiftly to curb the misuse of A.I. and safeguard individuals from the potential harms of these advanced technologies. 

The Extent of A.I. Voice Cloning 

ElevenLabs, an A.I. company, recently introduced a tool that can mimic voices by simply inputting text. Unfortunately, this innovation quickly found its way into the wrong hands, as 4chan users circulated manipulated clips featuring a fabricated Emma Watson reading Adolf Hitler’s manifesto. Exploiting material from Louisiana parole board hearings, 4chan users extended their misuse by sharing fake clips of judges making offensive remarks, all thanks to ElevenLabs' tool. Despite efforts to curb misuse, such as implementing payment requirements, the tool's impact endured, resulting in a flood of videos featuring fabricated celebrity voices on TikTok and YouTube, often spreading political disinformation. 

In response to these risks, major social media platforms like TikTok and YouTube have taken steps to mandate labels on specific A.I. content. On a broader scale, President Biden issued an executive order, urging companies to label such content and directing the Commerce Department to set standards for watermarking and authenticating A.I. content. These proactive measures aim to educate and shield users from potential abuse of voice replication technologies. 

The Impact of Personalized A.I. Solutions 

In pursuing A.I. dominance, Meta's open-source strategy led to unforeseen consequences. The release of Llama's code to researchers resulted in 4chan users exploiting it to create chatbots with antisemitic content. This incident exposes the risks of freely sharing A.I. tools, as users manipulate code for explicit and far-right purposes. Despite Meta's efforts to balance responsibility and openness, challenges persist in preventing misuse, highlighting the need for vigilant control as users continue to find ways to exploit accessible A.I. tools.


Google Is Supplying Private Data to Advertisers?




A serious accusation has been levelled against Google: the company is allegedly using hidden web pages to pass user data to advertisers surreptitiously.

According to sources and the evidence provided, Google may be trading in this data without adequate data-protection safeguards.

The matter is under investigation. The sensitive data reportedly includes users' race, political leanings, and health information.

The hidden web pages were reportedly discovered by the chief policy officer of a web browser company, who also found that Google had tagged the pages with identifying trackers.

Using those trackers, Google allegedly feeds data to advertisers, possibly in an attempt to predict users' browsing behavior.

According to sources, Google says it is cooperating fully with the investigation. A Google representative also stated that the company does not transact with ad bidders without users' consent.

Google has also previously stated that it does not "share encrypted cookie IDs in bid requests with buyers in its authorized buyers marketplace".