The GenAI problem
Today's information environment spans a wide range of communication channels. Social media platforms have enabled instant sharing, reposting, and commenting. These platforms are useful for both content consumers and creators, but they bring their own challenges.
The rapid adoption of generative AI has led to a significant increase in misleading content online. AI chatbots have a tendency to generate false information that has no factual backing.
What is AI slop?
The internet is filling up with AI slop: junk content produced with minimal human input. There is currently no mechanism to limit the mass production of harmful or misleading content that can impair human cognition and critical thinking. This calls for a robust mechanism to address the new challenges that the current system is failing to tackle.
The content provenance report
To restore the integrity of digital information, Canada's Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have launched a new report on public content provenance. Provenance means "place of origin." To build stronger trust with external audiences, businesses and organisations must improve how they manage the source of their information.
The NCSC's chief technology officer said that the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed."
What is next for Content Integrity?
The industry is implementing a few measures to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA). The effort is backed by generative AI and tech giants like Meta, Google, OpenAI, and Microsoft.
Currently, there is a pressing need for interoperable standards across media types such as images, video, and text documents. Although content provenance technologies exist, the field is still in a nascent stage.
What is needed?
The core technology includes trusted timestamps and cryptographically verifiable metadata that prove content has not been tampered with. But obstacles remain in the development of these secure technologies, such as how and when they are applied.
The present technology also places the burden on the end user to interpret the provenance data.
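The idea of cryptographically verifiable metadata can be sketched in a few lines. The snippet below is a hypothetical, simplified illustration: it signs a provenance record (origin, timestamp, edit history) with HMAC-SHA256 so that any later alteration is detectable. Real provenance systems such as C2PA use certificate-based digital signatures rather than a shared secret; HMAC is used here only to keep the example self-contained. All names and values are invented for the example.

```python
import hashlib
import hmac
import json

# Placeholder key: a real provenance system would use PKI, not a shared secret.
SECRET_KEY = b"demo-signing-key"

def sign_record(record: dict) -> str:
    """Serialize the record canonically and compute its HMAC-SHA256 signature."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_record(record: dict, signature: str) -> bool:
    """Return True only if the record matches the signature, i.e. is untampered."""
    return hmac.compare_digest(sign_record(record), signature)

# A minimal provenance record: who made the content, when, and what edits were made.
record = {
    "creator": "example-camera-app",
    "created_at": "2025-01-15T09:30:00Z",
    "content_sha256": hashlib.sha256(b"original image bytes").hexdigest(),
    "edits": ["cropped", "resized"],
}

signature = sign_record(record)
print(verify_record(record, signature))   # True: record is intact

record["edits"].append("object removed")  # simulate undisclosed tampering
print(verify_record(record, signature))   # False: tampering is detected
```

The design point is that the signature binds the whole record, so the edit history cannot be silently rewritten; a verifier only needs the record and the signature, not the original content.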
A provenance system must allow a user to see who or what made the content, when it was made, and what edits or changes were applied. Threat actors have started using GenAI media to make scams more believable, and it has become difficult to tell what is fake from what is real. This is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report will help others navigate this gray area with more clarity.
