Today's information environment spans a wide range of communication. Social media platforms have enabled sharing, reposting, and commenting at scale. These platforms are useful for both content consumers and creators, but they bring challenges of their own.
The rapid adoption of generative AI has led to a significant increase in misleading content online. These tools have a tendency to generate false information with no factual backing.
The internet is increasingly filled with AI slop: low-quality content produced with minimal human input. There is currently no mechanism to limit this mass production of harmful or misleading material, which can impair human cognition and critical thinking. That calls for a robust approach to challenges the current system is failing to tackle.
To help restore the integrity of digital information, the Canadian Centre for Cyber Security (CCCS) and the UK's National Cyber Security Centre (NCSC) have published a new report on public content provenance. Provenance means "place of origin." To build stronger trust with external audiences, businesses and organisations must improve how they manage and document the origins of their information.
The NCSC's chief technology officer said that the "new publication examines the emerging field of content provenance technologies and offers clear insights using a range of cyber security perspectives on how these risks may be managed."
The industry is beginning to implement measures to address content provenance challenges, such as the Coalition for Content Provenance and Authenticity (C2PA), which has the backing of generative AI and tech giants including Meta, Google, OpenAI, and Microsoft.
Currently, there is a pressing need for interoperable standards across media types such as images, video, and text documents. Content provenance technologies do exist, but the field is still in a nascent stage.
The core technology combines trusted timestamps with cryptographically signed metadata to prove that content has not been tampered with. But obstacles remain in developing these secure technologies, such as deciding how and when they are applied.
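The idea can be made concrete with a short sketch. The following Python example, using the third-party cryptography package, shows how a content hash, a creator identity, and a timestamp can be bound together and signed so that later tampering is detectable. It illustrates the general principle only; it is not the C2PA manifest format, and the field names and key handling are assumptions made for the example.

```python
import hashlib
import json
from datetime import datetime, timezone

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_provenance_record(content: bytes, creator: str):
    """Bind a content hash, creator, and timestamp together and sign them."""
    # ASSUMPTION: a fresh key per record; real systems tie the key to a verified identity.
    key = Ed25519PrivateKey.generate()
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "creator": creator,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return record, key.sign(payload), key.public_key()

def verify_provenance(content: bytes, record: dict, signature: bytes, public_key) -> bool:
    """Check that the content still matches its hash and the record's signature is valid."""
    if hashlib.sha256(content).hexdigest() != record["content_sha256"]:
        return False  # the content was altered after signing
    payload = json.dumps(record, sort_keys=True).encode()
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False  # the metadata itself was tampered with

content = b"example image bytes"
record, sig, pub = make_provenance_record(content, "news-desk@example.org")
print(verify_provenance(content, record, sig, pub))      # True
print(verify_provenance(b"tampered", record, sig, pub))  # False
```

In practice, the signing key would be tied to a verified identity, and each subsequent edit would append a new signed record, producing the kind of origin and edit history described below.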
As it stands, the technology places the burden on the end user to interpret the provenance data.
A provenance system must let a user see who or what created the content, when it was created, and what edits or changes were made along the way. Threat actors have begun using GenAI media to make scams more believable, and it has become difficult to tell the fake from the real, which is why a mechanism that can track the origin and edit history of digital media is needed. The NCSC and CCCS report should help others navigate this gray area with greater clarity.
For the past week, England and parts of Northern Ireland have been gripped by unrest, with communities experiencing heightened tensions and an extensive police presence. Social media platforms have played a significant role in spreading information, some of it harmful, during this period of turmoil. Despite this, major technology companies have remained largely silent, refusing to address their role in the situation publicly.
Big Tech's Reluctance to Speak
Journalists at BBC News have been actively seeking responses from major tech firms regarding their actions during the unrest. However, these companies have not been forthcoming. With the exception of Telegram, which issued a brief statement, platforms like Meta, TikTok, Snapchat, and Signal have refrained from commenting on the matter.
Telegram's involvement became particularly concerning when a list containing the names and addresses of immigration lawyers was circulated on its platform. The Law Society of England and Wales expressed serious concerns, treating the list as a credible threat to its members. Although Telegram did not directly address the list, it did confirm that its moderators were monitoring the situation and removing content that incites violence, in line with the platform's terms of service.
Elon Musk's Twitter and the Spread of Misinformation
The platform formerly known as Twitter, now rebranded as X under Elon Musk's ownership, has also drawn intense scrutiny. The site has been a hub for false claims, hate speech, and conspiracy theories during the unrest. Despite this, X has remained silent, offering no public statements. Musk, however, has been vocal on the platform, making controversial remarks that have only added fuel to the fire.
Musk's tweets have included inflammatory statements, such as predicting a civil war and questioning the UK's approach to protecting communities. His posts have sparked criticism from various quarters, including the UK Prime Minister's spokesperson. Musk even shared, and later deleted, an image promoting a conspiracy theory about detainment camps in the Falkland Islands, further underlining the platform's problematic role during this crisis.
Experts Weigh In on Big Tech's Silence
Industry experts believe that tech companies are deliberately staying silent to avoid getting embroiled in political controversies and regulatory challenges. Matt Navarra, a social media analyst, suggests that these firms hope public attention will shift away, allowing them to avoid accountability. Meanwhile, Adam Leon Smith of BCS, The Chartered Institute for IT, criticised the silence as "incredibly disrespectful" to the public.
Hanna Kahlert, a media analyst at Midia Research, offered a strategic perspective, arguing that companies might be cautious about making public statements that could later constrain their actions. These firms, she explained, prioritise activities that drive ad revenue, often at the expense of public safety and social responsibility.
What Does the Future Look Like?
As the UK grapples with the fallout from this unrest, there are growing calls for stronger regulation of social media platforms. The Online Safety Act, set to come into effect early next year, is expected to give the regulator Ofcom more powers to hold these companies accountable. However, some, including London Mayor Sadiq Khan, question whether the Act will be sufficient.
Prime Minister Keir Starmer has acknowledged the need for a broader review of social media in light of recent events. Professor Lorna Woods, an expert in internet law, pointed out that while the new legislation might address some issues, it might not be comprehensive enough to tackle all forms of harmful content.
A recent YouGov poll revealed that two-thirds of the British public want social media firms to be more accountable. As big tech remains silent, it appears that the UK is on the cusp of regulatory changes that could reshape the future of social media in the country.
Union Minister Rajeev Chandrasekhar, posting on X, thanked cricketer Sachin Tendulkar for flagging a deepfake video that misused his likeness. He said that AI-generated deepfakes and misinformation are a threat to the safety and trust of Indian users, and noted that platforms must comply with the advisory issued by the Centre.
"Thank you @sachin_rt for this tweet #DeepFakes and misinformation powered by #AI are a threat to Safety&Trust of Indian users and represents harm & legal violation that platforms have to prevent and take down. Recent Advisory by @GoI_MeitY requires platforms to comply wth this 100%. We will be shortly notifying tighter rules under IT Act to ensure compliance by platforms," Chandrasekhar posted on X
On X, Sachin Tendulkar cautioned his fans and the public that the video was fake, and asked viewers to report any such applications, videos, and advertisements.
"These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and fake news. @GoI_MeitY, @Rajeev_GoI and @MahaCyber1," Tendulkar said on X.
Deepfakes are synthetic media in which one person's likeness is digitally swapped for another's; the term refers to the alteration of facial features using deep generative techniques. While fabricating information is nothing new, deepfakes use sophisticated AI and machine-learning algorithms to edit or create visual and audio content that can more easily deceive viewers.
Last month, the government urged all online platforms to abide by the IT rules and required companies to inform users transparently and accurately about prohibited content.
The Centre has asked platforms to take urgent action against deepfakes and ensure that their terms of use and community standards comply with the laws and IT regulations in force. The government has made it abundantly clear that any violation will be taken very seriously and could result in legal action against the entity.
Reddit, the popular social media platform, has announced that it will begin paying users for their posts. The new system, which is still in its early stages, will see users rewarded with cash for posts that are awarded "gold" by other users.
Gold awards are a form of virtual currency that can be purchased by Reddit users for a fee. They can be given to other users to reward them for their contributions to the platform. Until now, gold awards have only served as a way to show appreciation for other users' posts. However, under the new system, users who receive gold awards will also receive a share of the revenue generated from those awards.
The amount of money that users receive will vary depending on the number of gold awards they receive and their karma score. Karma score is a measure of how much other users have upvoted a user's posts and comments. Users will need to have at least 10 gold awards to cash out, and they will receive either 90 cents or $1 for each gold award.
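Taken together, those rules amount to a simple calculation, sketched below in Python. The karma cutoff that decides which per-award rate applies is a hypothetical stand-in, since the exact tiers have not been published.

```python
# A minimal sketch of the payout rules described above.
# ASSUMPTION: the karma cutoff deciding the per-award rate is hypothetical;
# the exact tiers have not been published.
MIN_GOLD_TO_CASH_OUT = 10
HIGH_KARMA_CUTOFF = 100_000  # hypothetical threshold
RATE_HIGH = 1.00  # dollars per gold award
RATE_LOW = 0.90

def payout(gold_awards: int, karma: int) -> float:
    """Return the cash payout in dollars, or 0.0 if the user is not eligible."""
    if gold_awards < MIN_GOLD_TO_CASH_OUT:
        return 0.0  # below the cash-out minimum
    rate = RATE_HIGH if karma >= HIGH_KARMA_CUTOFF else RATE_LOW
    return gold_awards * rate

print(payout(9, 200_000))   # 0.0   too few gold awards to cash out
print(payout(25, 50_000))   # 22.5  low tier: 25 x $0.90
print(payout(25, 200_000))  # 25.0  high tier: 25 x $1.00
```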
Reddit says that the new system is designed to "reward the best and brightest content creators" on the platform. The company hopes that this will encourage users to create more high-quality content and contribute more to the community.
However, there are also some concerns about the new system. Some users worry that it could lead to users creating clickbait or inflammatory content to get more gold awards and more money. Others worry that the system could be unfair to users who do not have a lot of karma.
One Reddit user expressed concern that the approach will lead people to produce low-quality content: if they know they can make money from it, they are more likely to post clickbait or provocative material.
Artificial intelligence (AI) is ushering in a transformative era across industries, including cybersecurity. AI is driving innovation on both sides of the fight: it enables increasingly sophisticated attack methods while also bolstering the efficiency of existing defense mechanisms.
In this age of AI advancement, the potential for a safer world coexists with the emergence of fresh prospects for cybercriminals. As the adoption of AI technologies becomes more pervasive, cyber adversaries are harnessing its power to craft novel attack vectors, automate their malicious activities, and maneuver under the radar to evade detection.
According to a recent article in The Messenger, the initial beneficiaries of the AI boom are unfortunately cybercriminals. They have quickly adapted to leverage generative AI in crafting sophisticated phishing emails and deepfake videos, making it harder than ever to discern real from fake. This highlights the urgency for organizations to fortify their cybersecurity infrastructure.
On a more positive note, the demand for custom chips has skyrocketed, as reported by TechCrunch. As generative AI algorithms become increasingly complex, off-the-shelf hardware struggles to keep up. This has paved the way for a new era of specialized chips designed to power these advanced systems. Industry leaders like NVIDIA and AMD are at the forefront of this technological arms race, racing to develop the most efficient and powerful AI chips.
McKinsey's comprehensive report on the state of AI in 2023 reinforces the notion that generative AI is experiencing its breakout year. The report notes, "Generative AIs have surpassed many traditional machine learning models, enabling tasks that were once thought impossible." This includes generating realistic human-like text, images, and even videos. The applications span from content creation to simulating real-world scenarios for training purposes.
However, amidst this wave of optimism, ethical concerns loom large. The potential for misuse, particularly in deepfakes and disinformation campaigns, is a pressing issue that society must grapple with. Dr. Sarah Rodriguez, a leading AI ethicist, warns, "We must establish robust frameworks and regulations to ensure responsible use of generative AI. The stakes are high, and we cannot afford to be complacent."
The generative AI surge is reshaping industries and creating unprecedented opportunities, improving everything from creative processes to data synthesis. But we must treat this technology with caution and confront the ethical issues it raises. Realizing the full benefits of generative AI will require a careful and balanced approach as we navigate this disruptive period.