
Online Misinformation and AI-Driven Fake Content Raise Concerns for Election Integrity

 

With elections drawing near, officials and researchers are increasingly worried about how digital falsehoods might influence voter behavior. False narratives on social platforms can distort public perception, and as artificial intelligence advances, deceptive content is becoming more convincing and harder to detect. The concern is that sustained exposure to such material erodes trust in core societal structures. Warnings are coming not just from academics but also from community leaders watching public sentiment shift in real time.

Fake messages recently circulated online pretending to be from the City of York Council. Though they looked genuine, officials later confirmed the ads were entirely false. One appealed for people willing to host asylum seekers; another asked for volunteers to take down St George flags; a third advertised work repairing road damage across neighborhoods. What made them convincing was their design, complete with official logos, formatting, and contact information typical of genuine notices.

Someone scrolling quickly, without close inspection, could easily believe them. Yet none of the programs mentioned were active or approved by local government, and the resemblance to actual council material caused confusion until authorities stepped in to clarify. When BBC Verify examined the images, the warning signs were clear: blurred logos, inconsistent fonts, and misspelled words, all common indicators of artificial creation.

Details such as fingers appeared twisted or incomplete, a frequent flaw in computer-generated visuals. One poster included an email address tied to a real council employee who had no knowledge of the material, and websites referenced in some flyers simply did not exist online. Even so, plenty of people passed the content along without questioning it; a single fabricated post spread through networks totaling over 500,000 followers. Despite the clear warning signs, the false material proved highly shareable.
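One of those checks, confirming whether a website referenced in a flyer actually exists, is simple to automate. The short Python sketch below is illustrative only; the URLs are hypothetical placeholders, since the real posters' links are not reproduced here. It sends a HEAD request to each suspect address and reports whether anything answers.

```python
# Minimal sketch of one verification step described above: checking
# whether websites cited in a suspect flyer actually exist.
# The URLs are hypothetical placeholders, not taken from the real posters.
import urllib.error
import urllib.request

suspect_urls = [
    "https://example.org/host-a-family",       # hypothetical
    "https://example.org/road-repair-scheme",  # hypothetical
]

for url in suspect_urls:
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=10) as response:
            print(f"{url} -> HTTP {response.status}")
    except urllib.error.HTTPError as err:
        print(f"{url} -> HTTP {err.code} (page not found or blocked)")
    except urllib.error.URLError as err:
        print(f"{url} -> unreachable ({err.reason})")
```

A dead link is not proof of fakery on its own, but combined with the visual clues above it strengthens the case against a suspect notice.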

What spreads fast online is not always true. Clare Douglas, head of City of York Council, pointed out that today's technology amplifies an old problem in new ways: false stories that once moved slowly now race across devices faster than fact-checkers can keep up. Trust fades when people see conflicting claims everywhere, especially around health or voting, and institutions lose ground not because facts disappear but because attention scatters too widely. When doubt sticks longer than corrections do, participation quietly dips over time.

Ahead of public meetings, tensions surfaced in several regions. In Barnsley, misinformation targeting asylum seekers and councils appeared online, according to Sir Steve Houghton, the council's leader. False stories travel further because influencers keep sharing them, with profit often outweighing any interest in correction. Although government outlets issued clarifications, distorted messages continue to flood digital spaces; their sheer volume, and how long they linger, threaten trust between groups and raise risks for everyday security. Not everyone checks facts these days, notes Ilya Yablokov of the University of Sheffield's Disinformation Research Cluster, and because AI makes it easier than ever, faking believable content now takes little effort.

With just a modest setup, someone can flood online spaces quickly. Falsehoods spread partly because people are busy: they skip checking details before passing things along, and gut feelings or existing opinions shape what gets shared instead. A fabricated local story may cost almost nothing to create, yet its impact on democracy can run deep.

As misleading accounts reach more voters, specialists emphasize skills such as questioning sources, checking facts, and understanding how media messages work. These habits help preserve confidence in public processes and support thoughtful engagement during elections.

Global Executives Rank Misinformation, Cyber Insecurity and AI Risks as Top Threats: WEF Survey 2025

 

Business leaders across major global economies are increasingly concerned about the rapid rise of misinformation, cyber threats and the potential negative impacts of artificial intelligence, according to new findings from the World Economic Forum (WEF).

The WEF Executive Opinion Survey 2025, based on responses from 11,000 executives in 116 countries, asked participants to identify the top five risks most likely to affect their nations over the next two years from a list of 34 possible threats.

While economic issues such as inflation and downturns, along with societal challenges like polarization and inadequate public services, remained dominant, technology-driven risks stood out prominently in this year’s results.

Within G20 nations, concerns over AI were especially visible. “Adverse outcomes of AI technologies” emerged as the leading risk in Germany and the fourth most significant in the US. Australian executives similarly flagged “adverse outcomes of frontier technologies,” including quantum innovations, as a top threat.

Misinformation and disinformation ranked as the third-largest concern for executives in the US, UK and Canada. Meanwhile, in India, cyber insecurity—including threats to critical infrastructure—was identified as the number one risk.

Regionally, mis/disinformation ranked second in North America, third in Europe and fourth in East Asia. Cyber insecurity was the third-highest risk in Central Asia, while concerns around harmful AI outcomes placed fourth in South-east Asia.

AI’s influence is clearly woven through most of the technological risks highlighted. The technology is enabling more sophisticated disinformation efforts, including realistic deepfake audio and video. At the same time, AI is heightening cyber risks by empowering threat actors with advanced capabilities in social engineering, reconnaissance, vulnerability analysis and exploit development, according to the UK’s National Cyber Security Centre (NCSC).

The NCSC’s recent threat outlook cautions that AI will “almost certainly” make several stages of cyber intrusion “more effective and efficient” in the next few years.

The survey’s references to “adverse outcomes” of AI also include potential misuse of agentic or generative AI tools and the manipulation of AI models for disruptive, espionage-related or malicious purposes.

A study released in September found that 26% of US and UK organizations experienced a data poisoning attack in the past year, underscoring the growing risks.
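The study does not spell out how those attacks worked, but the basic idea behind data poisoning is easy to demonstrate. The toy Python sketch below uses synthetic data only and is not the study's methodology: it flips a fraction of training labels, one simple form of poisoning, and shows how a model trained on the corrupted set typically loses accuracy on clean test data.

```python
# Toy illustration of label-flipping data poisoning (synthetic data;
# not the methodology of the study cited above).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_after_poisoning(flip_fraction: float) -> float:
    """Flip a fraction of training labels, retrain, and score on clean data."""
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flips = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flips, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # invert the binary labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.2, 0.4):
    print(f"{fraction:.0%} labels flipped -> "
          f"accuracy {accuracy_after_poisoning(fraction):.3f}")
```

Real-world poisoning attacks are subtler, often targeting specific inputs or planting backdoors, but the mechanism is the same: corrupt the training data and the model inherits the corruption.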

“With the rise of AI, the proliferation of misinformation and disinformation is enabling bad actors to operate more broadly,” said Andrew George, president of Marsh Specialty. “As such, the challenges posed by the rapid adoption of AI and associated cyber threats now top boardroom agendas.”