Latest News

All the recent news you need to know

AI Polling Reshapes Political Research as Firms Turn Conversations Into Data


Artificial intelligence is rapidly transforming the world of political opinion polling, replacing time-consuming human-led interviews with automated conversational systems capable of analysing public sentiment at scale.

"When you hear the word 'politician', what is the first image or emotion that comes to mind?"

The question is asked not by a human researcher, but by an AI-powered voice assistant. While a respondent shares his views over the phone, multiple AI systems simultaneously analyse the conversation. One verifies whether the person is answering the question correctly, another evaluates the depth of the response, while a third checks for possible fraud or bot-like behaviour.

The technology is being developed by Naratis, a French start-up focused on bringing artificial intelligence into political opinion research.

"The US has start-ups like Outset, Listen Labs and Hey Marvin that do AI polling like this in the commercial sphere. To my knowledge we're the first to do this for political opinion polling as well," says Pierre Fontaine, the 28-year-old engineer who founded the firm in 2025.

The emergence of AI-led polling marks a major shift for an industry traditionally dependent on manual interviews and extensive human analysis. In countries such as France, polling firms are increasingly exploring automation to reduce costs and speed up research processes.

Naratis specifically targets qualitative research, which is widely regarded as the most expensive and labour-intensive form of polling. Traditionally, these studies involve one-on-one interviews or focus groups that can take weeks to organise and analyse. By using conversational AI, the company says it can significantly reduce both time and cost.

Rather than relying on standard multiple-choice surveys, the platform encourages participants to engage in conversations with AI systems. "We don't ask people to tick boxes - they have a conversation with an AI," Fontaine explains. "That means we can explore not just what people think, but how they think - how they build their opinions, and even when those opinions change."

The company claims its approach is "10 times faster, 10 times cheaper and 90% as accurate as human polling".

According to the firm, projects that previously required weeks and substantial budgets can now be completed within a couple of days, with some responses collected in less than 24 hours. Fontaine describes this advantage as "parallelisation", where numerous AI agents conduct interviews simultaneously instead of relying on individual human researchers.
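To make the "parallelisation" idea concrete, here is a minimal, purely illustrative Python sketch: many interviews run concurrently, and each transcript is fanned out to independent relevance, depth, and fraud checks, mirroring the multi-system setup described above. Every function name and behaviour here is a hypothetical stand-in, not Naratis's actual pipeline.

```python
import asyncio

async def conduct_interview(respondent_id: int) -> str:
    """Stand-in for a conversational AI interview; returns a transcript."""
    await asyncio.sleep(0.1)  # placeholder for a real voice/chat session
    return f"transcript-{respondent_id}"

async def check_relevance(transcript: str) -> bool:
    await asyncio.sleep(0.05)  # placeholder for an LLM relevance check
    return True

async def score_depth(transcript: str) -> float:
    await asyncio.sleep(0.05)  # placeholder for a response-depth rating
    return 0.8

async def detect_fraud(transcript: str) -> bool:
    await asyncio.sleep(0.05)  # placeholder for bot/fraud detection
    return False

async def process_respondent(respondent_id: int) -> dict:
    transcript = await conduct_interview(respondent_id)
    # The three analyses run concurrently on the same conversation,
    # echoing the multiple AI systems described in the article.
    relevant, depth, fraud = await asyncio.gather(
        check_relevance(transcript),
        score_depth(transcript),
        detect_fraud(transcript),
    )
    return {"id": respondent_id, "relevant": relevant, "depth": depth, "fraud": fraud}

async def main() -> None:
    # A hundred interviews run in parallel instead of one human at a time.
    results = await asyncio.gather(*(process_respondent(i) for i in range(100)))
    print(len(results), "interviews completed")

asyncio.run(main())
```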

The rise of AI polling comes at a challenging time for the polling industry overall. Survey participation rates have dropped sharply over the decades, increasing operational costs and raising concerns about the reliability and representativeness of public opinion studies.

Supporters of AI polling argue that conversational systems may encourage respondents to be more honest, especially when discussing politically sensitive issues. Some researchers believe this could reduce social desirability bias, where people avoid expressing controversial opinions to human interviewers.

However, critics remain cautious about the growing dependence on AI in political research. Concerns include the possibility of AI systems generating inaccurate conclusions, producing overly generic responses, or creating misleading synthetic data.

Questions have also emerged around the use of "digital twins" and "synthetic people" — AI-generated profiles designed to imitate real human behaviour. While some market research firms use such tools for testing and simulations, many organisations remain reluctant to apply them in political polling.

At Ipsos, AI is already used extensively in consumer and behavioural research, including analysing user-recorded videos and studying social media activity. However, major firms continue to maintain human oversight in politically sensitive projects.

At OpinionWay, AI may assist with conducting interviews, but "we would never publish an opinion poll based on AI-generated data," says the firm's CEO, Bruno Jeanbart, citing concerns about trust.

Experts believe the future of polling will likely involve a hybrid approach combining AI efficiency with human supervision. While automation can accelerate research and lower costs, human researchers are still considered essential for validating findings, interpreting nuance and ensuring accountability.

Even AI advocates acknowledge the need for caution. "The goal is end-to-end automation, but today it would be unsafe and socially unacceptable to remove humans entirely," says Le Brun.

As economic pressures continue to push the polling industry toward faster and cheaper methods, companies like Naratis are betting that AI-driven conversations could redefine how public opinion is collected and understood. Whether this transformation strengthens trust in polling or deepens public scepticism may ultimately depend on how responsibly the technology is implemented and regulated.

Ransomware Attacks Reach All-Time High, Leaking Over 2.86 Billion Records


A recent analysis of cybercrime data for 2025 found that the number of ransomware victims rose sharply, up 45% year on year. But the bigger concern lies elsewhere: attackers' growing reliance on stolen credentials as their primary entry point. Whatever platforms you use and whatever accounts you are trying to protect, it is high time to start paying attention to password security.
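One practical response to credential-theft figures like these is to check whether a password already appears in known breach corpora. The sketch below uses the public Pwned Passwords range API (a Have I Been Pwned service, separate from KELA's dataset) with its k-anonymity scheme, so the full password or hash never leaves your machine. Treat it as a minimal illustration, not a complete password-hygiene tool.

```python
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    """Return how often a password appears in the Pwned Passwords corpus.
    Only the first five characters of the SHA-1 hash are sent over the
    network; matching against the suffix happens locally."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-hygiene-sketch"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT" for hashes sharing the prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count.strip())
    return 0

if __name__ == "__main__":
    # Demo with a deliberately weak example password; never log real ones.
    n = breach_count("password123")
    print(f"'password123' seen {n:,} times in known breaches")
```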

State of Cybercrime report 2026


The report from KELA catalogued over 2.86 billion hacked credentials: passwords, session cookies, and other information that can be used to defeat two-factor authentication (2FA). Notably, authentication services and business cloud platforms accounted for over 30% of the data leaked in 2025.

The analysis also revealed that credential-stealing infostealer malware is indifferent to which operating system you use: “infections on macOS devices increased from fewer than 1,000 cases in 2024 to more than 70,000 in 2025, a 7,000% increase,” the report said.

Expert advice


Forbes contributors have warned users about the risks associated with infostealer malware countless times. Past coverage spans FBI operations aimed at shutting down cybercrime gangs, millions of Gmail passwords surfacing in leaked infostealer logs, and much more. As the KELA analysis shows, the risk persists, and the damage is increasing year after year.

About infostealer


KELA defines the malware as “designed to exfiltrate sensitive data from compromised machines, including login credentials, authentication tokens, and other critical account information.” What is more troublesome is the near-universal availability of malware-as-a-service operations in the infostealer underground: the barrier to entry has not merely been lowered, it has been removed entirely, for expert and amateur threat actors alike.

Data compromise in billions

In 2025, KELA identified around “3.9 million unique machines infected with infostealer malware globally, which collectively yielded 347.5 million compromised credentials.” The grand total, 2.86 billion hacked credentials, spans all sources: databases of infostealer logs and dark-web criminal marketplaces.

Tricks used by infostealers:


- Phishing-as-a-service campaigns, delivered through email and messaging apps and increasingly powered by AI-generated, tailored scams, are used to get around MFA.
- In so-called "hack your own password" attacks, users are duped into manually running scripts that circumvent conventional security measures.
- Trojanized software is promoted through malicious advertisements and poisoned search results, increasing the risk of infection.
- In supply-chain attacks, poisoned packages and DevTools impersonation target high-privilege credentials.
- Compromised browser-extension updates enable form-grabbing and cookie theft.
- Fake software updates and pirated apps continue to be effective.

OpenAI Codex Bug Leads to GitHub Token Breach


In March 2026, researchers from BeyondTrust showed that a crafted GitHub branch name was enough to steal Codex’s OAuth token in cleartext. OpenAI classified the issue as “Critical P1”. Soon after, Anthropic’s Claude Code source code leaked into the public npm registry, and researchers at Adversa showed that Claude Code silently ignored its own deny rules once a command exceeded 50 subcommands.

Malicious code in AI

These were not isolated vulnerabilities. They were the latest in a nine-month run in which six research teams revealed exploits against Copilot, Vertex AI, Codex, and Claude Code. Every exploit followed the same strategy: an AI agent held a credential, performed an action, and authenticated to a production system without any human session behind the request.

The attack surface was first showcased at Black Hat USA 2025, where experts hacked ChatGPT, Microsoft Copilot Studio, Gemini, Cursor and many more, on stage, with zero clicks. Nine months on, threat actors were exploiting those same credentials.

How a branch name in Codex compromised GitHub


Researchers at BeyondTrust found that Codex cloned repositories using a GitHub OAuth token embedded in the git remote URL. During cloning, the branch name was passed, unsanitized, into a setup script, allowing attacker-controlled data onto the command line: a backtick subshell and a semicolon were enough to turn a branch name into an exfiltration payload.
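The pattern is easiest to see in a stripped-down sketch. The snippet below is hypothetical, written in Python rather than Codex's own tooling, and uses a harmless payload, but it shows how a backtick subshell inside an attacker-controlled branch name executes the moment the string reaches a POSIX shell, and how passing arguments as a list avoids the problem.

```python
import subprocess

# Minimal sketch of the bug class BeyondTrust describes; this is NOT
# OpenAI's actual code. The attacker controls the branch name, and the
# backtick subshell is the injection primitive.
branch = "main`echo INJECTED > /tmp/pwned`"

# VULNERABLE pattern: the branch name is interpolated into a shell string,
# so the subshell runs with whatever credentials the process holds. In the
# real attack, a payload like this could read the OAuth token embedded in
# the git remote URL and send it to an attacker-controlled server.
subprocess.run(f"git clone --branch {branch} https://github.com/org/repo", shell=True)

# SAFE pattern: arguments are passed as a list, so the branch name is
# treated as inert data and never parsed by a shell.
subprocess.run(["git", "clone", "--branch", branch, "https://github.com/org/repo"])
```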

About the bug


The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension. All reported issues have since been fixed in collaboration with OpenAI's security team.

This vulnerability allows an attacker to inject arbitrary commands through the GitHub branch name parameter, potentially leading to the automated theft of a victim's GitHub User Access Token, the same token Codex uses to authenticate with GitHub.

Vulnerability impact


Because the attack can be automated, the vulnerability can scale to compromise many people interacting with a shared environment or GitHub repository.

“OpenAI Codex is a cloud-based coding agent, accessible through ChatGPT. It allows users to point the tool toward a codebase and submit tasks through a prompt. Codex then spins up a managed container instance to execute these tasks—such as generating code, answering questions about a codebase, creating pull requests, and performing code reviews against the selected repository,” said BeyondTrust.

Spotify Verified Badge Targets AI Music Confusion as Human Artist Authentication Expands


Now appearing beside artist profiles, Spotify’s new “Verified by Spotify” badge uses a green checkmark to highlight real human creators. Only accounts meeting the platform’s internal authenticity checks receive the label. Rather than algorithm-built personas, these profiles represent actual musicians behind the music. The rollout is happening gradually, changing how artists appear in searches, playlists, and recommendations. 

The update arrives as concerns continue growing around AI-generated music flooding streaming services. Spotify says verification depends on signals such as active social media accounts, consistent listener activity, merchandise listings, and live performance schedules - indicators suggesting a genuine person is tied to the profile. 
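Spotify has not disclosed how such signals are weighed, but a toy model makes the general idea of signal-based verification concrete. Everything below, the weights, the threshold, and the field names, is invented for illustration; only the signal categories come from this article.

```python
from dataclasses import dataclass

@dataclass
class ArtistSignals:
    # The four signal categories named in the article, as booleans.
    active_socials: bool
    consistent_listeners: bool
    sells_merch: bool
    tours: bool

def looks_human(signals: ArtistSignals, threshold: float = 0.5) -> bool:
    """Purely hypothetical weighted vote over the signals above."""
    weights = {
        "active_socials": 0.3,
        "consistent_listeners": 0.4,
        "sells_merch": 0.15,
        "tours": 0.15,
    }
    score = sum(w for name, w in weights.items() if getattr(signals, name))
    return score >= threshold

print(looks_human(ArtistSignals(True, True, False, False)))  # True: 0.7 >= 0.5
```

A sketch like this also makes the concern raised later in this piece easy to see: an artist who does not tour or sell merchandise starts from a lower score regardless of whether they are human.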

According to the company, these measures are designed to separate human creators from automated content increasingly appearing online.  Spotify says most artists users actively search for will eventually receive verification. Artists recognized for meaningful contributions to music culture are expected to be prioritized ahead of bulk-uploaded or mass-generated accounts. 

Over the coming weeks, the checkmarks will gradually appear across the platform, with influence and authenticity carrying more weight than upload volume. The move comes as streaming platforms face mounting criticism over how they handle AI-generated tracks. While the badge confirms a profile belongs to a real person, some critics quickly pointed out that it does not indicate whether artificial intelligence was used to help create the music itself. 

Questions around what counts as “real” music continue growing as AI tools become more involved in production. Creator-rights advocate and former AI executive Ed Newton-Rex warned that systems like Spotify’s may unintentionally disadvantage independent musicians who do not tour, sell merchandise, or maintain strong social media visibility. 

Instead, he suggested platforms should directly label AI-generated songs rather than relying solely on artist verification. Experts also note that defining AI involvement in music is increasingly difficult. Professor Nick Collins from Durham University described AI-assisted music creation as a broad spectrum rather than a simple divide between human-made and machine-made work. Many songs now involve software-assisted mixing, mastering, composition, or editing, making it far harder to classify music by origin alone. 

Spotify has faced years of criticism over AI-generated audio. Across forums and online communities, users have repeatedly called for clearer labels showing whether tracks were created by humans or algorithms. Some developers have even built independent tools aimed at detecting and filtering AI-generated songs on the platform. Concerns intensified after projects like The Velvet Sundown attracted large audiences despite having no interviews, live performances, or publicly traceable history. 

The group later described itself as a “synthetic music project” supported by artificial intelligence, fueling debate around transparency in digital music spaces. Spotify’s latest verification effort appears aimed at rebuilding trust while balancing support for evolving AI technologies. The move also reflects a broader trend across digital platforms, where companies are introducing verification systems to distinguish human-created content from synthetic material as AI-generated media becomes harder to identify.

Friendly AI Chatbots More Likely to Give Wrong Answers, Study Finds


Artificial intelligence chatbots that are designed to sound warm, friendly, and empathetic may be more likely to give wrong or misleading answers than their more neutral counterparts, according to a new study by researchers at the Oxford Internet Institute (OII). The findings raise concerns about how much users can trust AI assistants that have been deliberately tuned to feel more human‑like and emotionally supportive. 

What the study found 

The researchers analyzed over 400,000 responses from five major AI systems that had been modified to communicate in a more amiable, empathetic tone. They discovered that these “warm models” produced more factual errors than the original, less friendly versions, with error rates rising by an average of 7.43 percentage points across tasks. In some cases, the warm models not only gave incorrect information but also reaffirmed users’ mistaken beliefs, particularly when users expressed emotion.

The OII team describes this as a “warmth‑accuracy trade‑off”: the more the models are optimized to be agreeable and supportive, the more their reliability drops. Lead author Lujain Ibrahim told the BBC that, like humans, AI can struggle to deliver honest but uncomfortable truths when its main goal becomes being likable rather than being accurate. This mimics a human tendency to soften harsh feedback to avoid conflict, but in an AI context it can mean dangerous misinformation, especially on topics like health or legal advice. 
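As an entirely made-up illustration of how such a warmth-accuracy gap is measured, the sketch below scores the same factual questions against a base model and its "warm" variant, then reports the difference in error rates in percentage points. The OII figure of 7.43 points was computed over far larger, real evaluations.

```python
# Toy data: whether each of ten answers was factually correct.
# These booleans are invented, not the OII study's data.
base_correct = [True, True, True, False, True, True, True, True, False, True]
warm_correct = [True, False, True, False, True, True, False, True, False, True]

def error_rate(results: list[bool]) -> float:
    """Fraction of answers that were wrong."""
    return 1 - sum(results) / len(results)

# Warmth-accuracy gap: how much the error rate rose after warm tuning.
gap = (error_rate(warm_correct) - error_rate(base_correct)) * 100
print(f"error-rate increase: {gap:.1f} percentage points")  # 20.0 here
```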

 Risks for users

The risk is especially serious because people are increasingly using chatbots for emotional support, mental‑health guidance, or even medical and financial advice. If a friendly AI constantly agrees with users or gives reassuring but false answers, it can reinforce harmful misconceptions instead of correcting them. The study notes that such “warm” tuning can create vulnerabilities that do not exist in the original, less sociable models, making it crucial for users and developers to treat these systems as fallible tools rather than infallible experts. 

The paper urges developers to rethink how they fine‑tune chatbots for companionship or counseling, emphasizing the need to balance empathy with factual rigor. Some industry leaders have already warned against “blindly trusting” AI outputs, and many platforms now include prominent disclaimers about potential inaccuracies. However, the OII research suggests that simply making an AI sound more friendly can quietly increase those risks, meaning future design choices must explicitly prioritize truthfulness over artificial charm.

Why Europe Is Rethinking Its Dependence on US Cloud Providers

Concerns around digital sovereignty are rapidly becoming one of the most important debates shaping the future of cloud computing, artificial intelligence, and government technology infrastructure across Europe and the UK.

The discussion recently gained attention after Chi Onwurah, chair of the UK Science, Innovation and Technology Select Committee, criticized Britain’s broader technology strategy and warned about growing dependence on a small group of major US technology companies. Her remarks pointed to reliance on providers such as Microsoft and Amazon Web Services, while also referencing Palantir Technologies because of its involvement in NHS and defence-related contracts. She also raised concerns about foreign-controlled technology supply chains supporting critical public infrastructure.

At the centre of the debate is the meaning of “digital sovereignty,” a term that is increasingly used by governments but often interpreted differently. In practical terms, sovereignty refers to a country maintaining legal authority and control over its citizens’ sensitive data, including where that information is processed, accessed, and governed. Experts argue that sovereign data should only fall under the jurisdiction of the nation to which it belongs, rather than being exposed to foreign legal systems or overseas regulatory reach.

The issue has become especially significant in the era of public cloud computing. Before large-scale cloud adoption, most government and enterprise data was stored and processed inside domestic datacentres, limiting both physical and remote access to national borders. While foreign software vendors occasionally required access for maintenance or support purposes, control over infrastructure largely remained local.

That model changed as governments and businesses increasingly adopted cloud services operated by US-headquartered providers. As organizations shifted toward subscription-based cloud platforms, concerns began emerging over whether sensitive national data could still be considered sovereign if it was processed through globally distributed infrastructure.

Much of the modern sovereignty debate intensified following the Schrems II ruling, a landmark European court decision that challenged how personal data could be transferred outside the EU to countries viewed as having weaker privacy protections. Since then, governments across Europe have pushed for tighter oversight of where data travels and who ultimately controls cloud infrastructure.

Although sovereignty concerns are often framed as a problem tied only to hyperscalers, industry analysts say the challenge is broader. Companies including IBM, Oracle Corporation, and Hewlett Packard Enterprise also face pressure to adapt their cloud and data processing models to meet stricter sovereignty expectations.

The debate has also been intensified by geopolitical tensions. European governments have become increasingly cautious about long-term dependence on foreign-owned digital infrastructure, particularly as cloud computing and artificial intelligence become more deeply connected to defence, healthcare, and public services. Analysts note that data infrastructure is now being viewed similarly to energy or telecommunications infrastructure: strategically important and politically sensitive.

Among the prominent providers, Microsoft was one of the earliest companies to experiment with sovereign cloud initiatives, including a dedicated German version of Microsoft 365. However, that model was eventually discontinued in 2022. Critics argue the company now faces greater difficulties adapting because many of its cloud services operate through highly interconnected global systems spread across more than 100 countries.

Questions around transparency have also created challenges. Reports previously indicated that Microsoft struggled to provide detailed information about certain data flows when requested by the Scottish Police Authority under data protection obligations. Investigative reporting from ProPublica also stated that US authorities encountered similar difficulties while attempting to evaluate Microsoft cloud services under FedRAMP certification requirements for government environments.

Additional scrutiny has emerged around Microsoft’s artificial intelligence infrastructure plans. The company had previously indicated that in-country AI processing capabilities for Copilot services in the UK would arrive by the end of 2025, though timelines have reportedly shifted into 2026. Some European customers are also expected to receive regional AI processing instead of fully sovereign national deployments.

Industry experts increasingly categorize sovereign cloud approaches into multiple levels. One common method involves creating “data boundaries,” where providers attempt to restrict where customer data is stored or processed while still operating under global cloud architectures. Critics argue this model may not fully satisfy stricter interpretations of sovereignty because some operational control can still remain overseas.

A second approach focuses on partnerships with local operators that manage sovereign services regionally. Amazon Web Services has promoted its European Sovereign Cloud initiative using this framework, arguing that the platform aligns with EU regulatory requirements. However, some analysts contend that EU-level governance is not the same as national sovereignty, particularly for non-EU countries such as the UK. Concerns have also been raised over whether US legislation, including the CLOUD Act, could still apply in certain circumstances.

Meanwhile, Google Cloud has attracted attention through its partnership with French defence and technology company Thales Group. Their joint venture, S3NS, is designed around France-specific sovereign infrastructure with air-gapped operations, meaning the systems can function independently without continuously communicating with external global networks for updates or validation checks.

Security specialists consider air-gapped architecture an important benchmark for sovereign cloud environments because it reduces reliance on foreign operational control. Google’s Distributed Cloud Air-Gapped platform is currently viewed by some analysts as one of the more mature sovereign cloud offerings available, despite still lacking some features present in its broader public cloud ecosystem.

The approach has already attracted major defence-related interest. France, NATO members, and the German military have all shown interest in sovereign infrastructure models, while the UK Ministry of Defence recently announced a £400 million contract spanning five years tied to these types of capabilities.

Competing alternatives are still evolving. AWS offers LocalStack-focused options largely aimed at development environments, while Microsoft’s disconnected Azure Local products have faced criticism from some analysts who argue the offerings remain less mature than competing sovereign platforms.

Despite rapid investment, experts say the sovereign cloud market is still in its early stages. Google’s France-based partnership model currently appears to offer one of the clearest examples of locally controlled hyperscale infrastructure, while AWS continues refining its European-focused model and Microsoft works through broader architectural and transparency challenges.

At the same time, the sovereignty movement may create new opportunities for regional cloud providers and domestic technology companies. However, analysts warn that building competitive sovereign infrastructure will require long-term investment, government support, and procurement strategies that allow interoperability between multiple vendors rather than locking public institutions into a single provider.

Many experts believe the future of sovereign technology infrastructure will likely depend on hybrid and partnership-driven models combining hyperscale cloud capabilities with locally managed operations. Supporters of the S3NS approach argue it offers an early blueprint for how global cloud providers and national operators could collaborate while still preserving local control over sensitive data and critical digital systems.
