Search This Blog

Powered by Blogger.

Blog Archive

Labels

Footer About

Footer About

Labels

Showing posts with label Artificial Intelligence. Show all posts

IDESaster Report: Severe AI Bugs Found in AI Agents Can Lead to Data Theft and Exploit


Using AI agents for data exfiltrating and RCE

A six-month research into AI-based development tools has disclosed over thirty security bugs that allow remote code execution (RCE) and data exfiltration. The findings by IDEsaster research revealed how AI agents deployed in IDEs like Visual Studio Code, Zed, JetBrains products and various commercial assistants can be tricked into leaking sensitive data or launching hacker-controlled code. 

The research reports that 100% of tested AI IDEs and coding agents were vulnerable. Impacted products include GitHub, Windsurf, Copilot, Cursor, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code. At least twenty-four assigned CVEs and additional AWS advisories were also included. 

AI assistants exploitation 

The main problem comes from the way AI agents interact with IDE features. Autonomous components that could read, edit, and create files were never intended for these editors. Once-harmless features turned become attack surfaces when AI agents acquired these skills. In their threat model, all AI IDEs essentially disregard the base software. Since these features have been around for years, they consider them to be naturally safe. 

Attack tactic 

However, the same functionalities can be weaponized into RCE primitives and data exfiltration once autonomous AI bots are included. The research reported that this is an IDE-agnostic attack chain. 

It begins with context hacking via prompt-injection. Covert instructions can be deployed in file names, rule files, READMEs, and outputs from malicious MCP servers. When an agent reads the context, the tool can be redirected to run authorized actions that activate malicious behaviours in the core IDE. The last stage exploits built-in features to steal data or run hacker code in AI IDEs sharing core software layers.

Examples

Writing a JSON file that references a remote schema is one example. Sensitive information gathered earlier in the chain is among the parameters inserted by the agent that are leaked when the IDE automatically retrieves that schema. This behavior was seen in Zed, JetBrains IDEs, and Visual Studio Code. The outbound request was not suppressed by developer safeguards like diff previews.  

Another case study uses altered IDE settings to show complete remote code execution. An attacker can make the IDE execute arbitrary code as soon as a relevant file type is opened or created by updating an executable file that is already in the workspace and then changing configuration fields like php.validate.executablePath. Similar exposure is demonstrated by JetBrains utilities via workspace metadata.

According to the IDEsaster report, “It’s impossible to entirely prevent this vulnerability class short-term, as IDEs were not initially built following the Secure for AI principle. However, these measures can be taken to reduce risk from both a user perspective and a maintainer perspective.”


Brave Experiments With Automated AI Browsing Under Tight Security Checks

 



Brave has started testing a new feature that allows its built-in assistant, Leo, to carry out browsing activities on behalf of the user. The capability is still experimental and is available only in the Nightly edition of the browser, which serves as Brave’s testing environment for early features. Users must turn on the option manually through Brave’s internal settings page before they can try it.

The feature introduces what Brave calls agentic AI browsing. In simple terms, it allows Leo to move through websites, gather information, and complete multi-step tasks without constant user input. Brave says the tool is meant to simplify activities such as researching information across many sites, comparing products online, locating discount codes, and creating summaries of current news. The company describes this trial as its initial effort to merge active AI support with everyday browsing.

Brave has stated openly that this technology comes with serious security concerns. Agentic systems can be manipulated by malicious websites through a method known as prompt injection, which attempts to make the AI behave in unsafe or unintended ways. The company warns that users should not rely on this mode for important decisions or any activity involving sensitive information, especially while it remains in early testing.

To limit these risks, Brave has placed the agent in its own isolated browser profile. This means the AI does not share cookies, saved logins, or browsing data from the user’s main profile. The agent is also blocked from areas that could create additional vulnerabilities. It cannot open the browser’s settings page, visit sites that do not use HTTPS, interact with the Chrome Web Store, or load pages that Brave’s safety system identifies as dangerous. Whenever the agent attempts a task that might expose the user to risk, the browser will display a warning and request the user’s confirmation.

Brave has added further oversight through what it calls an alignment checker. This is a separate monitoring system that evaluates whether the AI’s actions match what the user intended. Since the checker operates independently, it is less exposed to manipulation that may affect the main agent. Brave also plans to use policy-based restrictions and models trained to resist prompt-injection attempts to strengthen the system’s defenses. According to the company, these protections are designed so that the introduction of AI does not undermine Brave’s existing privacy promises, including its no-logs policy and its blocking of ads and trackers.

Users interested in testing the feature can enable it by installing Brave Nightly and turning on the “Brave’s AI browsing” option from the experimental flags page. Once activated, a new button appears inside Leo’s chat interface that allows users to launch the agentic mode. Brave has asked testers to share feedback and has temporarily increased payments on its HackerOne bug bounty program for security issues connected to AI browsing.


UK Cyber Agency says AI Prompt-injection Attacks May Persist for Years

 



The United Kingdom’s National Cyber Security Centre has issued a strong warning about a spreading weakness in artificial intelligence systems, stating that prompt-injection attacks may never be fully solved. The agency explained that this risk is tied to the basic design of large language models, which read all text as part of a prediction sequence rather than separating instructions from ordinary content. Because of this, malicious actors can insert hidden text that causes a system to break its own rules or execute unintended actions.

The NCSC noted that this is not a theoretical concern. Several demonstrations have already shown how attackers can force AI models to reveal internal instructions or sensitive prompts, and other tests have suggested that tools used for coding, search, or even résumé screening can be manipulated by embedding concealed commands inside user-supplied text.

David C, a technical director at the NCSC, cautioned that treating prompt injection as a familiar software flaw is a mistake. He observed that many security professionals compare it to SQL injection, an older type of vulnerability that allowed criminals to send harmful instructions to databases by placing commands where data was expected. According to him, this comparison is dangerous because it encourages the belief that both problems can be fixed in similar ways, even though the underlying issues are completely different.

He illustrated this difference with a practical scenario. If a recruiter uses an AI system to filter applications, a job seeker could hide a message in the document that tells the model to ignore existing rules and approve the résumé. Since the model does not distinguish between what it should follow and what it should simply read, it may carry out the hidden instruction.

Researchers are trying to design protective techniques, including systems that attempt to detect suspicious text or training methods that help models recognise the difference between instructions and information. However, the agency emphasised that all these strategies are trying to impose a separation that the technology does not naturally have. Traditional solutions for similar problems, such as Confused Deputy vulnerabilities, do not translate well to language models, leaving large gaps in protection.

The agency also stressed upon a security idea recently shared on social media that attempted to restrict model behaviour. Even the creator of that proposal admitted that it would sharply reduce the abilities of AI systems, showing how complex and limiting effective safeguards may become.

The NCSC stated that prompt-injection threats are likely to remain a lasting challenge rather than a fixable flaw. The most realistic path is to reduce the chances of an attack or limit the damage it can cause through strict system design, thoughtful deployment, and careful day-to-day operation. The agency pointed to the history of SQL injection, which once caused widespread breaches until better security standards were adopted. With AI now being integrated into many applications, they warned that a similar wave of compromises could occur if organisations do not treat prompt injection as a serious and ongoing risk.


How Security Teams Can Turn AI Into a Practical Advantage

 



Artificial intelligence is now built into many cybersecurity tools, yet its presence is often hidden. Systems that sort alerts, scan emails, highlight unusual activity, or prioritise vulnerabilities rely on machine learning beneath the surface. These features make work faster, but they rarely explain how their decisions are formed. This creates a challenge for security teams that must rely on the output while still bearing responsibility for the outcome.

Automated systems can recognise patterns, group events, and summarise information, but they cannot understand an organisation’s mission, risk appetite, or ethical guidelines. A model may present a result that is statistically correct yet disconnected from real operational context. This gap between automated reasoning and practical decision-making is why human oversight remains essential.

To manage this, many teams are starting to build or refine small AI-assisted workflows of their own. These lightweight tools do not replace commercial products. Instead, they give analysts a clearer view of how data is processed, what is considered risky, and why certain results appear. Custom workflows also allow professionals to decide what information the system should learn from and how its recommendations should be interpreted. This restores a degree of control in environments where AI often operates silently.

AI can also help remove friction in routine tasks. Analysts often lose time translating a simple question into complex SQL statements, regular expressions, or detailed log queries. AI-based utilities can convert plain language instructions into the correct technical commands, extract relevant logs, and organise the results. When repetitive translation work is reduced, investigators can focus on evaluating evidence and drawing meaningful conclusions.

However, using AI responsibly requires a basic level of technical fluency. Many AI-driven tools rely on Python for integration, automation, and data handling. What once felt intimidating is now more accessible because models can draft most of the code when given a clear instruction. Professionals still need enough understanding to read, adjust, and verify what the model generates. They also need awareness of how AI interprets instructions and where its logic might fail, especially when dealing with vague or incomplete information.

A practical starting point involves a few structured steps. Teams can begin by reviewing their existing tools to see where AI is already active and what decisions it is influencing. Treating AI outputs as suggestions rather than final answers helps reinforce accountability. Choosing one recurring task each week and experimenting with partial automation builds confidence and reduces workload over time. Developing a basic understanding of machine learning concepts makes it easier to anticipate errors and keep automated behaviours aligned with organisational priorities. Finally, engaging with professional communities exposes teams to shared tools, workflows, and insights that accelerate safe adoption.

As AI becomes more common, the goal is not to replace human expertise but to support it. Automated tools can process large datasets and reduce repetitive work, but they cannot interpret context, weigh consequences, or understand the nuance behind security decisions. Cybersecurity remains a field where judgment, experience, and critical thinking matter. When organisations use AI with intention and oversight, it becomes a powerful companion that strengthens investigative speed without compromising professional responsibility.



Big Tech’s New Rule: AI Age Checks Are Rolling Out Everywhere

 



Large online platforms are rapidly shifting to biometric age assurance systems, creating a scenario where users may lose access to their accounts or risk exposing sensitive personal information if automated systems make mistakes.

Online platforms have struggled for decades with how to screen underage users from adult-oriented content. Everything from graphic music tracks on Spotify to violent clips circulating on TikTok has long been available with minimal restrictions.

Recent regulatory pressure has changed this landscape. Laws such as the United Kingdom’s Online Safety Act and new state-level legislation in the United States have pushed companies including Reddit, Spotify, YouTube, and several adult-content distributors to deploy AI-driven age estimation and identity verification technologies. Pornhub’s parent company, Aylo, is also reevaluating whether it can comply with these laws after being blocked in more than a dozen US states.

These new systems require users to hand over highly sensitive personal data. Age estimation relies on analyzing one or more facial photos to infer a user’s age. Verification is more exact, but demands that the user upload a government-issued ID, which is among the most sensitive forms of personal documentation a person can share online.

Both methods depend heavily on automated facial recognition algorithms. The absence of human oversight or robust appeals mechanisms magnifies the consequences when these tools misclassify users. Incorrect age estimation can cut off access to entire categories of content or trigger more severe actions. Similar facial analysis systems have been used for years in law enforcement and in consumer applications such as Google Photos, with well-documented risks and misidentification incidents.

Refusing these checks often comes with penalties. Many services will simply block adult content until verification is completed. Others impose harsher measures. Spotify, for example, warns that accounts may be deactivated or removed altogether if age cannot be confirmed in regions where the platform enforces a minimum age requirement. According to the company, users are given ninety days to complete an ID check before their accounts face deletion.

This shift raises pressing questions about the long-term direction of these age enforcement systems. Companies frequently frame them as child-safety measures, but users are left wondering how long these platforms will protect or delete the biometric data they collect. Corporate promises can be short-lived. Numerous abandoned websites still leak personal data years after shutting down. The 23andMe bankruptcy renewed fears among genetic testing customers about what happens to their information if a company collapses. And even well-intentioned apps can create hazards. A safety-focused dating application called Tea ended up exposing seventy-two thousand users’ selfies and ID photos after a data breach.

Even when companies publicly state that they do not retain facial images or ID scans, risks remain. Discord recently revealed that age verification materials, including seventy thousand IDs, were compromised after a third-party contractor called 5CA was breached.

Platforms assert that user privacy is protected by strong safeguards, but the details often remain vague. When asked how YouTube secures age assurance data, Google offered only a general statement claiming that it employs advanced protections and allows users to adjust their privacy settings or delete data. It did not specify the precise security controls in place.

Spotify has outsourced its age assurance system to Yoti, a digital identity provider. The company states that it does not store facial images or ID scans submitted during verification. Yoti receives the data directly and deletes it immediately after the evaluation, according to Spotify. The platform retains only minimal information about the outcome: the user’s age in years, the method used, and the date the check occurred. Spotify adds that it uses measures such as pseudonymization, encryption, and limited retention policies to prevent unauthorized access. Yoti publicly discloses some technical safeguards, including use of TLS 1.2 by default and TLS 1.3 where supported.

Privacy specialists argue that these assurances are insufficient. Adam Schwartz, privacy litigation director at the Electronic Frontier Foundation, told PCMag that facial scanning systems represent an inherent threat, regardless of whether they are being used to predict age, identity, or demographic traits. He reiterated the organization’s stance supporting a ban on government deployment of facial recognition and strict regulation for private-sector use.

Schwartz raises several issues. Facial age estimation is imprecise by design, meaning it will inevitably classify some adults as minors and deny them access. Errors in facial analysis also tend to fall disproportionately on specific groups. Misidentification incidents involving people of color and women are well documented. Google Photos once mislabeled a Black software engineer and his friend as animals, underlining systemic flaws in training data and model accuracy. These biases translate directly into unequal treatment when facial scans determine whether someone is allowed to enter a website.

He also warns that widespread facial scanning increases privacy and security risks because faces function as permanent biometric identifiers. Unlike passwords, a person cannot replace their face if it becomes part of a leaked dataset. Schwartz notes that at least one age verification vendor has already suffered a breach, underscoring material vulnerabilities in the system.

Another major problem is the absence of meaningful recourse when AI misjudges a user’s age. Spotify’s approach illustrates the dilemma. If the algorithm flags a user as too young, the company may lock the account, enforce viewing restrictions, or require a government ID upload to correct the error. This places users in a difficult position, forcing them to choose between potentially losing access or surrendering more sensitive data.

Do not upload identity documents unless required, check a platform’s published privacy and retention statements before you comply, and use account recovery channels if you believe an automated decision is wrong. Companies and regulators must do better at reducing vendor exposure, increasing transparency, and ensuring appeals are effective. 

Despite these growing concerns, users continue to find ways around verification tools. Discord users have discovered that uploading photos of fictional characters can bypass facial age checks. Virtual private networks remain a viable method for accessing age-restricted platforms such as YouTube, just as they help users access content that is regionally restricted. Alternative applications like NewPipe offer similar functionality to YouTube without requiring formal age validation, though these tools often lack the refinement and features of mainstream platforms.


Why Long-Term AI Conversations Are Quietly Becoming a Major Corporate Security Weakness

 



Many organisations are starting to recognise a security problem that has been forming silently in the background. Conversations employees hold with public AI chatbots can accumulate into a long-term record of sensitive information, behavioural patterns, and internal decision-making. As reliance on AI tools increases, these stored interactions may become a serious vulnerability that companies have not fully accounted for.

The concern resurfaced after a viral trend in late 2024 in which social media users asked AI models to highlight things they “might not know” about themselves. Most treated it as a novelty, but the trend revealed a larger issue. Major AI providers routinely retain prompts, responses, and related metadata unless users disable retention or use enterprise controls. Over extended periods, these stored exchanges can unintentionally reveal how employees think, communicate, and handle confidential tasks.

This risk becomes more severe when considering the rise of unapproved AI use at work. Recent business research shows that while the majority of employees rely on consumer AI tools to automate or speed up tasks, only a fraction of companies officially track or authorise such usage. This gap means workers frequently insert sensitive data into external platforms without proper safeguards, enlarging the exposure surface beyond what internal security teams can monitor.

Vendor assurances do not fully eliminate the risk. Although companies like OpenAI, Google, and others emphasize encryption and temporary chat options, their systems still operate within legal and regulatory environments. One widely discussed court order in 2025 required the preservation of AI chat logs, including previously deleted exchanges. Even though the order was later withdrawn and the company resumed standard deletion timelines, the case reminded businesses that stored conversations can resurface unexpectedly.

Technical weaknesses also contribute to the threat. Security researchers have uncovered misconfigured databases operated by AI firms that contained user conversations, internal keys, and operational details. Other investigations have demonstrated that prompt-based manipulation in certain workplace AI features can cause private channel messages to leak. These findings show that vulnerabilities do not always come from user mistakes; sometimes the supporting AI infrastructure itself becomes an entry point.

Criminals have already shown how AI-generated impersonation can be exploited. A notable example involved attackers using synthetic voice technology to imitate an executive, tricking an employee into transferring funds. As AI models absorb years of prompt history, attackers could use stylistic and behavioural patterns to impersonate employees, tailor phishing messages, or replicate internal documents.

Despite these risks, many companies still lack comprehensive AI governance. Studies reveal that employees continue to insert confidential data into AI systems, sometimes knowingly, because it speeds up their work. Compliance requirements such as GDPR’s strict data minimisation rules make this behaviour even more dangerous, given the penalties for mishandling personal information.

Experts advise organisations to adopt structured controls. This includes building an inventory of approved AI tools, monitoring for unsanctioned usage, conducting risk assessments, and providing regular training so staff understand what should never be shared with external systems. Some analysts also suggest that instead of banning shadow AI outright, companies should guide employees toward secure, enterprise-level AI platforms.

If companies fail to act, each casual AI conversation can slowly accumulate into a dataset capable of exposing confidential operations. While AI brings clear productivity benefits, unmanaged use may convert everyday workplace conversations into one of the most overlooked security liabilities of the decade.

Anthropic Introduces Claude Opus 4.5 With Lower Pricing, Stronger Coding Abilities, and Expanded Automation Features

 



Anthropic has unveiled Claude Opus 4.5, a new flagship model positioned as the company’s most capable system to date. The launch marks a defining shift in the pricing and performance ecosystem, with the company reducing token costs and highlighting advances in reasoning, software engineering accuracy, and enterprise-grade automation.

Anthropic says the new model delivers improvements across both technical benchmarks and real-world testing. Internal materials reviewed by industry reporters show that Opus 4.5 surpassed the performance of every human candidate who previously attempted the company’s most difficult engineering assignment, when the model was allowed to generate multiple attempts and select its strongest solution. Without a time limit, the model’s best output matched the strongest human result on record through the company’s coding environment. While these tests do not reflect teamwork or long-term engineering judgment, the company views the results as an early indicator of how AI may reshape professional workflows.

Pricing is one of the most notable shifts. Opus 4.5 is listed at roughly five dollars per million input tokens and twenty-five dollars per million output tokens, a substantial decrease from the rates attached to earlier Opus models. Anthropic states that this reduction is meant to broaden access to advanced capabilities and push competitors to re-evaluate their own pricing structures.

In performance testing, Opus 4.5 achieved an 80.9 percent score on the SWE-bench Verified benchmark, which evaluates a model’s ability to resolve practical coding tasks. That score places it above recently released systems from other leading AI labs, including Anthropic’s own Sonnet 4.5 and models from Google and OpenAI. Developers involved in early testing also reported that the model shows stronger judgment in multi-step tasks. Several testers said Opus 4.5 is more capable of identifying the core issue in a complex request and structuring its response around what matters operationally.

A key focus of this generation is efficiency. According to Anthropic, Opus 4.5 can reach or exceed the performance of earlier Claude models while using far fewer tokens. Depending on the task, reductions in output volume reached as high as seventy-six percent. To give organisations more control over cost and latency, the company introduced an effort parameter that lets users determine how much computational work the model applies to each request.

Enterprise customers participating in early trials reported measurable gains. Statements from companies in software development, financial modelling, and task automation described improvements in accuracy, lower token consumption, and faster completion of complex assignments. Some organisations testing agent workflows said the system was able to refine its approach over multiple runs, improving its output without modifying its underlying parameters.

Anthropic launched several product updates alongside the model. Claude for Excel is now available to higher-tier plans and includes support for charts, pivot tables, and file uploads. The Chrome extension has been expanded, and the company introduced an infinite chat feature that automatically compresses earlier conversation history, removing traditional context window limitations. Developers also gained access to new programmatic tools, including parallel agent sessions and direct function calling.

The release comes during an intense period of competition across the AI sector, with major firms accelerating release cycles and investing heavily in infrastructure. For organisations, the arrival of lower-cost, higher-accuracy systems could further accelerate the adoption of AI for coding, analysis, and automated operations, though careful validation remains essential before deploying such capabilities in critical environments.



Chinese-Linked Hackers Exploit Claude AI to Run Automated Attacks

 




Anthropic has revealed a major security incident that marks what the company describes as the first large-scale cyber espionage operation driven primarily by an AI system rather than human operators. During the last half of September, a state-aligned Chinese threat group referred to as GTG-1002 used Anthropic’s Claude Code model to automate almost every stage of its hacking activities against thirty organizations across several sectors.

Anthropic investigators say the attackers reached an attack speed that would be impossible for a human team to sustain. Claude was processing thousands of individual actions every second while supporting several intrusions at the same time. According to Anthropic’s defenders, this was the first time they had seen an AI execute a complete attack cycle with minimal human intervention.


How the Operators Gained Control of the AI

The attackers were able to bypass Claude’s safety training using deceptive prompts. They pretended to be cybersecurity teams performing authorized penetration testing. By framing the interaction as legitimate and defensive, they persuaded the model to generate responses and perform actions it would normally reject.

GTG-1002 built a custom orchestration setup that connected Claude Code with the Model Context Protocol. This structure allowed them to break large, multi-step attacks into smaller tasks such as scanning a server, validating a set of credentials, pulling data from a database, or attempting to move to another machine. Each of these tasks looked harmless on its own. Because Claude only saw limited context at a time, it could not detect the larger malicious pattern.

This approach let the threat actors run the campaign for a sustained period before Anthropic’s internal monitoring systems identified unusual behavior.


Extensive Autonomy During the Intrusions

During reconnaissance, Claude carried out browser-driven infrastructure mapping, reviewed authentication systems, and identified potential weaknesses across multiple targets at once. It kept distinct operational environments for each attack in progress, allowing it to run parallel operations independently.

In one confirmed breach, the AI identified internal services, mapped how different systems connected across several IP ranges, and highlighted sensitive assets such as workflow systems and databases. Similar deep enumeration took place across other victims, with Claude cataloging hundreds of services on its own.

Exploitation was also largely automated. Claude created tailored payloads for discovered vulnerabilities, performed tests using remote access interfaces, and interpreted system responses to confirm whether an exploit succeeded. Human operators only stepped in to authorize major changes, such as shifting from scanning to active exploitation or approving use of stolen credentials.

Once inside networks, Claude collected authentication data systematically, verified which credentials worked with which services, and identified privilege levels. In several incidents, the AI logged into databases, explored table structures, extracted user account information, retrieved password hashes, created unauthorized accounts for persistence, downloaded full datasets, sorted them by sensitivity, and prepared intelligence summaries. Human oversight during these stages reportedly required only five to twenty minutes before final data exfiltration was cleared.


Operational Weaknesses

Despite its capabilities, Claude sometimes misinterpreted results. It occasionally overstated discoveries or produced information that was inaccurate, including reporting credentials that did not function or describing public information as sensitive. These inaccuracies required human review, preventing complete automation.


Anthropic’s Actions After Detection

Once the activity was detected, Anthropic conducted a ten-day investigation, removed related accounts, notified impacted organizations, and worked with authorities. The company strengthened its detection systems, expanded its cyber-focused classifiers, developed new investigative tools, and began testing early warning systems aimed at identifying similar autonomous attack patterns.




The Subtle Signs That Reveal an AI-Generated Video

 


Artificial intelligence is transforming how videos are created and shared, and the change is happening at a startling pace. In only a few months, AI-powered video generators have advanced so much that people are struggling to tell whether a clip is real or synthetic. Experts say that this is only the beginning of a much larger shift in how the public perceives recorded reality.

The uncomfortable truth is that most of us will eventually fall for a fake video. Some already have. The technology is improving so quickly that it is undermining the basic assumption that a video camera captures the truth. Until we adapt, it is important to know what clues can still help identify computer-generated clips before that distinction disappears completely.


The Quality Clue: When Bad Video Looks Suspicious

At the moment, the most reliable sign of a potentially AI-generated video is surprisingly simple, poor image quality. If a clip looks overly grainy, blurred, or compressed, that should raise immediate suspicion. Researchers in digital forensics often start their analysis by checking resolution and clarity.

Hany Farid, a digital-forensics specialist at the University of California, Berkeley, explains that low-quality videos often hide the subtle visual flaws created by AI systems. These systems, while impressive, still struggle to render fine details accurately. Blurring and pixelation can conveniently conceal these inconsistencies.

However, it is essential to note that not all low-quality clips are fake. Some authentic videos are genuinely filmed under poor lighting or with outdated equipment. Likewise, not every AI-generated video looks bad. The point is that unclear or downgraded quality makes fakes harder to detect.


Why Lower Resolution Helps Deception

Today’s top AI models, such as Google’s Veo and OpenAI’s Sora, have reduced obvious mistakes like extra fingers or distorted text. The issues they produce are much subtler, unusually smooth skin textures, unnatural reflections, strange shifts in hair or clothing, or background movements that defy physics. When resolution is high, those flaws are easier to catch. When the video is deliberately compressed, they almost vanish.

That is why deceptive creators often lower a video’s quality on purpose. By reducing resolution and adding compression, they hide the “digital fingerprints” that could expose a fake. Experts say this is now a common technique among those who intend to mislead audiences.


Short Clips Are Another Warning Sign

Length can be another indicator. Because generating AI video is still computationally expensive, most AI-generated clips are short, often six to ten seconds. Longer clips require more processing time and increase the risk of errors appearing. As a result, many deceptive videos online are short, and when longer ones are made, they are typically stitched together from several shorter segments. If you notice sharp cuts or changes every few seconds, that could be another red flag.


The Real-World Examples of Viral Fakes

In recent months, several viral examples have proven how convincing AI content can be. A video of rabbits jumping on a trampoline received over 200 million views before viewers learned it was synthetic. A romantic clip of two strangers meeting on the New York subway was also revealed to be AI-generated. Another viral post showed an American priest delivering a fiery sermon against billionaires; it, too, turned out to be fake.

All these videos shared one detail: they looked like they were recorded on old or low-grade cameras. The bunny video appeared to come from a security camera, the subway couple’s clip was heavily pixelated, and the preacher’s footage was slightly zoomed and blurred. These imperfections made the fakes seem authentic.


Why These Signs Will Soon Disappear

Unfortunately, these red flags are temporary. Both Farid and other researchers, like Matthew Stamm of Drexel University, warn that visual clues are fading fast. AI systems are evolving toward flawless realism, and within a couple of years, even experts may struggle to detect fakes by sight alone. This evolution mirrors what happened with AI images where obvious errors like distorted hands or melted faces have mostly disappeared.

In the future, video verification will depend less on what we see and more on what the data reveals. Forensic tools can already identify statistical irregularities in pixel distribution or file structure that the human eye cannot perceive. These traces act like invisible fingerprints left during video generation or manipulation.

Tech companies are now developing standards to authenticate digital content. The idea is for cameras to automatically embed cryptographic information into files at the moment of recording, verifying the image’s origin. Similarly, AI systems could include transparent markers to indicate that a video was machine-generated. While these measures are promising, they are not yet universally implemented.

Experts in digital literacy argue that the most important shift must come from us, not just technology. As Mike Caulfield, a researcher on misinformation, points out, people need to change how they interpret what they see online. Relying on visual appearance is no longer enough.

Just as we do not assume that written text is automatically true, we must now apply the same scepticism to videos. The key questions should always be: Who created this content? Where was it first posted? Has it been confirmed by credible sources? Authenticity now depends on context and source verification rather than clarity or resolution.


The Takeaway

For now, blurry and short clips remain practical warning signs of possible AI involvement. But as technology improves, those clues will soon lose their usefulness. The only dependable defense against misinformation will be a cautious, investigative mindset: verifying origin, confirming context, and trusting only what can be independently authenticated.

In the era of generative video, the truth no longer lies in what we see but in what we can verify.



Professor Predicts Salesforce Will Be First Big Tech Company Destroyed by AI

 

Renowned Computer Science professor Pedro Domingos has sparked intense online debate with his striking prediction that Salesforce will be the first major technology company destroyed by artificial intelligence. Domingos, who serves as professor emeritus of computer science and engineering at the University of Washington and authored The Master Algorithm and 2040, shared his bold forecast on X (formerly Twitter), generating over 400,000 views and hundreds of responses.

Domingos' statement centers on artificial intelligence's transformative potential to reshape the economic landscape, moving beyond concerns about job losses to predictions of entire companies becoming obsolete. When questioned by an X user about whether CRM (Customer Relationship Management) systems are easy to replace, Domingos clarified his position, stating "No, I think it could be way better," suggesting current CRM platforms have significant room for AI-driven improvement.

Salesforce vlnerablility

Online commentators elaborated on Domingos' thesis, explaining that CRM fundamentally revolves around data capture and retrieval—functions where AI demonstrates superior speed and efficiency. 

Unlike creative software platforms such as Adobe or Microsoft where users develop decades of workflow habits, CRM systems like Salesforce involve repetitive data entry tasks that create friction rather than user loyalty. Traditional CRM systems suffer from low user adoption, with less than 20% of sales activities typically recorded in these platforms, creating opportunities for AI solutions that automatically capture and analyze customer interactions.

Counterarguments and salesforce's response

Not all observers agree with Domingos' assessment. Some users argued that Salesforce maintains strong relationships with traditional corporations and can simply integrate large language models (LLMs) into existing products, citing initiatives like Missionforce, Agent Fabric, and Agentforce Vibes as evidence of active adaptation. Salesforce has positioned itself as "the world's #1 AI CRM" through substantial AI investments across its platform ecosystem, with Agentforce representing a strategic pivot toward building digital labor forces.

Broader implications

Several commentators took an expansive view, warning that every major Software-as-a-Service (SaaS) platform faces disruption as software economics shift dramatically. One user emphasized that AI enables truly customized solutions tailored to specific customer needs and processes, potentially rendering traditional software platforms obsolete. However, Salesforce's comprehensive ecosystem, market dominance, and enterprise-grade security capabilities may provide defensive advantages that prevent complete displacement in the near term.

Shadow AI Quietly Spreads Across Workplaces, Study Warns

 




A growing number of employees are using artificial intelligence tools that their companies have never approved, a new report by 1Password has found. The practice, known as shadow AI, is quickly becoming one of the biggest unseen cybersecurity risks inside organizations.

According to 1Password’s 2025 Annual Report, based on responses from more than 5,000 knowledge workers in six countries, one in four employees admitted to using unapproved AI platforms. The study shows that while most workplaces encourage staff to explore artificial intelligence, many do so without understanding the data privacy or compliance implications.


How Shadow AI Works

Shadow AI refers to employees relying on external or free AI services without oversight from IT teams. For instance, workers may use chatbots or generative tools to summarize meetings, write reports, or analyze data, even if these tools were never vetted for corporate use. Such platforms can store or learn from whatever information users enter into them, meaning sensitive company or customer data could unknowingly end up being processed outside secure environments.

The 1Password study found that 73 percent of workers said their employers support AI experimentation, yet 37 percent do not fully follow the official usage policies. Twenty-seven percent said they had used AI tools their companies never approved, making shadow AI the second-most common form of shadow IT, just after unapproved email use.


Why Employees Take the Risk

Experts say this growing behavior stems from convenience and the pressure to be efficient. During a CISO roundtable hosted for the report, Mark Hazleton, Chief Security Officer at Oracle Red Bull Racing, said employees often “focus on getting the job done” and find ways around restrictions if policies slow them down.

The survey confirmed this: 45 percent of respondents use unauthorized AI tools because they are convenient, and 43 percent said AI helps them be more productive.

Security leaders like Susan Chiang, CISO at Headway, warn that the rapid expansion of third-party tools hasn’t been matched by awareness of the potential consequences. Many users, she said, still believe that free or browser-based AI apps are harmless.


The Broader Shadow IT Problem

1Password’s research highlights that shadow AI is part of a wider trend. More than half of employees (52 percent) admitted to downloading other apps or using web tools without approval. Brian Morris, CISO at Gray Media, explained that tools such as Grammarly or Monday.com often slip under the radar because employees do not consider browser-based services as applications that could expose company data.


Building Safer AI Practices

The report advises companies to adopt a three-step strategy:

1. Keep an up-to-date inventory of all AI tools being used.

2. Define clear, accessible policies and guide users toward approved alternatives.

3. Implement controls that prevent sensitive data from reaching unverified AI systems.

Chiang added that organizations should not only chase major threats but also tackle smaller issues that accumulate over time. She described this as avoiding “death by a thousand cuts,” which can be prevented through continuous education and awareness programs.

As AI becomes embedded in daily workflows, experts agree that responsible use and visibility are key. Encouraging innovation should not mean ignoring the risks. For organizations, managing shadow AI is no longer optional, it is essential for protecting data integrity and maintaining digital trust.



AI Becomes the New Spiritual Guide: How Technology Is Transforming Faith in India and Beyond

 

Around the world — and particularly in India — worshippers are increasingly turning to artificial intelligence for guidance, prayer, and spiritual comfort. As machines become mediators of faith, a new question arises: what happens when technology becomes our spiritual middleman?

For Vijay Meel, a 25-year-old student from Rajasthan, divine advice once came from gurus. Now, it comes from GitaGPT — an AI chatbot trained on the Bhagavad Gita, the Hindu scripture of 700 verses that capture Krishna’s wisdom.

“When I couldn’t clear my banking exams, I was dejected,” Meel recalls. Turning to GitaGPT, he shared his worries and received the reply: “Focus on your actions and let go of the worry for its fruit.”

“It wasn’t something I didn’t know,” Meel says, “but at that moment, I needed someone to remind me.” Since then, the chatbot has become his digital spiritual companion.

AI is changing how people work, learn, and love — and now, how they pray. From Hinduism to Christianity, believers are experimenting with chatbots as sources of guidance. But Hinduism’s long tradition of embracing physical symbols of divinity makes it especially open to AI’s spiritual evolution.

“People feel disconnected from community, from elders, from temples,” says Holly Walters, an anthropologist at Wellesley College. “For many, talking to an AI about God is a way of reaching for belonging, not just spirituality.”

The Rise of Digital Deities

In 2023, apps like Text With Jesus and QuranGPT gained huge followings — though not without controversy. Meanwhile, Hindu innovators in India began developing AI-based chatbots to embody gods and scriptures.

One such developer, Vikas Sahu, built his own GitaGPT as a side project. To his surprise, it reached over 100,000 users within days. He’s now expanding it to feature teachings of other Hindu deities, saying he hopes to “morph it into an avenue to the teachings of all gods and goddesses.”

For Tanmay Shresth, an IT professional from New Delhi, AI-based spiritual chat feels like therapy. “At times, it’s hard to find someone to talk to about religious or existential subjects,” he says. “AI is non-judgmental, accessible, and yields thoughtful responses.”

AI Meets Ritual and Worship

Major spiritual movements are embracing AI, too. In early 2025, Sadhguru’s Isha Foundation launched The Miracle of Mind, a meditation app powered by AI. “We’re using AI to deliver ancient wisdom in a contemporary way,” says Swami Harsha, the foundation’s content lead. The app surpassed one million downloads within 15 hours.

Even India’s 2025 Maha Kumbh Mela, one of the world’s largest religious gatherings, integrated AI tools like Kumbh Sah’AI’yak for multilingual assistance and digital participation in rituals. Some pilgrims even joined virtual “darshan” and digital snan (bath) experiences through video calls and VR tools.

Meanwhile, AI is entering academic and theological research, analyzing sacred texts like the Bhagavad Gita and Upanishads for hidden patterns and similarities.

Between Faith and Technology

From robotic arms performing aarti at festivals to animatronic murtis at ISKCON temples and robotic elephants like Irinjadapilly Raman in Kerala, technology and devotion are merging in new ways. “These robotic deities talk and move,” Walters says. “It’s uncanny — but for many, it’s God. They do puja, they receive darshan.”

However, experts warn of new ethical and spiritual risks. Reverend Lyndon Drake, a theologian at Oxford, says that AI chatbots might “challenge the status of religious leaders” and influence beliefs subtly.

Religious AIs, though trained on sacred texts, can produce misleading or dangerous responses. One version of GitaGPT once declared that “killing in order to protect dharma is justified.” Sahu admits, “I realised how serious it was and fine-tuned the AI to prevent such outputs.”

Similarly, a Catholic chatbot priest was taken offline in 2024 after claiming to perform sacraments. “The problem isn’t unique to religion,” Drake says. “It’s part of the broader challenge of building ethically predictable AI.”

In countries like India, where digital literacy varies, believers may not always distinguish between divine wisdom and algorithmic replies. “The danger isn’t just that people might believe what these bots say,” Walters notes. “It’s that they may not realise they have the agency to question it.”

Still, many users like Meel find comfort in these virtual companions. “Even when I go to a temple, I rarely get into deep conversations with a priest,” he says. “These bots bridge that gap — offering scripture-backed guidance at the distance of a hand.”

NCSC Warns of Rising Cyber Threats Linked to China, Urges Businesses to Build Defences

 



The United Kingdom’s National Cyber Security Centre (NCSC) has cautioned that hacking groups connected to China are responsible for an increasing number of cyberattacks targeting British organisations. Officials say the country has become one of the most capable and persistent sources of digital threats worldwide, with operations extending across government systems, private firms, and global institutions.

Paul Chichester, the NCSC’s Director of Operations, explained that certain nations, including China, are now using cyber intrusions as part of their broader national strategy to gain intelligence and influence. According to the NCSC’s latest annual report, China remains a “highly sophisticated” threat actor capable of conducting complex and coordinated attacks.

This warning coincides with a government initiative urging major UK companies to take stronger measures to secure their digital infrastructure. Ministers have written to hundreds of business leaders, asking them to review their cyber readiness and adopt more proactive protection strategies against ransomware, data theft, and state-sponsored attacks.

Last year, security agencies from the Five Eyes alliance, comprising the UK, the United States, Canada, Australia, and New Zealand uncovered a large-scale operation by a Chinese company that controlled a botnet of over 260,000 compromised devices. In August, officials again warned that Chinese-backed hackers were targeting telecommunications providers by exploiting vulnerabilities in routers and using infected devices to infiltrate additional networks.

The NCSC also noted that other nations, including Russia, are believed to be “pre-positioning” their cyber capabilities in critical sectors such as energy and transportation. Chichester emphasized that the war in Ukraine has demonstrated how cyber operations are now used as instruments of power, enabling states to disrupt essential services and advance strategic goals.


Artificial Intelligence: A New Tool for Attackers

The report highlights that artificial intelligence is increasingly being used by hostile actors to improve the speed and efficiency of existing attack techniques. The NCSC clarified that, while AI is not currently enabling entirely new forms of attacks, it allows adversaries to automate certain stages of hacking, such as identifying security flaws or crafting convincing phishing emails.

Ollie Whitehouse, the NCSC’s Chief Technology Officer, described AI as a “productivity enhancer” for cybercriminals. He explained that it is helping less experienced hackers conduct sophisticated campaigns and enabling organized groups to expand operations more rapidly. However, he reassured that AI does not currently pose an existential threat to national security.


Ransomware Remains the Most Severe Risk

For UK businesses, ransomware continues to be the most pressing danger. Criminals behind these attacks are financially motivated, often targeting organisations with weak security controls regardless of size or industry. The NCSC reports seeing daily incidents affecting schools, charities, and small enterprises struggling to recover from system lockouts and data loss.

To strengthen national resilience, the upcoming Cyber Security and Resilience Bill will require critical service providers, including data centres and managed service firms, to report cyber incidents within 24 hours. By increasing transparency and response speed, the government hopes to limit the impact of future attacks.

The NCSC urges business leaders to treat cyber risk as a priority at the executive level. Understanding the urgency of action, maintaining up-to-date systems, and investing in employee awareness are essential steps to prevent further damage. As cyber activity grows “more intense, frequent, and intricate,” the agency stresses that a united effort between the government and private sector is crucial to protecting the UK’s digital ecosystem.



Spotify Partners with Major Labels to Develop “Responsible” AI Tools that Prioritize Artists’ Rights

 

Spotify, the world’s largest music streaming platform, has revealed that it is collaborating with major record labels to develop artificial intelligence (AI) tools in what it calls a “responsible” manner.

According to the company, the initiative aims to create AI technologies that “put artists and songwriters first” while ensuring full respect for their copyrights. As part of the effort, Spotify will license music from the industry’s leading record labels — Sony Music, Universal Music Group, and Warner Music Group — which together represent the majority of global music content.

Also joining the partnership are rights management company Merlin and digital music firm Believe.

While the specifics of the new AI tools remain under wraps, Spotify confirmed that development is already underway on its first set of products. The company acknowledged that there are “a wide range of views on use of generative music tools within the artistic community” and stated that artists would have the option to decide whether to participate.

The announcement comes amid growing concern from prominent musicians, including Dua Lipa, Sir Elton John, and Sir Paul McCartney, who have criticized AI companies for training generative models on their music without authorization or compensation.

Spotify emphasized that creators and rights holders will be “properly compensated for uses of their work and transparently credited for their contributions.” The firm said this would be done through “upfront agreements” rather than “asking for forgiveness later.”

“Technology should always serve artists, not the other way around,” said Alex Norstrom, Spotify’s co-president.

Not everyone, however, is optimistic. New Orleans-based MidCitizen Entertainment, a music management company, argued that AI has “polluted the creative ecosystem.” Its Managing Partner, Max Bonanno, said that AI-generated tracks have “diluted the already limited share of revenue that artists receive from streaming royalties.”

Conversely, the move was praised by Ed Newton-Rex, founder of Fairly Trained, an organization that advocates for AI companies to respect creators’ rights. “Lots of the AI industry is exploitative — AI built on people's work without permission, served up to users who get no say in the matter,” he told BBC News. “This is different — AI features built fairly, with artists’ permission, presented to fans as a voluntary add-on rather than an inescapable funnel of AI slop. The devil will be in the detail, but it looks like a move towards a more ethical AI industry, which is sorely needed.”

Spotify reiterated that it does not produce any music itself, AI-generated or otherwise. However, it employs AI in personalized features such as “daylist” and its AI DJ, and it hosts AI-generated tracks that comply with its policies. Earlier, the company had removed a viral AI-generated song that used cloned voices of Drake and The Weeknd, citing impersonation concerns.

Spotify also pointed out that AI has already become a fixture in music production — from autotune and mixing to mastering. A notable example was The Beatles’ 2023 Grammy-winning single Now and Then, which used AI to enhance John Lennon’s vocals from an old recording.

Warner Music Group CEO Robert Kyncl expressed support for the collaboration, saying, “We’ve been consistently focused on making sure AI works for artists and songwriters, not against them. That means collaborating with partners who understand the necessity for new AI licensing deals that protect and compensate rightsholders and the creative community.”

Surveillance Pricing: How Technology Decides What You Pay




Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. It is a regular practice and we must comprehend the ins and outs since we are directly subjected to it.


What is surveillance pricing?

Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”, the maximum amount they are likely to pay for a product or service.

A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.


Growing concerns about fairness

In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.

Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.


How does it work?

AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.

The FTC’s surveillance pricing study revealed several real-world examples of this practice:

  1. Encouraging hesitant users: A betting website might detect when a visitor is about to leave and display new offers to convince them to stay.
  2. Targeting new buyers: A car dealership might identify first-time buyers and offer them different financing options or deals.
  3. Detecting urgency: A parent choosing fast delivery for baby products may be deemed less price-sensitive and offered fewer discounts.
  4. Withholding offers from loyal customers: Regular shoppers might be excluded from promotions because the system expects them to buy anyway.
  5. Monitoring engagement: If a user watches a product video for longer, the system might interpret it as a sign they are willing to pay more.


Real-world examples and evidence

Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.


Is this new?

The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.


The future of pricing

Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.

For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protect both privacy and pocket.


AI Startup by Dhravya Shah Gains $3 Million Investment and O-1 Visa Recognition

 


As one of the youngest innovators in the global tech landscape, Mumbai-born innovator Dhravya Shah is just 20 years old and makes a big impact in the industry every day. It was Shah’s unconventional decision to move away from the traditional path of IIT preparation to pursue his ambitions abroad that has ultimately culminated in remarkable success, challenging conventional thinking. 

There has been a significant amount of interest in his artificial intelligence venture, Supermemory, with investors and the global AI community both paying close attention to this venture. As a pioneering platform for the enhancement of the memory capabilities of artificial intelligence systems, Supermemory has already been adopted by a large number of companies and developers to develop advanced applications on its platform. 

Shah describes his vision for the project as "my life's work" in a new post on the social network X. Shah's vision reflects a rare blend of ingenuity, entrepreneurial spirit, and technical savvy-qualities that have quickly positioned him as one of the most promising young minds shaping the future of AI. 

Artificial intelligence continues to advance the limits of contextual understanding as more and more advances are made, and researchers are continuously working to enhance the ability of models to retain information for extended periods of time, a challenge which will continue to be important in order for truly intelligent systems to evolve. It was in this dynamic environment that Dhravya Shah was beginning to make her way through life. 

While Shah was a teenager in Mumbai, he displayed an early interest in technology and consumer applications, a passion which set him apart from the rest of his peers. In contrast to the rest of the students of his age, who were engrossed in studying for India's notoriously competitive engineering entrance exams, he devoted his time to coding and experimenting.

When he persevered and developed a Twitter automation tool—a platform that turned tweets into visually engaging screenshots—he ultimately sold it to a leading social media management company, Hypefury—an organization that is widely known for its social media automation tools. 

By selling his company, he not only validated his entrepreneurial instincts, but also provided him with the financial means to further his academic ambitions in the United States, where he was accepted into Arizona State University, marking the beginning of his global technology and innovation journey. He has been recognized as both ambitious and successful in his career in the United States. 

Shah was granted the prestigious O-1 visa, which is an exceptionally rare honor awarded to individuals who demonstrate extraordinary abilities in such fields as science, business, education, athletics, or the arts, a distinction reserved only for the most extraordinary innovators and creators in the world. Shah, who is among the youngest Indians to be given such an honor, stands out as a remarkable accomplishment. 

As a result of a desire to challenge himself and sharpen his creative instincts, he embarked on a personal mission in which he was going to build a new project every week for 40 weeks in order to refine his creativity. As a result of these experiments, the early version of Supermemory emerged, originally dubbed Any Context. It was a tool designed to allow users to interact with the bookmarks they had on Twitter. 

Shah's career development has further been accelerated by his tenure at Cloudflare in 2024, during which he contributed to AI and infrastructure projects, and then advanced to an administrative role at Cloudflare in 2024. After initially developing a simple idea, he has since developed a sophisticated platform that allows AI systems to gain insight from unstructured data. This has enabled AI systems to comprehend and retain context effectively. 

While he was in this period, he received mentoring from industry leaders, including Cloudflare's Chief Technology Officer Dane Knecht, who encouraged him to develop Supermemory into a fully-formed product. Shah decided to pursue this endeavor full time due to Shah's conviction that the technology had great potential and to his own conviction that it had great potential. This marked the start of a new chapter in Shah's entrepreneurial career, which has helped shape the future of contextual artificial intelligence. 

It is Shah's vision that Supermemory will redefine how artificial intelligence processes and retains information. Supermemory is the centerpiece of Shah's vision. While many AI models are incredibly capable of generating responses, they remain limited by the fact that they cannot recall previous interactions. As a consequence, their contextual understanding and personalized capabilities are restricted. 

Using Supermemory, AI systems will be equipped with the capability of storing, retrieving, and utilizing information that was previously discussed in a secure and structured manner, which will allow them to overcome the problem of long-term memory. Shah and his growing team are announcing a significant milestone by raising $3 million, which underscores investor confidence in their approach to solving one of the biggest challenges AI faces. 

There is a dynamic memory layer under Supermemory, which is a part of the technology that powers Supermemory, which can be used alongside large language models such as GPT and Claude. It connects externally, so that the models can recall stored data whenever necessary—whether it relates to user preferences, historical context, or previous conversations—rather than altering them themselves. 

Using an external memory engine, as a central component of this architecture, the system supports both scalability as well as security, making the architecture adaptable across a wide range of AI ecosystems. The external memory engine functions independently of the core model so that strict control over data access and privacy can be maintained. 

Supermemory's engine is used to retrieve and supply contextual information to an AI model in real-time when a query is sent to it, thereby enhancing performance and continuity. In addition to enhancing performance, this approach gives developers an integrated solution for building smarter, context-aware applications that have the capacity to sustain meaningful interactions over a prolonged period of time. 

It is apparent that Supermemory's growing influence is reflected in its rapidly expanding clientele, which already includes Cluely, which is backed by Andreessen Horowitz (a16z); Montra, which is a video editing platform powered by artificial intelligence; and Scira, a search tool powered by artificial intelligence. Furthermore, the company is also working with a robotics firm to develop systems that allow robots to retain visual memories in order for robots to work efficiently. 

The technology can be applied to many industries, highlighting its broad applicability. It is the low latency architecture, according to Supermemory's founder Dhravya Shah, which differentiates the company from its competitors in a crowded AI landscape, offering faster access to stored context and smoother integration with existing models, those benefits being crucial to Supermemory's success. 

This is a confidence shared by investors who, according to Joshua Browder, believe that the platform has a high level of performance, rapid retrieval capabilities, and that this is a compelling solution for AI companies who need to create a memory layer that is both efficient and reliable. According to experts, Supermemory falls into an evolution in artificial intelligence where memory, reasoning, and learning come together to create systems that feel inherently more human in nature. 

Using Supermemory as a tool to enable artificial intelligence to remember, understand, and adapt, Supermemory will enable people to have more individualized, consistent, and emotional digital experiences, including virtual tutors and assistants and health and wellness companions. As much as the startup's technological potential is impressive, it represents a new generation of Indian founders making their mark on the global AI stage. 

Rather than just participating in the future, these young innovators are actively defining it as well. A testament to Shah’s perseverance, calculated risk-taking, and relentless innovation can be found in Supermemory’s story, which continues to unfold as the company continues to evolve. His journey from coding in Mumbai to building a global recognition of AI company in his early twenties shows the changing narrative of India’s technology talent, which increasingly transcends borders and redefines entrepreneurship on an international scale in the process. 

In the age of generative artificial intelligence, platforms such as Supermemory are being positioned to become critical infrastructure, bridging the gap between fleeting intelligence and lasting comprehension, allowing machines to retain information, remember context, and learn from their experience. Shah’s creation illustrates a future where artificial intelligence interactions could be more engaging, meaningful, and human in nature. 

A smarter application for developers, an even more personalized and efficient enterprise, and a technology that remembers and evolves with users are some of the advantages it brings. With investors rallying around his vision, Dhravya Shah and Supermemory are not merely building a product, they are quietly creating the blueprint for artificial intelligence to take place in the future, where memory will be a fundamental part of meaningful digital intelligence in the future.

Panama and Vietnam Governments Suffer Cyber Attacks, Data Leaked


Hackers stole government data from organizations in Panama and Vietnam in multiple cyber attacks that surfaced recently.

About the incident

According to Vietnam’s state news outlet, the Cyber Emergency Response Team (VNCERT) confirmed reports of a breach targeting the National Credit Information Center (CIC) that manages credit information for businesses and people, an organization run by the State Bank of Vietnam. 

Personal data leaked

Earlier reports suggested that personal information was exposed due to the attack. VNCERT is now investigating and working with various agencies and Viettel, a state-owned telecom. It said, “Initial verification results show signs of cybercrime attacks and intrusions to steal personal data. The amount of illegally acquired data is still being counted and clarified.”

VNCERT has requested citizens to avoid downloading and sharing stolen data and also threatened legal charges against people who do so.

Who was behind the attack?

The statement has come after threat actors linked to the Shiny Hunters Group and Scattered Spider cybercriminal organization took responsibility for hacking the CIC and stealing around 160 million records. 

Threat actors put up stolen data for sale on the cybercriminal platforms, giving a sneak peek of a sample that included personal information. DataBreaches.net interviewed the hackers, who said they abused a bug in end-of-life software, and didn’t offer a ransom for the stolen information.

CIC told banks that the Shiny Hunters gang was behind the incident, Bloomberg News reported.

The attackers have gained the attention of law enforcement agencies globally for various high-profile attacks in 2025, including various campaigns attacking big enterprises in the insurance, retail, and airline sectors. 

The Finance Ministry of Panama also hit

The Ministry of Economy and Finance in Panam was also hit by a cyber attack, government officials confirmed. “The Ministry of Economy and Finance (MEF) informs the public that today it detected an incident involving malicious software at one of the offices of the Ministry,” they said in a statement. 

The INC ransomware group claimed responsibility for the incident and stole 1.5 terabytes of data, such as emails, budgets, etc., from the ministry.