
The Subtle Signs That Reveal an AI-Generated Video

 


Artificial intelligence is transforming how videos are created and shared, and the change is happening at a startling pace. In only a few months, AI-powered video generators have advanced so much that people are struggling to tell whether a clip is real or synthetic. Experts say that this is only the beginning of a much larger shift in how the public perceives recorded reality.

The uncomfortable truth is that most of us will eventually fall for a fake video. Some already have. The technology is improving so quickly that it is undermining the basic assumption that a video camera captures the truth. Until we adapt, it is important to know what clues can still help identify computer-generated clips before that distinction disappears completely.


The Quality Clue: When Bad Video Looks Suspicious

At the moment, the most reliable sign of a potentially AI-generated video is surprisingly simple: poor image quality. If a clip looks overly grainy, blurred, or compressed, that should raise immediate suspicion. Researchers in digital forensics often start their analysis by checking resolution and clarity.

Hany Farid, a digital-forensics specialist at the University of California, Berkeley, explains that low-quality videos often hide the subtle visual flaws created by AI systems. These systems, while impressive, still struggle to render fine details accurately. Blurring and pixelation can conveniently conceal these inconsistencies.

However, it is essential to note that not all low-quality clips are fake. Some authentic videos are genuinely filmed under poor lighting or with outdated equipment. Likewise, not every AI-generated video looks bad. The point is that unclear or downgraded quality makes fakes harder to detect.


Why Lower Resolution Helps Deception

Today’s top AI models, such as Google’s Veo and OpenAI’s Sora, have reduced obvious mistakes like extra fingers or distorted text. The issues they produce are much subtler: unusually smooth skin textures, unnatural reflections, strange shifts in hair or clothing, or background movements that defy physics. When resolution is high, those flaws are easier to catch. When the video is deliberately compressed, they almost vanish.

That is why deceptive creators often lower a video’s quality on purpose. By reducing resolution and adding compression, they hide the “digital fingerprints” that could expose a fake. Experts say this is now a common technique among those who intend to mislead audiences.


Short Clips Are Another Warning Sign

Length can be another indicator. Because generating AI video is still computationally expensive, most AI-generated clips are short, often six to ten seconds. Longer clips require more processing time and increase the risk of errors appearing. As a result, many deceptive videos online are short, and when longer ones are made, they are typically stitched together from several shorter segments. If you notice sharp cuts or changes every few seconds, that could be another red flag.


Real-World Examples of Viral Fakes

In recent months, several viral examples have proven how convincing AI content can be. A video of rabbits jumping on a trampoline received over 200 million views before viewers learned it was synthetic. A romantic clip of two strangers meeting on the New York subway was also revealed to be AI-generated. Another viral post showed an American priest delivering a fiery sermon against billionaires; it, too, turned out to be fake.

All these videos shared one detail: they looked like they were recorded on old or low-grade cameras. The bunny video appeared to come from a security camera, the subway couple’s clip was heavily pixelated, and the preacher’s footage was slightly zoomed and blurred. These imperfections made the fakes seem authentic.


Why These Signs Will Soon Disappear

Unfortunately, these red flags are temporary. Both Farid and other researchers, like Matthew Stamm of Drexel University, warn that visual clues are fading fast. AI systems are evolving toward flawless realism, and within a couple of years even experts may struggle to detect fakes by sight alone. This evolution mirrors what happened with AI images, where obvious errors like distorted hands or melted faces have mostly disappeared.

In the future, video verification will depend less on what we see and more on what the data reveals. Forensic tools can already identify statistical irregularities in pixel distribution or file structure that the human eye cannot perceive. These traces act like invisible fingerprints left during video generation or manipulation.

Tech companies are now developing standards to authenticate digital content. The idea is for cameras to automatically embed cryptographic information into files at the moment of recording, verifying the image’s origin. Similarly, AI systems could include transparent markers to indicate that a video was machine-generated. While these measures are promising, they are not yet universally implemented.
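
To make the idea of embedded provenance concrete, here is a minimal Python sketch of the hash-and-sign approach, assuming an Ed25519 keypair held by the capture device or generator. It illustrates the general concept only; it is not the C2PA standard or any vendor's actual implementation, and the manifest fields are invented for the example.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_provenance_manifest(video_bytes: bytes, signing_key: Ed25519PrivateKey) -> dict:
    """Hash the recording and sign the hash, loosely mimicking how a camera
    or AI generator could attach verifiable origin data at creation time."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    signature = signing_key.sign(digest.encode())
    return {
        "sha256": digest,
        "generator": "example-camera-firmware-1.0",  # hypothetical identifier
        "signature": signature.hex(),
    }

def verify_manifest(video_bytes: bytes, manifest: dict, public_key) -> bool:
    """Recompute the hash and check the signature; any edit to the file breaks it."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    if digest != manifest["sha256"]:
        return False
    try:
        public_key.verify(bytes.fromhex(manifest["signature"]), digest.encode())
        return True
    except Exception:
        return False

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    clip = b"...raw video bytes..."  # stand-in for a real recording
    manifest = make_provenance_manifest(clip, key)
    print(json.dumps(manifest, indent=2))
    print("authentic:", verify_manifest(clip, manifest, key.public_key()))
    print("tampered:", verify_manifest(clip + b"x", manifest, key.public_key()))
```

The point of the sketch is simply that verification shifts from "does this look real?" to "does the cryptographic record check out?", which is the shift the standards bodies are working toward.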

Experts in digital literacy argue that the most important shift must come from us, not just technology. As Mike Caulfield, a researcher on misinformation, points out, people need to change how they interpret what they see online. Relying on visual appearance is no longer enough.

Just as we do not assume that written text is automatically true, we must now apply the same scepticism to videos. The key questions should always be: Who created this content? Where was it first posted? Has it been confirmed by credible sources? Authenticity now depends on context and source verification rather than clarity or resolution.


The Takeaway

For now, blurry and short clips remain practical warning signs of possible AI involvement. But as technology improves, those clues will soon lose their usefulness. The only dependable defense against misinformation will be a cautious, investigative mindset: verifying origin, confirming context, and trusting only what can be independently authenticated.

In the era of generative video, the truth no longer lies in what we see but in what we can verify.



Professor Predicts Salesforce Will Be First Big Tech Company Destroyed by AI

 

Renowned Computer Science professor Pedro Domingos has sparked intense online debate with his striking prediction that Salesforce will be the first major technology company destroyed by artificial intelligence. Domingos, who serves as professor emeritus of computer science and engineering at the University of Washington and authored The Master Algorithm and 2040, shared his bold forecast on X (formerly Twitter), generating over 400,000 views and hundreds of responses.

Domingos' statement centers on artificial intelligence's transformative potential to reshape the economic landscape, moving beyond concerns about job losses to predictions of entire companies becoming obsolete. When questioned by an X user about whether CRM (Customer Relationship Management) systems are easy to replace, Domingos clarified his position, stating "No, I think it could be way better," suggesting current CRM platforms have significant room for AI-driven improvement.

Salesforce vulnerability

Online commentators elaborated on Domingos' thesis, explaining that CRM fundamentally revolves around data capture and retrieval—functions where AI demonstrates superior speed and efficiency. 

Unlike creative software platforms such as Adobe or Microsoft where users develop decades of workflow habits, CRM systems like Salesforce involve repetitive data entry tasks that create friction rather than user loyalty. Traditional CRM systems suffer from low user adoption, with less than 20% of sales activities typically recorded in these platforms, creating opportunities for AI solutions that automatically capture and analyze customer interactions.

Counterarguments and Salesforce's response

Not all observers agree with Domingos' assessment. Some users argued that Salesforce maintains strong relationships with traditional corporations and can simply integrate large language models (LLMs) into existing products, citing initiatives like Missionforce, Agent Fabric, and Agentforce Vibes as evidence of active adaptation. Salesforce has positioned itself as "the world's #1 AI CRM" through substantial AI investments across its platform ecosystem, with Agentforce representing a strategic pivot toward building digital labor forces.

Broader implications

Several commentators took an expansive view, warning that every major Software-as-a-Service (SaaS) platform faces disruption as software economics shift dramatically. One user emphasized that AI enables truly customized solutions tailored to specific customer needs and processes, potentially rendering traditional software platforms obsolete. However, Salesforce's comprehensive ecosystem, market dominance, and enterprise-grade security capabilities may provide defensive advantages that prevent complete displacement in the near term.

Shadow AI Quietly Spreads Across Workplaces, Study Warns

 




A growing number of employees are using artificial intelligence tools that their companies have never approved, a new report by 1Password has found. The practice, known as shadow AI, is quickly becoming one of the biggest unseen cybersecurity risks inside organizations.

According to 1Password’s 2025 Annual Report, based on responses from more than 5,000 knowledge workers in six countries, one in four employees admitted to using unapproved AI platforms. The study shows that while most workplaces encourage staff to explore artificial intelligence, many do so without understanding the data privacy or compliance implications.


How Shadow AI Works

Shadow AI refers to employees relying on external or free AI services without oversight from IT teams. For instance, workers may use chatbots or generative tools to summarize meetings, write reports, or analyze data, even if these tools were never vetted for corporate use. Such platforms can store or learn from whatever information users enter into them, meaning sensitive company or customer data could unknowingly end up being processed outside secure environments.

The 1Password study found that 73 percent of workers said their employers support AI experimentation, yet 37 percent do not fully follow the official usage policies. Twenty-seven percent said they had used AI tools their companies never approved, making shadow AI the second-most common form of shadow IT, just after unapproved email use.


Why Employees Take the Risk

Experts say this growing behavior stems from convenience and the pressure to be efficient. During a CISO roundtable hosted for the report, Mark Hazleton, Chief Security Officer at Oracle Red Bull Racing, said employees often “focus on getting the job done” and find ways around restrictions if policies slow them down.

The survey confirmed this: 45 percent of respondents use unauthorized AI tools because they are convenient, and 43 percent said AI helps them be more productive.

Security leaders like Susan Chiang, CISO at Headway, warn that the rapid expansion of third-party tools hasn’t been matched by awareness of the potential consequences. Many users, she said, still believe that free or browser-based AI apps are harmless.


The Broader Shadow IT Problem

1Password’s research highlights that shadow AI is part of a wider trend. More than half of employees (52 percent) admitted to downloading other apps or using web tools without approval. Brian Morris, CISO at Gray Media, explained that tools such as Grammarly or Monday.com often slip under the radar because employees do not consider browser-based services as applications that could expose company data.


Building Safer AI Practices

The report advises companies to adopt a three-step strategy:

1. Keep an up-to-date inventory of all AI tools being used.

2. Define clear, accessible policies and guide users toward approved alternatives.

3. Implement controls that prevent sensitive data from reaching unverified AI systems (a simplified example of such a control is sketched below).
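
As a rough, hypothetical illustration of the third step, the sketch below gates outbound AI requests behind an allowlist and a crude sensitive-data check. The endpoint names and regex patterns are invented for the example; a real deployment would rely on proper DLP and proxy tooling rather than this simplified logic.

```python
import re

# Hypothetical allowlist of AI endpoints vetted by IT (illustrative values only).
APPROVED_AI_ENDPOINTS = {"ai.internal.example.com", "approved-vendor.example.com"}

# Very rough patterns for data that should never reach an unapproved tool.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-style numbers
    re.compile(r"\b\d{13,16}\b"),           # long digit runs (possible card numbers)
    re.compile(r"(?i)\bconfidential\b"),    # documents marked confidential
]

def check_ai_request(endpoint_host: str, prompt_text: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an outbound request to an AI service."""
    if endpoint_host not in APPROVED_AI_ENDPOINTS:
        return False, f"{endpoint_host} is not an approved AI tool"
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt_text):
            return False, "prompt appears to contain sensitive data"
    return True, "ok"

if __name__ == "__main__":
    print(check_ai_request("random-chatbot.example.net", "Summarise this meeting"))
    print(check_ai_request("ai.internal.example.com", "Customer SSN is 123-45-6789"))
    print(check_ai_request("ai.internal.example.com", "Draft a polite follow-up email"))
```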

Chiang added that organizations should not only chase major threats but also tackle smaller issues that accumulate over time. She described this as avoiding “death by a thousand cuts,” which can be prevented through continuous education and awareness programs.

As AI becomes embedded in daily workflows, experts agree that responsible use and visibility are key. Encouraging innovation should not mean ignoring the risks. For organizations, managing shadow AI is no longer optional; it is essential for protecting data integrity and maintaining digital trust.



AI Becomes the New Spiritual Guide: How Technology Is Transforming Faith in India and Beyond

 

Around the world — and particularly in India — worshippers are increasingly turning to artificial intelligence for guidance, prayer, and spiritual comfort. As machines become mediators of faith, a new question arises: what happens when technology becomes our spiritual middleman?

For Vijay Meel, a 25-year-old student from Rajasthan, divine advice once came from gurus. Now, it comes from GitaGPT — an AI chatbot trained on the Bhagavad Gita, the Hindu scripture of 700 verses that capture Krishna’s wisdom.

“When I couldn’t clear my banking exams, I was dejected,” Meel recalls. Turning to GitaGPT, he shared his worries and received the reply: “Focus on your actions and let go of the worry for its fruit.”

“It wasn’t something I didn’t know,” Meel says, “but at that moment, I needed someone to remind me.” Since then, the chatbot has become his digital spiritual companion.

AI is changing how people work, learn, and love — and now, how they pray. From Hinduism to Christianity, believers are experimenting with chatbots as sources of guidance. But Hinduism’s long tradition of embracing physical symbols of divinity makes it especially open to AI’s spiritual evolution.

“People feel disconnected from community, from elders, from temples,” says Holly Walters, an anthropologist at Wellesley College. “For many, talking to an AI about God is a way of reaching for belonging, not just spirituality.”

The Rise of Digital Deities

In 2023, apps like Text With Jesus and QuranGPT gained huge followings — though not without controversy. Meanwhile, Hindu innovators in India began developing AI-based chatbots to embody gods and scriptures.

One such developer, Vikas Sahu, built his own GitaGPT as a side project. To his surprise, it reached over 100,000 users within days. He’s now expanding it to feature teachings of other Hindu deities, saying he hopes to “morph it into an avenue to the teachings of all gods and goddesses.”

For Tanmay Shresth, an IT professional from New Delhi, AI-based spiritual chat feels like therapy. “At times, it’s hard to find someone to talk to about religious or existential subjects,” he says. “AI is non-judgmental, accessible, and yields thoughtful responses.”

AI Meets Ritual and Worship

Major spiritual movements are embracing AI, too. In early 2025, Sadhguru’s Isha Foundation launched The Miracle of Mind, a meditation app powered by AI. “We’re using AI to deliver ancient wisdom in a contemporary way,” says Swami Harsha, the foundation’s content lead. The app surpassed one million downloads within 15 hours.

Even India’s 2025 Maha Kumbh Mela, one of the world’s largest religious gatherings, integrated AI tools like Kumbh Sah’AI’yak for multilingual assistance and digital participation in rituals. Some pilgrims even joined virtual “darshan” and digital snan (bath) experiences through video calls and VR tools.

Meanwhile, AI is entering academic and theological research, analyzing sacred texts like the Bhagavad Gita and Upanishads for hidden patterns and similarities.

Between Faith and Technology

From robotic arms performing aarti at festivals to animatronic murtis at ISKCON temples and robotic elephants like Irinjadapilly Raman in Kerala, technology and devotion are merging in new ways. “These robotic deities talk and move,” Walters says. “It’s uncanny — but for many, it’s God. They do puja, they receive darshan.”

However, experts warn of new ethical and spiritual risks. Reverend Lyndon Drake, a theologian at Oxford, says that AI chatbots might “challenge the status of religious leaders” and influence beliefs subtly.

Religious AIs, though trained on sacred texts, can produce misleading or dangerous responses. One version of GitaGPT once declared that “killing in order to protect dharma is justified.” Sahu admits, “I realised how serious it was and fine-tuned the AI to prevent such outputs.”

Similarly, a Catholic chatbot priest was taken offline in 2024 after claiming to perform sacraments. “The problem isn’t unique to religion,” Drake says. “It’s part of the broader challenge of building ethically predictable AI.”

In countries like India, where digital literacy varies, believers may not always distinguish between divine wisdom and algorithmic replies. “The danger isn’t just that people might believe what these bots say,” Walters notes. “It’s that they may not realise they have the agency to question it.”

Still, many users like Meel find comfort in these virtual companions. “Even when I go to a temple, I rarely get into deep conversations with a priest,” he says. “These bots bridge that gap — offering scripture-backed guidance at the distance of a hand.”

NCSC Warns of Rising Cyber Threats Linked to China, Urges Businesses to Build Defences

 



The United Kingdom’s National Cyber Security Centre (NCSC) has cautioned that hacking groups connected to China are responsible for an increasing number of cyberattacks targeting British organisations. Officials say the country has become one of the most capable and persistent sources of digital threats worldwide, with operations extending across government systems, private firms, and global institutions.

Paul Chichester, the NCSC’s Director of Operations, explained that certain nations, including China, are now using cyber intrusions as part of their broader national strategy to gain intelligence and influence. According to the NCSC’s latest annual report, China remains a “highly sophisticated” threat actor capable of conducting complex and coordinated attacks.

This warning coincides with a government initiative urging major UK companies to take stronger measures to secure their digital infrastructure. Ministers have written to hundreds of business leaders, asking them to review their cyber readiness and adopt more proactive protection strategies against ransomware, data theft, and state-sponsored attacks.

Last year, security agencies from the Five Eyes alliance, comprising the UK, the United States, Canada, Australia, and New Zealand, uncovered a large-scale operation by a Chinese company that controlled a botnet of over 260,000 compromised devices. In August, officials again warned that Chinese-backed hackers were targeting telecommunications providers by exploiting vulnerabilities in routers and using infected devices to infiltrate additional networks.

The NCSC also noted that other nations, including Russia, are believed to be “pre-positioning” their cyber capabilities in critical sectors such as energy and transportation. Chichester emphasized that the war in Ukraine has demonstrated how cyber operations are now used as instruments of power, enabling states to disrupt essential services and advance strategic goals.


Artificial Intelligence: A New Tool for Attackers

The report highlights that artificial intelligence is increasingly being used by hostile actors to improve the speed and efficiency of existing attack techniques. The NCSC clarified that, while AI is not currently enabling entirely new forms of attacks, it allows adversaries to automate certain stages of hacking, such as identifying security flaws or crafting convincing phishing emails.

Ollie Whitehouse, the NCSC’s Chief Technology Officer, described AI as a “productivity enhancer” for cybercriminals. He explained that it is helping less experienced hackers conduct sophisticated campaigns and enabling organized groups to expand operations more rapidly. However, he reassured that AI does not currently pose an existential threat to national security.


Ransomware Remains the Most Severe Risk

For UK businesses, ransomware continues to be the most pressing danger. Criminals behind these attacks are financially motivated, often targeting organisations with weak security controls regardless of size or industry. The NCSC reports seeing daily incidents affecting schools, charities, and small enterprises struggling to recover from system lockouts and data loss.

To strengthen national resilience, the upcoming Cyber Security and Resilience Bill will require critical service providers, including data centres and managed service firms, to report cyber incidents within 24 hours. By increasing transparency and response speed, the government hopes to limit the impact of future attacks.

The NCSC urges business leaders to treat cyber risk as a priority at the executive level. Understanding the urgency of action, maintaining up-to-date systems, and investing in employee awareness are essential steps to prevent further damage. As cyber activity grows “more intense, frequent, and intricate,” the agency stresses that a united effort between the government and private sector is crucial to protecting the UK’s digital ecosystem.



Spotify Partners with Major Labels to Develop “Responsible” AI Tools that Prioritize Artists’ Rights

 

Spotify, the world’s largest music streaming platform, has revealed that it is collaborating with major record labels to develop artificial intelligence (AI) tools in what it calls a “responsible” manner.

According to the company, the initiative aims to create AI technologies that “put artists and songwriters first” while ensuring full respect for their copyrights. As part of the effort, Spotify will license music from the industry’s leading record labels — Sony Music, Universal Music Group, and Warner Music Group — which together represent the majority of global music content.

Also joining the partnership are rights management company Merlin and digital music firm Believe.

While the specifics of the new AI tools remain under wraps, Spotify confirmed that development is already underway on its first set of products. The company acknowledged that there are “a wide range of views on use of generative music tools within the artistic community” and stated that artists would have the option to decide whether to participate.

The announcement comes amid growing concern from prominent musicians, including Dua Lipa, Sir Elton John, and Sir Paul McCartney, who have criticized AI companies for training generative models on their music without authorization or compensation.

Spotify emphasized that creators and rights holders will be “properly compensated for uses of their work and transparently credited for their contributions.” The firm said this would be done through “upfront agreements” rather than “asking for forgiveness later.”

“Technology should always serve artists, not the other way around,” said Alex Norstrom, Spotify’s co-president.

Not everyone, however, is optimistic. New Orleans-based MidCitizen Entertainment, a music management company, argued that AI has “polluted the creative ecosystem.” Its Managing Partner, Max Bonanno, said that AI-generated tracks have “diluted the already limited share of revenue that artists receive from streaming royalties.”

Conversely, the move was praised by Ed Newton-Rex, founder of Fairly Trained, an organization that advocates for AI companies to respect creators’ rights. “Lots of the AI industry is exploitative — AI built on people's work without permission, served up to users who get no say in the matter,” he told BBC News. “This is different — AI features built fairly, with artists’ permission, presented to fans as a voluntary add-on rather than an inescapable funnel of AI slop. The devil will be in the detail, but it looks like a move towards a more ethical AI industry, which is sorely needed.”

Spotify reiterated that it does not produce any music itself, AI-generated or otherwise. However, it employs AI in personalized features such as “daylist” and its AI DJ, and it hosts AI-generated tracks that comply with its policies. Earlier, the company had removed a viral AI-generated song that used cloned voices of Drake and The Weeknd, citing impersonation concerns.

Spotify also pointed out that AI has already become a fixture in music production — from autotune and mixing to mastering. A notable example was The Beatles’ 2023 Grammy-winning single Now and Then, which used AI to enhance John Lennon’s vocals from an old recording.

Warner Music Group CEO Robert Kyncl expressed support for the collaboration, saying, “We’ve been consistently focused on making sure AI works for artists and songwriters, not against them. That means collaborating with partners who understand the necessity for new AI licensing deals that protect and compensate rightsholders and the creative community.”

Surveillance Pricing: How Technology Decides What You Pay




Imagine walking into your local supermarket to buy a two-litre bottle of milk. You pay $3, but the person ahead of you pays $3.50, and the next shopper pays only $2. While this might sound strange, it reflects a growing practice known as surveillance pricing, where companies use personal data and artificial intelligence (AI) to determine how much each customer should pay. The practice is already widespread, and because shoppers are directly subjected to it, it is worth understanding how it works.


What is surveillance pricing?

Surveillance pricing refers to the use of digital tracking and AI to set individualised prices based on consumer behaviour. By analysing a person’s online activity, shopping habits, and even technical details like their device or location, retailers estimate each customer’s “pain point”: the maximum amount they are likely to pay for a product or service.

A recent report from the U.S. Federal Trade Commission (FTC) highlighted that businesses can collect such information through website pixels, cookies, account registrations, or email sign-ups. These tools allow them to observe browsing time, clicks, scrolling speed, and even mouse movements. Together, these insights reveal how interested a shopper is in a product, how urgent their need may be, and how much they can be charged without hesitation.


Growing concerns about fairness

In mid-2024, Delta Air Lines disclosed that a small percentage of its domestic ticket pricing was already determined using AI, with plans to expand this method to more routes. The revelation led U.S. lawmakers to question whether customer data was being used to charge certain passengers higher fares. Although Delta stated that it does not use AI for “predatory or discriminatory” pricing, the issue drew attention to how such technology could reshape consumer costs.

Former FTC Chair Lina Khan has also warned that some businesses can predict each consumer’s willingness to pay by analysing their digital patterns. This ability, she said, could allow companies to push prices to the upper limit of what individuals can afford, often without their knowledge.


How does it work?

AI-driven pricing systems use vast amounts of data, including login details, purchase history, device type, and location, to classify shoppers by “price sensitivity.” The software then tests different price levels to see which one yields the highest profit.
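
To make the mechanics concrete, here is a deliberately simplified Python sketch of how such a system could segment shoppers by inferred price sensitivity and then test candidate prices within each segment. The signals, thresholds, and prices are invented for illustration; real systems are far more elaborate.

```python
import random
from collections import defaultdict

def price_sensitivity_segment(profile: dict) -> str:
    """Crude segmentation from behavioural signals (illustrative thresholds)."""
    score = 0
    if profile.get("chose_express_delivery"):
        score += 2   # urgency suggests lower price sensitivity
    if profile.get("video_watch_seconds", 0) > 60:
        score += 1   # strong interest in the product
    if profile.get("uses_price_comparison_site"):
        score -= 2   # likely to shop around
    return "low_sensitivity" if score >= 2 else "high_sensitivity"

# Candidate price points tested per segment (epsilon-greedy exploration).
CANDIDATE_PRICES = {"low_sensitivity": [3.00, 3.25, 3.50],
                    "high_sensitivity": [2.00, 2.25, 2.50]}
revenue = defaultdict(lambda: defaultdict(float))
shown = defaultdict(lambda: defaultdict(int))

def pick_price(segment: str, epsilon: float = 0.1) -> float:
    """Mostly show the best-earning price so far, occasionally explore others."""
    prices = CANDIDATE_PRICES[segment]
    if random.random() < epsilon or not shown[segment]:
        return random.choice(prices)
    return max(prices, key=lambda p: revenue[segment][p] / max(shown[segment][p], 1))

def record_outcome(segment: str, price: float, purchased: bool) -> None:
    """Log each impression so the system keeps converging on the most profitable price."""
    shown[segment][price] += 1
    if purchased:
        revenue[segment][price] += price
```

The sketch shows why signals such as urgency or engagement translate directly into the price a shopper is offered: they determine which segment, and therefore which price range, the system places you in.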

The FTC’s surveillance pricing study revealed several real-world examples of this practice:

  1. Encouraging hesitant users: A betting website might detect when a visitor is about to leave and display new offers to convince them to stay.
  2. Targeting new buyers: A car dealership might identify first-time buyers and offer them different financing options or deals.
  3. Detecting urgency: A parent choosing fast delivery for baby products may be deemed less price-sensitive and offered fewer discounts.
  4. Withholding offers from loyal customers: Regular shoppers might be excluded from promotions because the system expects them to buy anyway.
  5. Monitoring engagement: If a user watches a product video for longer, the system might interpret it as a sign they are willing to pay more.


Real-world examples and evidence

Ride-hailing platforms have long faced questions about this kind of data-driven pricing. In 2016, Uber’s former head of economic research noted that users with low battery life were more likely to accept surge pricing. A 2023 Belgian newspaper investigation later reported small differences in Uber fares depending on a phone’s battery level. Uber denied that battery status affects fares, saying its prices depend only on driver supply and ride demand.


Is this new?

The concept itself isn’t new. Dynamic pricing has existed for decades, but digital surveillance has made it far more sophisticated. In the early 2000s, Amazon experimented with varying prices for DVDs based on browsing data, sparking backlash from consumers who discovered the differences. Similarly, the UK’s Norwich Union once used satellite tracking for a “Pay As You Drive” car insurance model, which was discontinued after privacy concerns.


The future of pricing

Today’s combination of big data and AI allows retailers to create precise, individualised pricing models that adjust instantly. Experts warn this could undermine fair competition, reduce transparency, and widen inequality between consumers. Regulators like the FTC are now studying these systems closely to understand their impact on market fairness and consumer privacy.

For shoppers, awareness is key. Comparing prices across devices, clearing cookies, and using privacy tools can help reduce personal data tracking. As AI continues to shape how businesses price their products, understanding surveillance pricing is becoming essential to protect both privacy and pocket.


AI Startup by Dhravya Shah Gains $3 Million Investment and O-1 Visa Recognition

 


At just 20, Mumbai-born Dhravya Shah is one of the youngest innovators in the global tech landscape, and he is making an outsized impact on the industry. His unconventional decision to step away from the traditional path of IIT preparation and pursue his ambitions abroad has culminated in remarkable success.

His artificial intelligence venture, Supermemory, has drawn significant interest from investors and the global AI community alike. The platform, which enhances the memory capabilities of AI systems, has already been adopted by a large number of companies and developers building advanced applications on top of it.

Shah describes the project as "my life's work" in a recent post on X. His vision reflects a rare blend of ingenuity, entrepreneurial spirit, and technical savvy, qualities that have quickly positioned him as one of the most promising young minds shaping the future of AI.

Artificial intelligence continues to push the limits of contextual understanding, and researchers are working to improve models' ability to retain information over extended periods, a challenge central to the development of truly intelligent systems. It is in this dynamic environment that Shah began to make his mark.

As a teenager in Mumbai, Shah displayed an early interest in technology and consumer applications, a passion that set him apart from his peers. While most students his age were engrossed in preparing for India's notoriously competitive engineering entrance exams, he devoted his time to coding and experimenting.

He went on to build a Twitter automation tool, a platform that turned tweets into visually engaging screenshots, and ultimately sold it to Hypefury, a leading social media management company widely known for its automation tools.

The sale not only validated his entrepreneurial instincts but also gave him the financial means to pursue his studies in the United States, where he was accepted into Arizona State University, marking the beginning of his global technology and innovation journey.

Shah was later granted the prestigious O-1 visa, an honor awarded to individuals who demonstrate extraordinary ability in fields such as science, business, education, athletics, or the arts. He is among the youngest Indians to receive the distinction, a remarkable accomplishment in its own right.

Driven by a desire to challenge himself and sharpen his creative instincts, he set out to build a new project every week for 40 weeks. From these experiments emerged the early version of Supermemory, originally dubbed Any Context, a tool designed to let users interact with their Twitter bookmarks.

Shah's career accelerated further during his tenure at Cloudflare in 2024, where he contributed to AI and infrastructure projects. What began as a simple idea has since grown into a sophisticated platform that allows AI systems to draw insight from unstructured data and to comprehend and retain context effectively.

During this period, he received mentoring from industry leaders, including Cloudflare's Chief Technology Officer Dane Knecht, who encouraged him to develop Supermemory into a fully formed product. Convinced that the technology had great potential, Shah decided to pursue the endeavor full time, marking the start of a new chapter in his entrepreneurial career and in the shaping of contextual artificial intelligence.

Shah's vision is for Supermemory to redefine how artificial intelligence processes and retains information. While many AI models are remarkably capable of generating responses, they cannot recall previous interactions, which limits their contextual understanding and their ability to personalize.

Supermemory equips AI systems to store, retrieve, and use previously discussed information in a secure, structured manner, helping them overcome the problem of long-term memory. Shah and his growing team have now raised $3 million, a milestone that underscores investor confidence in their approach to solving one of AI's biggest challenges.

At the core of Supermemory is a dynamic memory layer that works alongside large language models such as GPT and Claude. It connects externally, so the models can recall stored data whenever necessary, whether it relates to user preferences, historical context, or previous conversations, without the models themselves being altered.

The external memory engine at the center of this architecture supports both scalability and security, making it adaptable across a wide range of AI ecosystems. Because the engine functions independently of the core model, strict control over data access and privacy can be maintained.

When a query is sent to an AI model, Supermemory's engine retrieves and supplies relevant contextual information in real time, enhancing performance and continuity. The approach also gives developers an integrated solution for building smarter, context-aware applications that can sustain meaningful interactions over long periods.
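
Supermemory's actual API is not described in this article, so the Python sketch below only illustrates the general pattern it outlines: an external memory store that is queried at request time, with the results prepended to the model prompt. All class and function names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:
    text: str
    tags: set[str] = field(default_factory=set)

@dataclass
class ExternalMemory:
    """A toy stand-in for an external memory engine that lives outside the model."""
    items: list[MemoryItem] = field(default_factory=list)

    def store(self, text: str, tags: set[str] | None = None) -> None:
        self.items.append(MemoryItem(text, tags or set()))

    def retrieve(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword-overlap scoring; a real engine would use vector search.
        words = set(query.lower().split())
        scored = sorted(self.items,
                        key=lambda m: len(words & set(m.text.lower().split())),
                        reverse=True)
        return [m.text for m in scored[:k]]

def answer_with_memory(memory: ExternalMemory, llm_call, user_query: str) -> str:
    """Fetch relevant context first, then pass it to the model alongside the query."""
    context = memory.retrieve(user_query)
    prompt = "Known context:\n" + "\n".join(f"- {c}" for c in context)
    prompt += f"\n\nUser: {user_query}"
    return llm_call(prompt)  # llm_call is any function wrapping GPT, Claude, etc.

if __name__ == "__main__":
    mem = ExternalMemory()
    mem.store("User prefers vegetarian recipes", {"preferences"})
    mem.store("Last conversation was about a trip to Jaipur", {"history"})
    echo = lambda prompt: prompt  # placeholder model that just echoes the prompt
    print(answer_with_memory(mem, echo, "Suggest dinner ideas for my trip"))
```

Because the memory sits outside the model, it can be swapped, audited, or access-controlled independently, which is the design property the article attributes to Supermemory's architecture.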

Supermemory's growing influence is reflected in its rapidly expanding clientele, which already includes Cluely, backed by Andreessen Horowitz (a16z); Montra, an AI-powered video editing platform; and Scira, an AI-powered search tool. The company is also working with a robotics firm to develop systems that allow robots to retain visual memories so they can operate more efficiently.

The technology's applicability spans many industries. According to Shah, it is Supermemory's low-latency architecture that differentiates it from competitors in a crowded AI landscape, offering faster access to stored context and smoother integration with existing models.

Investors share that confidence. Joshua Browder has pointed to the platform's performance and rapid retrieval capabilities as a compelling solution for AI companies that need an efficient, reliable memory layer. Experts place Supermemory within a broader evolution of artificial intelligence in which memory, reasoning, and learning come together to create systems that feel inherently more human.

By enabling artificial intelligence to remember, understand, and adapt, Supermemory promises more individualized, consistent, and emotionally aware digital experiences, from virtual tutors and assistants to health and wellness companions. Beyond its technological potential, the startup represents a new generation of Indian founders making their mark on the global AI stage.

Rather than merely participating in the future, these young innovators are actively defining it. Supermemory's unfolding story is a testament to Shah's perseverance, calculated risk-taking, and relentless innovation. His journey from coding in Mumbai to building a globally recognized AI company in his early twenties reflects the changing narrative of India's technology talent, which increasingly transcends borders and redefines entrepreneurship on an international scale.

In the age of generative artificial intelligence, platforms such as Supermemory are being positioned to become critical infrastructure, bridging the gap between fleeting intelligence and lasting comprehension, allowing machines to retain information, remember context, and learn from their experience. Shah’s creation illustrates a future where artificial intelligence interactions could be more engaging, meaningful, and human in nature. 

For developers, it promises smarter applications; for enterprises, more personalized and efficient systems; for users, technology that remembers and evolves with them. With investors rallying around his vision, Dhravya Shah and Supermemory are not merely building a product: they are quietly drawing up a blueprint for a future in which memory is a fundamental part of meaningful digital intelligence.

Panama and Vietnam Governments Suffer Cyber Attacks, Data Leaked


Hackers stole government data from organizations in Panama and Vietnam in multiple cyber attacks that surfaced recently.

About the incident

According to Vietnam’s state news outlet, the country’s Cyber Emergency Response Team (VNCERT) confirmed reports of a breach at the National Credit Information Center (CIC), an organization run by the State Bank of Vietnam that manages credit information for businesses and individuals.

Personal data leaked

Earlier reports suggested that personal information was exposed due to the attack. VNCERT is now investigating and working with various agencies and Viettel, a state-owned telecom. It said, “Initial verification results show signs of cybercrime attacks and intrusions to steal personal data. The amount of illegally acquired data is still being counted and clarified.”

VNCERT has requested citizens to avoid downloading and sharing stolen data and also threatened legal charges against people who do so.

Who was behind the attack?

The statement came after threat actors linked to the Shiny Hunters and Scattered Spider cybercriminal groups claimed responsibility for hacking the CIC and stealing around 160 million records.

The threat actors put the stolen data up for sale on cybercriminal platforms, posting a sample that included personal information. DataBreaches.net interviewed the hackers, who said they had abused a bug in end-of-life software and did not demand a ransom for the stolen information.

CIC told banks that the Shiny Hunters gang was behind the incident, Bloomberg News reported.

The attackers have gained the attention of law enforcement agencies globally for various high-profile attacks in 2025, including various campaigns attacking big enterprises in the insurance, retail, and airline sectors. 

The Finance Ministry of Panama also hit

The Ministry of Economy and Finance in Panama was also hit by a cyber attack, government officials confirmed. “The Ministry of Economy and Finance (MEF) informs the public that today it detected an incident involving malicious software at one of the offices of the Ministry,” they said in a statement.

The INC ransomware group claimed responsibility for the incident, saying it had stolen 1.5 terabytes of data from the ministry, including emails and budget documents.

AdaptixC2 Raises Security Alarms Amid Active Use in Cyber Incidents

 


At a time when digital resilience matters as much as digital innovation, the gap between strengthened defences and the relentless adaptability of cybercriminals is becoming increasingly evident. According to a recent study by Veeam, seven out of ten organisations suffered cyberattacks in the past year despite spending more on security and recovery capabilities.

The challenge is no longer simply preventing intrusions but ensuring rapid recovery of mission-critical data once an attack has succeeded, a far more complex problem. Against this backdrop, the emergence of AdaptixC2, an open-source post-exploitation and adversary-emulation framework, is adding to the concern.

With its modular design, support for multiple beacon formats, and advanced tunnelling features, AdaptixC2 is one of the most versatile platforms available for executing commands, transferring files, and exfiltrating data from compromised systems. Analysts have observed its use in attacks ranging from social engineering campaigns conducted via Microsoft Teams to operations involving what appear to be AI-generated scripts, and in some cases in combination with ransomware.

The growing prevalence of such customisable frameworks has heightened the pressure on CISOs and IT leaders not only to build stronger defences but also to ensure that recovery and business continuity remain possible while under attack.

In May 2025, researchers from Unit 42 found evidence of AdaptixC2 being used in active campaigns to infect multiple systems, underscoring its growing relevance as a cyber threat. Originally developed as a post-exploitation and adversary-emulation framework for penetration testers, it has quietly evolved into a weaponised tool favoured by threat actors for its stealth and adaptability.

Notably, unlike more widely recognised command-and-control frameworks, AdaptixC2 has gone largely unnoticed, with few reports documenting its use in real-world incidents. The framework offers a wide array of capabilities, allowing malicious actors to execute commands, transfer files, and exfiltrate sensitive data at alarming speed.

Because it is open source, the platform is easy to customise, making it highly versatile in the hands of adversaries. Recent investigations have documented social engineering campaigns in which Microsoft Teams was used to deliver malicious payloads, and AI-generated scripts are suspected to have been used in some operations.

The development of such tools reflects a broader trend of attackers adopting modular, customizable frameworks to bypass traditional defences. Meanwhile, AI-powered threats, from deepfake-based phishing scams to adaptive, human-like bot operations, are adding new layers of complexity to the threat landscape.

Several recent incidents, such as the Hong Kong case, in which scammers used fake video impersonations to swindle US$25 million from their victims, demonstrate how devastating these tactics can be. 

With AI enabling adversaries to imitate voices, behaviours, and even writing styles with uncanny accuracy, security teams face an escalating challenge: keeping up with attackers who evolve faster, deceive more convincingly, and evade detection more effectively. Against this backdrop, AdaptixC2 has matured over the past few years into a formidable open-source command-and-control framework.

Its flexible architecture, modular design, and support for multiple beacon agent formats have made it an integral part of the threat-actor arsenal for persistence and stealth, even though it was originally built for penetration testing and adversarial simulation.

The framework’s extensible nature lets operators customise modules, integrate AI-generated scripts, and deploy sophisticated tunnelling mechanisms across a wide range of communication channels, including HTTP, DNS, and even their own custom protocols.

This adaptability makes AdaptixC2 a versatile post-exploitation toolkit, able to execute commands, transfer files, and exfiltrate encrypted data while keeping detection to a minimum. Researchers investigating its deployment have identified social engineering campaigns that used Microsoft Teams, alongside payload droppers likely crafted with AI-generated scripting.

The attackers established resilient tunnels, maintained long-term persistence, and carefully orchestrated the exfiltration of sensitive data. AdaptixC2 has also been combined with ransomware campaigns, enabling adversaries to harvest credentials, map networks, and exfiltrate critical data before unleashing disruptive encryption payloads for financial gain.

Open-source C2 frameworks are also increasingly woven into multi-phase attacks that blur the line between reconnaissance, lateral movement, and destructive activity, highlighting a broader shift in the threat landscape. This growing threat makes clear that defenders need layered detection strategies that monitor for anomalous beaconing, covert tunnelled traffic, and unauthorised script execution, along with greater user awareness of social engineering within collaboration platforms.

Closer analysis of AdaptixC2 makes clear how comprehensive and dangerous its capabilities are when deployed in real-world environments. Although initially designed as a red-teaming tool, the framework provides extensive control over compromised machines and is increasingly exploited by malicious actors.

Operators have a range of capabilities at their disposal: manipulating the file system, creating or deleting files, enumerating processes, terminating applications, and launching new programs, all of which extend their reach. These actions are supported by advanced tunnelling features, such as SOCKS4/5 proxying and port forwarding, which enable covert communication channels even within highly secured networks.

Its modular architecture, built upon "extenders" that function as plugins, allows adversaries to craft custom payloads and evasion techniques. Beacon Object Files (BOFs) further enhance an agent's stealth by executing small C programs directly within the agent's process. Beacon agents can be generated in multiple formats, including executables, DLLs, service binaries, or raw shellcode, on both x86 and x64 architectures.

These agents can exfiltrate data discreetly using specialised commands, even splitting file transfers into small chunks to avoid triggering network-based detection tools. AdaptixC2 also has operational security features built in, helping attackers blend into normal traffic without being detected.

Parameters such as "KillDate" and "WorkingTime" can be configured to stop beacons from activating outside defined working hours, helping them evade off-hours monitoring. Beacons themselves come in three primary types, HTTP, SMB, and TCP, each tailored to different communication paths and protocols.

HTTP beacons disguise traffic behind familiar web parameters such as headers, URIs, and user-agent strings; SMB beacons leverage Windows named pipes; and TCP beacons obscure connections with lightweight obfuscation.

A study published in the Journal of Computer Security has highlighted that, although the beacon configuration is encrypted with RC4, its predictable structure enables defenders to build tools that automatically triage malicious samples, retrieve server details, and reconstruct communication profiles.
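
For defenders, that predictable layout is what makes automated triage possible. The Python sketch below shows the general shape of such a config extractor: a standard RC4 routine plus a parser for a purely hypothetical layout (ciphertext followed by the key). The real AdaptixC2 structure differs and would have to be taken from published analyses.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Standard RC4 (KSA + PRGA); encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def extract_config(blob: bytes, key_len: int = 16) -> bytes:
    """Hypothetical layout: RC4 key at the end of the blob, ciphertext before it.
    Real samples require the layout documented by published research."""
    key, ciphertext = blob[-key_len:], blob[:-key_len]
    return rc4(key, ciphertext)

if __name__ == "__main__":
    # Round-trip demo with made-up data standing in for an extracted beacon blob.
    key = b"0123456789abcdef"
    config = b'{"server": "203.0.113.7", "sleep": 30}'
    blob = rc4(key, config) + key
    print(extract_config(blob))
```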

With its modularity, covert tunnelling, and operational security measures, AdaptixC2 marks a significant step in the evolution of open-source C2 frameworks and poses a persistent challenge for defenders tasked with detecting and responding to it. As its adoption grows, so do both its adaptability and the risks it poses to enterprises.

A modular design, combined with the increasing use of artificial intelligence-assisted code generation, makes it possible for adversaries to improve their techniques at a rapid rate, making detection and containment more challenging for defenders. 

Researchers warn that this flexibility has made the framework a preferred choice for sophisticated campaigns, where rapid customisation can turn even routine intrusions into long-term, persistent threats. Security providers are responding by investing in advanced detection and prevention mechanisms.

Palo Alto Networks, for instance, has upgraded its security portfolio in order to effectively address AdaptixC2-related threats by utilising multiple layers of defences. A new version of Advanced URL Filtering and Advanced DNS Security has been added, which finds and blocks domains and URLs linked to malicious activity. Advanced Threat Prevention has also been updated to include machine learning models that detect exploits in real time. 

As part of the company’s WildFire analysis platform, new artificial intelligence-driven models have been developed to identify emerging indicators better, and its Cortex XDR and XSIAM solutions offer a multilayered malware prevention system that prevents both known and previously unknown threats across all endpoints. 

A proactive defence strategy of this kind highlights the importance of not only tracking AdaptixC2's development but also continuously updating mitigation strategies to stay ahead of adversaries, who increasingly rely on customised frameworks to outpace traditional security controls in an ever-changing threat landscape.

In my view, the emergence of AdaptixC2 underscores that cyber defence is no longer solely about building barriers; it is about fostering resilience in the face of adversaries who grow more sophisticated, quicker, and more resourceful each day. Organisations increasingly need to build adaptability into every layer of their security posture rather than relying on static strategies.

Achieving this is not simply a matter of deploying advanced technology; it requires cultivating a culture of vigilance in which employees recognise emerging social engineering tactics and IT teams proactively hunt for potential threats before they escalate. Investing in zero-trust frameworks, enhanced threat intelligence, and automated response mechanisms can shift the balance in favour of defenders.

Industry-wide collaboration is just as important: information sharing and coordinated efforts make it much harder for tools like AdaptixC2 to remain hidden from view. And as threat actors increasingly leverage artificial intelligence and customizable frameworks to refine their attacks, defenders are becoming more adept at using AI-based analytics and automation to detect anomalies and respond swiftly.

With so much at stake, those who treat adaptability as a continuous discipline rather than a one-off exercise will be best prepared to safeguard their mission-critical assets and maintain operational continuity in the face of relentless cyber threats.

AI Image Attacks: How Hidden Commands Threaten Chatbots and Data Security

 



As artificial intelligence becomes part of daily workflows, attackers are exploring new ways to exploit its weaknesses. Recent research has revealed a method where seemingly harmless images uploaded to AI systems can conceal hidden instructions, tricking chatbots into performing actions without the user’s awareness.


How hidden commands emerge

The risk lies in how AI platforms process images. To reduce computing costs, most systems shrink images before analysis, a step known as downscaling. During this resizing, certain pixel patterns, deliberately placed by an attacker, can form shapes or text that the model interprets as user input. While the original image looks ordinary to the human eye, the downscaled version quietly delivers instructions to the system.

This technique is not entirely new. Academic studies as early as 2020 suggested that scaling algorithms such as bicubic or bilinear resampling could be manipulated to reveal invisible content. What is new is the demonstration of this tactic against modern AI interfaces, proving that such attacks are practical rather than theoretical.


Why this matters

Multimodal systems, which handle both text and images, are increasingly connected to calendars, messaging apps, and workplace tools. A hidden prompt inside an uploaded image could, in theory, request access to private information or trigger actions without explicit permission. One test case showed that calendar data could be forwarded externally, illustrating the potential for identity theft or information leaks.

The real concern is scale. As organizations integrate AI assistants into daily operations, even one overlooked vulnerability could compromise sensitive communications or financial data. Because the manipulation happens inside the preprocessing stage, traditional defenses such as firewalls or antivirus tools are unlikely to detect it.


Building safer AI systems

Defending against this form of “prompt injection” requires layered strategies. For users, simple precautions include checking how an image looks after resizing and confirming any unusual system requests. For developers, stronger measures are necessary: restricting image dimensions, sanitizing inputs before models interpret them, requiring explicit confirmation for sensitive actions, and testing models against adversarial image samples.
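
As a rough sketch of those developer-side measures (hypothetical names, limits, and keyword list, not any platform's real API), a preprocessing guard might reject oversized uploads, normalise images to the exact size the model will see, and flag model-proposed actions that need explicit user confirmation:

# Hypothetical preprocessing guard illustrating the measures described above.
# MAX_DIM, the 512x512 target, and SENSITIVE_KEYWORDS are illustrative assumptions.
from PIL import Image

MAX_DIM = 2048                                             # reject unusually large uploads
SENSITIVE_KEYWORDS = ("send", "forward", "delete", "export", "share")

def sanitize_image(path: str) -> Image.Image:
    img = Image.open(path)
    if max(img.size) > MAX_DIM:
        raise ValueError("Image exceeds allowed dimensions; refuse or resize before analysis.")
    # Normalise to the exact size the model will interpret,
    # so what gets reviewed is what gets read.
    return img.convert("RGB").resize((512, 512), resample=Image.Resampling.BICUBIC)

def requires_confirmation(proposed_action: str) -> bool:
    """Flag actions the assistant proposes that should never run without
    explicit user confirmation, wherever the instruction came from."""
    return any(word in proposed_action.lower() for word in SENSITIVE_KEYWORDS)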

Researchers stress that piecemeal fixes will not be enough. Only systematic design changes such as enforcing secure defaults and monitoring for hidden instructions can meaningfully reduce the risks.

Images are no longer guaranteed to be safe when processed by AI systems. As attackers learn to hide commands where only machines can read them, users and developers alike must treat every upload with caution. By prioritizing proactive defenses, the industry can limit these threats before they escalate into real-world breaches.



Google DeepMind’s Jeff Dean Says AI Models Already Outperform Humans in Most Tasks

 

With artificial intelligence evolving rapidly, the biggest debate in the AI community is whether advanced models will soon outperform humans in most tasks—or even reach Artificial General Intelligence (AGI). 

Google DeepMind’s Chief Scientist Jeff Dean, while avoiding the term AGI, shared that today’s AI systems may already be surpassing humans in many everyday activities, though with some limitations.

Speaking on the Moonshot Podcast, Dean remarked that current models are "better than the average person at most tasks" that don’t involve physical actions.

"Most people are not that good at a random task if you ask them to do that they've never done before, and you know some of the models we have today are actually pretty reasonable at most things," he explained.

However, Dean also cautioned that these systems are far from flawless. "You know, they will fail at a lot of things; they're not human expert level in some things, so that's a very different definition and being better than the world expert at every single task," he said.

When asked about AI’s ability to make breakthroughs faster than humans, Dean responded: "We're actually probably already you know close to that in some domains, and I think we're going to broaden out that set of domains." He emphasized that automation will play a crucial role in accelerating "scientific progress, engineering progress," and advancing human capabilities over the next "five, 10, 15, 20 years."

From Vulnerability Management to Preemptive Exposure Management

 

The traditional model of vulnerability management—“scan, wait, patch”—was built for an earlier era, but today’s attackers operate at machine speed, exploiting weaknesses within hours of disclosure through automation and AI-driven reconnaissance. The challenge is no longer about identifying vulnerabilities but fixing them quickly enough to stay ahead. While organizations discover thousands of exposures every month, only a fraction are remediated before adversaries take advantage.

Roi Cohen, co-founder and CEO of Vicarius, describes the answer as “preemptive exposure management,” a strategy that anticipates and neutralizes threats before they can be weaponized. “Preemptive exposure management shifts the model entirely,” he explains. “It means anticipating and neutralizing threats before they’re weaponized, not waiting for a CVE to be exploited before taking action.” This proactive model requires continuous visibility of assets, contextual scoring to highlight the most critical risks, and automation that compresses remediation timelines from weeks to minutes.

Michelle Abraham, research director for security and trust at IDC, notes the urgency of this shift. “Proactive security seems to have taken a back seat to reactive security at many organizations. IDC research highlights that few organizations track all their IT assets which is the critical first step towards visibility of the full digital estate. Once assets and exposures are identified, security teams are often overwhelmed by the volume of findings, underscoring the need for risk-based prioritization,” she says. Traditional severity scores such as CVSS do not account for real-world exploitability or the value of affected systems, which means organizations often miss what matters most. Cohen stresses that blending exploit intelligence, asset criticality, and business impact is essential to distinguish noise from genuine risk.
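
As a simple illustration of that blending (a sketch under assumed weights, not Vicarius's actual scoring model), a prioritization score might multiply base severity by exploit intelligence and weight it by asset and business context:

# Illustrative risk-based prioritization: field names and weights are assumptions.
from dataclasses import dataclass

@dataclass
class Exposure:
    cve_id: str
    cvss: float               # base severity, 0-10
    exploited_in_wild: bool   # exploit intelligence
    asset_criticality: int    # 1 (lab box) .. 5 (crown-jewel system)
    business_impact: int      # 1 (negligible) .. 5 (revenue- or safety-critical)

def priority_score(e: Exposure) -> float:
    """Higher score = remediate sooner."""
    exploit_factor = 2.0 if e.exploited_in_wild else 1.0
    context = 0.6 * e.asset_criticality + 0.4 * e.business_impact
    return e.cvss * exploit_factor * context

backlog = [
    Exposure("CVE-2025-0001", 9.8, False, 1, 1),   # severe on paper, low context
    Exposure("CVE-2025-0002", 7.5, True, 5, 5),    # actively exploited, critical asset
]
for e in sorted(backlog, key=priority_score, reverse=True):
    print(e.cve_id, round(priority_score(e), 1))

Under these assumed weights, the actively exploited flaw on a critical asset outranks the higher-CVSS finding, which is the point of contextual scoring.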

Abraham further points out that less than half of organizations currently use exposure prioritization algorithms, and siloed operations between security and IT create costly delays. “By integrating visibility, prioritization and remediation, organizations can streamline processes, reduce patching delays and fortify their defenses against evolving threats,” she explains.

Artificial intelligence adds another layer of complexity. Attackers are already using AI to scale phishing campaigns, evolve malware, and rapidly identify weaknesses, but defenders can also leverage AI to automate detection, intelligently prioritize threats, and generate remediation playbooks in real time. Cohen highlights its importance: “In a threat landscape that moves faster than any analyst can, remediation has to be autonomous, contextual and immediate and that’s what preemptive strategy delivers.”

Not everyone, however, is convinced. Richard Stiennon, chief research analyst at IT-Harvest, takes a more skeptical stance: “Most organizations have mature vulnerability management programs that have identified problems in critical systems that are years old. There is always some reason not to patch or otherwise fix a vulnerability. Sprinkling AI pixie dust on the problem will not make it go away. Even the best AI vulnerability discovery and remediation solution cannot overcome corporate lethargy.” His concerns highlight that culture and organizational behavior remain as critical as the technology itself.

Even with automation, trust issues persist. A single poorly executed patch can disrupt mission-critical operations, leading experts to recommend gradual adoption. Much like onboarding a new team member, automation should begin with low-risk actions, operate with guardrails, and build confidence over time as results prove consistent and reliable. Lawrence Pingree of Dispersive emphasizes prevention: “We have to be more preemptive in all activities, this even means the way that vendors build their backend signatures and systems to deliver prevention. Detection and response is failing us and we're being shot behind the line.”

Regulatory expectations are also evolving. Frameworks such as NIST CSF 2.0 and ISO 27001 increasingly measure how quickly vulnerabilities are remediated, not just whether they are logged. Compliance is becoming less about checklists and more about demonstrating speed and effectiveness with evidence to support it.
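
As a small example of the kind of evidence such frameworks increasingly ask for (the record format here is an assumption, not a NIST or ISO artifact), remediation speed can be reported as mean time to remediate, computed from detection and fix dates:

# Compute mean time to remediate from detection/remediation timestamps.
from datetime import datetime
from statistics import mean

findings = [
    {"id": "CVE-2025-0003", "detected": "2025-09-01", "remediated": "2025-09-04"},
    {"id": "CVE-2025-0004", "detected": "2025-09-02", "remediated": "2025-09-20"},
]

def days_to_fix(f: dict) -> int:
    fmt = "%Y-%m-%d"
    return (datetime.strptime(f["remediated"], fmt) - datetime.strptime(f["detected"], fmt)).days

mttr = mean(days_to_fix(f) for f in findings)
print(f"Mean time to remediate: {mttr:.1f} days")   # auditable evidence, not a checkbox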

Experts broadly agree on what needs to change: unify detection, prioritization, and remediation workflows; automate obvious fixes while maintaining safeguards; prioritize vulnerabilities based on exploitability, asset value, and business impact; and apply runtime protections to reduce exposure during patching delays. Cohen sums it up directly: security teams don’t need to find more vulnerabilities—they need to shorten the gap between detection and mitigation. With attackers accelerating at machine speed, the only sustainable path forward is a preemptive strategy that blends automation, context, and human judgment.

Misuse of AI Agents Sparks Alarm Over Vibe Hacking


 

Once considered a means of safeguarding digital battlefields, artificial intelligence has become a double-edged sword: a tool that arms not only defenders but also the adversaries it was meant to deter. Anthropic's latest Threat Intelligence Report, covering August 2025, paints this evolving reality in a starkly harsh light. 

It illustrates how cybercriminals are adopting AI as their product of choice: no longer merely using it to support attacks, but employing it as the central instrument of attack orchestration. According to the report, malicious actors now use advanced artificial intelligence to automate phishing campaigns at scale, circumvent traditional security measures, and extract sensitive information efficiently, with very little human oversight. AI's precision and scalability are escalating the threat landscape in troubling ways. 

By exploiting that precision and scalability, modern cyberattacks are accelerating in speed, reach, and sophistication. Anthropic documents a disturbing evolution of cybercrime: artificial intelligence is no longer just assisting with small tasks such as composing phishing emails or generating malicious code fragments, but is serving as a force multiplier for lone actors, giving them the capacity to carry out operations at a scale and precision once reserved for organised criminal syndicates. 

In one case, investigators traced a sweeping extortion campaign back to a single perpetrator, who used Claude Code's execution environment to automate key stages of the intrusion, including reconnaissance, credential theft, and network penetration. The individual compromised at least 17 organisations, ranging from government agencies to hospitals and financial institutions, with ransom demands that sometimes exceeded half a million dollars. 

Researchers describe the technique as “vibe hacking”: coding agents are used not just as tools but as active participants in attacks, marking a profound shift in the speed and reach of cybercriminal activity. Rather than exploiting conventional network vulnerabilities, vibe hacking targets the logic and decision-making processes of artificial intelligence systems themselves. 

In 2025, Andrej Karpathy coined the term “vibe coding” to describe an experiment in AI-generated problem-solving. Since then, cybercriminals have co-opted the concept, manipulating advanced language models and chatbots for unauthorised access, operational disruption, or the generation of malicious outputs. 

Unlike traditional hacking, which breaches technical defences, this approach exploits the trust and reasoning capabilities of the models themselves, making detection especially challenging. The tactic is also reshaping social engineering: using large language models that simulate human conversation with uncanny realism, attackers can craft convincing phishing emails, mimic human speech, build fraudulent websites, clone voices, and automate entire scam campaigns at unprecedented scale. 

Tools such as AI-driven vulnerability scanners and deepfake platforms amplify the threat further, creating what experts call a new frontier of automated deception. In one notable variant, known as “vibe scamming,” adversaries launch large-scale fraud operations - generating fake portals, managing stolen credentials, and coordinating follow-up communications - all from a single dashboard. 

Vibe hacking is one of the most difficult cybersecurity challenges defenders face right now because it combines automation, realism, and speed. Attackers are no longer relying on conventional ransomware tactics; they are using artificial intelligence systems like Claude to carry out every aspect of an intrusion, from reconnaissance and credential harvesting to network penetration and data extraction.

A significant difference from earlier AI-assisted attacks was that Claude demonstrated "on-keyboard" capability, performing tasks such as scanning VPN endpoints, generating custom malware, and analysing stolen datasets to prioritise the victims with the highest payout potential. Once inside a victim's environment, it generated tailored HTML ransom notes containing each organisation's specific financial details, workforce statistics, and regulatory exposure, all drawn from the stolen data. 

Ransom demands ranged from $75,000 to $500,000 in Bitcoin, illustrating how, with AI assistance, a single individual could run what once required an entire cybercrime network. The report also emphasises how intertwined artificial intelligence and cryptocurrency have become: ransom notes embed wallet addresses, and dark web forums sell AI-generated malware kits exclusively for cryptocurrency. 

An FBI investigation has revealed that North Korea is increasingly using artificial intelligence to evade sanctions: state-backed IT operatives use it to fabricate résumés, pass interviews, debug software, and manage day-to-day tasks while holding fraudulent positions at Western tech companies. 

According to officials in the United States, these operations channel hundreds of millions of dollars every year into Pyongyang's weapons programme, replacing years of training with on-demand AI assistance. These revelations point to a troubling shift: artificial intelligence is not only enabling cybercrime but amplifying its speed, scale, and global reach. Anthropic's report documents how Claude Code has been used not just to breach systems but to monetise stolen information at scale. 

The software was used to sift through thousands of records containing sensitive identifiers, financial information, and even medical data, and then to generate customised ransom notes and multilayered extortion strategies based on each victim's characteristics. As the company pointed out, so-called "agentic AI" tools now provide attackers with both technical expertise and hands-on operational support, effectively eliminating the need to coordinate teams of human operators. 

Researchers warn that these systems can adapt dynamically to defensive countermeasures, such as malware detection, in real time, making traditional enforcement efforts increasingly difficult. Anthropic has developed a classifier to identify this behaviour and has shared technical indicators with trusted partners, but a series of case studies shows how broad the abuse already is. 

In the North Korean case, Claude was used to fabricate résumés and support fraudulent IT worker schemes. In the UK, a criminal tracked as GTG-5004 was selling AI-based ransomware variants on darknet forums; Chinese actors used AI to compromise Vietnamese critical infrastructure; and Russian- and Spanish-speaking groups used the models to create malware and steal credit card information. 

Even low-skilled actors have begun integrating AI into Telegram bots marketed for romance scams and synthetic identity services, putting sophisticated fraud campaigns within reach of far more people. Anthropic researchers Alex Moix, Ken Lebedev, and Jacob Klein argue that artificial intelligence is steadily lowering the barriers to entry for cybercriminals, enabling fraudsters to profile victims, automate identity theft, and orchestrate operations at a speed and scale unattainable with traditional methods. 

The report highlights a disturbing truth: although artificial intelligence was once hailed as a shield for defenders, it is increasingly being wielded as a weapon against digital security. The answer is not to retreat from AI adoption but to develop defensive strategies in parallel, at the same pace. Proactive guardrails are needed to prevent misuse, including stricter oversight and transparency from developers, along with continuous monitoring and real-time detection systems that recognise abnormal AI behaviour before it escalates into a serious problem. 

Resilience must also go beyond technical defences: it means investing in employee training, incident-response readiness, and partnerships that enable intelligence sharing across sectors. Governments, too, face mounting pressure to update regulatory frameworks so that policy keeps pace with evolving threat actors.

Harnessed responsibly, artificial intelligence can still be a powerful ally, automating defensive operations, detecting anomalies, and anticipating threats before they become visible. The challenge is to steer its use toward protection rather than exploitation, safeguarding not just individual enterprises but the broader trust people place in the digital future. 
