
The Strategic Imperatives of Agentic AI Security


 

In cybersecurity, agentic artificial intelligence is emerging as a transformative force, fundamentally changing how digital threats are perceived and handled. Unlike conventional AI systems that operate within predefined parameters, agentic AI systems make autonomous decisions by interacting dynamically with digital tools, complex environments, other AI agents, and even sensitive data sets.

This shift marks a new paradigm in which AI not only supports decision-making but also initiates and executes actions independently in pursuit of its objectives. While this evolution brings significant opportunities for innovation, such as automated threat detection, intelligent incident response, and adaptive defence strategies, it also poses some of the field's most difficult challenges.

The same capabilities that make agentic AI powerful for defenders can be exploited by adversaries. If autonomous agents are compromised or misaligned with their objectives, they can act at scale, quickly and unpredictably, rendering traditional defence mechanisms inadequate. As organisations integrate agentic AI into their operations, they must adopt a dual security posture.

They need to leverage the strengths of agentic AI to enhance their security frameworks while also preparing for the threats it poses. Cybersecurity principles around robust oversight, alignment protocols, and adaptive resilience mechanisms must be rethought strategically so that the autonomy of AI agents is matched by controls of equal sophistication. In this new era of AI-driven autonomy, securing agentic systems is more than a technical requirement.

It is a strategic imperative. The agentic AI development lifecycle comprises several interdependent phases that ensure the system is not only intelligent and autonomous but also aligned with organisational goals and operational needs. This structured progression makes agents more effective, reliable, and ethically sound across a wide variety of use cases.

The first critical phase, Problem Definition and Requirement Analysis, lays the foundation for everything that follows. Here, organisations must articulate a clear, strategic understanding of the problem space the AI agent is meant to address.

This means setting clear business objectives, defining the specific tasks the agent must perform, and assessing operational constraints such as infrastructure availability, regulatory requirements, and ethical obligations. A thorough requirements analysis streamlines system design, minimises scope creep, and helps avoid costly revisions later in deployment.

This phase also helps stakeholders align the agent's technical capabilities with real-world needs so that it delivers measurable results. Next comes the Data Collection and Preparation phase, arguably the most vital in the lifecycle. Whatever form an agentic AI system takes, its intelligence depends directly on the quality and comprehensiveness of the data it is trained on.

At this stage, relevant datasets are gathered from internal and trusted external sources, then meticulously cleaned, indexed, and transformed to ensure consistency and usability. Advanced preprocessing techniques such as augmentation, normalisation, and class balancing are applied to reduce bias and mitigate model failures.
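To make that preparation step concrete, here is a minimal sketch, assuming a hypothetical tabular dataset held in a pandas DataFrame with a `label` column; it normalises numeric features and rebalances classes by upsampling, which is one common approach rather than the only one.

```python
# Minimal sketch of the data-preparation step described above (hypothetical dataset and columns).
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

def prepare_dataset(df: pd.DataFrame, label_col: str = "label") -> pd.DataFrame:
    # Basic cleaning: drop duplicates and rows with missing values.
    df = df.drop_duplicates().dropna()

    # Normalise feature columns (assumed numeric) so they share a common scale.
    feature_cols = [c for c in df.columns if c != label_col]
    df[feature_cols] = StandardScaler().fit_transform(df[feature_cols])

    # Class balancing: upsample each class to the size of the largest one.
    largest = df[label_col].value_counts().max()
    balanced = [
        resample(group, replace=True, n_samples=largest, random_state=0)
        for _, group in df.groupby(label_col)
    ]
    return pd.concat(balanced).sample(frac=1.0, random_state=0)  # shuffle before training
```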

A high-quality, representative dataset is what allows an AI agent to function effectively across varied circumstances and edge cases. Together, these phases form the backbone of agentic AI development, grounding the system in real business needs and backing it with data that is dependable, ethical, and actionable. Organisations that invest in thorough upfront analysis and meticulous data preparation stand a significantly better chance of deploying agentic AI solutions that are scalable, secure, and aligned with long-term strategic goals.

The risks posed by an agentic AI system are more than technical failures; they are deeply systemic. Agentic AI is not a passive system that executes rules; it is an active system that makes decisions, takes action, and adapts as it learns. That dynamic autonomy is powerful, but it also introduces complexity and unpredictability, making failures harder to detect until significant damage has been done.

Agentic AI systems differ from traditional software in that they operate independently and can evolve their behaviour over time. OWASP's Top 10 for LLM Applications (2025) highlights how agents can be manipulated into misusing tools or storing deceptive information to the detriment of users' security. Without rigorous monitoring, this very autonomy becomes a source of danger.

Corrupted data can penetrate an agent's memory, so that future decisions are influenced by falsehoods. Over time these errors compound, leading to cascading hallucinations in which the system repeatedly generates credible but inaccurate outputs that reinforce and validate one another, making the deception increasingly difficult to detect.

Agentic systems are also susceptible to more traditional forms of exploitation, such as privilege escalation, in which an agent impersonates a user or gains access to restricted functions without permission. In extreme scenarios, agents may even override their constraints, intentionally or unintentionally pursuing goals that do not align with those of the user or organisation. Such deceptive behaviour is difficult to manage, both ethically and operationally. Resource exhaustion is another pressing concern.

Agents can be overloaded by excessive task queues, exhausting memory, computing bandwidth, or third-party API quotas, whether by accident or through malicious attack. These problems not only degrade performance but can cause critical system failures, particularly in real-time environments. The situation is worse still when agents are deployed on lightweight or experimental multi-agent control platforms (MCPs) that lack essential features such as logging, user authentication, or third-party validation mechanisms.

In such situations, tracking decision paths or identifying the root cause of failures becomes difficult or impossible, leaving security teams blind to the system's internal behaviour as well as to external threats. As agentic AI continues to integrate into high-stakes environments, these systemic vulnerabilities must be treated as a core design consideration rather than a peripheral concern.

Ensuring that agents act transparently, traceably, and ethically is essential not only for safety but also for building the long-term trust that enterprise adoption requires. Several core functions give agentic AI systems their agency: the ability to make autonomous decisions, behave adaptively, and pursue long-term goals. The essence of agentic intelligence is autonomy, meaning agents operate without constant human oversight.

They perceive their environment through data streams or sensors, evaluate contextual factors, and execute actions consistent with predefined objectives. Autonomous warehouse robots that adjust their path in real time without human input are one example, demonstrating both situational awareness and self-regulation. Unlike reactive AI systems, which respond to isolated prompts, agentic systems are designed to pursue complex, sometimes long-term goals without human intervention.

Guided by explicit or implicit instructions or reward systems, these agents can break high-level tasks, such as organising a travel itinerary, into actionable subgoals that are dynamically adjusted as new information arrives. To formulate step-by-step strategies, agents rely on planner-executor architectures and techniques such as chain-of-thought prompting or ReAct (a minimal sketch follows below).
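The sketch below shows what such a ReAct-style loop can look like. The `call_llm` client and the tool registry are placeholders supplied by the caller, and the prompt format is a simplified assumption rather than any particular framework's convention.

```python
# Hypothetical ReAct-style loop: the model alternates Thought -> Action -> Observation
# until it emits "Final Answer:". `call_llm` and `tools` are supplied by the caller.
from typing import Callable, Dict

def react_agent(task: str,
                call_llm: Callable[[str], str],
                tools: Dict[str, Callable[[str], str]],
                max_steps: int = 5) -> str:
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        reply = call_llm(transcript + "Thought:")        # model reasons about the next step
        transcript += f"Thought:{reply}\n"
        if "Final Answer:" in reply:                     # the model has finished
            return reply.split("Final Answer:", 1)[1].strip()
        if "Action:" in reply:                           # e.g. "Action: search[cheap flights to Oslo]"
            name, _, arg = reply.split("Action:", 1)[1].strip().partition("[")
            tool = tools.get(name.strip(), lambda _: "unknown tool")
            transcript += f"Observation: {tool(arg.rstrip(']'))}\n"   # feed the result back in
    return "Stopped without a final answer."
```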

To optimise outcomes, these plans may use graph-based search algorithms or simulate multiple future scenarios. Reasoning further enhances the agent's ability to assess alternatives, weigh trade-offs, and apply logical inference. Large language models often serve as the reasoning engine, supporting task decomposition and multi-step problem-solving. The final core function, memory, provides continuity.

By drawing on previous interactions, results, and context, often stored in vector databases, agents can refine their behaviour over time, learning from experience and avoiding redundant actions. Securing an agentic AI system takes more than incremental changes to existing security protocols; it requires a complete rethink of operational and governance models. A system capable of autonomous decision-making and adaptive behaviour must be treated as an enterprise entity in its own right.
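The memory pattern described above can be pictured roughly as follows: past interactions are stored as embedding vectors and the most similar ones are recalled for the next decision. The `embed` function is a placeholder for a real embedding model, and a production system would use a proper vector database rather than an in-memory list.

```python
# Minimal in-memory sketch of vector-store agent memory (embed() is a caller-supplied placeholder).
import numpy as np
from typing import Callable, List, Tuple

class AgentMemory:
    def __init__(self, embed: Callable[[str], np.ndarray]):
        self.embed = embed
        self.entries: List[Tuple[np.ndarray, str]] = []

    def remember(self, text: str) -> None:
        self.entries.append((self.embed(text), text))          # store (vector, raw text)

    def recall(self, query: str, k: int = 3) -> List[str]:
        q = self.embed(query)
        def cosine(v: np.ndarray) -> float:
            return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v) + 1e-9))
        ranked = sorted(self.entries, key=lambda e: cosine(e[0]), reverse=True)
        return [text for _, text in ranked[:k]]                # k most similar past items
```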

Like any influential digital actor, AI agents require rigorous scrutiny, continuous validation, and enforceable safeguards throughout their lifecycle. A robust security posture starts with controlling non-human identities: strong authentication mechanisms combined with behavioural profiling and anomaly detection, to identify and neutralise impersonation or spoofing attempts before damage occurs.

Identity cannot remain static in dynamic systems; it must evolve with the agent's behaviour and role in its environment. Securing retrieval-augmented generation (RAG) systems at the source is equally important. Organisations need to enforce rigorous access policies over knowledge repositories, examine embedding spaces for adversarial interference, and continually evaluate their similarity-matching methods to prevent unintended data leaks or model manipulation.
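One way to picture "rigorous access policies over knowledge repositories" is to filter retrieved passages by the calling agent's entitlements before they ever reach the model. The sketch below is a hypothetical illustration; the document schema, role names, and `search` function are assumptions, not part of any cited guidance.

```python
# Hypothetical access-controlled retrieval: documents carry an allowed-roles tag,
# and results are filtered by the requesting agent's role before reaching the LLM.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Document:
    text: str
    allowed_roles: frozenset  # e.g. frozenset({"support_agent", "auditor"})

def retrieve_for_agent(query: str,
                       agent_role: str,
                       search: Callable[[str], List[Document]],
                       k: int = 5) -> List[str]:
    candidates = search(query)                                 # similarity search over the store
    permitted = [d for d in candidates if agent_role in d.allowed_roles]
    return [d.text for d in permitted[:k]]                     # only policy-cleared passages
```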

Automated red teaming is essential for identifying and mitigating emerging threats, not just before deployment but continuously. It involves adversarial testing and stress simulations designed to expose behavioural anomalies, misalignment with intended goals, and configuration weaknesses in real time. Comprehensive governance frameworks are equally imperative for generative and agentic AI to succeed.

Agent behaviour must be codified in enforceable policies, runtime oversight must be enabled, and detailed, tamper-evident logs must be maintained for auditing and lifecycle tracking. The shift towards agentic AI is more than a technological evolution; it represents a profound change in how decisions are made, delegated, and monitored. Adoption of these systems often outpaces the ability of traditional security infrastructures to adapt.

Without meaningful oversight, clearly defined responsibilities, and strict controls, AI agents could inadvertently or maliciously exacerbate risk rather than deliver on their promise. Organisations need to ensure that agents operate within well-defined boundaries, under continuous observation, aligned with organisational intent, and held to the same standards as human decision-makers.

Agentic AI offers enormous benefits, but it carries equally large risks. To be truly transformative, these systems must be not just intelligent but trustworthy and transparent, governed by rules as precise and robust as those they help enforce.

Fake AI Tools Are Being Used to Spread Dangerous Malware

 



As artificial intelligence becomes more popular, scammers are using its hype to fool people. A new warning reveals that hackers are creating fake AI apps and promoting them online to trick users into downloading harmful software onto their devices.

These scams are showing up on social media apps like TikTok, where videos use robotic-sounding voices to guide viewers on how to install what they claim are “free” or “pirated” versions of expensive software. But when people follow the steps in these videos, they end up installing malware instead — which can secretly steal sensitive information from their devices.

Security researchers recently found that cybercriminals are even setting up realistic-looking websites for fake AI products. They pretend to offer free access to well-known tools like Luma AI or Canva Dream Lab. These fake websites often appear in ads on platforms like Facebook and LinkedIn, making them seem trustworthy.

Once someone downloads the files from these scam sites, their device can be infected with malware. This software may secretly collect usernames, passwords, saved login details from browsers like Chrome and Firefox, and even access personal files. It can also target cryptocurrency wallets and other private data.

One known hacker group based in Vietnam has been pushing out malware through these methods. The malicious programs don’t go away even after restarting the computer, and in some cases, hackers can take full remote control of the infected device.

Some fake AI tools are even disguised as paid services. For instance, one scam pretends to offer a free one-year trial of a tool called “NovaLeadsAI,” followed by a paid subscription. But downloading this tool installs ransomware — a type of malware that locks all your files and demands a large payment to unlock them. One version asked victims for $50,000 in cryptocurrency, falsely claiming the money would go to charity.

Other fake tools include ones pretending to be ChatGPT or video-making apps. Some of these can destroy files or make your entire device unusable.

To protect yourself, avoid downloading AI apps from unknown sources or clicking on links shared in social media ads. Stick to official websites, and if an offer seems unbelievably good, it’s probably a scam. Always double-check before installing any new program, especially ones promising free AI features.

London Startup Allegedly Deceived Microsoft with Fake AI Engineers

 


Serious allegations of fraud have been made against London-based startup Builder.ai, once considered a disruptor of software development and valued at $1.5 billion; the company is now in bankruptcy. Builder.ai claimed that its artificial intelligence-based platform would revolutionise app development, promising that with its AI assistant, Natasha, building software would be as easy as ordering pizza.

Recent revelations, however, point to a starkly different reality: instead of cutting-edge AI technology, Builder.ai reportedly relied on hundreds of human developers in India, who manually executed customer requests whose output was then presented as AI-generated.

On the strength of these misrepresentations, investors including Microsoft and the Qatar Investment Authority put over $445 million into the company, believing they were backing a scalable, AI-based solution. The scandal has sparked a wider conversation about transparency, ethics, and the hype-driven nature of the startup ecosystem, and has raised serious concerns about due diligence in the AI investment landscape.

Builder.ai was founded in 2016 by entrepreneur Sachin Dev Duggal under the name Engineer.ai, with a mission to revolutionise app development. Its AI-powered, no-code platform was touted as dramatically simplifying the creation of software applications by cutting down the amount of code required.

Builder.ai quickly captured the attention of investors worldwide, securing significant funding from high-profile backers including Microsoft, the Qatar Investment Authority, the International Finance Corporation (IFC), and SoftBank's DeepCore.

The company highlighted its proprietary AI assistant, Natasha, as a technological breakthrough capable of building custom software without human intervention, and made that claim central to its value proposition. On the strength of this narrative, the startup secured more than $450 million in funding and achieved unicorn status with a peak valuation of $1.5 billion.

In its early years, Builder.ai was widely seen as a pioneering force, reducing reliance on traditional engineering teams and democratising software development. Beneath the slick marketing campaigns and investor confidence, however, lay a very different operational model, one that relied heavily on human engineers rather than advanced artificial intelligence.

Builder.ai's public image unravelled dramatically as its promotional promises diverged from its internal practices. Once regarded as a rising star in the global tech industry, the company faced mounting scrutiny and an eventual collapse that revealed troubling undercurrents in the AI startup sector.

From the beginning, Builder.ai was marketed as a groundbreaking platform for creating custom applications, promising automation, scale, and cost savings. Natasha, the company's flagship AI assistant, was widely advertised as enabling software to be developed without code. Yet internal testimony, lawsuits, and investigation findings have since painted a far more troubling picture.

Despite claims of sophisticated AI integration, Natasha reportedly served as little more than an interface for collecting client requirements, while the actual development work was done by large engineering teams in India. According to whistleblowers, including former executives, Builder.ai had no genuine AI infrastructure in place.

Internal documentation indicates that applications were marketed as "80% built by AI" when the underlying tools were rudimentary at best. Former executive Robert Holdheim filed a $5 million lawsuit alleging wrongful termination after he raised concerns about deceptive practices and investor misrepresentation. His case catalysed broader scrutiny, prompting allegations of both financial misconduct and technological misrepresentation.

After founder Sachin Dev Duggal stepped aside, Manpreet Ratia took over as CEO in 2025 and began by stabilising operations. Under Ratia's leadership, an independent financial audit was commissioned that revealed massive discrepancies between reported and actual revenue.

Builder.ai had claimed more than $220 million in revenue for 2024, while the true figure was closer to $50 million. In response, Viola Credit, one of the company's lenders, swiftly seized $37 million from its accounts, raising alarm among creditors and investors alike. In a last-ditch measure, the company released a press statement acknowledging that it could no longer sustain payroll or its global operations, with only $5 million remaining in restricted funds.

In the statement, the company acknowledged that it had been unable to recover from past decisions and historic challenges. Bankruptcy filings followed in short order across multiple jurisdictions, including India, the United Kingdom, and the United States, resulting in the layoff of over 1,000 employees and the suspension of numerous client projects.

The controversy deepened with new allegations of revenue round-tripping with Indian technology company VerSe, believed to be a strategy for inflating financial performance and attracting new investors. Reports also revealed that Builder.ai had defaulted on substantial payments for cloud services, owing approximately $85 million to Amazon and $30 million to Microsoft.

These developments triggered a federal investigation, with authorities requesting access to the company's finances and client contracts. The Builder.ai scandal also points to a broader issue in the tech sector: "AI washing", in which startups exaggerate or misstate their artificial intelligence capabilities to win funding and market traction.

Phil Brunkard, Principal Analyst at Info-Tech Research Group, summarised the crisis succinctly: "Many of these so-called AI companies scaled based on narrative rather than infrastructure." As regulatory bodies tighten scrutiny of AI marketing claims, Builder.ai increasingly serves as a cautionary tale for investors, entrepreneurs, and the technology industry as a whole.

Concerns about the legitimacy of Builder.ai's AI capabilities date back to a 2019 report in The Wall Street Journal, which questioned how heavily the company relied on human labour rather than artificial intelligence. Despite a marketing narrative emphasising automation and machine learning, its internal operations reportedly painted a different picture.

The article quoted former employees describing Builder.ai as a platform driven primarily by engineering rather than AI, starkly contradicting the company's claim to be an AI-first, no-code platform. Many investors and stakeholders ignored these early warnings, even though they hinted at deeper structural inconsistencies in the startup's operations.

When Manpreet Ratia took over as CEO in February 2025, succeeding founder Sachin Dev Duggal, the extent of the company's internal dysfunction became clear. Though tasked with restoring investor confidence and operational transparency, Ratia quickly discovered that figures had been misreported and data manipulated for years to inflate the company's valuation and public image.

Following these revelations, U.S. federal prosecutors opened an investigation into the company's business practices. Authorities have formally requested access to Builder.ai's financial records, internal communications, and customer data as part of a broader inquiry into possible fraud, deception of investors, and violations related to false descriptions of AI capabilities.

The failure of Builder.ai is a clear sign that the investment and innovation ecosystems surrounding artificial intelligence need urgent recalibration. With capital continuing to flow rapidly into AI-powered ventures, stakeholders must raise their standards for due diligence, technical validation, and governance oversight.

Investor enthusiasm for innovative startups must be tempered by rigorous evaluation of technical capabilities that goes beyond polished pitch decks and strategic storytelling. For founders, the case reinforces the importance of transparency and sustainability over short-term hype; for regulators, it underscores the need for frameworks that hold companies accountable for misleading product representations and financial disclosures.

Regulators are becoming increasingly alert to "AI washing" and are developing strategies to address it. In a sector built on trust, credibility has become a cornerstone of long-term viability, and the collapse of Builder.ai is no longer just a singular failure; it is a call to action for the tech industry to place substance above spectacle in the age of artificial intelligence.

Unimed AI Chatbot Exposes Millions of Patient Messages in Major Data Leak

 

A significant data exposure involving Unimed, one of the world’s largest healthcare cooperatives, has come to light after cybersecurity researchers discovered an unsecured database containing millions of sensitive patient-doctor communications.

The discovery was made by cybersecurity experts at Cybernews, who traced the breach to an unprotected Kafka instance. According to their findings, the exposed logs were generated from patient interactions with “Sara,” Unimed’s AI-driven chatbot, as well as conversations with actual healthcare professionals.

Researchers revealed that they intercepted more than 140,000 messages, although logs suggest that over 14 million communications may have been exchanged through the chat system.

“The leak is very sensitive as it exposed confidential medical information. Attackers could exploit the leaked details for discrimination and targeted hate crimes, as well as more standard cybercrime such as identity theft, medical and financial fraud, phishing, and scams,” said Cybernews researchers.

The compromised data included uploaded images and documents, full names, contact details such as phone numbers and email addresses, message content, and Unimed card numbers.

Experts warn that this trove of personal data, when processed using advanced tools like Large Language Models (LLMs), could be weaponized to build in-depth patient profiles. These could then be used to orchestrate highly convincing phishing attacks and fraud schemes.

Fortunately, the exposed system was secured after Cybernews alerted Unimed. The organization issued a statement confirming it had resolved the issue:

“Unimed do Brasil informs that it has investigated an isolated incident, identified in March 2025, and promptly resolved, with no evidence, so far, of any leakage of sensitive data from clients, cooperative physicians, or healthcare professionals,” the notification email stated. “An in-depth investigation remains ongoing.”

Healthcare cooperatives like Unimed are nonprofit entities owned by their members, aimed at delivering accessible healthcare services. This incident raises fresh concerns over data security in an increasingly AI-integrated medical landscape.

AI Fraud Emerges as a Growing Threat to Consumer Technology


 

The advent of generative AI has ushered in a paradigm shift in cybersecurity, transforming the tactics, techniques, and procedures that malicious actors have long relied on. No longer needing to spend heavily on time and resources, threat actors are using generative AI to launch sophisticated attacks with unprecedented pace and efficiency.

These tools let cybercriminals scale their operations while lowering the technical and financial barriers to entry, crafting highly convincing phishing emails and automating malware development. This rapid escalation poses a serious challenge to cybersecurity professionals.

Old defence mechanisms and threat models may no longer suffice in an environment where attackers continuously adapt with AI-driven precision. To stay ahead of the curve, security teams need to track current trends in AI-enabled threats, understand historical attack patterns, and extract actionable insights from both.

By learning from previous incidents and anticipating how generative AI will be used next, organisations can improve their readiness to detect, defend against, and respond to a new breed of intelligent cyber threats. Implementing proactive, AI-aware cybersecurity strategies has never been more urgent. India's digital economy has grown rapidly in recent years, supported by platforms like UPI for seamless payments and Digital India for accessible e-governance, but cyber threats have grown more complex alongside it, fuelling cybercrime.

While these technological advances bring significant convenience and economic opportunity, they have also exposed users to a new generation of AI-driven cyber risks. Where AI was once primarily a tool for innovation and efficiency, cybercriminals now use it to carry out highly customised, scalable, and deceptive attacks.

Unlike traditional scams, AI-enabled threats can mimic human behaviour, produce realistic messages, and adapt to targets in real time. Malicious actors can create phishing emails that closely mimic official correspondence, use deepfakes to fool the public, and automate large-scale scams with alarming ease.

The impact is particularly severe in India, where millions of users, many of them online for the first time, may lack the awareness or tools to detect such sophisticated attacks. With global cybercrime losses expected to reach trillions of dollars over the next decade, India's digitally active population is becoming an increasingly attractive target.

Rapid technology adoption combined with gaps in digital literacy has made AI-powered fraud increasingly common. It is therefore imperative that government agencies, private businesses, and individuals coordinate efforts to understand the evolving threat landscape and develop robust, AI-aware cybersecurity strategies.

Artificial intelligence (AI) is the branch of computer science concerned with building systems capable of performing tasks that typically require human intelligence, such as reasoning, learning, problem-solving, perception, and language understanding. At its simplest, AI involves developing algorithms and computational models that can process huge amounts of data, identify meaningful patterns, adapt to new inputs, and make decisions with minimal human intervention.

AI lets machines emulate cognitive functions such as recognising speech, interpreting images, comprehending natural language, and predicting outcomes, enabling them to automate work, improve efficiency, and solve complex real-world problems. Its applications extend across industries, from healthcare and finance to manufacturing, autonomous vehicles, and cybersecurity. Machine learning (ML), a crucial subset of AI, enables systems to learn and improve from experience without being explicitly programmed for every possible scenario.

ML algorithms analyse data, identify patterns, and refine themselves over time in response to feedback, becoming more accurate as they go. Deep learning (DL), a more advanced subset of machine learning, uses layered neural networks loosely modelled on the human brain and excels at processing unstructured data such as images, audio, and natural language. It powers technologies like facial recognition, autonomous driving, and conversational AI models.

ChatGPT is one of the clearest examples of deep learning in action, using large-scale language models to understand user queries and respond in a human-like way. As these technologies evolve, their impact across sectors is growing rapidly and offering immense benefits, but they also present new vulnerabilities that cybercriminals are increasingly keen to exploit for profit.

The rise of generative AI technologies, especially large language models (LLMs), has significantly changed the fraud landscape, providing powerful tools for defending against fraud as well as new opportunities for exploitation. While these technologies enhance security teams' ability to detect and mitigate threats, they also allow cybercriminals to devise sophisticated fraud schemes that bypass conventional safeguards and conceal their identities.

Fraudsters are increasingly using generative AI to craft attacks that are both more persuasive and harder to detect. AI-assisted phishing in particular has surged: language models generate emails and messages that mimic the tone, structure, and branding of legitimate communications, eliminating the poor grammar and suspicious formatting that once gave scams away.

A related development is the use of deepfake technology, including voice cloning and video manipulation, to impersonate trusted individuals, enabling social engineering attacks that are persuasive and difficult to dismiss. Attackers can also automate at scale, using generative AI in real time to target multiple victims simultaneously, customise messages, and adjust their tactics.

This scalability makes fraudulent campaigns more effective and more widespread. AI also enables sophisticated evasion techniques: bad actors can create synthetic identities, manipulate behavioural biometrics, and adapt rapidly to new defences, making detection difficult. The same AI technologies that fraudsters exploit, however, are also used by cybersecurity professionals to strengthen their defences.

Security teams are using generative models to identify anomalies in real time, establishing dynamic baselines of normal behaviour and flagging deviations that may signal fraud. Synthetic data generation also allows realistic, anonymised datasets to be created for training more accurate and robust fraud detection systems, particularly for spotting unusual or emerging fraud patterns.
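As a rough illustration of the dynamic-baseline idea, the sketch below keeps a rolling per-user baseline of transaction amounts and flags values that deviate sharply from it. The window size and z-score threshold are assumed tuning parameters, not figures from the article.

```python
# Illustrative rolling-baseline anomaly check: flag a transaction when it deviates
# sharply from the user's recent history (window size and threshold are assumptions).
from collections import defaultdict, deque
from statistics import mean, pstdev

class BaselineMonitor:
    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.z_threshold = z_threshold

    def is_anomalous(self, user_id: str, amount: float) -> bool:
        past = self.history[user_id]
        flagged = False
        if len(past) >= 10:                      # need enough history to form a baseline
            mu, sigma = mean(past), pstdev(past)
            flagged = sigma > 0 and abs(amount - mu) / sigma > self.z_threshold
        past.append(amount)                      # the baseline keeps adapting
        return flagged

# Usage: monitor = BaselineMonitor(); monitor.is_anomalous("user-42", 1250.0)
```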

In investigations, AI lets analysts rapidly sift through massive datasets and find critical connections, patterns, and outliers that might otherwise go undetected. Adaptive defence systems, AI-driven platforms that learn and evolve in response to new threat intelligence, keep fraud prevention strategies resilient and responsive even as threat tactics change. Generative AI has thus been woven into both the offensive and defensive sides of fraud, marking a revolutionary shift in digital risk management.

As the technology advances, fraud prevention will increasingly depend on organisations understanding and using AI, both to anticipate emerging threats and to stay several steps ahead of those looking to exploit them. As AI becomes ever more embedded in daily life and business operations, the risks arising from its misuse or its vulnerabilities cannot be ignored.

Both individuals and organisations should adopt a comprehensive, proactive cybersecurity strategy tailored to the challenges AI presents. Regularly auditing AI systems is a fundamental step: organisations must evaluate the trustworthiness, security posture, and privacy implications of these technologies, whether they rely on third-party platforms or internally developed models.

To identify weaknesses and minimise potential threats, organisations should conduct periodic system reviews, penetration tests, and vulnerability assessments in cooperation with cybersecurity and AI specialists. Sensitive and personal information must also be handled responsibly; a growing number of people unintentionally share confidential information with AI platforms without understanding the ramifications.

There have already been cases of corporate employees submitting proprietary information to AI-powered tools such as ChatGPT, and of healthcare professionals disclosing patient information, both raising serious data privacy and regulatory compliance concerns. Because AI interactions may be recorded to improve the systems, users should avoid sharing personal, confidential, or regulated information on such platforms.

Securing data is another important aspect of AI modelling. The integrity of training data is vital to how an AI system functions, and any manipulation, known as "data poisoning", can corrupt outputs and harm users. The risk of data loss and corruption can be mitigated through strong data governance policies, robust encryption, enforced access controls, and comprehensive backup solutions.
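A simple building block for protecting training-data integrity is to record a cryptographic hash of each dataset file and verify it before every training run. The sketch below is illustrative; the manifest file name, its format, and the assumption of CSV inputs are arbitrary choices.

```python
# Illustrative integrity check: record SHA-256 hashes of dataset files in a manifest,
# then verify them before training. The manifest path/format is a hypothetical choice.
import hashlib, json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):   # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir: str, manifest: str = "dataset_manifest.json") -> None:
    hashes = {p.name: sha256_of(p) for p in sorted(Path(data_dir).glob("*.csv"))}
    Path(manifest).write_text(json.dumps(hashes, indent=2))

def verify_manifest(data_dir: str, manifest: str = "dataset_manifest.json") -> bool:
    expected = json.loads(Path(manifest).read_text())
    return all(sha256_of(Path(data_dir) / name) == digest
               for name, digest in expected.items())        # True only if nothing changed
```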

Firewalls, intrusion detection systems, and secure password protocols further strengthen resilience. Good software maintenance practices matter too: keeping AI frameworks, applications, and supporting infrastructure patched with the latest security updates significantly reduces the probability of exploitation, and advanced antivirus and endpoint protection tools help guard against AI-driven malware and other sophisticated threats.

Adversarial training is one of the more advanced ways to improve AI models: training them on simulated attacks and unpredictable inputs increases their robustness against adversarial manipulation in real-world environments.
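As a hedged illustration of what an adversarial-training step can look like, the sketch below perturbs each batch with an FGSM-style attack and trains on both the clean and perturbed inputs. It assumes a PyTorch classifier, and the epsilon value is an arbitrary placeholder.

```python
# Illustrative adversarial-training step (FGSM-style). The model, optimizer, and batch
# (x, y) are supplied by the caller; epsilon is an assumed perturbation budget.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon: float = 0.03) -> float:
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                                   # gradient of the loss w.r.t. the input
    x_adv = (x + epsilon * x.grad.sign()).detach()    # FGSM perturbation within +/- epsilon

    optimizer.zero_grad()
    combined = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    combined.backward()                               # learn from clean and adversarial inputs
    optimizer.step()
    return combined.item()
```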

Alongside technological safeguards, employee awareness and preparedness are crucial. Employees need to be taught to recognise AI-generated phishing attacks, avoid unsafe software downloads, and respond effectively to evolving threats. AI experts can be consulted to keep training programmes up to date and aligned with the latest threat intelligence.

Another important practice is AI-specific vulnerability management: continuously identifying, assessing, and remediating security vulnerabilities within AI systems. Reducing the attack surface lowers the likelihood of breaches that exploit AI's complex architecture. Finally, even with robust defences, incidents can still occur, so organisations need a clear plan for handling AI-related incidents.

A good AI incident response plan includes containment protocols, investigation procedures, communication strategies, and recovery efforts, so that damage is minimised and operations are restored as quickly as possible. Adopting these multilayered security practices is critical for maintaining user trust, ensuring compliance, and guarding against the sophisticated threats emerging in an AI-driven cyber landscape, at a time when AI is both a transformative force and a potential risk vector.

As artificial intelligence continues to reshape the technological landscape, all stakeholders must address the risks it brings. Business leaders, policymakers, and cybersecurity experts need to work together on comprehensive governance frameworks that balance innovation with security, and cultivating a culture of continuous learning and vigilance among users will greatly reduce the vulnerabilities that increasingly sophisticated AI-driven attacks can exploit.

Building resilient cyber defences will require investment in adaptive technologies that evolve with the threats, while maintaining ethical standards and transparency. Ultimately, securing the benefits of AI depends on a forward-looking, integrated approach that combines technological advancement with rigorous risk management to protect digital ecosystems now and in the future.

Governments Release New Regulatory AI Policy


Regulatory AI Policy 

CISA, the NSA, and the FBI have teamed up with cybersecurity agencies from the UK, Australia, and New Zealand to publish best-practice guidance for safe AI development. The principles laid out in the document offer a strong foundation for protecting AI data and securing the reliability and accuracy of AI-driven outcomes.

The advisory comes at a crucial point, as many businesses rush to integrate AI into their workplaces, a move that carries its own risks. Western governments have grown cautious, believing that China, Russia, and other actors will find ways to abuse AI vulnerabilities in unexpected ways.

Addressing New Risks 

The risks are increasing swiftly as critical infrastructure operators build AI into operational technology that controls important parts of daily life, from scheduling meetings to paying bills to filing taxes.

From the foundational elements of AI onward, the document outlines ways to protect data at different stages of the AI life cycle, including planning, data collection, model development, deployment, and operations.

It urges the use of digital signatures that verify modifications, secure infrastructure that prevents suspicious access, and ongoing risk assessments that track emerging threats.

Key Issues

The document addresses ways to prevent data quality issues, whether intentional or accidental, from compromising the reliability and safety of AI models. 

According to the document, cryptographic hashes ensure that raw data is not changed once it is incorporated into a model, and frequent curation can offset problems with datasets sourced from the web. The document also advises the use of anomaly detection algorithms that can remove “malicious or suspicious data points before training."
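To illustrate that last recommendation, an off-the-shelf outlier detector can be run over a training set to drop suspicious points before model fitting, as in the sketch below; the contamination rate is an assumed tuning parameter, not a value from the joint guidance.

```python
# Illustrative pre-training filter: use IsolationForest to drop outlying or suspicious
# rows before fitting a model. The contamination rate is an assumed parameter.
import numpy as np
from sklearn.ensemble import IsolationForest

def filter_suspicious(X: np.ndarray, contamination: float = 0.01) -> np.ndarray:
    detector = IsolationForest(contamination=contamination, random_state=0)
    labels = detector.fit_predict(X)       # +1 = inlier, -1 = flagged as anomalous
    return X[labels == 1]                  # keep only rows that look normal

# Usage: X_clean = filter_suspicious(X_train); then train on X_clean.
```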

The joint guidance also highlights issues such as incorrect information, duplicate records, and “data drift”, as well as statistical bias, a natural limitation in the characteristics of the input data.

Account Takeover Fraud Surges as Cybercriminals Outpace Traditional Bank Defenses

 

As financial institutions bolster their fraud prevention systems, scammers are shifting tactics—favoring account takeover (ATO) fraud over traditional scams. Instead of manipulating victims into making transactions themselves, fraudsters are bypassing them entirely, taking control of their digital identities and draining funds directly.

Account takeover fraud involves unauthorized access to an individual's account to conduct fraudulent transactions. This form of cybercrime has seen a sharp uptick in recent years as attackers use increasingly advanced techniques—such as phishing, credential stuffing, and malware—to compromise online banking platforms. Conventional fraud detection tools, which rely on static behavior analysis, often fall short as bad actors now mimic legitimate user actions with alarming accuracy.

According to NICE Actimize's 2025 Fraud Insights U.S. Retail Payments report, account takeover incidents grew as a share of the total value of attempted fraud between 2023 and 2024. Nevertheless, scams continue to dominate, making up 57% of all attempted fraud transactions.

Global financial institutions witnessed a significant spike in ATO-related incidents in 2024. Veriff's Identity Fraud Report recorded a 13% year-over-year rise in ATO fraud. FinCEN data further supports this trend, revealing that U.S. banks submitted more than 178,000 suspicious activity reports tied to ATO—a 36% increase from the previous year. AARP and Javelin Strategy & Research estimated that ATO fraud was responsible for $15.6 billion in losses in 2024.

Experts emphasize the need to embrace AI-powered behavioral biometrics, which offer real-time identity verification by continuously assessing how users interact with their devices. This shift from single-point login checks to ongoing authentication enables better threat detection while enhancing user experience. These systems adapt to variables such as device type, location, and time of access, supporting the NIST-recommended zero trust framework.

"The most sophisticated measurement approaches now employ AI analytics to establish dynamic baselines for these metrics, enabling continuous ROI assessment as both threats and solutions evolve over time," said Jeremy London, director of engineering for AI and threat analytics at Keeper Security.

Emerging Fraud Patterns
The growth of ATO fraud is part of a larger evolution in cybercrime tactics. Cross-border payments are increasingly targeted. Although international wire transfers declined by 6% in 2024, the dollar value of fraud attempts surged by 40%. Fraudsters are now focusing on high-value, low-volume transactions.

One particularly vulnerable stage is payee onboarding. Research shows that 67% of fraud incidents were linked to just 7% of transactions—those made to newly added payees. This finding suggests that cybercriminals are exploiting the early stages of payment relationships as a critical vulnerability.

Looking ahead, integrating multi-modal behavioral signals with AI-trained models to detect sophisticated threats will be key. This hybrid approach is vital for identifying both human-driven and synthetic fraud attempts in real-time.

Remote Work and AI Scams Are Making Companies Easier Targets for Hackers

 


Experts are warning that working from home is making businesses more open to cyberattacks, especially as hackers use new tools like artificial intelligence (AI) to trick people. Since many employees now work remotely, scammers are taking advantage of weaker human awareness, not just flaws in technology.

Joe Jones, who runs a cybersecurity company called Pistachio, says that modern scams are no longer just about breaking into systems. Instead, they rely on fooling people. He explained how AI can now create fake voices that sound just like someone’s boss or colleague. This makes it easier for criminals to lie their way into a company’s systems.

A recent attack on the retailer Marks & Spencer (M&S) shows how dangerous this has become. Reports say cybercriminals pretended to be trusted staff members and convinced IT workers to give them access. This kind of trick is known as social engineering—when attackers focus on manipulating people, not just software.

In fact, a recent study found that almost all data breaches last year happened because of human mistakes, not system failures.

Jones believes spending money on cybersecurity tools can help, but it’s not the full answer. He said that if workers aren’t taught how to spot scams, even the best technology can’t protect a company. He compared it to buying expensive security systems for your home but forgetting to lock the door.

The M&S hack also caused problems for other well-known shops, including Co-op and Harrods. Stores had to pause online orders, and some shelves went empty, showing how these attacks can impact daily business operations.

Jude McCorry, who leads a cybersecurity group in Scotland, said this kind of attack could lead to more scam messages targeting customers. She believes companies should run regular training for employees just like they do fire drills. In her view, learning how to stay safe online should be required in both businesses and government offices.

McCorry also advised customers to update their passwords, use different passwords for each website, and turn on two-factor login wherever possible.

As we rely more and more on technology for banking, shopping, and daily services, experts say this should be a serious reminder of how fragile online systems can be when people aren’t prepared.

AI is Accelerating India's Healthtech Revolution, but Data Privacy Concerns Loom Large

 

India’s healthcare infrastructure is undergoing a remarkable digital transformation, driven by emerging technologies like artificial intelligence (AI), machine learning, and big data. These advancements are not only enhancing accessibility and efficiency but also setting the foundation for a more equitable health system. According to the World Economic Forum (WEF), AI is poised to account for 30% of new drug discoveries by 2025, a major leap for the pharmaceutical industry.

As outlined in the Global Outlook and Forecast 2025–2030, the market for AI in drug discovery is projected to grow from $1.72 billion in 2024 to $8.53 billion by 2030, clocking a CAGR of 30.59%. Major tech players like IBM Watson, NVIDIA, and Google DeepMind are partnering with pharmaceutical firms to fast-track AI-led breakthroughs.

Beyond R&D, AI is transforming clinical workflows by digitising patient records and using decentralised models to improve diagnostic precision while protecting privacy.

During an interview with Analytics India Magazine (AIM), Rajan Kashyap, Assistant Professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), shared insights into the government’s push toward innovation: “Increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing Indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.”

Tech-Driven Healthcare Innovation

Kashyap pointed to major initiatives like the GenomeIndia project, cVEDA, and the Ayushman Bharat Digital Mission as critical steps toward advancing India’s clinical research capabilities. He added that initiatives in genomics, AI, and ML are already improving clinical outcomes and streamlining operations.

He also spotlighted BrainSightAI, a Bengaluru-based startup that raised $5 million in a Pre-Series A round to scale its diagnostic tools for neurological conditions. The company aims to expand across Tier 1 and 2 cities and pursue FDA certification to access global healthcare markets.

Another innovator, Niramai Health Analytics, offers an AI-based breast cancer screening solution. Their product, Thermalytix, is a portable, radiation-free, and cost-effective screening device that is compatible with all age groups and breast densities.

Meanwhile, biopharma giant Biocon is leveraging AI in biosimilar development. Their work in predictive modelling is reducing formulation failures and expediting regulatory approvals. One of their standout contributions is Semglee, the world’s first interchangeable biosimilar insulin, now made accessible through their tie-up with Eris Lifesciences.

Rising R&D costs have pushed pharma companies to adopt AI solutions for innovation and cost efficiency.

Data Security Still a Grey Zone

While innovation is flourishing, there are pressing concerns around data privacy. A report by Netskope Threat Labs highlighted that doctors are increasingly uploading sensitive patient information to unregulated platforms like ChatGPT and Gemini.

Kashyap expressed serious concerns about lax data practices:

“During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed.”

He added that anonymised data is still vulnerable to hacking or re-identification. Studies show that even brain scans like MRIs could potentially reveal personal or financial information.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he warned.

Policy Direction and Regulatory Needs

The Netskope report recommends implementing approved GenAI tools in healthcare to reduce “shadow AI” usage and enhance security. It also urges deploying data loss prevention (DLP) policies to regulate what kind of data can be shared on generative AI platforms.

Although the usage of personal GenAI tools has declined — from 87% to 71% in one year — risks remain.

Kashyap commented on the pace of India’s regulatory approach:

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context.”

He also pushed for developing interdisciplinary medtech programs that integrate AI education into medical training.

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.

Tech Executives Lead the Charge in Agentic AI Deployment

 


What was once considered a futuristic concept has quickly become a business imperative. Artificial intelligence, previously confined to experimental pilot programs, is now being integrated into the core of enterprise operations in increasingly autonomous ways.

In a survey conducted by global consulting firm Ernst & Young (EY), technology executives predicted that within two years, over half of their AI systems will be able to function autonomously. This prediction marks a significant milestone in the evolution of artificial intelligence, signalling a shift away from assistive technologies towards autonomous systems that can make decisions and execute goals independently. 

The generative AI field has dominated the innovation spotlight in recent years, captivating leaders with its ability to generate human-like text, images, and insights. However, a more advanced and less publicised form of artificial intelligence has emerged: systems that not only respond, but are also capable of acting, autonomously or semi-autonomously, in pursuit of specific objectives. 

Previously, agentic artificial intelligence was considered a fringe concept in Western business dialogue, but that changed dramatically in late 2024. Global searches for “agent AI” and “AI agents” have skyrocketed in recent months, reflecting strong interest both within the industry and among the public, and marking a significant evolution beyond traditional chatbots and prompt-based tools. 

Taking advantage of advances in large language models (LLMs) and the emergence of large reasoning models (LRMs), these intelligent systems are now capable of autonomous, adaptive decision-making based on real-time reasoning, moving beyond rule-based execution. Agentic AI systems adjust their actions according to context and goals, rather than following the static, predefined instructions of earlier software or pre-AI agents. 
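A rough way to picture this difference is the minimal observe–reason–act loop sketched below, in which the next action is re-planned at each step from the current context and goal rather than read from a fixed script. The `llm_plan` stub stands in for a call to a reasoning model and is entirely hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    goal: str
    observations: list[str] = field(default_factory=list)

def llm_plan(context: AgentContext) -> str:
    """Placeholder for a reasoning-model call; here just a stub heuristic."""
    if any("error" in obs.lower() for obs in context.observations):
        return "open_incident"
    return "continue_monitoring"

def run_agent(context: AgentContext, max_steps: int = 3) -> None:
    # The loop re-plans at every step, so the chosen action can change
    # as new observations arrive, rather than following a fixed script.
    for step in range(max_steps):
        action = llm_plan(context)
        print(f"step {step}: goal={context.goal!r} -> action={action}")
        if action == "open_incident":
            break
        context.observations.append("error: disk latency above threshold")

if __name__ == "__main__":
    run_agent(AgentContext(goal="keep service healthy"))
```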

The shift marks a new beginning for AI, in which systems no longer act as tools but as intelligent collaborators capable of navigating complexity in a manner that requires little human intervention. To capitalise on the emerging wave of autonomous systems, companies are having to rethink how work is completed, who (or what) performs it, as well as how leadership must adapt to use AI as a true collaborator in strategy execution. 

In today's technologically advanced world, artificial intelligence systems are becoming active collaborators rather than passive tools in the workplace, marking a new era of workplace innovation. Salesforce predicts that adoption of Agentic AI will surge by an astounding 327% by 2027, a significant change for organisations, workforce strategies, and organisational structures. Despite this promise, the study finds that 85% of organisations have yet to integrate Agentic AI into their operations. The transition is being driven by Chief Human Resource Officers (CHROs), who are taking the lead as strategic leaders in this process. 

These leaders are not only rethinking traditional HR models but also pushing ahead with initiatives focused on realigning roles, forecasting skills, and promoting agile talent development. As organisations prepare for the deep changes that Agentic AI will bring, human resources leaders must prepare their workforces for jobs that may not yet exist while managing the evolution of roles that already do. 

Salesforce's study examines how Agentic AI is transforming the future of work, reshaping employee responsibilities, and driving demand for reskilling. The HR function is expected to lead this technological shift with foresight, flexibility, and a renewed emphasis on human-centred innovation in an AI-powered environment. 

Consulting firm Ernst & Young (EY) has recently released its Technology Pulse Poll, which shows that an increased sense of urgency and confidence among leading technology companies is shaping AI strategies. In the survey of more than 500 technology executives, over half predicted that AI agents, autonomous or semi-autonomous systems capable of executing tasks with little or no human intervention, will constitute the majority of their future AI deployments. 

The data points to a rise in self-contained, goal-oriented artificial intelligence solutions being integrated into business operations, and indicates that the shift is already under way: about 48% of respondents are either in the process of adopting AI agents or have already fully deployed them across a range of organisational functions. 

A significant number of these respondents expect that within the next 24 months, more than 50% of their AI deployments will operate autonomously. This widespread adoption reflects a growing belief that agentic AI can drive efficiency, agility, and innovation at unprecedented scale. The survey also points to a significant increase in AI investment. 

Among technology leaders, 92% said they plan to increase spending on AI initiatives, underscoring AI's importance as a strategic priority. Furthermore, over half of these executives are confident that their companies are ahead of their industry peers in investing in AI technologies and preparing for their use. With 81% of respondents expressing confidence that AI can help their organisations achieve key business objectives over the next year, optimism about the technology's potential remains strong. 

These findings mark an inflexion point. As agentic AI advances from exploration to execution, organisations are not only investing heavily in its development but also integrating it into day-to-day operations to enhance performance. Agentic AI will likely play an important role in the next wave of digital transformation, with profound effects on productivity, decision-making, and competitive differentiation. 

As organisations learn more about agentic artificial intelligence, its advantages over generative artificial intelligence become clearer. Generative AI excels at creating and summarising content, but agentic AI sets itself apart by proactively identifying problems, analysing anomalies, and giving actionable recommendations to solve them, which is far more powerful than simply listing a summary of how to fix a maintenance issue. 

When a monitored parameter drifts outside its defined range, for instance, an agentic AI system will automatically detect the deviation, issue an alert, suggest specific adjustments, and provide practical, contextualised guidance to users during resolution. This marks a significant shift from passive AI outputs to intelligent, decision-oriented systems. As enterprises move toward more autonomous operations, however, they also need to weigh the architectural considerations of deploying agentic artificial intelligence, specifically the choice between single-agent and multi-agent frameworks. 
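A minimal sketch of that maintenance scenario might look like the following, assuming a hypothetical sensor reading and operating range; the metric name, thresholds, and recommendation text are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    machine_id: str
    metric: str
    value: float

# Hypothetical operating range; real bounds would come from OEM specs
# or historical baselines.
OPERATING_RANGES = {"bearing_temp_c": (20.0, 85.0)}

def assess(reading: Reading) -> dict:
    """Detect a deviation, raise an alert, and attach contextual guidance."""
    low, high = OPERATING_RANGES[reading.metric]
    if low <= reading.value <= high:
        return {"status": "ok"}
    deviation = reading.value - high if reading.value > high else low - reading.value
    return {
        "status": "alert",
        "deviation": round(deviation, 2),
        "recommendation": (
            f"Reduce load on {reading.machine_id} and schedule a bearing "
            f"inspection; {reading.metric} is outside the {low}-{high} range."
        ),
    }

if __name__ == "__main__":
    print(assess(Reading("press-07", "bearing_temp_c", 97.3)))
```

The point of the sketch is the shape of the output: not just an alert, but a deviation and a contextual next step.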

Many businesses began their first AI projects with single-agent systems, in which one AI agent manages a wide range of tasks at the same time. In a manufacturing setting, for example, a single-agent system could monitor machine performance, predict failures, analyse historical maintenance data, and suggest interventions. While such systems can handle complex tasks with layered questioning and analysis, they are often limited in scalability. 

When a single agent is overwhelmed by the volume and variety of data, it may underperform or even exhibit hallucinations: false and inaccurate outputs that can compromise operational reliability. As a result, multi-agent systems are gaining popularity. In these architectures, each agent is assigned specific tasks and data sources, allowing it to specialise. 

One agent, for instance, might monitor machine efficiency metrics, another might track system logs, and a third might analyse historical downtime trends. A coordination or orchestration agent directs these specialists and aggregates their findings into a comprehensive response, while each specialist can still operate independently. 

In addition to enhancing the accuracy of each agent, this modular design keeps the overall system scalable and resilient under complex workloads. Multi-agent systems are often a natural progression for organisations already utilising AI tools and data infrastructure: existing machine learning models, data streams, and historical records can be aligned with agents designed for specific purposes, allowing businesses to extract greater value from prior investments. 

Additionally, these agents can work together dynamically, consulting one another, utilising predictive models, and responding to evolving situations in real time. With this architecture, companies can design AI ecosystems that handle the increasing complexity of modern digital operations in an adaptive and efficient manner. 
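The specialist-plus-coordinator pattern described above can be sketched, under illustrative assumptions, roughly as follows; the agent names and canned findings are placeholders for what would, in practice, be model calls or live telemetry queries.

```python
from typing import Callable

# Each specialist agent owns one data source and returns a finding.
# Here they return canned strings purely for illustration.
def efficiency_agent() -> str:
    return "Line 3 throughput down 12% versus last week."

def log_agent() -> str:
    return "System logs show repeated spindle-motor warnings since Monday."

def downtime_agent() -> str:
    return "Historical data: similar warnings preceded a 6-hour outage in March."

SPECIALISTS: dict[str, Callable[[], str]] = {
    "efficiency": efficiency_agent,
    "logs": log_agent,
    "downtime_history": downtime_agent,
}

def orchestrate(question: str) -> str:
    """Coordination agent: fan out to specialists, then aggregate findings."""
    findings = {name: agent() for name, agent in SPECIALISTS.items()}
    summary = "\n".join(f"- [{name}] {text}" for name, text in findings.items())
    return f"Question: {question}\nAggregated findings:\n{summary}"

if __name__ == "__main__":
    print(orchestrate("Why is Line 3 underperforming?"))
```

The modularity claimed above shows up directly: adding a new data source means adding one specialist, without retraining or enlarging a single monolithic agent.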

With artificial intelligence agents becoming increasingly integrated into enterprise security operations, Indian organisations are taking proactive steps to address both new opportunities and emerging risks. Reportedly, 83% of Indian firms plan to increase security spending in the coming year because of data poisoning, a growing concern in which attackers compromise AI training datasets. 

The share of IT security teams using AI agents is also predicted to rise from 43% today to 76% within two years. These intelligent systems are currently being used for a variety of purposes, including detecting threats, auditing AI models, and maintaining regulatory compliance. Yet even though 81% of cybersecurity leaders see AI agents as beneficial for enhancing privacy compliance, 87% also admit that they introduce regulatory challenges. 

Trust remains a critical barrier, with 48% of leaders not knowing if their organisations are using high-quality data or if the necessary safeguards have been put in place to protect it. There are still significant regulatory uncertainties and gaps in data governance that hinder full-scale adoption of AI, with only 55% of companies confident they can deploy AI responsibly. 

A strategic and measured approach is imperative as organisations embrace agentic AI in pursuit of greater efficiency, innovation, and competitive advantage. Capturing those benefits depends on establishing robust governance frameworks and ensuring that AI is deployed ethically and responsibly. 

To mitigate challenges such as data poisoning and regulatory compliance complexity, companies must invest in comprehensive data quality assurance, transparency mechanisms, and ongoing risk management. Cross-functional cooperation between IT, security, and human resources will also be vital to align AI initiatives with broader organisational goals and workforce transformation. 
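One simple, concrete form such data quality assurance can take is integrity checking of training data against a trusted manifest, so that unexpected modifications are caught before retraining. The file layout, manifest format, and function names below are assumptions for illustration, not a prescribed standard.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def build_manifest(data_dir: Path, manifest_path: Path) -> None:
    """Record a hash for every training file at a trusted point in time."""
    manifest = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest_path.write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir: Path, manifest_path: Path) -> list[str]:
    """Return the names of files that are missing or have changed."""
    manifest = json.loads(manifest_path.read_text())
    problems = []
    for name, expected in manifest.items():
        candidate = data_dir / name
        if not candidate.exists() or sha256_of(candidate) != expected:
            problems.append(name)
    return problems

if __name__ == "__main__":
    data_dir, manifest = Path("training_data"), Path("manifest.json")
    # build_manifest(data_dir, manifest)        # run once at a trusted checkpoint
    # print(verify_manifest(data_dir, manifest))  # run before every retraining job
```

Hash checks do not detect poisoned data that was malicious from the start, but they do surface silent tampering between the trusted checkpoint and the next training run.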

Leaders must stress the importance of continuous workforce upskilling to prepare employees for increasingly autonomous roles. Balancing innovation with accountability will allow businesses to maximise the potential of agentic AI while preserving trust, compliance, and operational resilience. This thoughtful approach will not only accelerate AI adoption but also enable sustainable value creation in an increasingly AI-driven business environment.

Pen Test Partners Uncovers Major Vulnerability in Microsoft Copilot AI for SharePoint

 

Pen Test Partners, a renowned cybersecurity and penetration testing firm, recently exposed a critical vulnerability in Microsoft’s Copilot AI for SharePoint. Known for simulating real-world hacking scenarios, the company’s red team specialists investigate how systems can be breached, just as skilled threat actors would. With attackers increasingly leveraging AI, ethical hackers are now adopting similar methods, and the outcomes are raising eyebrows.

In a recent test, the Pen Test Partners team explored how Microsoft Copilot AI integrated into SharePoint could be manipulated. They encountered a significant issue when a seemingly secure encrypted spreadsheet was exposed—simply by instructing Copilot to retrieve it. Despite SharePoint’s robust access controls preventing file access through conventional means, the AI assistant was able to bypass those protections.

“The agent then successfully printed the contents,” said Jack Barradell-Johns, a red team security consultant at Pen Test Partners, “including the passwords allowing us to access the encrypted spreadsheet.”

This alarming outcome underlines the dual nature of AI in information security: it can enhance defences, but it can also inadvertently open doors to attackers if not properly governed.

Barradell-Johns further detailed the engagement, explaining how the red team encountered a file labeled passwords.txt, placed near the encrypted spreadsheet. When traditional methods failed due to browser-based restrictions, the hackers used their red team expertise and simply asked the Copilot AI agent to fetch it.

“Notably,” Barradell-Johns added, “in this case, all methods of opening the file in the browser had been restricted.”

Still, those download limitations were sidestepped. The AI agent output the full contents, including sensitive credentials, and allowed the team to easily copy the chat thread, revealing a potential weak point in AI-assisted collaboration tools.

This case serves as a powerful reminder: as AI tools become more embedded in enterprise workflows, their security testing must evolve in step. It's not just about protecting the front door; it’s about teaching your digital assistant not to hold it open for strangers.

For those interested in the full technical breakdown, the complete Pen Test Partners report dives into the step-by-step methods used and the broader security implications of Copilot’s current design.

Davey Winder reached out to Microsoft, and a spokesperson said:

“SharePoint information protection principles ensure that content is secured at the storage level through user-specific permissions and that access is audited. This means that if a user does not have permission to access specific content, they will not be able to view it through Copilot or any other agent. Additionally, any access to content through Copilot or an agent is logged and monitored for compliance and security.”

Further, Davey Winder then contacted Ken Munro, founder of Pen Test Partners, who issued the following statement addressing the points made in the one provided by Microsoft.

“Microsoft are technically correct about user permissions, but that’s not what we are exploiting here. They are also correct about logging, but again it comes down to configuration. In many cases, organisations aren’t typically logging the activities that we’re taking advantage of here. Having more granular user permissions would mitigate this, but in many organisations data on SharePoint isn’t as well managed as it could be. That’s exactly what we’re exploiting. These agents are enabled per user, based on licenses, and organisations we have spoken to do not always understand the implications of adding those licenses to their users.”

India’s Cyber Scams Create International Turmoil

 


Government reports indicate that the number of high-value cyber fraud cases in India rose more than fourfold in the financial year 2024, resulting in losses totalling more than $20 million. The surge demonstrates the escalating threat that cybercrime poses in one of the world's fastest-growing economies. 

The rapid digitisation of Indian society has resulted in hundreds of millions of financial transactions taking place every day through mobile apps, UPI platforms, and online banking systems, making India an ideal target for sophisticated fraud networks. Experts warn that these scams are rapidly expanding in size and complexity, outpacing traditional enforcement and security measures. 

As the country embraces digital finance, cyber infrastructure vulnerabilities pose a growing threat, not only to domestic users but also to the global cybersecurity community. The year 2024 marked a major shift in cybercrime, with an unprecedented surge in online scams fuelled by the rapid integration of artificial intelligence (AI) into criminal operations. 

Technology-enabled fraud has reached alarming levels, eroding public confidence in digital systems and causing substantial financial damage to individuals across the country. Data from the Indian Cyber Crime Coordination Centre (I4C) paints a grim picture: in the first six months of this year alone, Indians lost more than ₹11,000 crore to cyber fraud. 

A staggering number of cybercrime complaints are filed on the National Cyber Crime Reporting Portal every day, suggesting that the scale of the problem is even larger than it first appears. These figures translate to daily losses of approximately ₹60 crore, underscoring the need for stronger cybersecurity enforcement and greater public awareness. The integration of artificial intelligence has made these scams more sophisticated and harder to detect, making systemic measures to safeguard digital financial ecosystems all the more urgent. 

India's digital economy, one of the largest in the world, is experiencing a troubling increase in cybercrime, with high-value fraud cases rising dramatically during fiscal year 2024. Official government data indicates that cyber fraud caused financial losses exceeding ₹177 crore (approximately $20.3 million), more than a twofold increase from the previous fiscal year. 

Equally alarming is the sharp rise in significant fraud cases, those involving ₹1 lakh or more, which increased from 6,699 in FY 2023 to 29,082 in FY 2024. With millions of digital financial transactions taking place every day, this steep rise underscores India's growing vulnerabilities in the digital space. The country's rapid transformation has been driven by affordable internet access, with data packages costing as little as ₹11 (about $0.13) per hour. 

This affordability has helped build a $1 trillion mobile payments market dominated by platforms like Paytm, Google Pay, and the Walmart-backed PhonePe. The market's growth, however, has outpaced the country's level of cyber literacy, leaving millions vulnerable to increasingly sophisticated fraud schemes. Cybercriminals now employ an array of advanced techniques, including artificial intelligence tools and deepfake technologies, as well as impersonating authorities, manipulating voice calls, and crafting deceptive messages designed to exploit unsuspecting individuals. 

The widening gap between digital access and cybersecurity awareness continues to pose a serious threat to individuals and financial institutions alike, raising concerns about consumer safety and long-term digital trust. India's growing entanglement in global cybercrime is striking: in February 2024, a federal court in Montana, United States, sentenced a 24-year-old man from Haryana to four years in prison. As the head of a sophisticated fraud operation, he swindled over $1.2 million from elderly Americans, including a staggering $150,000 from one individual. 

Through deceptive pop-ups that claimed to offer tech support, the scam took advantage of victims' trust by tricking them into granting remote access to their computers. Once the fraudsters gained control of the victim's computer, they manipulated the victim into handing over large amounts of money, which were then collected by coordinated in-person pickups throughout the U.S. This case is indicative of a deeper trend in India's changing digital landscape, which can be observed across many sectors. 

India's technology sector was once regarded as a world-class provider of IT support services. However, the industry has faced a profound shift driven by automation, job saturation, and economic pressures, exacerbated by the COVID-19 pandemic. Out of these challenges, a shadow cybercrime economy has emerged that mirrors the formal outsourcing industry in structure, technical sophistication, and international reach. 

Over the years, these illicit networks have turned India into a key node in the global cyber fraud chain, using call centres, messaging platforms, and AI-powered scams, and raising serious questions about the regulation and accountability of technology-enabled crime and the socio-economic factors contributing to its growth. The rise in the sophistication of online scams in 2024 has been attributed to the misuse of artificial intelligence (AI), which has dramatically expanded both their scale and their psychological impact. 

Fraudsters are now using AI-driven tools to create highly convincing content designed to deceive and exploit unsuspecting victims, employing methods that range from voice manipulation and cloned audio clips to realistic images and deepfake videos. Particularly troubling is the rise of AI-assisted voice cloning, in which the voices of family members and close acquaintances are imitated to fabricate emergency scenarios and extort money.

These synthetic voices have achieved remarkable accuracy, making emotional manipulation easier and more effective than ever before. In one high-profile case, scammers impersonated Sunil Mittal's voice to mislead company executives into transferring money. Deepfake technology is also increasingly being used to create fabricated videos of celebrities and business leaders. 

Many of these AI tools are freely available, making it easy to spread fraudulent content online. Deepfake videos of prominent personalities such as Anant Ambani, Virat Kohli, and MS Dhoni were reportedly circulated to promote a fake betting application, misleading thousands of people and damaging public trust. 

The increasing accessibility and misuse of artificial intelligence tools show that cybercrime tactics have shifted dangerously, with traditional scams evolving into emotionally manipulative, visually convincing operations. As AI-generated deception spreads, law enforcement agencies and tech platforms are being challenged to adapt quickly to counter these emerging threats.

During the last few years, the face of online fraud has undergone a radical evolution. Cybercriminals no longer rely solely on poorly written messages or easily identifiable hoaxes; their techniques are now close to perfect. Many of the malicious links circulated today are polished and technically sound, often embedded in well-designed websites with realistic login interfaces that closely mimic legitimate platforms, complete with HTTPS encryption.
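One modest, illustrative defence against such lookalike pages is to compare a link's hostname against known brand domains and flag close-but-not-exact matches. The brand list and threshold below are invented for the sketch; a real control would draw on threat-intelligence feeds and certificate transparency logs rather than a hard-coded list.

```python
import difflib
from urllib.parse import urlparse

# Illustrative list only; real deployments would use curated brand-domain feeds.
KNOWN_BRANDS = ["paytm.com", "phonepe.com", "sbi.co.in", "incometax.gov.in"]

def lookalike_score(url: str) -> tuple[str, float]:
    """Return the closest known brand domain and its similarity ratio."""
    host = urlparse(url).hostname or ""
    best = max(KNOWN_BRANDS,
               key=lambda brand: difflib.SequenceMatcher(None, host, brand).ratio())
    return best, difflib.SequenceMatcher(None, host, best).ratio()

if __name__ == "__main__":
    suspicious = "https://paytm-refunds-secure.com/login"
    brand, score = lookalike_score(suspicious)
    # High similarity to a known brand on an unrelated domain is one signal,
    # among many, that a link deserves closer scrutiny.
    print(f"{suspicious} resembles {brand} (similarity {score:.2f})")
```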

Threats that were once relatively easy to identify have become increasingly insidious and difficult to detect, even for digitally literate users. Exposure is astonishing: virtually every person with a mobile phone or internet access receives scam messages almost daily. Some users manage to avoid these sophisticated schemes, but others fall victim, particularly the elderly, those unfamiliar with digital technology, or those simply caught unaware. 

The effects of such fraud are devastating, from the significant financial toll to emotional distress that can be long-lasting. Projections suggest that revenue lost to cybercrime in India will rise by a dramatic 75% by 2025 compared with 2023. This alarming trend points to the need for systemic reform and a collaborative intervention strategy. Addressing the challenge starts with a fundamental change of mindset: cyber fraud is not an isolated issue affecting others; it is a collective threat to all stakeholders in the digital ecosystem.

To respond effectively, telecom operators, financial institutions, government agencies, social media platforms, and over-the-top (OTT) service providers must all cooperate actively in a coordinated response. There is no escaping the fact that cybercrime is becoming more sophisticated and more prevalent. Experts believe four key actions can dismantle the infrastructure that supports digital fraud. 

First, there should be a centralised fraud intelligence bureau where all players in the ecosystem can report and exchange real-time information about cybercriminals, enabling swift and collective responses (a minimal shape for such a shared report is sketched after this list). Second, each digital platform should develop and deploy tailored anti-fraud technologies and share these solutions across industries. Third, ongoing public awareness campaigns should educate users about digital hygiene and common fraud tactics. 

Lastly, the regulatory framework needs to be broadened to cover all digital service providers. Telecommunication companies face strict regulations, but OTT platforms remain almost completely unregulated, creating loopholes for fraudsters. Through a strong combination of regulation, innovation, education, and cross-sectoral collaboration, Indian citizens can begin to reclaim the integrity of their digital landscape and protect themselves from escalating cybercrime. 
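As a purely illustrative sketch of the first recommendation, the snippet below shows one possible shape for a fraud-intelligence record exchanged between banks, telecom operators, and platforms; every field name is an assumption rather than an existing standard.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FraudReport:
    # All field names are illustrative; no inter-agency standard is implied.
    reporting_entity: str   # e.g. a bank, telecom operator, or platform
    channel: str            # "upi", "sms", "voice", "social", ...
    indicator: str          # phone number, UPI handle, URL, etc.
    indicator_type: str
    reported_at: str        # ISO 8601 timestamp, UTC
    description: str

def new_report(entity: str, channel: str, indicator: str,
               indicator_type: str, description: str) -> FraudReport:
    return FraudReport(entity, channel, indicator, indicator_type,
                       datetime.now(timezone.utc).isoformat(), description)

if __name__ == "__main__":
    report = new_report("example-bank", "upi", "scammer@upi",
                        "upi_handle", "Fake tech-support refund scam")
    # Serialised form that could be pushed to a shared intelligence bureau.
    print(json.dumps(asdict(report), indent=2))
```

A common record format is what would let an indicator reported by one bank be blocked by a telecom operator or platform within minutes rather than weeks.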

In today's digitally accelerated and cyber-vulnerable India, a forward-looking and cohesive strategy for combating online fraud is imperative. It is no longer enough to respond to cyber incidents after they have occurred; a proactive approach is needed, in which technology, policy, and public engagement work in tandem to build cyber resilience. This means building security into digital platforms, investing in continuous threat intelligence, and running targeted education campaigns to cultivate a national culture of cyber awareness. 

To prevent fraud and safeguard user data, governments must accelerate the implementation of robust regulatory frameworks that hold all digital service providers accountable, regardless of size or sector. Companies, in turn, must treat cybersecurity not as a compliance checkbox but as a business-critical pillar supported by dedicated infrastructure and real-time monitoring. 

Anticipating changes in the cybercrime playbook will require a collective will to adapt across industries, institutions, and individuals. To realise the full promise of India's digital economy, cybersecurity must be transformed from a reactive measure into a national imperative, ensuring that trust is maintained, innovation is protected, and the future of a truly digital Bharat is secured.