
Governments Release New Regulatory AI Policy



The CISA, NSA, and FBI teamed up with cybersecurity agencies from the UK, Australia, and New Zealand to publish best-practices guidance for safe AI development. The principles laid down in the document offer a strong foundation for protecting AI data and ensuring the reliability and accuracy of AI-driven outcomes.

The advisory comes at a crucial point, as many businesses rush to integrate AI into their workflows, a rush that can also introduce risk. Governments in the West have grown cautious, believing that China, Russia, and other actors will find ways to abuse AI vulnerabilities in unexpected ways.

Addressing New Risks 

The risks are growing quickly as critical infrastructure operators embed AI into the operational technology that controls important parts of daily life, from scheduling meetings to paying bills to doing your taxes.

From foundational AI components to data handling, the document outlines ways to protect data at each stage of the AI life cycle: planning, data collection, model development, deployment, and operations.

It urges organisations to use digital signatures that verify modifications, secure infrastructure that prevents suspicious access, and ongoing risk assessments that can track emerging threats.
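To make the first of those recommendations concrete, here is a minimal sketch of signing a dataset and later verifying the signature so that any modification is detected. The advisory does not prescribe a tool or workflow; the Ed25519 keys, the Python "cryptography" package, and the sample bytes are assumptions made purely for illustration.

```python
# Minimal sketch: sign a dataset once, verify it later to detect modification.
# Assumes the third-party "cryptography" package; all values are illustrative.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The data owner signs the dataset at publication time.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

dataset = b"age,bmi,outcome\n34,22.1,0\n51,28.4,1\n"   # stands in for a real training file
signature = private_key.sign(dataset)

# Anyone holding the public key can later confirm the bytes are unchanged.
try:
    public_key.verify(signature, dataset)
    print("Signature valid: dataset unmodified since signing.")
except InvalidSignature:
    print("Dataset has been altered since it was signed!")
```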

Key Issues

The document addresses ways to prevent data quality issues, whether intentional or accidental, from compromising the reliability and safety of AI models. 

Cryptographic hashes make sure that raw data is not changed once it is incorporated into a model, according to the document, and frequent curation can offset problems with data sets available on the web. The document also advises the use of anomaly detection algorithms that can eliminate “malicious or suspicious data points before training.”
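The guidance does not specify which anomaly detection algorithm to use. As a rough illustration of the idea of screening data points before training, the sketch below applies a robust median-based rule to a list of numeric readings; the 3.5 cut-off, the function name, and the sample values are all invented for this example.

```python
# Sketch of a pre-training screen: drop data points that look suspicious.
# A robust median/MAD rule stands in for whatever detector is actually used.
import statistics

def drop_suspicious(values, cutoff=3.5):
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return list(values)
    # 0.6745 scales the MAD so the score is comparable to a standard z-score
    return [v for v in values if abs(0.6745 * (v - med) / mad) <= cutoff]

readings = [0.98, 1.02, 1.01, 0.97, 1.00, 9.75]   # the last point looks poisoned
print(drop_suspicious(readings))                  # [0.98, 1.02, 1.01, 0.97, 1.0]
```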

The joint guidance also highlights issues such as incorrect information, duplicate records, and “data drift”, a form of statistical bias that arises naturally as the characteristics of the input data change over time.

Account Takeover Fraud Surges as Cybercriminals Outpace Traditional Bank Defenses

 

As financial institutions bolster their fraud prevention systems, scammers are shifting tactics—favoring account takeover (ATO) fraud over traditional scams. Instead of manipulating victims into making transactions themselves, fraudsters are bypassing them entirely, taking control of their digital identities and draining funds directly.

Account takeover fraud involves unauthorized access to an individual's account to conduct fraudulent transactions. This form of cybercrime has seen a sharp uptick in recent years as attackers use increasingly advanced techniques—such as phishing, credential stuffing, and malware—to compromise online banking platforms. Conventional fraud detection tools, which rely on static behavior analysis, often fall short as bad actors now mimic legitimate user actions with alarming accuracy.

According to NICE Actimize's 2025 Fraud Insights U.S. Retail Payments report, account takeover incidents grew as a share of the total value of attempted fraud between 2023 and 2024. Nevertheless, scams continue to dominate, making up 57% of all attempted fraud transactions.

Global financial institutions witnessed a significant spike in ATO-related incidents in 2024. Veriff's Identity Fraud Report recorded a 13% year-over-year rise in ATO fraud. FinCEN data further supports this trend, revealing that U.S. banks submitted more than 178,000 suspicious activity reports tied to ATO—a 36% increase from the previous year. AARP and Javelin Strategy & Research estimated that ATO fraud was responsible for $15.6 billion in losses in 2024.

Experts emphasize the need to embrace AI-powered behavioral biometrics, which offer real-time identity verification by continuously assessing how users interact with their devices. This shift from single-point login checks to ongoing authentication enables better threat detection while enhancing user experience. These systems adapt to variables such as device type, location, and time of access, supporting the NIST-recommended zero trust framework.
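As a rough sketch of what continuous assessment can look like, the example below compares a live typing rhythm against a user's enrolled baseline and asks for step-up authentication when it drifts too far. This is not how any particular vendor's product works: the keystroke-interval signal, the baseline values, and the 25% tolerance are assumptions made for illustration, and real systems fuse many more signals such as device, location, and touch dynamics.

```python
# Toy continuous-authentication check based on typing rhythm alone.
# Thresholds and sample timings are illustrative, not taken from any product.
import statistics

def session_risky(baseline_intervals, live_intervals, tolerance=0.25):
    """Return True if the live typing rhythm drifts too far from the baseline."""
    base = statistics.fmean(baseline_intervals)
    live = statistics.fmean(live_intervals)
    return abs(live - base) / base > tolerance

enrolled = [0.18, 0.21, 0.19, 0.20, 0.22]   # seconds between keystrokes at enrolment
current  = [0.41, 0.39, 0.44, 0.40]         # noticeably slower rhythm mid-session

if session_risky(enrolled, current):
    print("Step-up authentication required")   # e.g. prompt for a second factor
```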

"The most sophisticated measurement approaches now employ AI analytics to establish dynamic baselines for these metrics, enabling continuous ROI assessment as both threats and solutions evolve over time," said Jeremy London, director of engineering for AI and threat analytics at Keeper Security.

Emerging Fraud Patterns
The growth of ATO fraud is part of a larger evolution in cybercrime tactics. Cross-border payments are increasingly targeted. Although international wire transfers declined by 6% in 2024, the dollar value of fraud attempts surged by 40%. Fraudsters are now focusing on high-value, low-volume transactions.

One particularly vulnerable stage is payee onboarding. Research shows that 67% of fraud incidents were linked to just 7% of transactions—those made to newly added payees. This finding suggests that cybercriminals are exploiting the early stages of payment relationships as a critical vulnerability.
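One simple way to act on that finding is to treat payments to recently added payees as higher risk and route them for extra checks. The sketch below is a hypothetical rule of that kind; the 30-day window and the function and field names are assumptions for illustration and are not drawn from the cited research.

```python
# Hypothetical rule: flag payments made soon after a payee was added.
from datetime import date, timedelta

def new_payee_flag(txn_date: date, payee_added: date, window_days: int = 30) -> bool:
    """Flag transactions made within window_days of the payee being added."""
    return txn_date - payee_added <= timedelta(days=window_days)

print(new_payee_flag(date(2024, 6, 10), date(2024, 6, 8)))    # True  -> extra checks
print(new_payee_flag(date(2024, 6, 10), date(2023, 11, 2)))   # False -> normal flow
```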

Looking ahead, integrating multi-modal behavioral signals with AI-trained models to detect sophisticated threats will be key. This hybrid approach is vital for identifying both human-driven and synthetic fraud attempts in real-time.

Remote Work and AI Scams Are Making Companies Easier Targets for Hackers

 


Experts are warning that working from home is making businesses more open to cyberattacks, especially as hackers use new tools like artificial intelligence (AI) to trick people. Since many employees now work remotely, scammers are taking advantage of weaker human awareness, not just flaws in technology.

Joe Jones, who runs a cybersecurity company called Pistachio, says that modern scams are no longer just about breaking into systems. Instead, they rely on fooling people. He explained how AI can now create fake voices that sound just like someone’s boss or colleague. This makes it easier for criminals to lie their way into a company’s systems.

A recent attack on the retailer Marks & Spencer (M&S) shows how dangerous this has become. Reports say cybercriminals pretended to be trusted staff members and convinced IT workers to give them access. This kind of trick is known as social engineering—when attackers focus on manipulating people, not just software.

In fact, a recent study found that almost all data breaches last year happened because of human mistakes, not system failures.

Jones believes spending money on cybersecurity tools can help, but it’s not the full answer. He said that if workers aren’t taught how to spot scams, even the best technology can’t protect a company. He compared it to buying expensive security systems for your home but forgetting to lock the door.

The M&S hack also caused problems for other well-known shops, including Co-op and Harrods. Stores had to pause online orders, and some shelves went empty, showing how these attacks can impact daily business operations.

Jude McCorry, who leads a cybersecurity group in Scotland, said this kind of attack could lead to more scam messages targeting customers. She believes companies should run regular training for employees just like they do fire drills. In her view, learning how to stay safe online should be required in both businesses and government offices.

McCorry also advised customers to update their passwords, use different passwords for each website, and turn on two-factor login wherever possible.

As we rely more and more on technology for banking, shopping, and daily services, experts say this should be a serious reminder of how fragile online systems can be when people aren’t prepared.

AI is Accelerating India's Healthtech Revolution, but Data Privacy Concerns Loom Large

 

India’s healthcare infrastructure is undergoing a remarkable digital transformation, driven by emerging technologies like artificial intelligence (AI), machine learning, and big data. These advancements are not only enhancing accessibility and efficiency but also setting the foundation for a more equitable health system. According to the World Economic Forum (WEF), AI is poised to account for 30% of new drug discoveries by 2025, a major leap for the pharmaceutical industry.

As outlined in the Global Outlook and Forecast 2025–2030, the market for AI in drug discovery is projected to grow from $1.72 billion in 2024 to $8.53 billion by 2030, clocking a CAGR of 30.59%. Major tech players like IBM Watson, NVIDIA, and Google DeepMind are partnering with pharmaceutical firms to fast-track AI-led breakthroughs.

Beyond R&D, AI is transforming clinical workflows by digitising patient records and decentralising models to improve diagnostic precision while protecting privacy.

During an interview with Analytics India Magazine (AIM), Rajan Kashyap, Assistant Professor at the National Institute of Mental Health and Neuro Sciences (NIMHANS), shared insights into the government’s push toward innovation: “Increasing the number of seats in medical and paramedical courses, implementing mandatory rural health services, and developing Indigenous low-cost MRI machines are contributing significantly to hardware development in the AI innovation cycle.”

Tech-Driven Healthcare Innovation

Kashyap pointed to major initiatives like the GenomeIndia project, cVEDA, and the Ayushman Bharat Digital Mission as critical steps toward advancing India’s clinical research capabilities. He added that initiatives in genomics, AI, and ML are already improving clinical outcomes and streamlining operations.

He also spotlighted BrainSightAI, a Bengaluru-based startup that raised $5 million in a Pre-Series A round to scale its diagnostic tools for neurological conditions. The company aims to expand across Tier 1 and 2 cities and pursue FDA certification to access global healthcare markets.

Another innovator, Niramai Health Analytics, offers an AI-based breast cancer screening solution. Their product, Thermalytix, is a portable, radiation-free, and cost-effective screening device that is compatible with all age groups and breast densities.

Meanwhile, biopharma giant Biocon is leveraging AI in biosimilar development. Their work in predictive modelling is reducing formulation failures and expediting regulatory approvals. One of their standout contributions is Semglee, the world’s first interchangeable biosimilar insulin, now made accessible through their tie-up with Eris Lifesciences.

Rising R&D costs have pushed pharma companies to adopt AI solutions for innovation and cost efficiency.

Data Security Still a Grey Zone

While innovation is flourishing, there are pressing concerns around data privacy. A report by Netskope Threat Labs highlighted that doctors are increasingly uploading sensitive patient information to unregulated platforms like ChatGPT and Gemini.

Kashyap expressed serious concerns about lax data practices:

“During my professional experience at AI labs abroad, I observed that organisations enforced strict data protection regulations and mandatory training programs…The use of public AI tools like ChatGPT or Gemini was strictly prohibited, with no exceptions or shortcuts allowed.”

He added that anonymised data is still vulnerable to hacking or re-identification. Studies show that even brain scans like MRIs could potentially reveal personal or financial information.

“I strongly advocate for strict adherence to protected data-sharing protocols when handling clinical information. In today’s landscape of data warfare, where numerous companies face legal action for breaching data privacy norms, protecting health data is no less critical than protecting national security,” he warned.

Policy Direction and Regulatory Needs

The Netskope report recommends implementing approved GenAI tools in healthcare to reduce “shadow AI” usage and enhance security. It also urges deploying data loss prevention (DLP) policies to regulate what kind of data can be shared on generative AI platforms.
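As a rough illustration of what such a DLP policy can look like in practice, the sketch below scans a prompt for patterns resembling patient identifiers before it is sent to a generative AI service. The patterns, function name, and blocking behaviour are assumptions made for this example; real DLP products cover far more data types and channels.

```python
# Toy DLP check: block prompts that appear to contain patient identifiers.
# The regex patterns are illustrative only; real policies are far broader.
import re

BLOCKED_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",    # US SSN-style identifier
    r"\b[A-Z]{3}\d{7}\b",        # example medical record number format
    r"\b\d{10}\b",               # bare 10-digit phone or ID number
]

def allowed_to_send(prompt: str) -> bool:
    return not any(re.search(p, prompt) for p in BLOCKED_PATTERNS)

prompt = "Summarise the MRI findings for patient MRN ABC1234567."
if not allowed_to_send(prompt):
    print("Blocked: prompt appears to contain a patient identifier.")
```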

Although the usage of personal GenAI tools has declined — from 87% to 71% in one year — risks remain.

Kashyap commented on the pace of India’s regulatory approach:

“India is still in the process of formulating a comprehensive data protection framework. While the pace may seem slow, India’s approach has traditionally been organic, carefully evolving with consideration for its unique context.”

He also pushed for developing interdisciplinary medtech programs that integrate AI education into medical training.

“Misinformation and fake news pose a significant threat to progress. In a recent R&D project I was involved in, public participation was disrupted due to the spread of misleading information. It’s crucial that legal mechanisms are in place to counteract such disruptions, ensuring that innovation is not undermined by false narratives,” he concluded.

Tech Executives Lead the Charge in Agentic AI Deployment

 


What was once considered a futuristic concept has quickly become a business imperative. Artificial intelligence, previously confined to experimental pilot programs, is now being integrated into the core of enterprise operations in increasingly autonomous ways. 

In a survey conducted by global consulting firm Ernst & Young (EY), technology executives predicted that within two years, over half of their AI systems will be able to function autonomously. The prediction marks a significant milestone in the evolution of artificial intelligence, signalling a shift away from assistive technologies towards autonomous systems that can make decisions and pursue goals independently. 

The generative AI field has dominated the innovation spotlight in recent years, captivating leaders with its ability to generate text, images, and insights similar to those of a human. However, a more advanced and less publicised form of artificial intelligence has emerged. A system of this kind not only responds, but is also capable of acting – either autonomously or semi-autonomously – in pursuit of specific objectives. 

Previously, agentic artificial intelligence was considered a fringe concept in Western business dialogue, but that changed dramatically in late 2024. The number of global searches for “agent AI” and “AI agents” has skyrocketed in recent months, reflecting strong interest in the field within both the industry and the public sphere. A significant evolution is taking place as agentic AI moves beyond traditional chatbots and prompt-based tools. 

Taking advantage of advances in large language models (LLMs) and the emergence of large reasoning models (LRMs), these intelligent systems are now capable of autonomous, adaptive decision-making based on real-time reasoning, moving beyond rule-based execution. With agentic AI systems, actions are adjusted according to context and goals, rather than following static, predefined instructions as in earlier software or pre-AI agents. 

The shift marks a new beginning for AI, in which systems no longer act as tools but as intelligent collaborators capable of navigating complexity in a manner that requires little human intervention. To capitalise on the emerging wave of autonomous systems, companies are having to rethink how work is completed, who (or what) performs it, as well as how leadership must adapt to use AI as a true collaborator in strategy execution. 

In today's technologically advanced world, artificial intelligence systems are becoming active collaborators rather than passive tools in the workplace, representing a new era in workplace innovation. Salesforce predicts a massive 327% increase in the adoption of Agentic AI by 2027, a significant change for organisations, workforce strategies, and organisational structures. Despite the technology's promise, the study finds that 85% of organisations have yet to integrate Agentic AI into their operations. This transition is being driven by Chief Human Resource Officers (CHROs), who are taking the lead as strategic leaders in the process. 

These leaders are not only rethinking traditional HR models but also pushing ahead with initiatives focused on realigning roles, forecasting skills, and promoting agile talent development. As organisations prepare for the deep changes that Agentic AI will bring, human resources leaders must prepare their workforces for jobs that do not yet exist while managing the evolution of roles that already do. 

Salesforce's study examines how Agentic AI is transforming the future of work, reshaping employee responsibilities, and driving the need for reskilling. The HR function is expected to lead this technological shift by example, with foresight, flexibility, and a renewed emphasis on human-centred innovation in an AI-powered environment. 

Consulting firm Ernst & Young (EY) recently released its Technology Pulse Poll, which shows that a heightened sense of urgency and confidence among leading technology companies is shaping AI strategies. In the survey of more than 500 technology executives, over half predicted that AI agents, autonomous or semi-autonomous systems capable of executing tasks with little or no human intervention, would make up the majority of their future AI deployments. 

The data shows that self-contained, goal-oriented artificial intelligence solutions are increasingly being integrated into business operations, and that this shift has already begun: about 48% of respondents are either in the process of adopting or have already fully deployed AI agents across a range of functions in their organisations. 

A significant number of these respondents expect that within the next 24 months, more than 50% of their AI deployments will operate autonomously. This widespread adoption reflects a growing belief that agentic AI can deliver efficiency, agility, and innovation at an unprecedented scale. The survey also points to a significant increase in investment in AI. 

Among technology leaders, 92% said they plan to increase spending on AI initiatives, demonstrating how important AI has become as a strategic priority. Furthermore, over half of these executives are confident that their companies are ahead of their industry peers in investing in AI technologies and preparing for their use. With 81% of respondents expressing confidence that AI can help their organisations achieve key business objectives over the next year, optimism about the technology's potential remains strong. 

These findings mark an inflexion point. As agentic AI advances from exploration to execution, organisations are not only investing heavily in its development but also integrating it into their day-to-day operations to enhance performance. Agentic AI will likely play an important role in the next wave of digital transformation, as it impacts productivity, decision-making, and competitive differentiation in profound ways. 

The more organisations learn about agentic AI and the benefits it offers over generative AI, the clearer the distinction becomes. Generative AI has excelled at creating and summarising content, but agentic AI sets itself apart by proactively identifying problems, analysing anomalies, and giving actionable recommendations to solve them. That is far more powerful than simply listing a summary of how to fix a maintenance issue. 

An agentic AI system, for instance, will automatically detect a deviation from a defined range, issue an alert, suggest specific adjustments, and provide practical, contextualised guidance to users during the resolution process. This represents a significant shift from passive AI outputs toward intelligent, decision-oriented systems. As enterprises move toward more autonomous operations, however, they also need to consider the architectural choices involved in deploying agentic AI, specifically the choice between single-agent and multi-agent frameworks. 
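A toy sketch of that detect-alert-recommend loop is shown below. The metric name, the allowed range, and the suggested action are invented for illustration; a real agentic system would draw on maintenance history and richer context rather than a simple midpoint rule.

```python
# Toy detect-alert-recommend loop for a single operational metric.
# Metric name, range, and recommendation are illustrative only.
def check_metric(name: str, value: float, low: float, high: float) -> None:
    if low <= value <= high:
        return                                        # within the defined range
    direction = "above" if value > high else "below"
    print(f"ALERT: {name} = {value} is {direction} the allowed range [{low}, {high}]")
    # A real agent would reason over history and context before recommending.
    print(f"Suggested action: adjust {name} setpoint toward {(low + high) / 2}")

check_metric("spindle_temperature_C", 92.5, low=40.0, high=75.0)
```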

When many businesses began implementing their first AI projects, they adopted single-agent systems, in which one AI agent manages a wide range of tasks at the same time. In a manufacturing setting, for example, a single agent could monitor machine performance, predict failures, analyse historical maintenance data, and suggest interventions. While such systems can handle complex tasks with layered questioning and analysis, they are often limited in scalability. 

When a single agent is overwhelmed by a large volume and variety of data, it may underperform or even hallucinate, producing false or inaccurate outputs that can compromise operational reliability. As a result, multi-agent systems are gaining popularity. These architectures assign each agent specific tasks and data sources, allowing each to specialise in one area of data collection. 

For example, one agent might track system logs, another might track historical downtime trends, while a third monitors machine efficiency metrics. An orchestration agent can then direct the efforts of these specialised agents, whether they work independently or in coordination, and aggregate their findings into a comprehensive response. 
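The sketch below illustrates that pattern in miniature: three specialised agents each report on their own data source, and an orchestration function aggregates their findings into one response. The agent names and canned findings are placeholders; in a real deployment each agent would query live systems and the orchestrator would itself typically be model-driven.

```python
# Miniature multi-agent pattern: specialised agents plus an orchestrator.
# Agent names and findings are placeholders, not real data sources.
from typing import Callable, Dict

def log_agent() -> str:
    return "3 error-level entries in the last hour"

def downtime_agent() -> str:
    return "similar faults preceded 2 outages this quarter"

def efficiency_agent() -> str:
    return "throughput down 12% against baseline"

def orchestrate(agents: Dict[str, Callable[[], str]]) -> str:
    findings = [f"{name}: {run()}" for name, run in agents.items()]
    return "Combined assessment:\n  " + "\n  ".join(findings)

print(orchestrate({
    "system logs": log_agent,
    "downtime history": downtime_agent,
    "machine efficiency": efficiency_agent,
}))
```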

In addition to enhancing the accuracy of each agent, this modular design keeps the overall system scalable and resilient under complex workloads. Multi-agent systems are often a natural progression for organisations already utilising AI tools and data infrastructure. To extract greater value from prior investments, existing machine learning models, data streams, and historical records can be aligned with agents designed for specific purposes. 

Additionally, these agents can work together dynamically, consulting one another, drawing on predictive models, and responding to evolving situations in real time. With this architecture, companies can design AI ecosystems capable of handling the increasing complexity of modern digital operations in an adaptive and efficient manner. 

With artificial intelligence agents becoming increasingly integrated into enterprise security operations, Indian organisations are proactively addressing both new opportunities and emerging risks. It has been reported that 83% of Indian firms plan to increase security spending in the coming year because of data poisoning, a growing concern in which attackers compromise AI training datasets. 

The share of IT security teams using AI agents is predicted to rise from 43% today to 76% within two years. These intelligent systems are currently being used for various purposes, including detecting threats, auditing AI models, and maintaining compliance with regulatory requirements. Even though 81% of cybersecurity leaders recognise AI agents as beneficial for enhancing privacy compliance, 87% also admit that they introduce regulatory challenges. 

Trust remains a critical barrier, with 48% of leaders not knowing if their organisations are using high-quality data or if the necessary safeguards have been put in place to protect it. There are still significant regulatory uncertainties and gaps in data governance that hinder full-scale adoption of AI, with only 55% of companies confident they can deploy AI responsibly. 

A strategic and measured approach is imperative as organisations continue to embrace agentic AI. While businesses can benefit from the efficiency, innovation, and competitive advantage the technology offers, establishing robust governance frameworks to ensure AI is deployed ethically and responsibly is no less crucial. 

To address challenges such as data poisoning and regulatory compliance complexities, companies must invest in comprehensive data quality assurance, transparency mechanisms, and ongoing risk management. Cross-functional cooperation between IT, security, and human resources will also be vital for aligning AI initiatives with broader organisational goals and with workforce transformation. 

Leaders must stress the importance of constant workforce upskilling to prepare employees for increasingly autonomous roles. Balancing innovation with accountability can ensure businesses maximise the potential of agentic AI while preserving trust, compliance, and operational resilience. This thoughtful approach will not only accelerate AI adoption but also enable sustainable value creation in an increasingly AI-driven business environment.

Pen Test Partners Uncovers Major Vulnerability in Microsoft Copilot AI for SharePoint

 

Pen Test Partners, a renowned cybersecurity and penetration testing firm, recently exposed a critical vulnerability in Microsoft’s Copilot AI for SharePoint. Known for simulating real-world hacking scenarios, the company’s red team specialists investigate how systems can be breached, just as skilled threat actors would. With attackers increasingly leveraging AI, ethical hackers are now adopting similar methods, and the outcomes are raising eyebrows.

In a recent test, the Pen Test Partners team explored how Microsoft Copilot AI integrated into SharePoint could be manipulated. They encountered a significant issue when a seemingly secure encrypted spreadsheet was exposed—simply by instructing Copilot to retrieve it. Despite SharePoint’s robust access controls preventing file access through conventional means, the AI assistant was able to bypass those protections.

“The agent then successfully printed the contents,” said Jack Barradell-Johns, a red team security consultant at Pen Test Partners, “including the passwords allowing us to access the encrypted spreadsheet.”

This alarming outcome underlines the dual nature of AI in information security: it can enhance defenses, but it can also inadvertently open doors to attackers if not properly governed.

Barradell-Johns further detailed the engagement, explaining how the red team encountered a file labeled passwords.txt, placed near the encrypted spreadsheet. When traditional methods failed due to browser-based restrictions, the hackers used their red team expertise and simply asked the Copilot AI agent to fetch it.

“Notably,” Barradell-Johns added, “in this case, all methods of opening the file in the browser had been restricted.”

Still, those download limitations were sidestepped. The AI agent output the full contents, including sensitive credentials, and allowed the team to easily copy the chat thread, revealing a potential weak point in AI-assisted collaboration tools.

This case serves as a powerful reminder: as AI tools become more embedded in enterprise workflows, their security testing must evolve in step. It's not just about protecting the front door—it’s about teaching your digital assistant not to hold it open for strangers.

For those interested in the full technical breakdown, the complete Pen Test Partners report dives into the step-by-step methods used and the broader security implications of Copilot’s current design.

Davey Winder reached out to Microsoft, and a spokesperson said:

“SharePoint information protection principles ensure that content is secured at the storage level through user-specific permissions and that access is audited. This means that if a user does not have permission to access specific content, they will not be able to view it through Copilot or any other agent. Additionally, any access to content through Copilot or an agent is logged and monitored for compliance and security.”

Winder then contacted Ken Munro, founder of Pen Test Partners, who issued the following statement addressing the points Microsoft made.

“Microsoft are technically correct about user permissions, but that’s not what we are exploiting here. They are also correct about logging, but again it comes down to configuration. In many cases, organisations aren’t typically logging the activities that we’re taking advantage of here. Having more granular user permissions would mitigate this, but in many organisations data on SharePoint isn’t as well managed as it could be. That’s exactly what we’re exploiting. These agents are enabled per user, based on licenses, and organisations we have spoken to do not always understand the implications of adding those licenses to their users.”

India’s Cyber Scams Create International Turmoil

 


Government reports indicate that the number of high-value cyber fraud cases in India increased more than fourfold in the financial year 2024, resulting in losses totalling more than $20 million. The surge demonstrates the escalating threat cybercrime poses in one of the world's fastest-growing economies. 

The rapid digitisation of Indian society has resulted in hundreds of millions of financial transactions taking place every day through mobile apps, UPI platforms, and online banking systems, making India an ideal target for sophisticated fraud networks. Experts warn that these scams are rapidly expanding in size and complexity, outpacing traditional enforcement and security measures. 

Even though the country has increasingly embraced digital finance, cyber infrastructure vulnerabilities pose a growing threat, not only to domestic users but also to the global cybersecurity community. The year 2024 marked a major shift in cybercrime, with an unprecedented surge in online scams fueled by the rapid integration of artificial intelligence (AI) into criminal operations. 

Technology-enabled fraud has reached alarming levels, eroding public confidence in digital systems and causing substantial financial damage to individuals across the country. Data from the Indian Cyber Crime Coordination Centre (I4C) paints a grim picture for India: in just the first six months of this year, Indians lost more than ₹11,000 crore to cyber fraud. 

A staggering number of cybercrime complaints are filed on the National Cyber Crime Reporting Portal every day, suggesting that the scale of the problem is even larger than it appears at first glance. According to these figures, daily losses of approximately ₹60 crore are being sustained, signalling the need for greater focus on cybersecurity enforcement and public awareness. These scams have become more sophisticated and harder to detect as a result of the integration of artificial intelligence, making it much more urgent to implement systemic measures to safeguard digital financial ecosystems. 

The digital economy of India, now among the largest in the world, is experiencing a troubling increase in cybercrime, as high-value fraud cases increased dramatically during fiscal year 2024. Official government data indicates that cyber fraud resulted in financial losses exceeding ₹177 crore (approximately $20.3 million), more than double the previous fiscal year. 

An equally alarming trend is the sharp increase in the number of significant fraud cases, those involving ₹1 lakh or more, which jumped from 6,699 in FY 2023 to 29,082 in FY 2024. With digital financial transactions occurring in the millions each day, this steep rise demonstrates India's growing vulnerabilities in the digital space. India's rapid digital transformation has been driven by affordable internet access, with data packages costing just ₹11 (about $0.13) per hour. 

A $1 trillion mobile payments market has been developed as a result of this affordability, which has been dominated by platforms like Paytm, Google Pay, and the Walmart-backed PhonePe. It is important to note, however, that the growth of this market has outpaced the level of cyber literacy in this country, leaving millions vulnerable to increasingly sophisticated fraud schemes. The cybercriminals now employ an array of advanced techniques that include artificial intelligence tools and deepfake technologies, as well as impersonating authorities, manipulating voice calls, and crafting deceptive messages designed to exploit unsuspecting individuals. 

Increasing digital access without corresponding cybersecurity awareness continues to pose a serious threat to individuals and financial institutions alike, raising concerns about consumer safety and long-term digital trust. A striking example of India's growing involvement in global cybercrime came in February 2024, when a federal court in Montana, United States, sentenced a 24-year-old man from Haryana to four years in prison. As the head of a sophisticated fraud operation, he swindled over $1.2 million from elderly Americans, including a staggering $150,000 from one individual. 

Through deceptive pop-ups that claimed to offer tech support, the scam took advantage of victims' trust by tricking them into granting remote access to their computers. Once the fraudsters gained control of the victim's computer, they manipulated the victim into handing over large amounts of money, which were then collected by coordinated in-person pickups throughout the U.S. This case is indicative of a deeper trend in India's changing digital landscape, which can be observed across many sectors. 

India’s technology sector was once regarded as a world-class provider of IT support services. However, the industry is facing a profound shift due to a combination of automation, job saturation, and economic pressures, exacerbated by the COVID-19 pandemic. Out of these challenges, a shadow cybercrime economy has emerged that mirrors the formal outsourcing industry in structure, technical sophistication, and international reach. 

These illicit networks have turned India into one of the key nodes in the global cyber fraud chain, using call centers, messaging platforms, and AI-powered scams, and raising serious questions about the regulation of technology-enabled crime, accountability, and the socio-economic factors contributing to its growth. The rise in the sophistication of online scams in 2024 has been attributed to the misuse of artificial intelligence (AI), which has dramatically expanded both their scale and their psychological impact. 

Fraudsters now use AI-driven tools to create highly convincing content designed to deceive and exploit unsuspecting victims, with methods ranging from voice manipulation and audio cloning to realistic images and deepfake videos. Particularly troubling is the rise of AI-assisted voice cloning, in which scammers imitate the voices of family members and close acquaintances to fabricate emergency scenarios and extort money.

These synthetic voices have achieved a remarkable degree of accuracy, making emotional manipulation easier and more effective than ever before. In one high-profile case, scammers impersonated Sunil Mittal's voice to mislead company executives into transferring money. It is also becoming increasingly common for deepfake technology to be used to create fabricated videos of celebrities and business leaders. 

These AI tools, often available free of charge, have made it possible to spread fraudulent content widely online. Prominent personalities like Anant Ambani, Virat Kohli, and MS Dhoni were reportedly targeted, with deepfake videos circulated to promote a fake betting application, misinforming thousands of people and damaging public trust. 

The increasing accessibility and misuse of artificial intelligence tools demonstrate a dangerous shift in cybercrime tactics, as traditional scams evolve into emotionally manipulative, visually convincing operations. The wave of AI-generated deception is challenging law enforcement agencies and tech platforms to adapt quickly to counter these emerging threats.

During the last few years, the face of online fraud has undergone a radical evolution. Cybercriminals no longer rely solely on poorly written messages or easily identifiable hoaxes; their techniques are now close to perfect. A majority of malicious links circulating today are polished and technically sound, often embedded within well-designed websites featuring HTTPS encryption and realistic login interfaces that closely resemble legitimate platforms.

In the past, these threats were relatively easy to identify, but they have become increasingly insidious and difficult to detect, even for digitally literate users. The level of exposure is astonishing: virtually every person with a cell phone or internet access receives scam messages almost daily. Some users manage to avoid these sophisticated schemes, but others fall victim, particularly the elderly, those unfamiliar with digital technology, or those caught unaware. 

The effects of such frauds are devastating, from the significant financial toll to emotional distress that can be long-lasting. Projections suggest that revenue lost to cybercrime in India will rise by a dramatic 75% from 2023 levels by 2025. This alarming trend points to the need for systemic reform and a collaborative intervention strategy. The most effective way to address the challenge is a fundamental change of mindset: cyber fraud is not an isolated issue affecting others; it is a collective threat to all stakeholders in the digital ecosystem.

To respond effectively, telecom operators, financial institutions, government agencies, social media platforms, and over-the-top (OTT) service providers must all cooperate actively to coordinate the response. There is no escaping the fact that cybercrimes are becoming more sophisticated and more prevalent. Experts believe that four key actions can be taken to dismantle the infrastructure that supports digital fraud. 

First, there should be a centralised fraud intelligence bureau where all players in the ecosystem can report and exchange real-time information about cybercriminals so that swift and collective responses are possible. Furthermore, each digital platform should develop and deploy tailored technologies to counter fraud, sharing these solutions across industries to protect users against fraud. Thirdly, an ongoing public awareness campaign should focus on educating users about digital hygiene, as well as common fraud tactics. 

Lastly, the regulatory framework needs to be broadened to include all digital service providers. Telecommunication companies face strict regulations, but OTT platforms remain almost completely unregulated, creating loopholes for fraudsters. Through a strong combination of regulation, innovation, education, and cross-sectoral collaboration, Indian citizens can begin to reclaim the integrity of their digital landscape and protect themselves from escalating cybercrime. 

In today's digitally accelerated and cyber-vulnerable India, a forward-looking and cohesive strategy for combating online fraud is imperative. It is no longer enough to respond to cyber incidents after they have occurred. Instead, a proactive approach must be taken, in which technology, policy, and public engagement work in tandem to build cyber resilience. That means building security into digital platforms, investing in continuous threat intelligence, and running targeted education campaigns to cultivate a national culture of cyber awareness. 

To prevent fraud and safeguard user data, governments must accelerate the implementation of robust regulatory frameworks that hold all providers of digital services accountable, regardless of size or sector. Companies, for their part, must treat cybersecurity not as just another compliance checkbox but as a business-critical pillar supported by dedicated infrastructure and real-time monitoring systems. 

Anticipating changes in the cybercrime playbook will require a collective will to adapt across industries, institutions, and individuals. To realise the promise of India's digital economy, cybersecurity must be transformed from a reactive measure into a national imperative. This is how trust is maintained, innovation is protected, and the future of a truly digital Bharat is secured.

Child Abuse Detection Efforts Face Setbacks Due to End-to-End Encryption


 

Technology has advanced dramatically in the last few decades, with data exchanged across devices, networks, and borders at a rapid pace. Safeguarding sensitive information has never been more important, or more complicated, than it is today. End-to-end encryption is among the most robust tools available for protecting digital communication, ensuring that data remains safe from its origin to its destination. 

The benefits of encryption for maintaining privacy and preventing unauthorised access are undeniable; however, implementing it effectively presents both practical and ethical challenges for public and private organisations alike. Law enforcement and public safety agencies are also experiencing a shift in their capabilities due to the emergence of artificial intelligence (AI). 

AI offers technologies that support case-solving and markedly improve operational efficiency, including facial recognition, head detection, and intelligent evidence management systems. However, the increasing use of artificial intelligence also raises serious concerns about personal privacy, regulatory compliance, and possible data misuse.

A critical aspect of government and organisation adoption of these powerful technologies is striking a balance between harnessing the strengths of artificial intelligence and encryption while maintaining the commitment to public trust, privacy laws, and ethical standards. As a key pillar of modern data protection, end-to-end encryption (E2EE) has become a vital tool for safeguarding digital information. It ensures that only the intended sender and recipient can access the information being exchanged, providing a robust method of protecting digital communication.

By encrypting data at its origin and decrypting it only at its destination, E2EE prevents unauthorised access, even by the service providers or intermediaries that manage the data transfer infrastructure. This secure framework protects information from interception, manipulation, or surveillance while it is in transit.
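A minimal sketch of the idea, assuming the PyNaCl library (the article names no particular implementation): only the sender's and recipient's key pairs can recover the message, while anything relaying the ciphertext sees only opaque bytes.

```python
# Minimal end-to-end encryption sketch using PyNaCl (an assumed library choice).
from nacl.public import PrivateKey, Box

sender_key = PrivateKey.generate()
recipient_key = PrivateKey.generate()

# Sender encrypts with their private key and the recipient's public key.
ciphertext = Box(sender_key, recipient_key.public_key).encrypt(b"confidential lab result")

# Any relaying service only ever sees ciphertext bytes.
# Recipient decrypts with their private key and the sender's public key.
plaintext = Box(recipient_key, sender_key.public_key).decrypt(ciphertext)
print(plaintext.decode())   # -> confidential lab result
```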

For a company that handles sensitive or confidential data, especially in the health, financial, or legal sectors, end-to-end encryption is not just good practice; adopting it is a strategic imperative. Such measures strengthen overall cybersecurity posture, cultivate client trust, and support regulatory compliance. 

Implementing E2EE has also become increasingly important for complying with stringent data privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States, the General Data Protection Regulation (GDPR) in Europe, and other jurisdictional frameworks. 

Since cyber threats are growing in both frequency and sophistication, end-to-end encryption is an effective safeguard against information exposure in the digital age. With it, businesses can confidently manage digital communication, giving stakeholders peace of mind that their personal and professional data is protected throughout the entire process. Yet while end-to-end encryption is widely regarded as a vital tool for safeguarding digital privacy, its increasing adoption is posing significant challenges for law enforcement and child protection agencies. 

Over the past year alone, New Zealanders made more than 1 million attempts to access illegal online material, ranging from child sexual abuse content to extreme material such as bestiality and necrophilia. During this period, 13 individuals were arrested for possessing, disseminating, or generating such content, according to the Department of Internal Affairs (DIA). The DIA has expressed concern that encryption technologies are making it increasingly difficult to detect and respond to this criminal activity. 

As the name implies, end-to-end encryption restricts access to message content to just the sender and recipient, preventing third parties, including regulatory authorities, from monitoring harmful exchanges. Similar concerns were expressed by Eleanor Parkes, National Director of End Child Prostitution and Trafficking (ECPAT), who warned that the widespread use of encryption could allow illegal material to circulate undetected. 

As digital platforms increasingly focus on privacy-enhancing technologies, striking a balance between individual rights and collective safety has become a societal question as much as a technical one. The importance of protecting users' privacy online has never been more clearly recognised, and standard encryption remains a cornerstone of personal data protection across a wide array of digital services. 

In banking, healthcare, and private communications, encryption ensures the integrity and security of information transmitted across networks. End-to-end encryption (E2EE) is a more advanced and more restrictive implementation: it enhances privacy while significantly limiting oversight. In contrast to traditional encryption, E2EE allows only the sender and recipient of a message to access its content. 

Because the service provider operating the platform has no ability to view or intercept communications, this appears to be the perfect solution in theory. In practice, however, the absence of oversight mechanisms poses serious risks, especially for the protection of children. Without built-in safeguards or the ability to monitor content, platforms may inadvertently become safe havens for sharing illegal material, including images of child sexual abuse. This creates a troubling paradox: the same technology designed to protect users' privacy can also shield criminals from detection. 

As digital platforms continue to place a high value on user privacy, it becomes increasingly important to explore balanced approaches that do not compromise the safety and well-being of vulnerable populations, especially children. To combat the spread of illegal child sexual abuse material online, New Zealand's Department of Internal Affairs (DIA) has implemented a robust Digital Child Exploitation Filtering System, designed to block access to websites that host such content even when end-to-end encryption is in use.

Even though encrypted platforms present inherent challenges, the system has proven to be an invaluable weapon in the fight against online child exploitation. In the last year alone, it enabled the execution of 60 search warrants and the seizure of 235 digital devices, demonstrating the scale and seriousness of the issue. The DIA reports that investigators are increasingly encountering offenders holding vast quantities of illegal material, which is growing not only in volume but also in the severity of the harm it depicts. 

According to Eleanor Parkes, the widespread adoption of encryption reflects the public's growing concern over digital security. She pointed, however, to a recent study that revealed a far more distressing reality than most people realise. Parkes said that young people, often engaged in completely normal online interactions, are particularly vulnerable to exploitation in this changing digital environment, since child abuse material is alarmingly prevalent. 

A prominent representative of the New Zealand government highlighted that this is not an isolated or distant issue but a deeply rooted problem requiring urgent attention and collective responsibility, both within the country and internationally. As technology continues to evolve at an exponential rate, it becomes increasingly important to ensure that its use, particularly in sensitive areas like child protection, is both legally sound and responsible. These tools must be implemented within a clearly defined legislative framework that prioritises privacy while enabling effective intervention.

Safeguarding technologies should be used exclusively for detecting child sexual abuse material, with the intent of identifying and eliminating content that is clearly harmful and unacceptable. Law enforcement agencies that rely on AI-driven systems, such as biometric analysis and head recognition, must operate within strict and often complex legal frameworks. Regulations such as the General Data Protection Regulation (GDPR) in the European Union and the California Consumer Privacy Act (CCPA) in the United States set clear expectations for how data is handled, consented to, and disclosed. 

The use of biometric data is also tightly regulated, as legislation such as Illinois' Biometric Information Privacy Act (BIPA) imposes very strict limitations on how this data can be used. Increasingly, AI governance policies are being developed at both the national and regional levels, reinforcing the importance of ethical, transparent, and accountable technology use. Noncompliance not only results in legal repercussions, but it also threatens to undermine public trust, which is essential for successfully integrating AI into public safety initiatives. 

The future will require striking a delicate balance between innovation and regulation, ensuring that technology empowers protective efforts while safeguarding fundamental rights. Policymakers, technology developers, law enforcement, and advocacy organisations must come together to address the complex interplay between privacy and child protection and develop innovative, forward-looking approaches. Moving beyond the view of privacy and safety as opposing priorities is essential to foster innovations that learn from the past and build strong ethical protections into the core of their designs. 

Concretely, that means developing privacy-conscious technology that can detect harmful content without compromising user confidentiality, establishing secure and transparent reporting channels within encrypted platforms, and enhancing international cooperation to combat exploitation effectively while respecting data sovereignty. Further, industry transparency must be promoted through independent oversight and accountability mechanisms to maintain public trust and validate the integrity of these protective measures. 

Regulatory frameworks and technological solutions must adapt rapidly to keep pace with the evolving digital landscape, safeguarding vulnerable populations without sacrificing fundamental rights. In an increasingly interconnected world, technology will only fulfil its promise as a force for good if the approach to protecting children and preserving privacy rights is properly balanced, ethically robust, and proactive.