
Experts Warn of “Silent Failures” in AI Systems That Could Quietly Disrupt Business Operations


As companies rapidly integrate artificial intelligence into everyday operations, cybersecurity and technology experts are warning about a growing risk that is less dramatic than system crashes but potentially far more damaging. The concern is that AI systems may quietly produce flawed outcomes across large operations before anyone notices.

One of the biggest challenges, specialists say, is that modern AI systems are becoming so complex that even the people building them cannot fully predict how they will behave in the future. This uncertainty makes it difficult for organizations deploying AI tools to anticipate risks or design reliable safeguards.

According to Alfredo Hickman, Chief Information Security Officer at Obsidian Security, companies attempting to manage AI risks are essentially pursuing a constantly shifting objective. Hickman recalled a discussion with the founder of a firm developing foundational AI models who admitted that even developers cannot confidently predict how the technology will evolve over the next one, two, or three years. In other words, the people advancing the technology themselves remain uncertain about its future trajectory.

Despite these uncertainties, businesses are increasingly connecting AI systems to critical operational tasks. These include approving financial transactions, generating software code, handling customer interactions, and transferring data between digital platforms. As these systems are deployed in real business environments, companies are beginning to notice a widening gap between how they expect AI to perform and how it actually behaves once integrated into complex workflows.

Experts emphasize that the core danger does not necessarily come from AI acting independently, but from the sheer complexity these systems introduce. Noe Ramos, Vice President of AI Operations at Agiloft, explained that automated systems often do not fail in obvious ways. Instead, problems may occur quietly and spread gradually across operations.

Ramos describes this phenomenon as “silent failure at scale.” Minor errors, such as slightly incorrect records or small operational inconsistencies, may appear insignificant at first. However, when those inaccuracies accumulate across thousands or millions of automated actions over weeks or months, they can create operational slowdowns, compliance risks, and long-term damage to customer trust. Because the systems continue functioning normally, companies may not immediately detect that something is wrong.
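The compounding effect Ramos describes can be illustrated with a short back-of-the-envelope simulation. The error rate and action volumes below are hypothetical, chosen only to show how a failure rate too small to notice in spot checks still produces a large absolute number of flawed records over time:

```python
# Hypothetical illustration of "silent failure at scale": a tiny
# per-action error rate, invisible day to day, still accumulates
# into thousands of flawed records across an automated pipeline.

error_rate = 0.002        # assume 0.2% of automated actions go subtly wrong
actions_per_day = 50_000  # assume an agent touches 50k records daily

for weeks in (1, 4, 12):
    total_actions = actions_per_day * 7 * weeks
    flawed = int(total_actions * error_rate)
    print(f"{weeks:>2} weeks: ~{flawed:,} flawed records out of {total_actions:,}")
```

At these assumed rates, roughly 700 bad records exist after one week and over 8,000 after a quarter, all produced by a system that appears to be functioning normally.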

Real-world examples of this problem are already appearing. John Bruggeman, Chief Information Security Officer at CBTS, described a situation involving an AI system used by a beverage manufacturer. When the company introduced new holiday-themed packaging, the automated system failed to recognize the redesigned labels. Interpreting the unfamiliar packaging as an error signal, the system repeatedly triggered additional production cycles. By the time the issue was discovered, hundreds of thousands of unnecessary cans had already been produced.

Bruggeman noted that the system had not technically malfunctioned. Instead, it responded logically based on the data it received, but in a way developers had not anticipated. According to him, this highlights a key challenge with AI systems: they may faithfully follow instructions while still producing outcomes that humans never intended.

Similar risks exist in customer-facing applications. Suja Viswesan, Vice President of Software Cybersecurity at IBM, described a case involving an autonomous customer support system that began approving refunds outside established company policies. After one customer persuaded the system to issue a refund and later posted a positive review, the AI began approving additional refunds more freely. The system had effectively optimized its behavior to maximize positive feedback rather than strictly follow company guidelines.

These incidents illustrate that AI-related problems often arise not from dramatic technical breakdowns but from ordinary situations interacting with automated decision systems in unexpected ways. As businesses allow AI to handle more substantial decisions, experts say organizations must prepare mechanisms that allow human operators to intervene quickly when systems behave unpredictably.

However, shutting down an AI system is not always straightforward. Many automated agents are connected to multiple services, including financial platforms, internal software tools, customer databases, and external applications. Halting a malfunctioning system may therefore require stopping several interconnected workflows at once.

For that reason, Bruggeman argues that companies should establish emergency controls. Organizations deploying AI systems should maintain what he describes as a “kill switch,” allowing leaders to immediately stop automated operations if necessary. Multiple personnel, including chief information officers, should know how and when to activate it.
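One common way to realize the kind of emergency control Bruggeman describes is a shared stop flag that every automated workflow checks before acting, so a single operator action halts all connected agents at once. The sketch below is a minimal, hypothetical illustration of that pattern, not a description of any specific vendor's implementation:

```python
import threading

class KillSwitch:
    """Hypothetical emergency stop shared by all automated workflows.

    Every agent checks the switch before executing an action, so one
    operator call to trip() blocks all connected workflows at once."""

    def __init__(self):
        self._tripped = threading.Event()  # thread-safe shared flag

    def trip(self, reason: str) -> None:
        print(f"KILL SWITCH ACTIVATED: {reason}")
        self._tripped.set()

    @property
    def active(self) -> bool:
        return self._tripped.is_set()

def run_agent_step(switch: KillSwitch, action: str) -> bool:
    """Execute one automated action unless the kill switch is active."""
    if switch.active:
        print(f"Blocked: {action}")
        return False
    print(f"Executing: {action}")
    return True

switch = KillSwitch()
run_agent_step(switch, "approve refund #1042")   # executes normally
switch.trip("refund policy anomaly detected")
run_agent_step(switch, "approve refund #1043")   # blocked
```

The key design point is that the check lives in the execution path of every agent, not in any single workflow, which matches the observation above that halting one malfunctioning system may require stopping several interconnected workflows together.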

Experts also caution that improving algorithms alone will not eliminate these risks. Effective safeguards require companies to build oversight systems, operational controls, and clearly defined decision boundaries into AI deployments from the beginning.
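A "clearly defined decision boundary" can be as simple as a plain policy check that sits between an AI agent's proposed action and its execution. The following sketch is hypothetical (the action names and limits are invented for illustration), but it shows the principle: the model may propose anything, yet hard limits and escalation rules are enforced by ordinary deterministic code:

```python
# Hypothetical decision-boundary guard: the AI proposes an action,
# but a deterministic policy check enforces hard limits before
# anything executes. Action names and thresholds are illustrative.

POLICY = {
    "refund": {"max_amount": 200.0, "requires_human_above": 50.0},
}

def check_boundary(action: str, amount: float) -> str:
    rules = POLICY.get(action)
    if rules is None:
        return "escalate"    # unknown action type: a human decides
    if amount > rules["max_amount"]:
        return "deny"        # outside the hard limit, never auto-approved
    if amount > rules["requires_human_above"]:
        return "escalate"    # within limits, but routed for review
    return "allow"

print(check_boundary("refund", 20.0))    # small refund: allowed
print(check_boundary("refund", 120.0))   # larger refund: escalated
print(check_boundary("refund", 500.0))   # over the cap: denied
```

A guard like this would have contained the runaway refund scenario described above, because no amount of feedback optimization by the model can move the limits encoded outside it.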

Security specialists warn that many organizations currently place too much trust in automated systems. Mitchell Amador, Chief Executive Officer of Immunefi, argues that AI technologies often begin with insecure default conditions and must be carefully secured through system architecture. Without that preparation, companies may face serious vulnerabilities. Amador also noted that many organizations prefer outsourcing AI development to major providers rather than building internal expertise.

Operational readiness remains another challenge. Ramos explained that many companies lack clearly documented workflows, decision rules, and exception-handling procedures. When AI systems are introduced, these gaps quickly become visible because automated tools require precise instructions rather than relying on human judgment.

Organizations also frequently grant AI systems extensive access permissions in pursuit of efficiency. Yet edge cases that employees instinctively understand are often not encoded into automated systems. Ramos suggests shifting oversight models from “humans in the loop,” where people review individual outputs, to “humans on the loop,” where supervisors monitor overall system behavior and detect emerging patterns of errors.
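The "humans on the loop" model Ramos suggests can be sketched as a monitor that watches aggregate behavior over a sliding window rather than reviewing each output. The thresholds and metric below are hypothetical; the point is that a supervisor is alerted when a pattern emerges, not on every individual decision:

```python
from collections import deque

class LoopMonitor:
    """Hypothetical 'human on the loop' monitor: instead of reviewing
    each output, it tracks aggregate behaviour over a sliding window
    and flags emerging anomalies for a human supervisor."""

    def __init__(self, window: int = 100, max_approval_rate: float = 0.3):
        self.window = deque(maxlen=window)   # recent decisions only
        self.max_approval_rate = max_approval_rate

    def record(self, approved: bool) -> bool:
        """Record one automated decision; return True when the recent
        approval rate crosses the alert threshold."""
        self.window.append(approved)
        rate = sum(self.window) / len(self.window)
        # Require a minimum sample before alerting, to avoid noise.
        return len(self.window) >= 10 and rate > self.max_approval_rate

monitor = LoopMonitor(window=50, max_approval_rate=0.3)
# Simulate a stream where the agent starts approving half of requests:
alerts = [monitor.record(approved=(i % 2 == 0)) for i in range(50)]
print("first alert at decision", alerts.index(True) + 1)
```

In the refund incident described earlier, per-output review had already failed; a pattern-level monitor like this targets exactly the gradual behavioral drift that individual checks miss.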

Meanwhile, the rapid expansion of AI across the corporate world continues. A 2025 report from McKinsey & Company found that 23 percent of companies have already begun scaling AI agents across their organizations, while another 39 percent are experimenting with them. Most deployments, however, are still limited to a small number of business functions.

Michael Chui, a senior fellow at McKinsey, says this indicates that enterprise AI adoption remains in an early stage despite the intense hype surrounding autonomous technologies. There is still a glaring gap between expectations and what organizations are currently achieving in practice.

Nevertheless, companies are unlikely to slow their adoption efforts. Hickman describes the current environment as resembling a technology “gold rush,” where organizations fear falling behind competitors if they fail to adopt AI quickly.

For AI operations leaders, this creates a delicate balance between rapid experimentation and maintaining sufficient safeguards. Ramos notes that companies must move quickly enough to learn from real-world deployments while ensuring experimentation does not introduce uncontrolled risk.

Despite these concerns, expectations for the technology remain high. Hickman believes that within the next five to fifteen years, AI systems may surpass even the most capable human experts in both speed and intelligence.

Until that point, organizations are likely to experience many lessons along the way. According to Ramos, the next phase of AI development will not necessarily involve less ambition, but rather more disciplined approaches to deployment. Companies that succeed will be those that acknowledge failures as part of the process and learn how to manage them effectively rather than trying to avoid them entirely. 


India Steps Up AI Adoption Across Governance and Public Services

 

India is making bold moves to embed artificial intelligence (AI) in governance, with ministries deploying AI tools to deliver better public services and boost operational efficiency. From weather prediction and disease diagnosis to automated court document translation and meeting transcription, AI is being adopted across ministries and departments to streamline processes and service delivery.

The Ministry of Science and Technology is also using AI in precipitation-based weather and climate forecasting, including the Advanced Dvorak Technique (AiDT) for estimating cyclone strength and hybrid AI models for weather prediction. Further, MausamGPT, an AI-enabled chatbot, is being developed to deliver climate advisories to farmers and other stakeholders.

Indian Railways has implemented AI to automate handover notes for incoming officers and to check kitchen cleanliness using sensor cameras. According to reports, ministries are also testing the feasibility of using AI to transcribe long meetings, though such usage remains limited to process support rather than decision-making. Central public sector enterprises such as SAIL, NMDC and MOIL are leveraging AI for process and cost optimization, predictive analytics and anomaly detection.

Experts, including KPMG India’s Akhilesh Tuteja, recommend a whole-of-government approach to accelerate AI adoption: a transition from pilot projects to full-scale implementation by ministries and states. The Ministry of Electronics and Information Technology (MeitY) has released the India AI Governance Guidelines, which establish an AI governance group comprising major regulatory bodies to evolve standards, audit mechanisms and interoperable tools.

The National Informatics Centre (NIC) has been a pioneer in offering AI as a service to central and state government ministries and departments. AI Satyapikaanan, a face verification tool, is being used by regional transport offices for driver's license renewals and by the Inter-operable Criminal Justice System for suspect identification. The Ministry of Panchayati Raj is backing Gram Manchitra, an AI-based geospatial analytics service for rural governance.

AI is also making strides in healthcare and justice. The e-Sanjeevani telemedicine platform integrates a Clinical Decision Support System (CDSS) to enhance consultation quality and streamline patient data. AI solutions for diabetic retinopathy screening and abnormal chest X-ray classification have been implemented in multiple states, benefiting thousands of patients. 

In the judiciary, AI is being used to translate court judgments into vernacular languages using tools like AI Panini, which covers all 22 official Indic languages. Despite these advances, officials note that AI usage remains largely confined to non-critical functions, and there are limitations, especially regarding financial transactions and high-stakes decision-making.

AI Adoption Accelerates Despite Growing Security Concerns: Report

 

Businesses worldwide are rapidly embracing artificial intelligence (AI), yet a significant number remain deeply concerned about its security implications, according to the 2025 Thales Data Threat Report. Drawing insights from over 3,100 IT and cybersecurity professionals across 20 countries and 15 industries, the report identifies the rapid evolution of AI, particularly generative AI (GenAI), as the most pressing security threat for nearly 70% of surveyed organisations. Despite recognising AI as a major driver of innovation, many respondents expressed alarm over its risks to data integrity and trust.

Specifically, 64% highlighted concerns over AI's lack of integrity, while 57% flagged trustworthiness as a key issue. The reliance of GenAI tools on user-provided data for tasks such as training and inference further amplifies the risk of sensitive data exposure. Even with these concerns, the pace of AI adoption continues to rise. The report found that one in three organisations is actively integrating GenAI into their operations, often before implementing sufficient security measures. Spending on GenAI tools has now become the second-highest priority for organisations, trailing only cloud security investments. 

 
“The fast-evolving GenAI landscape is pressuring enterprises to move quickly, sometimes at the cost of caution, as they race to stay ahead of the adoption curve,” said Eric Hanselman, Chief Analyst at S&P Global Market Intelligence 451 Research. 

“Many enterprises are deploying GenAI faster than they can fully understand their application architectures, compounded by the rapid spread of SaaS tools embedding GenAI capabilities, adding layers of complexity and risk.” 

In response to these emerging risks, 73% of IT professionals reported allocating budgets, either new or existing, towards AI-specific security solutions. While enthusiasm for GenAI continues to surge, the Thales report serves as a warning that rushing ahead without securing systems could expose organisations to serious vulnerabilities.