
OpenAI Launching AI-Powered Web Browser to Rival Chrome, Drive ChatGPT Integration


OpenAI is reportedly developing its own web browser, integrating artificial intelligence to offer users a new way to explore the internet. According to sources cited by Reuters, the tool is expected to be unveiled in the coming weeks, although an official release date has not yet been announced. With this move, OpenAI seems to be stepping into the competitive browser space with the goal of challenging Google Chrome’s dominance, while also gaining access to valuable user data that could enhance its AI models and advertising potential. 

The browser is expected to serve as more than just a window to the web—it will likely come packed with AI features, offering users the ability to interact with tools like ChatGPT directly within their browsing sessions. This integration could mean that AI-generated responses, intelligent page summaries, and voice-based search capabilities are no longer separate from web activity but built into the browsing experience itself. Users may be able to complete tasks, ask questions, and retrieve information all within a single, unified interface. 

A major incentive for OpenAI is the access to first-party data. Currently, most of the data that fuels targeted advertising and search engine algorithms is captured by Google through Chrome. By creating its own browser, OpenAI could tap into a similar stream of data—helping to both improve its large language models and create new revenue opportunities through ad placements or subscription services. While details on privacy controls are unclear, such deep integration with AI may raise concerns about data protection and user consent. 

Despite the potential, OpenAI faces stiff competition. Chrome currently holds a dominant share of the global browser market, with nearly 70% of users relying on it for daily web access. OpenAI would need to provide compelling reasons for people to switch—whether through better performance, advanced AI tools, or stronger privacy options. Meanwhile, other companies are racing to enter the same space. Perplexity AI, for instance, recently launched a browser named Comet, giving early adopters a glimpse into what AI-first browsing might look like. 

Ultimately, OpenAI’s browser could mark a turning point in how artificial intelligence intersects with the internet. If it succeeds, users might soon navigate the web in ways that are faster, more intuitive, and increasingly guided by AI. But for now, whether this approach will truly transform online experiences—or simply add another player to the browser wars—remains to be seen.

Why Running AI Locally with an NPU Offers Better Privacy, Speed, and Reliability


Running AI applications locally offers a compelling alternative to relying on cloud-based chatbots like ChatGPT, Gemini, or DeepSeek, especially for those concerned about data privacy, internet dependency, and speed. Cloud services may promise data protections in their subscription terms, but how that data is actually stored and used remains uncertain. In contrast, using AI locally means your data never leaves your device, which is particularly advantageous for professionals handling sensitive customer information or individuals wary of sharing personal data with third parties.

Local AI eliminates the need for a constant, high-speed internet connection. This reliable offline capability means that even in areas with spotty coverage or during network outages, tools for voice control, image recognition, and text generation remain functional. Lower latency also translates to near-instantaneous responses, unlike cloud AI that may lag due to network round-trip times. 

A powerful hardware component is essential here: the Neural Processing Unit (NPU). Typical CPUs and GPUs can struggle with AI workloads like large language models and image processing, leading to slowdowns, heat, noise, and shortened battery life. NPUs are specifically designed for handling matrix-heavy computations—vital for AI—and they allow these models to run efficiently right on your laptop, without burdening the main processor. 

Currently, consumer chips such as Intel's Core Ultra, Qualcomm's Snapdragon X Elite, and Apple's M-series (M1–M4) come equipped with NPUs built for this purpose. With a laptop built around one of these chips, you can run open-source AI models like DeepSeek-R1, Qwen 3, or LLaMA 3.3 using tools such as Ollama, which supports Windows, macOS, and Linux. By pairing Ollama with a user-friendly interface like Open WebUI, you can replicate the experience of cloud chatbots entirely offline.
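
To make this concrete, here is a minimal sketch of how a script might query a locally running Ollama server from Python. It assumes Ollama is already installed and serving on its default port (11434) and that a model such as llama3.3 has been pulled beforehand (for example with `ollama pull llama3.3`); the `ask_local_model` helper name is purely illustrative. Because the request only ever goes to localhost, the prompt and the response never leave the machine.

```python
# Minimal sketch: chat with a locally running Ollama server over its HTTP API.
# Assumes Ollama is installed and serving on its default port (11434) and that
# a model such as "llama3.3" has already been pulled (e.g. `ollama pull llama3.3`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"

def ask_local_model(prompt: str, model: str = "llama3.3") -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # ask for one complete response instead of a token stream
    }
    request = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        body = json.load(response)
    # The chat endpoint returns the assistant's reply under message.content.
    return body["message"]["content"]

if __name__ == "__main__":
    # Everything below runs against the local server; no data leaves the device.
    print(ask_local_model("Summarise the benefits of running AI models locally."))
```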

Other local tools like GPT4All and Jan.ai also provide convenient interfaces for running AI models locally. Be aware, though, that model files can be large, ranging from a few gigabytes for compact models to 20 GB or more for larger ones, and that without hardware acceleration from an NPU or GPU, performance may be sluggish and battery life will suffer.
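
For comparison, GPT4All also ships Python bindings, so the same offline workflow can be scripted rather than driven through a GUI. The short sketch below assumes the gpt4all package is installed (`pip install gpt4all`); the model file name is illustrative, and, as noted above, the download can run to several gigabytes.

```python
# Minimal sketch: running a model offline with the GPT4All Python bindings.
# Assumes `pip install gpt4all`; the model file name below is illustrative and
# will be downloaded on first use if it is not already on disk (several GB).
from gpt4all import GPT4All

model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # any model from GPT4All's catalogue

with model.chat_session():  # keeps multi-turn context entirely on the local machine
    reply = model.generate("Explain what an NPU does, in two sentences.", max_tokens=120)
    print(reply)
```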

Using AI locally comes with several key advantages. You gain full control over your data, knowing it’s never sent to external servers. Offline compatibility ensures uninterrupted use, even in remote or unstable network environments. In terms of responsiveness, local AI often outperforms cloud models due to the absence of network latency. Many tools are open source, making experimentation and customization financially accessible. Lastly, NPUs offer energy-efficient performance, enabling richer AI experiences on everyday devices. 

In summary, if you’re looking for a faster, more private, and more reliable AI workflow that doesn’t depend on the internet, equipping your laptop with an NPU and installing tools like Ollama, Open WebUI, GPT4All, or Jan.ai is a smart move. Not only will your interactions be quick and seamless, but they’ll also remain securely under your control.

AI and the Rise of Service-as-a-Service: Why Products Are Becoming Invisible


The software world is undergoing a fundamental shift. Thanks to AI, product development has become faster, easier, and more scalable than ever before. Tools like Cursor and Lovable—along with countless “co-pilot” clones—have turned coding into prompt engineering, dramatically reducing development time and enhancing productivity. 

This boom has naturally caught the attention of venture capitalists. Funding for software companies hit $80 billion in Q1 2025, with investors eager to back niche SaaS solutions that follow the familiar playbook: identify a pain point, build a narrow tool, and scale aggressively. Y Combinator’s recent cohort was full of “Cursor for X” startups, reflecting the prevailing appetite for micro-products. 

But beneath this surge of point solutions lies a deeper transformation: the shift from product-led growth to outcome-driven service delivery. This evolution isn’t just about branding—it’s a structural redefinition of how software creates and delivers value. Historically, the SaaS revolution gave rise to subscription-based models, but the tools themselves remained hands-on. For example, when Adobe moved Creative Suite to the cloud, the billing changed—not the user experience. Users still needed to operate the software. SaaS, in that sense, was product-heavy and service-light. 

Now, AI is dissolving the product layer itself. The software is still there, but it’s receding into the background. The real value lies in what it does, not how it’s used. Glide co-founder Gautam Ajjarapu captures this perfectly: “The product gets us in the door, but what keeps us there is delivering results.” Take Glide’s AI for banks. It began as a tool to streamline onboarding but quickly evolved into something more transformative. Banks now rely on Glide to improve retention, automate workflows, and enhance customer outcomes. 

The interface is still a product, but the substance is service. The same trend is visible across leading AI startups. Zendesk markets “automated customer service,” where AI handles tickets end-to-end. Amplitude’s AI agents now generate product insights and implement changes. These offerings blur the line between tool and outcome—more service than software. This shift is grounded in economic logic. Services account for over 70% of U.S. GDP, and Nobel laureate Bengt Holmström’s contract theory helps explain why: businesses ultimately want results, not just tools. 

They don’t want a CRM—they want more sales. They don’t want analytics—they want better decisions. With agentic AI, it’s now possible to deliver on that promise. Instead of selling a dashboard, companies can sell growth. Instead of building an LMS, they offer complete onboarding services powered by AI agents. This evolution is especially relevant in sectors like healthcare. Corti’s CEO Andreas Cleve emphasizes that doctors don’t want more interfaces—they want more time. AI that saves time becomes invisible, and its value lies in what it enables, not how it looks. 

The implication is clear: software is becoming outcome-first. Users care less about tools and more about what those tools accomplish. Many companies—Glean, ElevenLabs, Corpora—are already moving toward this model, delivering answers, brand voices, or research synthesis rather than just access. This isn’t the death of the product—it’s its natural evolution. The best AI companies are becoming “services in a product wrapper,” where software is the delivery mechanism, but the value lies in what gets done. 

For builders, the question is no longer how to scale a product. It’s how to scale outcomes. The companies that succeed in this new era will be those that understand: users don’t want features—they want results. Call it what you want—AI-as-a-service, agentic delivery, or outcome-led software. But the trend is unmistakable. Service-as-a-Service isn’t just the next step for SaaS. It may be the future of software itself.

Personal AI Agents Could Become Digital Advocates in an AI-Dominated World


As generative AI agents proliferate, a new concept is gaining traction: AI entities that act as loyal digital advocates, protecting individuals from overwhelming technological complexity, misinformation, and data exploitation. Experts suggest these personal AI companions could function similarly to service animals—trained not just to assist, but to guard user interests in an AI-saturated world. From scam detection to helping navigate automated marketing and opaque algorithms, these agents would act as user-first shields. 

At a recent Imagination in Action panel, Consumer Reports’ Ginny Fahs explained, “As companies embed AI deeper into commerce, it becomes harder for consumers to identify fair offers or make informed decisions. An AI that prioritizes users’ interests can build trust and help transition toward a more transparent digital economy.” The idea is rooted in giving users agency and control in a system where most AI is built to serve businesses. Panelists—including experts like Dazza Greenwood, Amir Sarhangi, and Tobin South—discussed how loyal, trustworthy AI advocates could reshape personal data rights, online trust, and legal accountability. 

Greenwood drew parallels to early internet-era reforms such as e-signatures and automated contracts, suggesting a similar legal evolution is needed now to govern AI agents. South added that AI agents must be “loyal by design,” ensuring they act within legal frameworks and always prioritize the user. Sarhangi introduced the concept of “Know Your Agent” (KYA), which promotes transparency by tracking the digital footprint of an AI. 

With unique agent wallets and activity histories, bad actors could be identified and held accountable. Fahs described a tool called “Permission Slip,” which automates user requests like data deletion. This form of AI advocacy predates current generative models but shows how user-authorized agents could manage privacy at scale. Agents could also learn from collective behavior. For instance, an AI noting a negative review of a product could share that experience with other agents, building an automated form of word-of-mouth. 

This concept, said panel moderator Sandy Pentland, mirrors how Consumer Reports aggregates user feedback to identify reliable products. South emphasized that cryptographic tools could ensure safe data-sharing without blindly trusting tech giants. He also referenced NANDA, a decentralized protocol from MIT that aims to enable trustworthy AI infrastructure. Still, implementing AI agents raises usability questions. “We want agents to understand nuanced permissions without constantly asking users to approve every action,” Fahs said. 

Getting this right will be crucial to user adoption. Pentland noted that current AI models struggle to align with individual preferences. “An effective agent must represent you—not a demographic group, but your unique values,” he said. Greenwood believes that’s now possible: “We finally have the tools to build AI agents with fiduciary responsibilities.” In closing, South stressed that the real bottleneck isn’t AI capability but structuring and contextualizing information properly. “If you want AI to truly act on your behalf, we must design systems that help it understand you.” 

As AI becomes deeply embedded in daily life, building personalized, privacy-conscious agents may be the key to ensuring technology serves people—not the other way around.

WhatsApp Under Fire for AI Update Disrupting Group Communication


WhatsApp’s new AI capability, Message Summaries, aims to change how users interact with their conversations. Built on Meta AI technology, it provides concise summaries of unread messages in both individual and group chats.

The tool is designed to help users stay informed in increasingly active chats by automatically compiling key points and contextual highlights, letting them catch up in a few taps rather than scrolling through lengthy message histories. The company says all summaries are generated privately, maintaining confidentiality while keeping the feature as simple as possible to use.

With this rollout, WhatsApp signals its intention to build AI-driven features into the app, improving convenience and reshaping communication habits for its global community, a move that has sparked both excitement and controversy. Announced last month, Message Summaries has now moved from pilot testing to a full-scale rollout.

After refining the tool and collecting user feedback, WhatsApp considers it stable and has formally launched it for wider use. In this initial phase, the feature is available only to users in the United States and only in English, a sign that WhatsApp is being cautious about deploying AI at scale.

The platform does, however, plan to extend availability to more regions and add multilingual support in the future. The phased rollout underscores the company’s focus on making the technology reliable and user-friendly before extending it to its vast global market.

A controlled release lets WhatsApp gather insights into how users interact with AI-generated conversation summaries and fine-tune the experience before expanding internationally. At the same time, the absence of any option to disable or hide the broader Meta AI integration has generated significant discontent among users.

Meta has so far offered no explanation for the lack of an opt-out mechanism, or for why users were never given the chance to decline the AI integration. For many, this lack of transparency is as concerning as the technology itself, raising questions about how much control people retain over their personal communications. In response, some users have tried to sidestep the chatbot by switching to a WhatsApp Business account.

Several users report that this workaround removed the Meta AI functionality, but others note that the characteristic blue circle indicating Meta AI’s presence still appeared, which only deepened the dissatisfaction and uncertainty.

Meta has not confirmed whether the business-oriented version of WhatsApp will remain exempt from AI integration in the long term. The rollout also reflects Meta’s broader goal of embedding generative AI across its ecosystem, including Facebook and Instagram.

Towards the end of 2024, Meta AI debuted in Facebook Messenger in the United Kingdom, followed by a gradual extension into WhatsApp as part of a unified vision for digital interaction. Despite these ambitions, many users find the feature intrusive and, ultimately, of little use.

The chatbot often activates when people are simply searching past conversations or looking up contacts, obstructing rather than streamlining the experience. Early feedback suggests AI-generated responses are frequently perceived as superficial, repetitive, or irrelevant to the conversation’s context, and opinions of their value vary widely.

Unlike standalone platforms such as ChatGPT and Google Gemini, which users seek out deliberately, Meta AI is integrated directly into WhatsApp, an app used daily for both personal and professional communication. Because the feature arrived without explicit consent, and given the doubts about its usefulness, many users are beginning to wonder whether such pervasive AI assistance is really necessary or desirable.

There is also a growing chorus of criticism about AI’s inherent limitations in reliably interpreting human communication. Many users are sceptical that AI can accurately condense even a single message in an active group chat, let alone synthesise hundreds of exchanges. WhatsApp is not the first to face this problem: Apple had to pull an AI-powered summarisation feature after it produced unintended and sometimes inaccurate summaries.

The problem of “hallucinations”, factually incorrect or contextually irrelevant content generated by AI, remains persistent across nearly every generative platform, including widely used ones like ChatGPT. AI also continues to struggle with subtleties such as humour, sarcasm, and cultural nuance, aspects of natural conversation that are central to establishing a connection.

When the AI is not trained to recognise offhand or joking remarks, it can easily misinterpret them, producing summaries that are alarmist, distorted, or simply inaccurate compared with how a human recipient would read the same messages. This risk of misrepresentation is making users who rely on WhatsApp for authentic, nuanced communication with colleagues, friends, and family increasingly apprehensive.

Beyond the technical limitations, a philosophical objection has been raised: substituting machine-generated recaps for real engagement diminishes the act of participating in a conversation. For many, the purpose of group chats lies precisely in reading and responding to the genuine voices of others.

Most would agree that scrolling through a large backlog of messages is exhausting. Even so, critics worry that Message Summaries not only threatens clear communication but also undermines the sense of personal connection that draws people into these digital communities in the first place.

To safeguard user privacy, WhatsApp built Message Summaries on a new framework known as Private Processing. The approach is designed so that neither Meta nor WhatsApp can access the contents of users’ conversations or the summaries the AI produces.

Rather than handling message content like ordinary server-side data, the platform processes summary requests inside a confidential computing environment that Meta says it cannot read, reinforcing its commitment to privacy. Each summary is presented in a clear bullet-point format and labelled “visible only to you”, underlining the privacy-centric design philosophy behind the feature.

Message Summaries are said to be especially useful in group chats, where the volume of unread messages can quickly become overwhelming. The tool distils lengthy exchanges into concise snapshots, letting users stay informed without reading or scrolling through every individual message.

The feature is disabled by default and must be activated manually, which goes some way towards addressing privacy concerns. Once enabled, eligible chats display a discreet icon signalling that a summary is available, without announcing it to other participants. At the core of the system is Meta’s confidential computing infrastructure, comparable in principle to Apple’s Private Cloud Compute architecture.

Private Processing is built on a Trusted Execution Environment (TEE), which keeps confidential information isolated while it is handled, includes robust measures against tampering, and provides clear mechanisms for transparency.

If any attempt is made to compromise the system’s security assurances, the architecture is designed either to shut down automatically or to generate verifiable evidence of the intrusion. Meta has also designed the framework to be stateless, forward secure, and resistant to targeted attacks, and to support independent third-party audits, so that its claims about data protection can be verified.

These technical safeguards are complemented by advanced chat privacy settings, which let users choose which conversations are eligible for AI-generated summaries and so offer granular control over the feature. When a user enables summaries in a chat, no notification is sent to other participants, preserving discretion.

Message Summaries are currently being introduced gradually to users in the United States and can only be read in English for now. Meta has confirmed that the feature will be expanded to additional regions and languages shortly, as part of its broader effort to integrate AI across its services.

As WhatsApp embeds AI capabilities ever deeper into everyday communication, Message Summaries marks a pivotal moment in the evolving relationship between technology and human interaction.

Although the company has repeatedly stated its commitment to privacy, transparency, and user autonomy, the polarised response to this feature highlights the challenge of introducing AI into spaces where trust, nuance, and human connection are paramount.

It is a timely reminder, for individuals and organisations alike, that convenience-driven automation affects the genuine social fabric of digital communities and deserves careful assessment.

As platforms evolve, stakeholders would do well to stay alert to changes in platform policies, evaluate whether such tools align with the communication values they hold dear, and offer structured feedback so these technologies can mature responsibly. As AI continues to redefine the contours of messaging, users will need to remain open to innovation while thinking critically about its long-term implications for privacy, comprehension, and the very nature of meaningful dialogue.

Security Teams Struggle to Keep Up With Generative AI Threats, Cobalt Warns


A growing number of cybersecurity professionals are expressing concern that generative AI is evolving too rapidly for their teams to manage. 

According to new research by penetration testing company Cobalt, over one-third of security leaders and practitioners admit that the pace of genAI development has outstripped their ability to respond. Nearly half of those surveyed (48%) said they wish they could pause and reassess their defense strategies in light of these emerging threats—though they acknowledge that such a break isn’t realistic. 

In fact, 72% of respondents listed generative AI-related attacks as their top IT security risk. Despite this, one in three organizations still isn’t conducting regular security evaluations of their large language model (LLM) deployments, including basic penetration testing. 

Cobalt CTO Gunter Ollmann warned that the security landscape is shifting, and the foundational controls many organizations rely on are quickly becoming outdated. “Our research shows that while generative AI is transforming how businesses operate, it’s also exposing them to risks they’re not prepared for,” said Ollmann. 
“Security frameworks must evolve or risk falling behind.” The study revealed a divide between leadership and practitioners. Executives such as CISOs and VPs are more concerned about long-term threats like adversarial AI attacks, with 76% listing them as a top issue. Meanwhile, 45% of practitioners are more focused on immediate operational challenges such as model inaccuracies, compared to 36% of executives. 

A majority of leaders—52%—are open to rethinking their cybersecurity strategies to address genAI threats. Among practitioners, only 43% shared this view. The top genAI-related concerns identified by the survey included the risk of sensitive information disclosure (46%), model poisoning or theft (42%), data inaccuracies (40%), and leakage of training data (37%). Around half of respondents also expressed a desire for more transparency from software vendors about how vulnerabilities are identified and patched, highlighting a widening trust gap in the AI supply chain. 

Cobalt’s internal pentest data shows a worrying trend: while 69% of high-risk vulnerabilities are typically fixed across all test types, only 21% of critical flaws found in LLM tests are resolved. This is especially alarming considering that nearly one-third of LLM vulnerabilities are classified as serious. Interestingly, the average time to resolve these LLM-specific vulnerabilities is just 19 days—the fastest across all categories. 

However, researchers noted this may be because organizations prioritize easier, low-effort fixes rather than tackling more complex threats embedded in foundational AI models. Ollmann compared the current scenario to the early days of cloud adoption, where innovation outpaced security readiness. He emphasized that traditional controls aren’t enough in the age of LLMs. “Security teams can’t afford to be reactive anymore,” he concluded. “They must move toward continuous, programmatic AI testing if they want to keep up.”

Meta.ai Privacy Lapse Exposes User Chats in Public Feed


Meta’s new AI-driven chatbot platform, Meta.ai, launched recently with much fanfare, offering features like text and voice chats, image generation, and video restyling. Designed to rival platforms like ChatGPT, the app also includes a Discover feed, a space intended to showcase public content generated by users. However, what Meta failed to communicate effectively was that many users were unintentionally sharing their private conversations in this feed—sometimes with extremely sensitive content attached. 

In May, journalists flagged the issue when they discovered public chats revealing deeply personal user concerns—ranging from financial issues and health anxieties to legal troubles. These weren’t obscure posts either; they appeared in a publicly accessible area of the app, often containing identifying information. Conversations included users seeking help with medical diagnoses, children talking about personal experiences, and even incarcerated individuals discussing legal strategies—none of whom appeared to realize their data was visible to others. 

Despite some recent tweaks to the app’s sharing settings, disturbing content still appears on the Discover feed. Users unknowingly uploaded images and video clips, sometimes including faces, alongside alarming or bizarre prompts. One especially troubling instance featured a photo of a child at school, accompanied by a prompt instructing the AI to “make him cry.” Such posts reflect not only poor design choices but also raise ethical questions about the purpose and moderation of the Discover feed itself. 

The issue evokes memories of other infamous data exposure incidents, such as AOL’s release of anonymized user searches in 2006, which provided unsettling insight into private thoughts and behaviors. While social media platforms are inherently public, users generally view AI chat interactions as private, akin to using a search engine. Meta.ai blurred that boundary—perhaps unintentionally, but with serious consequences. Many users turned to Meta.ai seeking support, companionship, or simple productivity help. Some asked for help with job listings or obituary writing, while others vented emotional distress or sought comfort during panic attacks. 

In some cases, users left chats expressing gratitude—believing the bot had helped. But a growing number of conversations end in frustration or embarrassment when users realize the bot cannot deliver on its promises or that their content was shared publicly. These incidents highlight a disconnect between how users engage with AI tools and how companies design them. Meta’s ambition to merge AI capabilities with social interaction seems to have ignored the emotional and psychological expectations users bring to private-sounding features. 

For those using Meta.ai as a digital confidant, the lack of clarity around privacy settings has turned an experiment in convenience into a public misstep. As AI systems become more integrated into daily life, companies must rethink how they handle user data—especially when users assume privacy. Meta.ai’s rocky launch serves as a cautionary tale about transparency, trust, and design in the age of generative AI.

How Generative AI Is Accelerating the Rise of Shadow IT and Cybersecurity Gaps


The emergence of generative AI tools in the workplace has reignited concerns about shadow IT—technology solutions adopted by employees without the knowledge or approval of the IT department. While shadow IT has always posed security challenges, the rapid proliferation of AI tools is intensifying the issue, creating new cybersecurity risks for organizations already struggling with visibility and control. 

Employees now have access to a range of AI-powered tools that can streamline daily tasks, from summarizing text to generating code. However, many of these applications operate outside approved systems and can send sensitive corporate data to third-party cloud environments. This introduces serious privacy concerns and increases the risk of data leakage. Unlike legacy software, generative AI solutions can be downloaded and used with minimal friction, making them harder for IT teams to detect and manage. 

The 2025 State of Cybersecurity Report by Ivanti reveals a critical gap between awareness and preparedness. More than half of IT and security leaders acknowledge the threat posed by software and API vulnerabilities. Yet only about one-third feel fully equipped to deal with these risks. The disparity highlights the disconnect between theory and practice, especially as data visibility becomes increasingly fragmented. 

A significant portion of this problem stems from the lack of integrated data systems. Nearly half of organizations admit they do not have enough insight into the software operating on their networks, hindering informed decision-making. When IT and security departments work in isolation—something 55% of organizations still report—it opens the door for unmonitored tools to slip through unnoticed. 

Generative AI has only added to the complexity. Because these tools operate quickly and independently, they can infiltrate enterprise environments before any formal review process occurs. The result is a patchwork of unverified software that can compromise an organization’s overall security posture. 

Rather than attempting to ban shadow IT altogether—a move unlikely to succeed—companies should focus on improving data visibility and fostering collaboration between departments. Unified platforms that connect IT and security functions are essential. With a shared understanding of tools in use, teams can assess risks and apply controls without stifling innovation. 

Creating a culture of transparency is equally important. Employees should feel comfortable voicing their tech needs instead of finding workarounds. Training programs can help users understand the risks of generative AI and encourage safer choices. 

Ultimately, AI is not the root of the problem—lack of oversight is. As the workplace becomes more AI-driven, addressing shadow IT with strategic visibility and collaboration will be critical to building a strong, future-ready defense.