Google Introduces AI-Powered Side Panel in Chrome to Automate Browsing




Google has updated its Chrome browser by adding a built-in artificial intelligence panel powered by its Gemini model, marking a step toward automated web interaction. The change reflects the company’s broader push to integrate AI directly into everyday browsing.

Chrome, which currently holds more than 70 percent of the global browser market, is now moving in the same direction as other browsers that have already experimented with AI-driven navigation. The idea behind this shift is to allow users to rely on AI systems to explore websites, gather information, and perform online actions with minimal manual input.

The Gemini feature appears as a sidebar within Chrome, reducing the visible area of websites to make room for an interactive chat interface. Through this panel, users can communicate with the AI while keeping their main work open in a separate tab, allowing multitasking without constant tab switching.

Google explains that this setup can help users organize information more effectively. For example, Gemini can compare details across multiple open tabs or summarize reviews from different websites, helping users make decisions more quickly.

For subscribers to Google’s higher-tier AI plans, Chrome now offers an automated browsing capability. This allows Gemini to act as a software agent that can follow instructions involving multiple steps. In demonstrations shared by Google, the AI can analyze images on a webpage, visit external shopping platforms, identify related products, and add items to a cart while staying within a user-defined budget. The final purchase, however, still requires user approval.

The browser update also includes image-focused AI tools that allow users to create or edit images directly within Chrome, further expanding the browser’s role beyond simple web access.

Chrome’s integration with other applications has also been expanded. With user consent, Gemini can now interact with productivity tools, communication apps, media services, navigation platforms, and shopping-related Google services. This gives the AI broader context when assisting with tasks.

Google has indicated that future updates will allow Gemini to remember previous interactions across websites and apps, provided users choose to enable this feature. The goal is to make AI assistance more personalized over time.

Despite these developments, automated browsing faces resistance from some websites. Certain platforms have already taken legal or contractual steps to limit AI-driven activity, particularly for shopping and transactions. This underlines the ongoing tension between automation and website control.

To address these concerns, Google says Chrome will request human confirmation before completing sensitive actions such as purchases or social media posts. The browser will also support an open standard designed to allow AI-driven commerce in collaboration with participating retailers.

Currently, these features are available on Chrome for desktop systems in the United States, with automated browsing restricted to paid subscribers. How widely such AI-assisted browsing will be accepted across the web remains uncertain.


SK hynix Launches New AI Company as Data Center Demand Drives Growth

 

A surge in demand for data center hardware, driven by limited availability of crucial AI chips, has lifted SK hynix into a stronger market position. Though rooted in memory production, the company is now pushing further by launching a dedicated arm centered on tailored AI offerings. Rising revenues reflect investor confidence fueled by sustained component shortages, with recent growth shaped more by supply constraints and timing than by a strategic pivot.

Early next year, the business will launch a division known as “AI Company” (AI Co.), set to begin operations in February. The offshoot aims to play a central role in the AI data center landscape, positioning itself alongside major contributors. As demand shifts toward bundled options, clients increasingly prefer complete packages blending infrastructure, software, and support over isolated hardware. According to SK hynix, this change opens doors that traditional component sales alone could not.

Details remain scarce, but according to statements given to The Register, AI Co. plans to offer industry-specific AI tools backed by dedicated data center infrastructure. Initially it will focus on software that refines how AI models run on hardware; over time, its investments may stretch into broader areas of the data center business. Alongside funding external ventures and novel technology, reports indicate that turning prototypes into market-ready products may form a core piece of its strategy.

SK hynix is setting aside about $10 billion for the new venture, and an interim leadership group and governing committee are expected to be announced next month. The California-based SSD unit Solidigm will be reorganized as part of the shift: the existing Solidigm entity becomes AI Co., while SSD production moves into a newly created company named Solidigm Inc.

The AI server industry is increasingly leaning toward tailored chips (ASICs) instead of general-purpose ones. According to Counterpoint Research, ASIC shipments for these systems could triple by 2027 and surpass fifteen million units annually by 2028, overtaking today’s volume leaders, data center GPUs. While initial prices for ASICs can run high, their running costs tend to stay low compared with premium graphics processors, and the inference workloads that commonly drive demand favor such efficiency-focused designs. Broadcom stands near the front, positioned to hold roughly six out of every ten units delivered in 2027.

A wider shortage of memory chips keeps lifting SK hynix. According to IDC, demand now clearly exceeds available supply because manufacturers are directing more output toward server and graphics processors instead of phones and laptops. As a result, prices throughout the sector have climbed, directly boosting the firm’s earnings. Revenue for 2025 reached ₩97.14 trillion ($67.9 billion), up 47%; in the last quarter alone, revenue surged 66% year over year to ₩32.8 trillion ($22.9 billion).

Suppliers such as ASML are seeing gains too, thanks to rising demand for semiconductor production equipment. Known mainly for its photolithography machines, the company reported €9.7 billion (roughly $11.6 billion) in revenue in its latest quarter, and forecasts suggest a sharp rise in orders for its high-end EUV tools during the current year. Despite broader market shifts, performance remains strong across key segments.

Still, experts caution that the memory chip shortage could hurt buyers, as devices like computers and phones become more expensive. Forecasts indicate that computer shipments may drop this year as supplies tighten and costs climb.

Researchers Uncover Pakistan-Linked Cyber Activity Targeting India


 

A familiar, uneasy brink appears to be looming between India and Pakistan once again, with geopolitical tension spilling across borders into less visible spheres. As hostilities intensified in May 2025, cyberspace became one of the contested arenas.

Amid the heightened hostilities, Pakistan-linked hacktivist groups began claiming widespread cyberattacks on Indian government bodies, academic institutions, and critical infrastructure. At first glance, the volume of asserted attacks suggested a broad cyber offensive. A closer look at the reports, however, tells a more nuanced story.

According to findings from security firm CloudSEK, many of these alleged breaches were either overstated or entirely fabricated, based on recycled data dumps, cosmetic website defacements, and short-lived interruptions that caused little operational harm.

Despite the noise surrounding the Pahalgam terror attack, a more sobering development lay behind the curtain: an intrusion campaign targeting Indian defense-linked networks, built around the Crimson RAT malware and attributed to the advanced persistent threat group APT36.

Drawing a clear distinction between spectacle and substance, this analysis examines what transpired in the India-Pakistan cyber conflict, why it matters, and where the real risks lie in the coming months.

Beneath the noise of hacktivist claims, researchers warn, a far more methodical and state-aligned cyber espionage effort has been quietly unfolding. Over the past couple of years, activity has increased significantly from the Pakistan-linked threat actor designated APT36, also tracked by cybersecurity experts as Earth Karkaddan, Mythic Leopard, Operation C-Major, and Transparent Tribe.

Established more than a decade ago, the group has a demonstrated track record of targeted intelligence-gathering operations against Indian institutions.

In August 2025, analysts observed a shift in APT36’s tactics: rather than targeting Windows systems, a new campaign focused on Linux-based systems, using carefully designed malware delivery techniques.

APT36 used procurement-themed phishing lures to distribute ZIP archives disguised as routine documents. These files covertly downloaded and installed a malware dropper, which was executed through Linux desktop entry (.desktop) configurations.

A decoy PDF was displayed to avoid suspicion while the dropper retrieved its payload from Google Drive. Further analysis found that the payload was designed to evade detection using anti-debugging and anti-sandbox measures, maintain persistence on compromised systems, and establish covert communication with command-and-control infrastructure over WebSockets, all hallmarks of a calculated espionage operation rather than an opportunistic intrusion.

Further analysis by Zscaler ThreatLabz indicates the activity was part of two coordinated campaigns, identified as Gopher Strike and Sheet Attack, both carried out from September to October 2025. While elements of the operations resemble techniques historically associated with APT36, researchers believe the observed activity may be the work of a distinct subgroup or a separate Pakistan-linked threat actor.

The Sheet Attack campaign is characterized by its use of trusted cloud-based platforms for command-and-control communications, including Google Sheets, Firebase, and email services, which lets attack traffic blend into legitimate network activity.

Gopher Strike, on the other hand, is initiated by phishing emails carrying PDF attachments designed to trick recipients into installing a fake Adobe Acrobat Reader DC update. A blurred image is displayed over a seemingly benign prompt instructing users to download the update before they can view the document’s contents.

Selecting the embedded option initiates the download of an ISO image, but only when the request originates from an address in India and presents an expected Windows user agent: server-side checks that frustrate automated analysis and restrict delivery to the intended audience.
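
The gating described above amounts to a simple server-side filter. A minimal illustrative sketch follows; this is a reconstruction for explanation only, and the function name and exact checks are assumptions, not the attackers' actual code.

```python
# Illustrative sketch of server-side delivery gating; names are hypothetical.

def should_serve_payload(client_country: str, user_agent: str) -> bool:
    """Serve the ISO only to requests matching the target profile."""
    is_target_region = client_country == "IN"      # geo-IP lookup result
    is_expected_agent = "Windows" in user_agent    # user-agent check
    return is_target_region and is_expected_agent

# Sandboxes and scanners outside the target profile receive nothing,
# which frustrates automated analysis.
print(should_serve_payload("IN", "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"))  # True
print(should_serve_payload("US", "python-requests/2.31"))                        # False
```

Gating like this also keeps the payload out of the hands of researchers crawling from outside the target region, which is why such checks increasingly appear in targeted campaigns.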

Embedded within the ISO is a downloader written in the Golang programming language, named GOGITTER, which establishes persistence by creating and repeatedly executing Visual Basic scripts across multiple directories on the system.

The malware periodically retrieves commands from preconfigured command-and-control servers and can, if necessary, fetch additional payloads from a private GitHub repository created earlier in 2025, indicating that the campaign was deliberately designed and sustained over that period.

Once the malicious payload has been retrieved, the intrusion sequence executes a tightly coordinated series of actions to confirm compromise and establish deeper control. Investigators note that the infected system first sends an HTTP GET request to the domain adobe-acrobat[.]in to inform the operator that the target has been successfully breached.

The GOGITTER downloader then unpacks and launches an executable, edgehost.exe, from the previously delivered archives. This component is responsible for deploying GITSHELLPAD, a lightweight Golang backdoor that relies heavily on attacker-controlled private GitHub repositories for command and control. The backdoor stays in contact with its operators by periodically polling a remote server for instructions stored in a file called command.txt, which is updated every few seconds.
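
Polling the same file every few seconds produces unusually regular traffic that defenders can hunt for. A minimal detection sketch, assuming access to per-host request timestamps; the function name and thresholds below are illustrative choices, not part of any published detection rule.

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps: list[float],
                         max_jitter: float = 0.5,
                         min_requests: int = 5) -> bool:
    """Flag a request series whose inter-arrival times are suspiciously regular."""
    if len(timestamps) < min_requests:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    # A human browsing produces irregular gaps; a short polling loop does not.
    return pstdev(gaps) <= max_jitter and mean(gaps) < 60

# A backdoor polling command.txt roughly every 5 seconds:
polling = [0.0, 5.1, 10.0, 15.2, 20.1, 25.0]
print(looks_like_beaconing(polling))   # True
# Ordinary interactive traffic:
human = [0.0, 3.0, 40.0, 41.0, 95.0, 200.0]
print(looks_like_beaconing(human))     # False
```

In practice such a heuristic would be paired with destination analysis, since polling raw GitHub content from a host that never otherwise touches GitHub is itself a strong signal.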

Beyond navigating directories and executing processes on a compromised system, attackers can also transfer files to and from it. Execution results are recorded in a separate file and uploaded to GitHub for exfiltration, after which the file is removed to erase the forensic trace.

Zscaler researchers also observed that, after initial access, operators downloaded additional RAR archives using curl. These packages contained system reconnaissance tools as well as a custom Golang loader known as GOSHELL, which was used, after several decoding stages, to deploy a Cobalt Strike beacon.

The loader was intentionally padded with extraneous data to inflate its size to about one gigabyte, a tactic used to bypass antivirus detection.

When the auxiliary tools had fulfilled their purpose, they were systematically removed from the host, reflecting a disciplined effort to keep the campaign as stealthy as possible. 

Recent investigations indicate that cyber tensions between India and Pakistan are intensifying, and it is important to distinguish high-impact threats from performative digital noise.

Although waves of hacktivist claims created the illusion of a widespread cyberattack on Indian institutions in mid-2025, detailed analysis shows that most of these disruptions were exaggerated or inconsequential. The more consequential risk associated with Pakistan-linked actors, including groups such as APT36, is sustained, technically sophisticated espionage.

The attacks illustrate a clear evolution in tradecraft, combining targeted phishing, exploitation of trusted cloud platforms, and custom malware frameworks to quietly penetrate both Linux and Windows environments within government and defense organizations.

Selective delivery mechanisms, stealthy persistence techniques, and layered payloads, culminating in the deployment of advanced post-exploitation tools, underline a strategic focus on long-term access rather than immediate disruption.

For policymakers and security teams, the findings underscore that detecting covert, state-aligned intrusions must be prioritized over headline-driven hacktivist activity, and that in an increasingly contested cyber landscape, defenses need strengthening through anti-phishing measures, controls against cloud abuse, and endpoint monitoring.

Cyberattack Paralyzes Russia's Delta Security Systems

 

A massive cyberattack was launched against Delta, a leading Russian smart alarm system supplier for residential, commercial, and automotive use, on 26 January 2026, causing widespread operational disruptions across the country. The attack crippled Delta’s information technology systems, bringing down websites, telephony, and critical services for tens of thousands of subscribers. Delta labeled the incident a “large-scale external attack” designed to bring operations to a standstill, with no signs of customer data compromise identified at the time.

End users were immediately affected as car alarms failed to disarm, in many cases preventing unlocking and engine start. Home and commercial building alarm systems defaulted to emergency modes that users could not override, while remote services such as vehicle start malfunctioned, sometimes causing engines to shut down during use. Reports from Telegram channels such as Baza and news outlets such as Kommersant shed light on these failures, highlighting the weaknesses of internet-connected IoT security devices.

Delta’s marketing director, Valery Ushkov, addressed the situation through a video message, stating that the company’s infrastructure was not capable of withstanding the “well-coordinated” global attack. The prolonged recovery effort was necessary due to continued threats following the attack, forcing updates to be posted through VKontakte instead of the company’s own channels. Although Delta claimed that most services would be restored soon with professional help, disruptions continued into 27 January, eroding trust in the company’s cybersecurity efforts. 

Unverified claims emerged on a Telegram channel allegedly linked to the hackers, who shared one of ten alleged data dumps taken from Delta’s systems. Though their authenticity remains unconfirmed, fears grew over Delta’s mobile app, which is compatible with most vehicles and stores payment and tracking data. No hacking group has claimed responsibility, leaving speculation about DDoS, ransomware, or wiper malware unresolved.

The breach is part of a wave of IT issues in Russia, including a travel booking service outage that same day, although officials say the two incidents are unrelated. Coming at a time of geopolitical strain, and with Delta blaming a “hostile foreign state,” the incident illustrates the vulnerabilities of IoT-based security and has sparked renewed demands for more robust safeguards in critical infrastructure to mitigate the physical safety risks of cyber incidents.

Anthropic Cracks Down on Claude Code Spoofing, Tightens Access for Rivals and Third-Party Tools

 

Anthropic has rolled out a new set of technical controls aimed at stopping third-party applications from impersonating its official coding client, Claude Code, to gain cheaper access to Claude AI models and higher usage limits. The move has directly disrupted workflows for users of popular open-source coding agents such as OpenCode.

At the same time—but through a separate enforcement action—Anthropic has also curtailed the use of its models by competing AI labs, including xAI, which accessed Claude through the Cursor integrated development environment. Together, these steps signal a tightening of Anthropic’s ecosystem as demand for Claude Code surges.

The anti-spoofing update was publicly clarified on Friday by Thariq Shihipar, a Member of Technical Staff at Anthropic working on Claude Code. Writing on X (formerly Twitter), Shihipar said the company had "tightened our safeguards against spoofing the Claude Code harness." He acknowledged that the rollout caused unintended side effects, explaining that some accounts were automatically banned after triggering abuse detection systems—an issue Anthropic says it is now reversing.

While those account bans were unintentional, the blocking of third-party integrations themselves appears to be deliberate.

Why Harnesses Were Targeted

The changes focus on so-called “harnesses”—software wrappers that control a user’s web-based Claude account via OAuth in order to automate coding workflows. Tools like OpenCode achieved this by spoofing the client identity and sending headers that made requests appear as if they were coming from Anthropic’s own command-line interface.

This effectively allowed developers to link flat-rate consumer subscriptions, such as Claude Pro or Max, with external automation tools—bypassing the intended limits of plans designed for human, chat-based use.
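
Conceptually, enforcing that boundary on the provider side comes down to validating client-identifying request metadata before honoring flat-rate traffic. A simplified sketch with hypothetical header, client, and plan names; Anthropic's actual checks are not public.

```python
# Hypothetical sketch of provider-side client validation; the header name,
# client identifier, and plan names are assumptions for illustration.

OFFICIAL_CLIENTS = {"claude-code-cli"}      # assumed official client id
SUBSCRIPTION_PLANS = {"pro", "max"}         # flat-rate consumer plans

def allow_request(headers: dict, plan: str) -> bool:
    """Allow flat-rate subscription traffic only from the official client."""
    client = headers.get("x-client-name", "")
    if plan in SUBSCRIPTION_PLANS:
        # Consumer plans are priced for the official, rate-controlled client;
        # other tooling is expected to use the metered API instead.
        return client in OFFICIAL_CLIENTS
    return True  # metered API traffic is billed per token regardless

print(allow_request({"x-client-name": "claude-code-cli"}, "max"))  # True
print(allow_request({"x-client-name": "opencode"}, "max"))         # False
```

Because such headers can themselves be forged, a check like this is only a first layer, which is why spoofed harnesses worked until deeper safeguards were added.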

According to Shihipar, technical instability was a major motivator for the block. Unauthorized harnesses can introduce bugs and usage patterns that Anthropic cannot easily trace or debug. When failures occur in third-party wrappers like OpenCode or certain Cursor configurations, users often blame the model itself, which can erode trust in the platform.

The Cost Question and the “Buffet” Analogy

Developers, however, have largely framed the issue as an economic one. In extended discussions on Hacker News, users compared Claude’s consumer subscriptions to an all-you-can-eat buffet: Anthropic offers a flat monthly price—up to $200 for Max—but controls consumption speed through its official Claude Code tool.

Third-party harnesses remove those speed limits. Autonomous agents running inside tools like OpenCode can execute intensive loops—writing code, running tests, fixing errors—continuously and unattended, often overnight. At that scale, the same usage would be prohibitively expensive under per-token API pricing.

"In a month of Claude Code, it's easy to use so many LLM tokens that it would have cost you more than $1,000 if you'd paid via the API," wrote Hacker News user dfabulich.
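
The arithmetic behind figures like that is easy to sketch. The per-token rates below are assumptions chosen for illustration, not Anthropic's actual pricing.

```python
# Illustrative cost comparison; the per-token rates are assumed, not real.
INPUT_PER_MTOK = 15.00    # $ per million input tokens (assumed)
OUTPUT_PER_MTOK = 75.00   # $ per million output tokens (assumed)

def api_cost(input_tokens: int, output_tokens: int) -> float:
    """Metered API cost in dollars for a given token volume."""
    return (input_tokens / 1e6) * INPUT_PER_MTOK + \
           (output_tokens / 1e6) * OUTPUT_PER_MTOK

# An agent loop left running unattended for a month might plausibly consume
# 100M input and 20M output tokens:
monthly = api_cost(100_000_000, 20_000_000)
print(f"${monthly:,.2f}")   # $3,000.00 -- against a $200 flat subscription
```

Even at much lower assumed rates, unattended loops cross the flat-fee break-even point quickly, which is the economic pressure the crackdown responds to.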

By cutting off spoofed harnesses, Anthropic is effectively pushing heavy automation into two approved channels: its metered Commercial API, or Claude Code itself, where execution speed and environment constraints are fully controlled.

Community Reaction and Workarounds

The response from developers has been swift and mixed. Some criticized the move as hostile to users. "Seems very customer hostile," wrote Danish programmer David Heinemeier Hansson (DHH), creator of Ruby on Rails, in a post on X.

Others were more understanding. "anthropic crackdown on people abusing the subscription auth is the gentlest it could’ve been," wrote Artem K aka @banteg on X. "just a polite message instead of nuking your account or retroactively charging you at api prices."

The OpenCode team moved quickly, launching a new $200-per-month tier called OpenCode Black that reportedly routes usage through an enterprise API gateway rather than consumer OAuth. OpenCode creator Dax Raad also announced plans to work with Anthropic rival OpenAI so users could access Codex directly within OpenCode, punctuating the announcement with a Gladiator GIF captioned "Are you not entertained?"

The xAI and Cursor Enforcement

Running parallel to the technical crackdown, developers at Elon Musk’s AI lab xAI reportedly lost access to Claude models around the same time. While the timing suggested coordination, sources indicate this was a separate action rooted in Anthropic’s commercial terms.

As reported by tech journalist Kylie Robison of Core Memory, xAI staff had been using Claude models through the Cursor IDE to accelerate internal development. "Hi team, I believe many of you have already discovered that Anthropic models are not responding on Cursor," wrote xAI co-founder Tony Wu in an internal memo. "According to Cursor this is a new policy Anthropic is enforcing for all its major competitors."

Anthropic’s Commercial Terms of Service explicitly prohibit using its services to build or train competing AI systems. In this case, Cursor itself was not the issue; rather, xAI’s use of Claude through the IDE for competitive research triggered the block.

This is not the first time Anthropic has cut off access to protect its models. In August 2025, the company revoked OpenAI’s access to the Claude API under similar circumstances. At the time, an Anthropic spokesperson said, "Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI's own technical staff were also using our coding tools."

Earlier, in June 2025, the coding environment Windsurf was abruptly informed that Anthropic was cutting off most first-party capacity for Claude 3.x models. Windsurf was forced to pivot to a bring-your-own-key model and promote alternatives like Google’s Gemini.

Together with the xAI and OpenCode actions, these incidents underscore a consistent message: Anthropic will sever access when usage threatens its business model or competitive position.

Claude Code’s Rapid Rise

The timing of the crackdowns closely follows a dramatic surge in Claude Code’s popularity. Although released in early 2025, it remained niche until December 2025 and early January 2026, when community-driven experimentation—popularized by the so-called “Ralph Wiggum” plugin—demonstrated powerful self-healing coding loops.

The real prize, however, was not the Claude Code interface itself but the underlying Claude Opus 4.5 model. By spoofing the official client, third-party tools allowed developers to run large-scale autonomous workflows on Anthropic’s most capable reasoning model at a flat subscription price—effectively arbitraging consumer pricing against enterprise-grade usage.

As developer Ed Andersen noted on X, some of Claude Code’s popularity may have been driven by this very behavior.

For enterprise AI teams, the message is clear: pipelines built on unofficial wrappers or personal subscriptions carry significant risk. While flat-rate tools like OpenCode reduced costs, Anthropic’s enforcement highlights the instability and compliance issues they introduce.

Organizations now face a trade-off between predictable subscription fees and variable, per-token API costs—but with the benefit of guaranteed support and stability. From a security standpoint, the episode also exposes the dangers of “Shadow AI,” where engineers quietly bypass enterprise controls using spoofed credentials.

As Anthropic consolidates control over access to Claude’s models, the reliability of official APIs and sanctioned tools is becoming more important than short-term cost savings. In this new phase of the AI arms race, unrestricted access to top-tier reasoning models is no longer a given—it’s a privilege tightly guarded by their creators.

Some ChatGPT Browser Extensions Are Putting User Accounts at Risk

 


Cybersecurity researchers are cautioning users against installing certain browser extensions that claim to improve ChatGPT functionality, warning that some of these tools are being used to steal sensitive data and gain unauthorized access to user accounts.

These extensions, primarily found on the Chrome Web Store, present themselves as productivity boosters designed to help users work faster with AI tools. However, recent analysis suggests that a group of these extensions was intentionally created to exploit users rather than assist them.

Researchers identified at least 16 extensions that appear to be connected to a single coordinated operation. Although listed under different names, the extensions share nearly identical technical foundations, visual designs, publishing timelines, and backend infrastructure. This consistency indicates a deliberate campaign rather than isolated security oversights.

As AI-powered browser tools become more common, attackers are increasingly leveraging their popularity. Many malicious extensions imitate legitimate services, using professional branding and familiar descriptions to appear trustworthy. Because these tools are designed to interact deeply with web-based AI platforms, they often request extensive permissions, which greatly increases the potential impact of abuse.

Unlike conventional malware, these extensions do not install harmful software on a user’s device. Instead, they take advantage of how browser-based authentication works. To operate as advertised, the extensions require access to active ChatGPT sessions and advanced browser privileges. Once installed, they inject hidden scripts into the ChatGPT website that quietly monitor network activity.

When a logged-in user interacts with ChatGPT, the platform sends background requests that include session tokens. These tokens serve as temporary proof that a user is authenticated. The malicious extensions intercept these requests, extract the tokens, and transmit them to external servers controlled by the attackers.

Possession of a valid session token allows attackers to impersonate users without needing passwords or multi-factor authentication. This can grant access to private chat histories and any external services connected to the account, potentially exposing sensitive personal or organizational information. Some extensions were also found to collect additional data, including usage patterns and internal access credentials generated by the extension itself.
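
The core problem is that bearer-style session tokens are self-sufficient credentials. A minimal sketch of why possession alone is enough; the header layout is illustrative, not ChatGPT's actual API.

```python
# Sketch of token-based impersonation; the header layout is illustrative.

def build_impersonation_request(stolen_token: str) -> dict:
    """Headers an attacker would attach to act as the victim."""
    return {
        "Authorization": f"Bearer {stolen_token}",  # the token IS the identity
        # No password and no MFA challenge are involved: the server
        # only ever sees what looks like a valid, authenticated session.
    }

headers = build_impersonation_request("eyJhbGciOi...")
print(headers["Authorization"].startswith("Bearer "))  # True
```

Because the server cannot distinguish such a request from the victim's own browser, mitigations hinge on short token lifetimes and on binding tokens to a device or origin.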

Investigators also observed synchronized publishing behavior, shared update schedules, and common server infrastructure across the extensions, reinforcing concerns that they are part of a single, organized effort.

While the total number of installations remains relatively low, estimated at fewer than 1,000 downloads, security experts warn that early-stage campaigns can scale rapidly. As AI-related extensions continue to grow in popularity, similar threats are likely to emerge.

Experts advise users to carefully evaluate browser extensions before installation, pay close attention to permission requests, and remove tools that request broad access without clear justification. Staying cautious is increasingly important as browser-based attacks become more subtle and harder to detect.

Threat Actors Target Misconfigured Proxies for Paid LLM Access

 

GreyNoise, a cybersecurity company, has discovered two campaigns against large language model (LLM) infrastructure in which attackers used misconfigured proxies to gain illicit access to commercial AI services. Starting in late December 2025, the attackers scanned over 73 LLM endpoints across more than 80,000 sessions in 11 days, using harmless-looking queries to evade detection. These efforts highlight the growing threat to AI systems as attackers map vulnerable infrastructure for potential exploitation.

The first campaign, which started in October 2025, focused on server-side request forgery (SSRF) vulnerabilities in Ollama honeypots, resulting in a cumulative 91,403 attack sessions. The attackers used malicious registry URLs via Ollama’s model pull functionality and manipulated Twilio SMS webhooks to trigger outbound connections to their own infrastructure. A significant spike during Christmas resulted in 1,688 sessions over 48 hours from 62 IP addresses in 27 countries, using ProjectDiscovery’s OAST tools, indicating the involvement of grey-hat researchers rather than full-fledged malware attacks.
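
The SSRF vector works because the pull mechanism will contact whatever registry URL it is handed. One mitigation, restricting pulls to trusted registries, can be sketched as a simple allowlist check; the trusted host below is an example, not an official list.

```python
from urllib.parse import urlparse

# Illustrative allowlist check for model-pull URLs; the trusted host
# listed here is an example, not an official recommendation.
TRUSTED_REGISTRIES = {"registry.ollama.ai"}

def is_safe_pull(url: str) -> bool:
    """Reject pulls whose registry host is not explicitly trusted."""
    host = urlparse(url).hostname or ""
    return host in TRUSTED_REGISTRIES

print(is_safe_pull("https://registry.ollama.ai/v2/library/llama3"))  # True
print(is_safe_pull("https://attacker.example/v2/evil"))              # False
```

Host-based allowlisting stops the outbound connection before it happens, which is exactly the behavior the SSRF probes in this campaign were testing for.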

The second campaign began on December 28 from IP addresses 45.88.186.70 and 204.76.203.125, systematically scanning endpoints that supported OpenAI and Google Gemini API formats. Targets included leading models such as OpenAI’s GPT-4o, Anthropic’s Claude series, Meta’s Llama 3.x, Google’s Gemini, Mistral, Alibaba’s Qwen, DeepSeek’s DeepSeek-R1, and xAI’s Grok. The attackers used low-noise queries, such as basic greetings or factual questions like “How many states in the US?”, to identify models while avoiding detection systems.

GreyNoise links the scanning IPs to prior CVE exploits, including CVE-2025-55182, indicating professional reconnaissance rather than casual probing. While no immediate exploitation or data theft was observed, the scale signals preparation for abuse, such as free-riding on paid APIs or injecting malicious prompts. "Threat actors don't map infrastructure at this scale without plans to use that map," the report warns.

Organizations should restrict Ollama pulls to trusted registries, implement egress filtering, and block OAST domains like *.oast.live at the DNS level. Additional defenses include rate-limiting suspicious ASNs (e.g., AS210558, AS51396), monitoring JA4 fingerprints, and alerting on multi-endpoint probes. As AI attack surfaces expand, proactively securing proxies and APIs is crucial to thwart these evolving threats.
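One of those recommendations, alerting on multi-endpoint probes, can be sketched in a few lines. The example below is a minimal, hypothetical illustration: the record format, the sample IP addresses, and the threshold of five distinct endpoints are all assumptions, and a real deployment would parse gateway or reverse-proxy access logs instead of an in-memory list.

```python
from collections import defaultdict

def flag_multi_endpoint_probes(records, threshold=5):
    """Flag source IPs that touched at least `threshold` distinct
    LLM-style endpoints -- a rough reconnaissance signal.

    `records` is an iterable of (source_ip, endpoint_path) pairs,
    e.g. parsed from reverse-proxy access logs (hypothetical format)."""
    endpoints_by_ip = defaultdict(set)
    for ip, path in records:
        endpoints_by_ip[ip].add(path)
    return {ip for ip, paths in endpoints_by_ip.items()
            if len(paths) >= threshold}

# Hypothetical sample: one IP probing many model endpoints, another
# making ordinary repeated calls to a single endpoint.
sample = [("203.0.113.7", f"/v1/models/{m}") for m in
          ("gpt-4o", "claude-3", "llama-3", "gemini-pro", "mistral")]
sample += [("198.51.100.2", "/v1/chat/completions")] * 50
print(flag_multi_endpoint_probes(sample))  # -> {'203.0.113.7'}
```

The design point is simply that reconnaissance scanners fan out across many endpoints while legitimate clients hammer one or two, so counting distinct paths per source is a cheap first-pass filter before heavier analysis such as JA4 fingerprinting.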

Cybercriminals Report Monetizing Stolen Data From US Medical Company


Modern healthcare operations are frequently plagued by ransomware attacks, but the recent attack on Change Healthcare marks a major turning point in scale and consequence. As the industry grows more reliant on digital platforms, it faces an increasingly hostile threat environment characterized by organized cybercrime, fragile third-party dependencies, and an ever-expanding data footprint.

Recent figures from 2025 illustrate just how serious this shift is, with hundreds of ransomware and broader security incidents occurring within a matter of months. A breach not only disrupts clinical and administrative workflows but also puts highly sensitive patient information at risk, with cascading operational, financial, and legal consequences for organizations.

These developments underscore a stark reality: safeguarding healthcare data no longer depends on technical safeguards alone. It requires a coordinated risk management strategy that anticipates breaches, limits their impact, and ensures institutional resilience should prevention fail.

Connecticut's Community Health Center (CHC) recently disclosed a significant data breach stemming from unauthorized access to its internal systems, an incident that exemplifies the sector's ongoing vulnerability to cyber risk.

In January 2025, the organization was alerted to irregular network activity, prompting an urgent forensic investigation that confirmed an intruder was inside its systems. Further analysis found that the attacker had maintained undetected access since mid-October 2024, allowing an extended window for data exfiltration before the breach was contained and publicly disclosed later that month.

The incident involved no ransomware or disruption of operations, but the extent of the data accessed was significant, including names, dates of birth, Social Security numbers, health insurance details, and clinical records belonging to both patients and employees.

According to CHC, more than one million people, including several thousand employees, were affected. The figure demonstrates the persistent difficulty of detecting threats early and protecting data across healthcare networks, and highlights the urgent need for strengthened security measures as medical records continue to attract cybercriminals.

According to a notification Cytek Biosciences sent to affected individuals, the biotechnology company learned in early November 2025 that an outside party had gained access to portions of its systems, and it later determined that personal information had been obtained.

Once the company understood the extent of the exposure, it took immediate steps to respond, including offering eligible individuals free identity theft protection and credit monitoring services for up to two years.

To help mitigate potential harm from the incident, enrollment in the program remains open until the end of April 2026. Threat intelligence sources have linked the breach to Rhysida, a ransomware group that first emerged in 2023 and has since established itself as a prolific operation within the cybercrime ecosystem.

The group employs a ransomware-as-a-service model that combines data theft with system encryption, allowing affiliates to conduct attacks using its malware and infrastructure in return for a share of the revenue.

Since its inception, Rhysida has been responsible for attacks across several sectors, with healthcare among its most frequent targets. The group's previous intrusions have hit hospitals and care providers, but the Cytek incident is its first confirmed attack on a healthcare manufacturer, consistent with a trend of ransomware activity extending beyond direct patient care to medical suppliers and technology companies.

Research indicates that attacks of this kind can expose millions of records, disrupt critical services, and amplify risks to patient privacy and operational continuity, underscoring the growing complexity of the threat landscape facing the U.S. healthcare system.

The disruption has prompted affected organizations and individuals to step back and examine how Change Healthcare fits into the U.S. healthcare system and why its outage proved so widespread.

With over 15 years of experience in healthcare technology and payment processing under the UnitedHealth Group umbrella, Change Healthcare has served as a vital intermediary between healthcare providers, insurers, and pharmacies, verifying eligibility, securing prior authorizations, submitting claims, and facilitating payments.

Because the organization sits at the heart of these transactions, its operational failure can cascade into nationwide delays in prescriptions, reimbursements, and claims processing, with effects extending far beyond the institution itself.

The magnitude of the impact was documented in a survey by the American Medical Association, which found widespread financial and administrative stress among physician practices. Practices reported suspended or delayed claims payments, an inability to submit claims or receive electronic remittance advice, and widespread service interruptions.

Several practices cited significant revenue losses, forcing some to rely on personal funds or switch to an alternative clearinghouse in order to keep operating. Relief measures including emergency funding and advance payments have been offered, with UnitedHealth Group disbursing more than $2 billion toward these efforts, but disruptions continue to persist.

Patients, too, have felt indirect effects through billing delays, unexpected charges, and notifications about potential data exposure. This has heightened public concern and renewed scrutiny of the systemic risks posed by the compromise of a central healthcare infrastructure provider.

Taken together, these incidents carry a clear and cautionary message for healthcare stakeholders: cyber resilience must be treated as a strategic priority, not a purely technical function.

Long-running large-scale ransomware campaigns, prolonged undetected intrusions, and failures at critical intermediaries all show that even a single breach can escalate into a systemic disruption affecting providers, manufacturers, and patients.

Industry leaders and regulators are increasingly calling on organizations to improve third-party oversight, enhance breach detection tooling, and integrate financial, legal, and operational preparedness into their cybersecurity strategies.

As the volume and value of healthcare data continue to grow, it is imperative that healthcare organizations adopt proactive, enterprise-wide approaches to risk management. Those that fail to do so may find themselves not only unable to cope with cyber incidents but also struggling to maintain trust, continuity, and care delivery in their aftermath.

Indonesia Temporarily Blocks Grok After AI Deepfake Misuse Sparks Outrage

 

Indonesia has temporarily blocked access to Grok, Elon Musk's AI chatbot, following claims of misuse involving fabricated adult imagery. Authorities acted after manipulated visuals surfaced online, and Reuters notes this as a world-first restriction on the tool. The move reflects growing unease, echoed across borders, about technology being used to cause harm, with reaction driven less by policy papers than by real-time consequences caught online.

A growing number of reports have linked Grok to incidents in which users created explicit imagery of women, sometimes involving minors, without their consent. Shortly after these concerns surfaced, Indonesia's digital affairs minister, Meutya Hafid, labeled the behavior a severe breach of online safety norms.

As cited by Reuters, she described unauthorized sexually suggestive deepfakes as fundamentally undermining personal dignity and civil rights in digital environments. Her office emphasized that such acts constitute grave cyber offenses demanding urgent regulatory attention. Temporary restrictions appeared in Indonesia after Antara News highlighted risks tied to AI-made explicit material.

The move was driven by the protection of women, children, and communities, and aimed at reducing psychological and societal damage. According to statements by Hafid, officials consider fake but realistic intimate imagery a form of digital abuse: such fabricated visuals, though synthetic, still trigger real consequences for victims. The state insists that artificial does not mean harmless; impact matters more than origin. Following the concerns over Grok's functionality, the company received official notices demanding explanations of its development process and the harms observed.

Because of the potential risks, Indonesian regulators required the firm to detail concrete measures aimed at reducing abuse going forward. According to Hafid, whether the service remains accessible locally hinges on the adoption of rigorous filtering systems, compliance with national regulations, and adherence to responsible artificial intelligence practices.

Only after these steps are demonstrated will the service be permitted to continue operating. Last week, Musk and xAI issued a warning that improper use of the chatbot for unlawful acts might lead to legal action. On X, Musk stated clearly that individuals generating illicit material through Grok assume the same liability as those posting such content outright. Still, after rising backlash over the platform's inability to stop deepfake circulation, his stance appeared to shift slightly.

A post he re-shared from one follower implied that fault rests more with the people creating fakes than with the system hosting them. The debate has spread beyond Indonesia's borders, reaching American lawmakers: three U.S. senators wrote to Google and Apple, pushing for the removal of the Grok and X applications from their app stores over breaches involving explicit material. Their letter framed the request around existing rules prohibiting sexually charged imagery produced without consent.

What concerned them most was an automated flood of inappropriate depictions of women and minors, content they labeled damaging and possibly unlawful. When tied to misuse such as nonconsensual deepfakes, AI tools now face sharper government reactions, and Indonesia's move is part of this rising trend. Officials who were once slow to act increasingly treat such technology as a risk requiring strong intervention.

A shift is visible: responses that were once hesitant now carry weight, driven by public concern over digital harm. Not every nation acts alike, yet the pattern grows clearer through cases like this one. Pressure builds not just from the incidents themselves but from how widely they spread before being challenged.

Fake Tax Emails Used to Target Indian Users in New Malware Campaign

 


A newly identified cyberattack campaign is actively exploiting trust in India’s tax system to infect computers with advanced malware designed for long-term surveillance and data theft. The operation relies on carefully crafted phishing emails that impersonate official tax communications and has been assessed as potentially espionage-driven, though no specific hacking group has been confirmed.

The attack begins with emails that appear to originate from the Income Tax Department of India. These messages typically warn recipients about penalties, compliance issues, or document verification, creating urgency and fear. Victims are instructed to open an attached compressed file, believing it to be an official notice.

Once opened, the attachment initiates a hidden infection process. Although the archive contains several components, only one file is visible to the user. This file is disguised as a legitimate inspection or review document. When executed, it quietly loads a concealed malicious system file that operates without the user’s awareness.

This hidden component performs checks to ensure it is not being examined by security analysts and then connects to an external server to download additional malicious code. The next stage exploits a Windows system mechanism to gain administrative privileges without triggering standard security prompts, allowing the attackers deeper control over the system.

To further avoid detection, the malware alters how it identifies itself within the operating system, making it appear as a normal Windows process. This camouflage helps it blend into everyday system activity.

The attackers then deploy another installer that adapts its behavior based on the victim’s security setup. If a widely used antivirus program is detected, the malware does not shut it down. Instead, it simulates user actions, such as mouse movements, to quietly instruct the antivirus to ignore specific malicious files. This allows the attack to proceed while the security software remains active, reducing suspicion.

At the core of the operation is a modified banking-focused malware strain known for targeting organizations across multiple countries. Alongside it, attackers install a legitimate enterprise management tool originally designed for system administration. In this campaign, the software is misused to remotely control infected machines, monitor user behavior, and manage stolen data centrally.

Supporting files are also deployed to strengthen control. These include automated scripts that change folder permissions, adjust user access rights, clean traces of activity, and enable detailed logging. A coordinating program manages these functions to ensure the attackers maintain persistent access.

Researchers note that the campaign combines deception, privilege escalation, stealth execution, and abuse of trusted software, reflecting a high level of technical sophistication and clear intent to maintain prolonged visibility into compromised systems.

Manage My Health Warns of Impersonation Scams as Fallout From Major Data Breach Continues

 

The repercussions of the Manage My Health data breach are still unfolding, with the company cautioning that cybercriminals may now be targeting affected users by posing as the online patient portal.

Manage My Health, which runs a widely used digital health platform across New Zealand, has confirmed that the majority of individuals impacted by the incident have been notified. At the same time, the organization has raised concerns that opportunistic criminals are attempting to exploit the situation by circulating phishing or spam messages designed to look like official communications from Manage My Health.

“We’re also aware that secondary actors may impersonate MMH and send spam or phishing emails to prompt engagement. These communications are not from MMH,” the company said in a statement. It added that steps are being examined to curb this activity, alongside issuing safety guidance to help users avoid further harm.

The cyberattack, which took place toward the end of last year, involved unauthorized access to documents stored within a limited section of the platform. According to reports, the attackers demanded a ransom of several thousand dollars, threatening to publish sensitive data on the dark web. Had this occurred, personal medical information belonging to more than 120,000 New Zealanders could have been exposed.

Manage My Health clarified that core services remained unaffected by the breach. Live GP clinical systems, prescriptions, appointment bookings, secure messaging, and real-time medical records were not compromised. The intrusion was restricted to documents housed in the “My Health Documents” feature.

The affected files included user-uploaded materials such as correspondence, medical reports, and test results, along with certain clinical documents. These clinical records consisted of hospital discharge summaries and clinical letters linked to care provided in Northland Te Tai Tokerau. After detecting suspicious activity, the company said it swiftly locked down the affected feature, prevented further unauthorized access, and activated its incident response protocols. Independent cybersecurity experts were brought in to assess the breach and verify its extent.

Manage My Health has since confirmed that the incident is contained and that testing shows the vulnerability has been fully addressed.

Notifications and Regulatory Response

The company acknowledged that its early response resulted in some users being contacted before the full scope of the breach was understood. “When we first identified the breach, our priority was to promptly inform all potentially affected patients,” it said, explaining that this precautionary approach meant some individuals were later found not to be impacted.

Those users were subsequently advised that their data was not involved. Individuals can also verify their status by logging into the Manage My Health web application, where a green “No Impact” banner confirms no exposure.

Notification efforts are continuing, with the company citing the complexity of coordinating communications across patient groups, regulators, and data controllers while meeting obligations under the New Zealand Privacy Act.

The breach has drawn regulatory attention, with the Office of the Privacy Commissioner (OPC) launching an inquiry into the privacy implications of the incident. Manage My Health said it is cooperating closely with the OPC, Health New Zealand | Te Whatu Ora, the National Cyber Security Centre, and the New Zealand Police.

Legal Action and Monitoring Efforts

As part of its response, Manage My Health successfully obtained an interim injunction from the High Court, preventing any third party from accessing, publishing, or sharing the compromised data.

The company is also monitoring known data leak sites and stands ready to issue takedown notices if any information surfaces online. Additional steps include resetting compromised credentials, temporarily disabling the Health Documents module, and maintaining continuous system monitoring while wider security improvements are implemented. An independent forensic investigation is still underway, though the company has declined to disclose specific technical details at this time.

Manage My Health has reiterated that it will never request passwords or one-time security codes and has urged users to be cautious of unsolicited or urgent messages claiming to be from the platform.

Anyone contacted by individuals alleging they possess health data is advised not to engage and to report the matter to New Zealand Police via 105, or 111 in an emergency, and to inform Manage My Health support. To further assist users worried about identity misuse, the company has partnered with IDCARE to provide free and confidential cyber and identity support across Australia and New Zealand.

“We take the privacy of our clients and staff very seriously, and we sincerely apologise for any concern or inconvenience this incident may have caused,” Manage My Health said, adding that it remains committed to transparency as investigations into the cyberattack continue.

ShinyHunters Allege Massive Data Leaks from SoundCloud, Crunchbase, and Betterment After Extortion Refusals

 

The cybercrime group known as ShinyHunters has resurfaced, claiming responsibility for leaking millions of records allegedly taken from SoundCloud, Crunchbase, and Betterment after failed extortion efforts. The group has launched a new dark web (.onion) leak platform and has already shared what it says are partial databases connected to the three organizations.

The activity reportedly began on 22 January 2026, when ShinyHunters posted messages on its Telegram channel containing links to onion services where the data could be accessed freely. The hackers assert that the disclosures were carried out because the targeted companies refused to meet their extortion demands.

“We are after corporate regime change in all parts of the world. Pay or leak. We will aggressively and viciously come after you once we have your data. By the time you are listed here, it will be too late. Next time. You will learn from it. It will ALWAYS be your best decision, choice, and option to engage with us and come to an agreement with us. Proceed wisely,” the group’s message on the leak site says.

Among the affected firms, SoundCloud had previously acknowledged a breach in December 2025 that impacted roughly 20% of its users. With the platform reporting between 175 and 180 million users overall, this translates to approximately 35–36 million accounts—closely aligning with the volume of data ShinyHunters claims to possess.

According to the hackers, the leaked information includes more than 20 million records tied to Betterment containing Personally Identifiable Information, over 2 million alleged records associated with Crunchbase, and upwards of 30 million records linked to SoundCloud that are now circulating online.

On the same day the leaks emerged, 22 January 2026, Okta issued a security advisory warning of an ongoing Okta SSO vishing campaign that has already affected multiple victims, though the total number has not been disclosed. In a LinkedIn post, Alon Gal of Israel-based cybersecurity firm Hudson Rock stated that ShinyHunters contacted him, claiming responsibility for the Okta SSO vishing activity and suggesting that more data leaks are forthcoming.

This development has prompted speculation about whether the alleged breaches at all three companies are connected to Okta. At this stage, no definitive link has been confirmed. Hackread.com has reached out directly to ShinyHunters to seek clarification on the matter.

The purported datasets linked to SoundCloud, Crunchbase, and Betterment remain accessible for download, and Hackread.com reports that the links are now being widely shared across prominent cybercrime forums, including French- and Russian-language communities.

Hackread.com has also contacted all three companies for comment. Until the organizations involved verify the legitimacy of the leaked data, the incidents should be treated strictly as unconfirmed claims.

In a statement shared with Hackread.com, a SoundCloud spokesperson said the company detected unauthorized activity within an ancillary service dashboard in mid-December and acted immediately to contain the situation. The response included engaging third-party cybersecurity experts and launching a comprehensive investigation.

According to SoundCloud’s blog post, the investigation found that no sensitive information such as passwords or financial data was accessed. The exposure was limited to email addresses and details already visible on public profiles, affecting about one-fifth of its users. SoundCloud further stated that while a group claiming responsibility has made public accusations and carried out email flooding tactics, there is no evidence to support broader claims. The company says it is cooperating with law enforcement and continuing to strengthen monitoring, access controls, and other security defenses.

WhatsApp-Based Astaroth Banking Trojan Targets Brazilian Users in New Malware Campaign

 

A fresh look at digital threats shows malicious software using WhatsApp to spread the Astaroth banking trojan, mainly affecting people in Brazil. Though messaging apps are common tools for connection, they now serve attackers aiming to steal financial data. This method - named Boto Cor-de-Rosa by analysts at Acronis Threat Research - stands out because it leans on social trust within widely used platforms. Instead of relying on email or fake websites, hackers piggyback on real conversations, slipping malware through shared links. 
While such tactics aren’t entirely new, their adaptation to local habits makes them harder to spot. In areas where nearly everyone uses WhatsApp daily, blending in becomes easier for cybercriminals. Researchers stress that ordinary messages can now carry hidden risks when sent from compromised accounts. Unlike older campaigns, this one avoids flashy tricks, favoring quiet infiltration over noise. As behavior shifts online, so do attack strategies - quietly, persistently adapting. 

Acronis reports that the malware targets WhatsApp contact lists, sending harmful messages automatically and spreading fast with no need for constant hacker input. Notably, while the main Astaroth component is still written in Delphi and the setup script remains in Visual Basic, analysts spotted a fresh worm-style feature built entirely in Python. The mix of languages shows how attackers now build adaptable tools by blending code types for distinct jobs, and such variety supports stealthier, more responsive attack systems.

Astaroth - sometimes called Guildma - has operated nonstop since 2015, focusing mostly on Brazil within Latin America. Stealing login details and enabling money scams sits at the core of its activity. By 2024, several hacking collectives, such as PINEAPPLE and Water Makara, began spreading it through deceptive email messages. This newest push moves away from that method, turning instead to WhatsApp; because so many people there rely on the app daily, fake requests feel far more believable. 

Although tactics shift, the aim stays unchanged. Exploiting WhatsApp to spread banking trojans is not entirely new, but it has gained speed lately. Earlier, Trend Micro spotted the Water Saci group using comparable methods to push financial malware such as Maverick and a version of Casbaneiro, and messaging apps now appear more appealing to attackers than classic email phishing. Later that year, Sophos disclosed details of an evolving attack series labeled STAC3150, closely tied to previous patterns. That operation focused heavily on individuals in Brazil using WhatsApp, distributing the Astaroth malware through deceptive channels.

Nearly all infected machines - over 95 percent - were situated within Brazilian territory, though isolated instances appeared across the U.S. and Austria. Running uninterrupted from early autumn 2025, the method leaned on compressed archives paired with installer files, triggering script-based downloads meant to quietly embed the malicious software. What Acronis has uncovered fits well with past reports. Messages on WhatsApp now carry harmful ZIP files sent straight to users. Opening one reveals what seems like a safe document - but it is actually a Visual Basic Script. Once executed, the script pulls down further tools from remote servers. 

This step kicks off the full infection sequence. After activation, this malware splits its actions into two distinct functions. While one part spreads outward by pulling contact data from WhatsApp and distributing infected files without user input, the second runs hidden, observing online behavior - especially targeting visits to financial sites - to capture login details. 

The software also logs its own performance constantly, feeding back live updates on how many messages succeed or fail, along with transmission speed. Embedded reporting tools spotted by Acronis give the attackers a continuous stream of operational insight.

Looking Beyond the Hype Around AI Built Browser Projects


Cursor, the company behind an artificial intelligence-integrated development environment, recently gained industry attention after suggesting that it had developed a fully functional browser using its own AI agents. In a series of public statements, Cursor chief executive Michael Truell claimed that the browser was built with GPT-5.2 within the Cursor platform.


According to Truell, the project spans approximately three million lines of code across thousands of files and includes a custom rendering engine written from scratch in Rust.

Moreover, he explained that the system supports a browser's main features, including HTML parsing, CSS cascading and layout, text shaping, painting, and a custom-built JavaScript virtual machine for executing page scripts.

Although the statements did not explicitly claim that the browser was created without substantial human involvement, they sparked a heated debate within the software development community about how much of the work can truly be attributed to autonomous AI systems, and how such claims should be interpreted given the growing popularity of AI-based software development in recent years.

The episode unfolds against a backdrop of intensifying optimism about generative AI, an optimism that has inspired unprecedented investment in companies across a variety of industries. Despite that optimism, a more sobering reality is beginning to emerge.

A McKinsey study indicates that although roughly 80 percent of companies report having adopted advanced AI tools, a similar percentage has seen little to no improvement in either revenue growth or profitability.

General-purpose AI applications can improve individual productivity, but they have rarely translated incremental time savings into tangible financial results, while higher-value, domain-specific applications tend to stall in the experimental or pilot stage. Analysts increasingly describe this disconnect as the generative AI value paradox.

The tension has grown with the advent of so-called agentic artificial intelligence: autonomous systems capable of planning, deciding, and acting independently to achieve predefined objectives.

Such systems promise benefits well beyond assistive tools, but they also raise the stakes for credibility and transparency. In the case of Cursor's browser project, the decision to make the code publicly available proved crucial.

Developers who examined the repository found that, despite the enthusiastic headlines, the software frequently failed to compile, rarely ran as advertised, and fell well short of the capabilities implied by the product's marketing.

Close inspection and testing of the underlying code made clear that the marketing claims did not match the software itself. Ironically, many developers found the accompanying technical document, which detailed the project's limitations and partial successes, more convincing than the original announcement.

By Cursor's own account, over the course of about a week the company deployed hundreds of GPT-5.2-style agents that generated roughly three million lines of code, assembling what on the surface amounted to a partially functional browser prototype.

Perplexity, an AI-driven search and analysis platform, estimated that the experiment could have consumed between 10 and 20 trillion tokens, which at prevailing prices for frontier AI models would translate into a cost of several million dollars.
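The arithmetic behind that estimate is easy to check. The sketch below uses hypothetical per-million-token prices (frontier-model rates vary by provider and by input/output mix), so the figures are illustrative, not published rates:

```python
# Back-of-envelope check of the reported token-cost estimate.
# The prices below are assumed placeholders, not any provider's actual rates.

def token_cost_usd(tokens: int, price_per_million_usd: float) -> float:
    """Cost of consuming `tokens` at a flat price per million tokens."""
    return tokens / 1_000_000 * price_per_million_usd

TRILLION = 1_000_000_000_000

# Assumed blended price range for a frontier model.
for price in (0.25, 0.50):
    low = token_cost_usd(10 * TRILLION, price)
    high = token_cost_usd(20 * TRILLION, price)
    print(f"at ${price:.2f}/M tokens: ${low:,.0f} to ${high:,.0f}")
```

Even at the low end of these assumed prices, 10 trillion tokens comes to a few million dollars, which is consistent with the "several million dollars" figure attributed to Perplexity.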

Such figures demonstrate the ambition of the effort, but they also underscore the industry's current skepticism: scale alone does not equate to sustained value or technical maturity. Meanwhile, several converging forces are driving AI companies to target the web browser itself rather than plug-ins or standalone applications.

For decades, browsers have been the most valuable source of behavioral data, and by extension, of ad revenue. They capture search queries, clicks, and browsing patterns, data that has paved the way for highly profitable ad-targeting systems.

Google built its position as the world's most powerful search engine largely on this model. Owning the browser gives AI providers direct access to this stream of data exhaust, reducing their dependence on third-party platforms and securing a privileged position in the advertising value chain.

A number of analysts note that controlling the browser can also anchor a company's search product and the commercial benefits that flow from it. OpenAI's upcoming browser is reportedly intended to collect first-party data on users' web behavior, a strategy aimed at challenging Google's ad-driven ecosystem.

Insiders cited in the report suggest the company chose to build a browser rather than an extension for Chrome or Edge because it wanted more control over user data. Beyond advertising, user activity creates a continuous feedback loop: each scroll, click, and query can be used to refine and personalize AI models, strengthening the product over time.

In the meantime, advertising remains one of the few scalable monetization paths for consumer-facing AI, and both OpenAI and Perplexity appear to be positioning their browsers accordingly, as recent hires and the quiet development of ad-based services suggest.

AI companies also argue that browsers offer a chance to fundamentally rethink the user experience of the web. Traditional browsing, with its heavy reliance on tabs, links, and manual comparison, is increasingly viewed as an inefficient and cognitively fragmented activity.

AI-first browsers aim to replace navigation-heavy workflows with conversational, context-aware interactions. Perplexity's Comet browser, for example, is positioned as an "intelligent interface" that the user can invoke at any moment, letting the AI research, summarize, and synthesize information in real time.

Rather than requiring users to click through multiple pages, complex tasks are condensed into seamless interactions that maintain context across every step. OpenAI's planned browser is expected to take a similar approach, integrating a ChatGPT-like assistant directly into the browsing environment so users can act on information without leaving the page.

The browser becomes a constant co-pilot, able to draft messages, summarize content, or perform transactions on the user's behalf rather than merely run searches. Some have described this as a shift from search to cognition.

Companies that integrate AI deeply into everyday browsing hope not only to improve convenience but also to keep users inside their ecosystems longer, strengthening brand recognition and habitual usage. A proprietary browser also enables the delivery of AI services and agent-based systems that are difficult to offer through third-party platforms.

Full control of the browser architecture lets companies embed language models, plugins, and autonomous agents at a foundational level. OpenAI's browser, for instance, is expected to integrate directly with the company's emerging agent platform, enabling software that can navigate websites, complete forms, and perform multi-step actions on its own.

Similar ambitions are evident elsewhere: The Browser Company's Dia puts an AI assistant directly in the address bar, combining search, chat, and task automation while maintaining awareness of the user's context across multiple tabs. Such products signal a broader trend toward building browsers around artificial intelligence rather than bolting AI features onto existing ones.

This approach makes a company's AI services the default experience whenever users search or interact with the web, rather than an optional enhancement.

Finally, there is competitive pressure. Google's dominance in search and browsers has long been mutually reinforcing, channeling data and traffic through Chrome into the company's advertising empire.

AI-first browsers pose a direct threat to this structure by aiming to divert users away from traditional search and toward AI-mediated discovery.

Perplexity's browser is part of a broader effort to compete with Google in search, and Reuters reports that OpenAI's move into browsers intensifies its rivalry with Google. Controlling the browser allows AI companies to intercept user intent at an earlier stage, reducing their dependence on existing platforms and insulating them from future changes to default settings and access rules.

Smaller AI players must also move defensively, since Google, Microsoft, and others are rapidly integrating artificial intelligence into their own browsers.

With browsers central to both everyday life and work, the race to build AI into these interfaces is intensifying, and many observers already describe the contest as the beginning of a new, AI-driven era for browsers.

Taken together, the Cursor episode and the push toward AI-first browsers offer a cautionary tale for an industry rushing ahead of its own evidence. Whatever the public claims of autonomy and scale, open repositories and independent scrutiny remain the ultimate arbiters of technical reality.

As companies reposition the browser as a strategic battleground, promising efficiency, personalization, and control, developers, enterprises, and users alike are being urged to separate ambition from real-world implementation.

Analysts do not expect AI-powered browsers to fail; rather, their impact will depend less on headline-grabbing demonstrations than on demonstrated reliability, transparent attribution of human versus machine work, and careful evaluation of security and economic trade-offs. In an industry known for speed and spectacle, credibility may yet be the scarcest resource of all.