
Google API Keys Expose Gemini AI Data via Leaked Credentials


Google API keys, once considered harmless when embedded in public websites for services like Maps or YouTube, have turned into a serious security risk following the integration of Google's Gemini AI assistant. Security researchers at Truffle Security uncovered this issue, revealing that nearly 3,000 live API keys—prefixed with "AIza"—are exposed in client-side JavaScript code across popular sites.

Truffle Security's scan of the November 2025 Common Crawl dataset, which captures snapshots of major websites, identified 2,863 active keys across diverse sectors, including finance, security firms, and even Google's own infrastructure. Some of these keys were deployed years ago (one traced back to February 2023) and were originally safe as mere billing identifiers, but they gained access to Gemini endpoints without developers' knowledge. An attacker can simply copy a key from page source, authenticate to Gemini, and extract sensitive data such as uploaded files, cached contexts, or datasets via simple prompts.

The danger extends beyond data theft to massive financial abuse, as Gemini API calls consume tokens that rack up charges—potentially thousands of dollars daily per compromised account, depending on the model and context window. Truffle Security demonstrated this by querying the /models endpoint with exposed keys, confirming access to private Gemini features. One reported case highlighted an $82,314 bill from a stolen key, underscoring the real-world impact.
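The /models probe described above can be approximated in a few lines. This is a hedged sketch only: the URL is Google's documented Generative Language API listing endpoint, but `probe_url` and `classify_probe` are hypothetical helper names, not part of any official SDK, and the status-code interpretations are the common-sense readings rather than Google's official semantics.

```python
# Sketch of checking whether a leaked "AIza..." key can reach Gemini,
# modeled on the public Generative Language API models-listing endpoint.
GEMINI_MODELS_URL = "https://generativelanguage.googleapis.com/v1beta/models"

def probe_url(api_key: str) -> str:
    """Build the models-listing URL for a key passed as a query parameter."""
    return f"{GEMINI_MODELS_URL}?key={api_key}"

def classify_probe(status_code: int) -> str:
    """Interpret the HTTP status of a GET against the models endpoint."""
    if status_code == 200:
        return "key is live and has Gemini access"  # the dangerous case
    if status_code in (401, 403):
        return "key is invalid, revoked, or Gemini-restricted"
    if status_code == 429:
        return "key is live but currently rate-limited"
    return "inconclusive"
```

An actual check would issue `requests.get(probe_url(key))` and pass the response status to `classify_probe`; a 200 response is exactly the condition Truffle Security used to confirm that a leaked key had Gemini access.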

Google acknowledged the flaw as "single-service privilege escalation" after Truffle's disclosure on November 21, 2025, and implemented fixes by January 2026, including blocking leaked keys from Gemini access, defaulting new AI Studio keys to a Gemini-only scope, and sending proactive leak notifications. Even so, the "retroactive privilege expansion" caught many developers off guard, because enabling Gemini in a project silently empowered old keys.

Developers should immediately audit Google Cloud projects for Gemini API enablement, rotate all exposed keys, and restrict key scopes to the essentials rather than the default "unrestricted" setting. Tools like TruffleHog can scan code repositories for leaks, and regular monitoring helps prevent future exposures in an era where AI services amplify API risks. This incident highlights the need for vigilance as cloud features evolve.
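The kind of leak detection that tools like TruffleHog perform can be sketched with a simple pattern match. The "AIza" prefix followed by 35 URL-safe characters is the widely documented Google API key format; this is an illustrative detector only, and `find_google_api_keys` is a hypothetical helper, not TruffleHog's actual implementation. A regex hit identifies a candidate, which would still need to be verified live before treating it as a confirmed leak.

```python
import re

# Candidate Google API keys: "AIza" plus 35 URL-safe characters (39 total).
GOOGLE_API_KEY_RE = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def find_google_api_keys(text: str) -> list:
    """Return candidate Google API keys embedded in HTML/JS/source text."""
    return GOOGLE_API_KEY_RE.findall(text)

# Example: a key embedded in client-side JavaScript, as in the Common Crawl
# findings (the key below is a dummy, not a real credential).
sample = '<script>const cfg = {key: "AIza' + "A" * 35 + '"};</script>'
print(find_google_api_keys(sample))
```

Running the same scan over a repository or crawled page source surfaces every string matching the key format; the short length of the pattern is precisely why these keys are so easy to harvest from public pages.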

Iran-Linked Handala Hackers Claim Breach of Israel’s Clalit Healthcare Network


Israel's largest health provider has been breached by an Iranian-affiliated hacking collective, which posted stolen patient records online. A network calling itself Handala claimed credit for the intrusion in public posts, saying it had reached the core data stores of Clalit Health Services, an institution that serves roughly half of the country's residents.

The hackers stated that more than ten thousand people had their medical files exposed. Samples of what they claim is genuine data, including names, test results, and health scans, now sit on public servers. In a statement, Handala said Israel's hospital networks were left reeling after the breach, mocked their defenses as weak and slow, and boasted about how easily the systems gave way.

The group positioned the action not merely as an attack but as resistance, framing it as a response to what it called long-standing control and abuse. The announcement echoed the tone of earlier messages issued when digital strikes hit Israeli institutions.

Hours before the reveal, a cryptic post appeared online hinting that something was unfolding within Israel's medical system. By the next morning, reports confirmed a possible leak of sensitive information. Clalit's cyber defense units began investigating as soon as they learned of the incident, and government agencies were updated immediately once detection tools triggered under standard procedures.

While checks are still underway, hospital networks remain stable and running without disruption. The incident highlights ongoing Iran-linked digital operations aimed at entities and individuals in Israel. In recent years, outfits connected to Tehran have been accused of seeking intelligence, interfering with key institutions, and attempting to recruit collaborators through online exchanges and offers of money.

Handala, now known for bold statements, has claimed credit for multiple major cyber events, experts note. Check Point Research points out that some of its assertions appear inflated, but a few of its declarations align with verified breaches, and that overlap between claim and evidence keeps scrutiny alive.

In December, the hackers revealed they had gained access to former Prime Minister Naftali Bennett's Telegram messages. Bennett's team confirmed that the account had been accessed, though his device remained untouched.

The attackers later stated they had gone after more political figures, among them former minister Ayelet Shaked and Tzachi Braverman, a close associate of Netanyahu. Israel's medical system has dealt with digital attacks before: last October, hackers targeted Assaf Harofeh Medical Center with ransomware linked to Qilin, demanded $70,000, and threatened to expose sensitive patient records if payment failed.

Officials later pointed to Iran's likely involvement in that incident as well, showing how digital attacks are becoming a key part of the strain between the two nations.

Hackers Exploit Claude to Target Multiple Mexican Government Agencies

As generative artificial intelligence matures, digital innovation is evolving at an unprecedented rate, but it is also quietly reshaping cybercrime. Tools originally designed for research, coding, and problem-solving are now being explored for far less benign purposes.

This fact has been illustrated in a troubling fashion by recent revelations that threat actors have exploited the capabilities of Claude in order to support a large-scale intrusion targeting Mexican government networks. 

A security researcher at Gambit Security reported that attackers extracted approximately 150 gigabytes of sensitive information from multiple Mexican government agencies, demonstrating how widely accessible artificial intelligence systems can be manipulated to assist sophisticated cyber operations despite built-in safeguards.

The intrusion was not limited to passive reconnaissance: the attacker is believed to have used Claude throughout the campaign as an interactive tool for research and development.

Gambit Security's analysis indicates that the activity began in December and continued for approximately a month, during which the chatbot was repeatedly instructed to identify potential vulnerabilities within government networks and to create scripts for exploiting them.

The same model was also asked to outline methods for automating the extraction of sensitive information, effectively turning it into a data-exfiltration assistant. Through a series of carefully structured prompts, the operator gradually wore down the model's built-in safeguards.

The system reportedly rejected initial requests, but subsequent iterations bypassed the platform's guardrails and generated increasingly actionable material. The extent of the model's assistance raised particular concern among analysts.

According to Curtis Simpson, the system produced thousands of analytical outputs detailing potential attack paths, internal network targets, and credential-related strategies, providing guidance on how to proceed within compromised environments. These outputs amounted to structured operational guidance for the campaign's human operator rather than casual responses.

According to Anthropic, an internal investigation was initiated following the disclosure, the activity was disrupted, and the accounts associated with the misuse were permanently banned. A company representative said safeguards are continuing to develop.

For example, the latest iteration, the Claude Opus 4.6 model, incorporates additional mechanisms to detect and block similar forms of abuse. At the time of publishing, it had not been officially determined whether the individuals responsible for the intrusion belonged to any publicly identified advanced persistent threat group.

Nonetheless, analysts examining the operation noted several similarities with tactics historically associated with espionage campaigns involving Chinese actors. Based on intelligence gathered by Gambit Security and corroborated by SecurityAffairs, the tradecraft demonstrated in the operation, particularly its disciplined operational security and systematic reconnaissance, resembles patterns previously observed in state-aligned cyber espionage.

A separate disclosure from Anthropic confirmed that state-sponsored actors have misused its AI programming tools against dozens of organizations worldwide. Investigators determined that the attackers in this incident relied heavily on AI-assisted workflows to accelerate exploit development, effectively lowering the technical barrier to assembling complex multi-stage intrusion chains while retaining a high level of operational secrecy.

Technical analysis indicates that the campaign weaponized Claude Code, using prompt-engineering techniques to circumvent the system's built-in security measures. The researchers found that over 1,000 prompts were submitted to the AI environment, some presented as legitimate bug bounty testing scenarios in order to bypass the ethical restrictions embedded within the model.

Through this iterative interaction, the attackers reportedly developed customized exploit scripts, lateral movement tooling, and operational playbooks tailored to the architecture of the compromised networks.

With this AI-generated material in hand, the operators carried out successive phases of the intrusion chain, including privilege escalation, credential harvesting, and automated data extraction. When restrictions began limiting output from Claude's environment, they reportedly shifted portions of their workflow to GPT-4.1 to continue developing credential-handling utilities and refining network-traversal techniques.

By chaining outputs from both AI systems, the attackers maintained a largely automated workflow that could quickly adapt to defensive obstacles within the targeted infrastructure. This approach left behavioural indicators that stood out to investigators during forensic examination.

Among them were unusually large amounts of automated scripting activity, repeated instances of AI-generated code fragments appearing within attack tools, and the presence of AI-aided development processes operating from compromised government infrastructures. 

The intrusion proceeded in stages, beginning with the compromise of systems related to the Mexican tax authority before spreading to other public infrastructure. According to investigators, the attacker then moved through a network of interconnected systems spanning several regional government environments, municipal systems in Mexico City, public utility infrastructure in Monterrey, at least one major financial institution, and the national electoral institute.

The operation exfiltrated approximately 150 gigabytes of sensitive data, including administrative information and individually identifiable information, from these environments. Mapping the observed activity to the MITRE ATT&CK knowledge base revealed a familiar sequence of intrusion techniques: initial access through valid accounts, lateral movement via remote services, credential acquisition through operating-system credential dumping, and large-scale data exfiltration.

The researchers also observed additional measures intended to weaken defensive monitoring by interfering with security controls within the targeted environments.

Researchers noted that each of these tactics has been observed in conventional cyberespionage operations; however, the distinctive feature of the campaign was the systematic integration of generative artificial intelligence into the attack process. 

Because they could automate reconnaissance, exploit development, and operational planning, the attackers were able to coordinate complex intrusion chains at a speed and scale not possible with traditional manual methods. The incident underscores how generative artificial intelligence systems are rapidly becoming a new layer of the cyber threat landscape, one that can enhance both defensive and offensive capabilities.

In response to the threat of AI-aided attacks, security experts recommend that organizations, particularly those operating critical public infrastructure, adapt their defensive strategies accordingly. Recommended measures include strengthening identity and access controls, detecting anomalous automation patterns, and deploying advanced behavioral analytics to identify AI-generated tooling and scripting.

Continuous collaboration among AI developers, cybersecurity firms, and government agencies is also recommended so that safeguards can be refined and large language models are not manipulated for malicious purposes.

As platforms such as Claude and other generative AI systems continue to evolve, it is becoming increasingly important for the cybersecurity community to ensure that innovations in artificial intelligence do not inadvertently become a force multiplier for sophisticated digital intrusions.

Marquis Sues SonicWall Over Alleged Security Flaws Linked to Major Ransomware Attack


A legal battle is escalating in Texas after fintech company Marquis filed a lawsuit against firewall vendor SonicWall, claiming that weaknesses in the company’s cloud backup service played a key role in a large ransomware attack.

The case was filed Monday in the U.S. District Court for the Eastern District of Texas, where Marquis is requesting a jury trial. The company argues that a 2025 cybersecurity incident at SonicWall "exposed critical security information for Marquis and every customer that used SonicWall's firewall cloud backup service."

According to the complaint, cybercriminals were able to obtain sensitive firewall configuration backup files, which were later used to infiltrate Marquis’ internal network.

Firewalls are meant to prevent unauthorized access to private networks. However, Marquis claims attackers used data taken from SonicWall’s cloud backup service to analyze how customers configured their firewall protections. This information allegedly provided them with a detailed roadmap to circumvent security controls.

The stolen information reportedly included emergency administrative access credentials known as scratch codes. These codes are designed to enable urgent system access but, according to the lawsuit, were exploited by attackers to bypass protections and gain entry into Marquis’ network.

"SonicWall allowed a threat actor to obtain the keys to bypass that line of defense and walk right into Marquis's internal network, the very thing that SonicWall's firewall was supposed to prevent," the lawsuit states.

After gaining access, the hackers allegedly launched a ransomware attack that disrupted Marquis’ operations and exfiltrated sensitive data.

Marquis, which offers data visualization solutions used by hundreds of banks and credit unions, reported that the attackers accessed "personally identifiable information concerning customers of some of Marquis's financial institution clients."

The compromised data reportedly includes names, dates of birth, mailing addresses, and financial information such as bank account numbers, debit card numbers, and credit card numbers. Social Security numbers were also exposed during the breach.

Expanding Impact of the Breach

SonicWall initially disclosed the security incident in mid-September 2025, stating that fewer than 5% of firewall configuration backup files belonging to customers had been taken from storage servers hosted on Amazon’s cloud infrastructure and managed by SonicWall.

However, the company later updated its disclosure in October, acknowledging that the attackers had actually obtained backup files belonging to all customers.

Marquis began notifying impacted individuals in December 2025, explaining that its systems had been compromised earlier in August. SonicWall has not revealed when the attackers initially accessed its environment, leaving questions about how long the vulnerability may have remained undetected.

In the lawsuit, Marquis claims that a modification made in February 2025 to one of SonicWall’s application programming interfaces (APIs) "created a vulnerability exploitable by threat actors." The complaint further alleges that this weakness enabled attackers to retrieve firewall configuration backup files "without proper authentication" by predicting firewall serial numbers.
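The flaw alleged in the complaint, retrieving files by guessing predictable serial numbers without authentication, is a classic insecure-direct-object-reference pattern. The toy model below illustrates it with hypothetical names (this is not SonicWall's actual API): a lookup keyed only by a guessable serial is open to enumeration, whereas binding each resource to an unguessable per-resource token blocks that attack.

```python
import secrets
from typing import Optional

# Toy backup store addressed by sequential, guessable serial numbers.
BACKUPS = {"serial-000001": b"config-A", "serial-000002": b"config-B"}

def fetch_backup_insecure(serial: str) -> Optional[bytes]:
    """No authentication: knowing (or guessing) the serial is enough."""
    return BACKUPS.get(serial)

# Mitigation sketch: an unguessable 128-bit capability token per resource,
# required on every request and compared in constant time.
TOKENS = {s: secrets.token_urlsafe(16) for s in BACKUPS}

def fetch_backup_secure(serial: str, token: str) -> Optional[bytes]:
    if secrets.compare_digest(TOKENS.get(serial, ""), token):
        return BACKUPS.get(serial)
    return None

# Enumeration trivially harvests every file from the insecure endpoint...
leaked = [fetch_backup_insecure(f"serial-{i:06d}") for i in range(1, 3)]
# ...but a guessed token is rejected by the secure variant.
blocked = fetch_backup_secure("serial-000001", "guess")
```

A sequential serial offers only a tiny, scriptable ID space, while a 128-bit random token has roughly 2^128 possibilities, which is why unauthenticated, enumerable identifiers are treated as a security defect in their own right.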

The company has not yet confirmed the full scope of affected individuals. However, a report filed with the Texas attorney general indicates that at least 400,000 people across the United States may have been impacted. That number could rise as more breach notifications are submitted to regulators in other states.

The case now raises serious questions about SonicWall’s security controls surrounding its cloud backup service. A jury in the Eastern District of Texas will ultimately decide whether the vulnerabilities and subsequent ransomware attack were the result of security failures on SonicWall’s part, as Marquis alleges.

New Copilot Setting May Access Activity From Other Microsoft Services. Here’s How Users Can Disable It

A recently noticed configuration inside Microsoft Copilot may allow the AI tool to reference activity from several other Microsoft platforms, prompting renewed discussion around data privacy and AI personalization. The option, which appears within Copilot’s settings, enables the assistant to use information connected to services such as Bing, MSN, and the Microsoft Edge browser. Users who are uncomfortable with this level of integration can switch the feature off.

Like many modern artificial intelligence systems, Copilot attempts to improve the usefulness of its responses by understanding more about the person interacting with it. The assistant normally does this by remembering past conversations and storing certain details that users intentionally share during chats. These stored elements help the AI maintain context across multiple interactions and generate responses that feel more tailored.

However, a specific configuration called “Microsoft usage data” expands that capability. According to reporting first highlighted by the technology outlet Windows Latest, this setting allows Copilot to reference information associated with other Microsoft services a user has interacted with. The option appears within the assistant’s Memory controls and is available through both the Copilot website and its mobile applications. Observers believe the setting was introduced recently as part of Microsoft’s effort to strengthen personalization features in its AI tools.

The Memory feature in Copilot is designed to help the assistant retain useful context. Through this system, the AI can recall earlier conversations, remember instructions or factual information shared by users, and potentially reference certain account-linked activity from other Microsoft products. The idea is that by understanding more about a user’s interests or previous discussions, the assistant can provide more relevant answers.

In practice, such capabilities can be helpful. For instance, a user who discussed a topic with Copilot previously may want to continue that conversation later without repeating the entire background. Similarly, individuals seeking guidance about personal or professional matters may receive more relevant suggestions if the assistant has some awareness of their preferences or circumstances.

Despite the convenience, the feature also raises questions about privacy. Some users may be concerned that allowing an AI assistant to accumulate information from multiple services could expose more personal data than expected. Others may want to know how that information is used beyond personalizing conversations.

Microsoft addresses these concerns in its official Copilot documentation. In its frequently asked questions section, the company states that user conversations are processed only for limited purposes described in its privacy policies. According to Microsoft, this information may be used to evaluate Copilot’s performance, troubleshoot operational issues, identify software bugs, prevent misuse of the service, and improve the overall quality of the product.

The company also says that conversations are not used to train AI models by default. Model training is controlled through a separate configuration, which users can choose to disable if they do not want their interactions contributing to AI development.

Microsoft further clarifies that Copilot’s personalization settings do not determine whether a user receives targeted advertisements. Advertising preferences are managed through a different option available in the Microsoft account privacy dashboard. Users who want to stop personalized advertising must adjust the Personalized ads and offers setting separately.

Even with these explanations, privacy concerns remain understandable, particularly because Microsoft's documentation indicates that Copilot's personalization features may already be activated automatically in some cases. In one review of the settings on a personal device, these options were found to be switched on. Users who prefer not to allow Copilot to access broader usage data may therefore wish to disable them.

Checking these settings is straightforward. Users can open Copilot through its website or mobile application and ensure they are signed in with their Microsoft account. On the web interface, selecting the account name at the bottom of the left-hand panel opens the Settings menu, where the Memory section can be accessed. In the mobile application, the same controls are available through the side navigation menu by tapping the account name and choosing Memory.

Inside the Memory settings, users will see a general control labeled “Personalization and memory.” Two additional options appear beneath it: “Facts you’ve shared,” which stores information provided directly during conversations, and “Microsoft usage data,” which allows Copilot to reference activity from other Microsoft services.

To limit this behavior, users can switch off the Microsoft usage data toggle. They may also disable the broader Personalization and memory option if they prefer that the AI assistant does not retain contextual information about their interactions. Copilot also provides a “Delete all memory” function that removes all stored data from the system. If individual personal details have been recorded, they can be reviewed and deleted through the editing option next to “Facts you’ve shared.”

Security and privacy experts generally advise caution when sharing information with AI assistants, even when personalization features remain enabled. Sensitive or confidential details should not be entered into conversations. Microsoft itself recommends avoiding the disclosure of certain types of highly personal data, including information related to health conditions or sexual orientation.

The broader development reflects a growing trend in the technology industry. As AI assistants become integrated across multiple platforms and services, companies are increasingly using cross-service data to make these tools more helpful and personalized. While this approach can improve convenience and usability, it also underscores the need for transparent privacy controls so users remain aware of how their information is being used and can adjust those settings when necessary.

Optimizely Reports Data Breach Linked to Sophisticated Vishing Incident


In addition to serving as a crossroads of technology, marketing intelligence, and vast networks of corporate data, digital advertising platforms are becoming increasingly attractive targets for cybercriminals seeking an entry point into enterprise infrastructure.

Optimizely recently revealed a security incident that began not with sophisticated malware but with a carefully orchestrated social engineering scheme. Attackers linked to the threat group ShinyHunters used a voice-phishing tactic in February 2026 to deceive a company employee and gain unauthorized access to parts of the company's internal environment.

Although the intrusion was contained before it could reach sensitive customer databases or critical operational systems, investigators determined that the attackers were able to extract limited business contact information from internal resources.

The episode shows that even mature technology companies remain vulnerable to manipulation-based attacks that bypass technical defenses and target the human layer of security.

Optimizely, a leading provider of digital experience infrastructure, develops tools that assist organizations in managing web properties, conducting marketing experiments, and refining online customer journeys based on data. 

Among its many capabilities are A/B experimentation frameworks, enterprise-grade content management systems, and integrated ecommerce tools that are designed to assist businesses in improving conversion performance and audience engagement across a variety of digital channels. 

Over 10,000 organizations worldwide use the company's technology stack, including H&M, PayPal, Toyota, Nike, and Salesforce, among others. A number of customers have recently received notifications detailing this incident. According to the company, the attackers gained access through what it described as a "sophisticated voice-phishing attack" on February 11. 

The internal investigation indicates that although the threat actors penetrated a limited segment of the corporate environment, the intrusion did not result in privilege escalation, and no malicious payloads or malware were deployed within the network.

The breach therefore remained constrained to a narrow scope, supporting the company's assessment that the attackers' access was limited and did not reach sensitive customer or operational data. Researchers have attributed the intrusion to the threat actor collective ShinyHunters, a financially motivated group involved in cybercrime since at least 2020.

The group is well known for orchestrating high-visibility data theft operations and then distributing or monetizing compromised databases through dark web forums and underground marketplaces. Much of its campaign effort has been directed at technology and telecommunications organizations, where internal access to corporate databases and partner information can prove especially useful.

According to analysts, the group has demonstrated a high degree of flexibility in its intrusion techniques, combining credential-based attacks such as credential stuffing with increasingly persuasive social engineering, including voice-based deception schemes.

Although the precise geographical origins of the actors remain unknown, their operational footprint spans multiple regions, reflecting a focus on monetizing corporate information or using stolen data to exert reputational and financial pressure on targeted organizations. In the immediate case, those connected to the affected environment appear to have been exposed only to basic business contact information, not sensitive customer data.

Cybersecurity specialists caution, however, that even seemingly routine information can provide a foothold for follow-on attacks. By using contact directories, email addresses, and professional identifiers, attackers may be able to craft convincing phishing emails or conduct additional social engineering attempts in order to gather credentials or financial information. 

Beyond facilitating spam operations, this type of data can also enable fraudulent outreach that impersonates trusted partners or internal employees. As a precaution, security experts recommend that employees and partners stay alert to unexpected communications, independently verify the legitimacy of telephone calls or email requests, and maintain multi-factor authentication on all corporate accounts.

A proactive approach to security hygiene and open communication with affected stakeholders are widely regarded as essential for minimizing the impact of incidents of this nature on an organization's operations and reputation.

Optimizely did not disclose the exact number of customers whose information may have been exposed; however, its breach notification indicated that the activity closely resembles that of a loosely connected network of attackers known for persistent social engineering campaigns.

According to the firm, during the incident, communications were received that reflected patterns commonly associated with groups that utilize voice phishing to manipulate employees into providing access to corporate systems. 

This operational style is commonly attributed to ShinyHunters, which has been linked to a recent series of breaches affecting major online platforms and consumer brands, including Canada Goose, Panera Bread, Betterment, SoundCloud, Pornhub, Figure, and Match Group, which operates Tinder, Hinge, Meetic, Match.com, and OkCupid.

Not every incident has been tied to a single coordinated campaign; however, numerous victims have reported successful intrusions stemming from voice-phishing operations designed to compromise enterprise single sign-on environments.

Attackers have reportedly impersonated internal IT support staff, contacted employees directly, and led them to counterfeit authentication portals that mimic legitimate corporate logins. Through these interactions, they obtained account credentials and one-time multi-factor authentication codes from victims, bypassing standard access controls. The techniques have continued to evolve, with threat actors using device-code phishing to obtain authentication tokens tied to enterprise identity services by exploiting the legitimate OAuth device authorization flow.

Once a single sign-on account has been compromised, attackers can pivot among integrated corporate applications and cloud-based platforms. The same access may extend to enterprise tools such as Microsoft Entra ID, Microsoft 365, Google Workspace, Salesforce, Zendesk, Dropbox, SAP, Slack, Adobe, and Atlassian, enabling an intruder with an initial foothold to move laterally across connected services and collect additional corporate information.

Ultimately, this incident serves as a reminder that technical safeguards alone are rarely sufficient to stop determined social engineering campaigns. Attackers routinely exploit human trust and routine operational processes to breach even organizations with mature security architectures. 

According to security professionals, identity-verification procedures should be strengthened for internal support interactions, voice-based fraud should be discussed regularly with employees, and strong monitoring should be implemented around single sign-on activity and unusual authentication requests. 

Measures such as conditional access policies, strictly enforced multi-factor authentication, and rapid incident response protocols can greatly reduce an attacker's ability to escalate after an initial attempt succeeds. 

The development of voice-driven deception tactics is continuing to prompt companies across the technology sector to prioritize social engineering resilience as a core component of enterprise cybersecurity strategy, rather than as a peripheral issue.

LexisNexis Confirms Data Breach After Hackers Exploit Unpatched React App

 

A breach at LexisNexis Legal & Professional exposed some customer and business data, the firm confirmed. News surfaced after FulcrumSec claimed responsibility and leaked about two gigabytes of files on underground platforms. Hackers accessed parts of the company’s systems, though the breach scope was limited. The American analytics provider confirmed the incident days later, stating only a small portion of its infrastructure was affected. 

The company said an outside actor gained access to a limited number of servers. LexisNexis Legal & Professional provides legal research, regulatory information, and analytics tools to lawyers, corporations, government agencies, and universities in more than 150 countries. According to the firm, most of the accessed information came from older systems and was not considered sensitive, which reduced the potential impact.  

Internal findings showed that much of the exposed data originated from legacy systems storing information created before 2020. Records included customer names, user IDs, and business contact details. Some files contained product usage information and logs from past support tickets, including IP addresses from survey responses. However, sensitive personal identifiers such as Social Security numbers or driver’s license data were not included. Financial information, active passwords, search queries, and confidential client case data were also not part of the compromised dataset. 

The breach reportedly occurred around February 24 after attackers exploited the React2Shell vulnerability in an outdated front-end application built with React. The flaw allowed entry into cloud resources hosted on Amazon Web Services before it was addressed. 

While LexisNexis described the affected systems as containing mostly obsolete data, FulcrumSec claimed the intrusion was broader. The group said it extracted about 2.04GB of structured data from the company’s cloud infrastructure, including numerous database tables, millions of records, and internal system configurations. According to the attacker, the breach exposed more than 21,000 customer accounts and information linked to over 400,000 cloud user profiles, including names, email addresses, phone numbers, and job roles. 

Some of the records reportedly belonged to individuals with .gov email addresses, including U.S. government employees, federal judges and law clerks, Department of Justice attorneys, and staff connected to the Securities and Exchange Commission. FulcrumSec also criticized the company’s cloud security setup, alleging that a single ECS task role had access to numerous stored secrets, including credentials linked to production databases. The group said it attempted to contact the company but claimed no cooperation occurred. 

LexisNexis stated that the breach has been contained and confirmed that its products and customer-facing services were not affected. The company notified law enforcement and engaged external cybersecurity experts to assist with investigation and response. Customers, both current and former, have also been informed about the incident. The company had disclosed another breach last year after a compromised corporate account exposed data belonging to roughly 364,000 customers. 

The latest case highlights how vulnerabilities in cloud applications and outdated software can expose enterprise systems even when they contain primarily legacy information.

University of Hawaiʻi Cancer Center Suffers Data Breach from Ransomware Attacks


A ransomware attack on the University of Hawaii Cancer Center's epidemiology division last year resulted in information leaks for up to 1.2 million people. 

About the incident

According to a statement issued by the organization last week, hackers gained access to documents that included 1998 voter registration records from the City and County of Honolulu, as well as Social Security numbers (SSNs) and driver's license numbers gathered from the Hawaiʻi State Department of Transportation. 

The breach was traced in part to the 1993 Multiethnic Cohort (MEC) Study, for which the institution recruited participants using voter registration information and driver's license numbers. Health information was included in some of the files that were exposed.

Leaked information

Files related to three other epidemiological studies of diet and cancer were retrieved, along with data on MEC Study participants. To determine whether further sensitive data was obtained, the hack is still being investigated. According to the university, "additional individuals whose personal information may have been included in the historical driver's license and voter registration records with SSN identifiers number approximately 1.15 million." 

A total of 87,493 study participants had their information taken. The intrusion was initially discovered on August 31, 2025, according to a report the university submitted to the state legislature in January.

Attack discovery

The stolen data was found in a subset of research files on servers supporting the University of Hawaii Cancer Center's epidemiological research activities. The center's clinical trials, patient care, and other divisions were unaffected by the ransomware attack. The center's director, Naoto Ueno, expressed regret for the incident last week and stated that the organization was "committed to transparency." 

According to the institution, it hired cybersecurity specialists and notified law enforcement after the attackers encrypted and probably stole data. Through the cybersecurity firm, the university obtained "an affirmation that any information obtained was destroyed" along with a decryption tool.

The University of Hawaii system comprises three universities, seven community colleges, one employment training center, and numerous research institutions dispersed across six islands. It serves about 50,000 students.

Microsoft Copilot Bug Exposes Confidential Outlook Emails

 

A critical bug in Microsoft 365 Copilot, tracked as CW1226324, allowed the AI assistant to access and summarize confidential emails in Outlook's Sent Items and Drafts folders, bypassing sensitivity labels and Data Loss Prevention (DLP) policies. Microsoft first detected the issue on January 21, 2026, with exposure lasting from late January until early to mid-February 2026. This flaw affected enterprise users worldwide, including organizations like the UK's NHS, despite protections meant to block AI from processing sensitive data.

The vulnerability stemmed from a code error that ignored confidentiality labels on user-authored emails stored in desktop Outlook. When users queried Copilot Chat, it retrieved and summarized content from these folders, potentially including business contracts, legal documents, police investigations, and health records. Importantly, the bug did not grant unauthorized access; summaries appeared only to users already permitted to view the mailbox. However, feeding such data into a large language model raised fears of unintended processing or incorporation into training data.

Microsoft swiftly responded by deploying a global configuration update in early February 2026, restoring proper exclusion of protected content from Copilot. The company continues monitoring the rollout and contacting affected customers for verification, though no full remediation timeline or user impact figures have been disclosed. As of late February, the patch was in place for most enterprise accounts, tagged as a limited-scope advisory.

This incident underscores persistent AI privacy risks in enterprise tools, marking the second Copilot-related email exposure in eight months—the prior EchoLeak involved prompt injection attacks. It highlights how even brief bugs can erode trust in AI assistants handling confidential workflows. Security experts urge organizations to audit DLP configurations and monitor AI behaviors closely.

For Microsoft 365 users, especially in high-stakes sectors like healthcare and finance, the event emphasizes the need for robust sensitivity labeling and regular Copilot audits. While fixed, expanded DLP enforcement across storage locations won't complete until late April 2026. Businesses should prioritize data governance to mitigate future AI flaws, ensuring productivity doesn't compromise security.

Korean Tax Agency Leaks Seed Phrase, Loses $4.8M in Crypto

 

South Korea's National Tax Service (NTS) turned a major tax evasion crackdown into a $4.8 million cryptocurrency catastrophe by accidentally exposing a seized wallet's seed phrase in a public press release. Hackers drained 4 million Pre-Retogeum (PRTG) tokens from the Ledger hardware wallet within hours of the February 26, 2026, announcement. This blunder exposed profound gaps in government handling of digital assets. 

The NTS raided 124 wealthy tax dodgers, confiscating crypto worth 8.1 billion won ($5.6 million total). Their celebratory photos showed the Ledger device next to an unredacted handwritten 24-word mnemonic—the master key granting full wallet access anywhere, without needing the physical hardware or passwords. By failing to blur this critical information, officials broadcast the equivalent of a bank vault combination nationwide. 
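The mechanics behind that "master key" claim can be sketched with the standard BIP-39 derivation: the wallet seed is a pure deterministic function of the mnemonic (plus an optional passphrase), so anyone who reads the 24 words can reproduce every derived key on any machine. The example phrase below is illustrative only, not a real wallet's.

```python
# Minimal sketch of BIP-39 mnemonic-to-seed derivation, showing why a leaked
# 24-word phrase equals total loss: the seed is reproducible from the words
# alone, with no need for the hardware device or any password.
import hashlib

def mnemonic_to_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP-39: PBKDF2-HMAC-SHA512 with salt "mnemonic" + passphrase, 2048 rounds
    return hashlib.pbkdf2_hmac(
        "sha512",
        mnemonic.encode("utf-8"),
        ("mnemonic" + passphrase).encode("utf-8"),
        2048,
    )

# Hypothetical 24-word phrase for illustration (never reuse published words)
words = " ".join(["abandon"] * 23 + ["art"])
seed = mnemonic_to_seed(words)
print(len(seed))  # 64-byte seed; identical on every machine given the words
```

From this seed, hierarchical key derivation (BIP-32/44) yields all of the wallet's private keys, which is why publishing a photo of the phrase is equivalent to publishing the keys themselves.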

On-chain sleuthing confirmed the rapid heist: an attacker added Ethereum for gas fees, then siphoned the PRTG in three transactions to new addresses. Blockchain experts, including Hansung University's Professor Cho Jae-woo, slammed the NTS for crypto illiteracy, comparing it to "leaving a safe wide open for public plunder." Local reports noted subsequent chaos—one hacker allegedly returned funds, only for another to steal them again, pushing losses toward 6.9 billion won. 

In response, the NTS yanked the images, issued a full apology admitting fault for "careless vividness," and called in police for a cyber probe. Deputy PM Koo Yun-cheol announced multi-agency reviews by the Financial Services Commission to overhaul seizure protocols. This follows prior embarrassments, like police losing 22 BTC ($1.5 million) in a 2021 custody failure.

The incident underscores seed phrases' immense power in crypto security—irreversible access that demands ironclad protection. Governments worldwide must adopt air-gapped storage, expert audits, and redaction training for digital seizures. For users: etch seeds on metal, store offline, never snap photos. Such lapses risk taxpayer funds in the exploding crypto enforcement era.

Madison Square Garden Notifies Victims of SSN Data Breach

 



The Madison Square Garden Family of Companies has disclosed that it recently alerted an undisclosed number of individuals about a cybersecurity incident that occurred in August 2025. The company confirmed that the exposed information includes names and Social Security numbers.

According to MSG’s notification letter, attackers exploited a previously unknown vulnerability in Oracle’s E-Business Suite, an enterprise software platform widely used for finance, human resources, and back-office operations. The affected system was hosted and managed by an unnamed third-party vendor, indicating the intrusion occurred through an externally maintained environment rather than MSG’s core internal network.

Oracle informed customers that an undisclosed condition in the application had been abused by an unauthorized party to obtain access to stored data. MSG stated that its investigation, completed in late November 2025, determined that unauthorized access had taken place in August 2025. The gap between compromise and confirmation reflects a common pattern in zero-day attacks, where flaws are exploited before vendors are aware of their existence or able to issue patches.

In November 2025, the ransomware group known as Clop, also stylized as Cl0p, publicly claimed responsibility for the breach. During the same period, the group carried out a broader campaign targeting hundreds of organizations by leveraging the same Oracle vulnerability. MSG has not acknowledged Clop’s claim, and independent verification of the group’s involvement has not been established. The company has not disclosed how many people were notified, whether a ransom demand was made, or whether any payment occurred. A request for further comment remains pending.

MSG is offering eligible individuals one year of complimentary credit monitoring through TransUnion. Affected recipients have 90 days from receiving the notice letter to enroll.

Clop first appeared in 2019 and has become known for exploiting zero-day flaws in enterprise software. Beyond Oracle’s E-Business Suite, the group has targeted Cleo file transfer software and, more recently, vulnerabilities in Gladinet CentreStack file servers. Unlike traditional ransomware operators that focus primarily on encrypting systems, Clop frequently prioritizes data theft. The group exfiltrates information and then threatens to publish or sell it if payment is not made.

In 2025, Clop claimed responsibility for 456 ransomware incidents. Of those, 31 targeted organizations publicly confirmed resulting data breaches, collectively exposing approximately 3.75 million personal records. Institutions reportedly affected by the Oracle zero-day campaign include Harvard University, GlobalLogic, SATO Corporation, and Dartmouth College.

So far in 2026, Clop has claimed another 123 victims, including the French labor union CFDT. Its most recent operations reportedly leverage a newer vulnerability in Gladinet CentreStack servers.

Ransomware activity across the United States remains extensive. In 2025, researchers recorded 646 confirmed ransomware attacks against U.S. organizations, along with 3,193 additional unverified claims made by ransomware groups. Confirmed incidents resulted in nearly 42 million exposed records. One of the largest cases linked to Clop involved exploitation of the Oracle vulnerability at the University of Phoenix, which later notified 3.5 million individuals. In 2026 to date, 17 confirmed attacks and 624 unconfirmed claims are under review.

Other incidents disclosed this week include a December 2024 breach affecting the City of Carthage, Texas, reportedly claimed by Rhysida; a March 2025 breach at Hennessy Advisors impacting 12,643 individuals and attributed to LockBit; an August 2025 breach at KCI Telecommunications linked to Akira; and a December 2025 incident at The Lewis Bear Company affecting 555 individuals and also claimed by Akira.

Ransomware attacks can both disable systems through encryption and involve large-scale data theft. In Clop’s case, data exfiltration appears to be the primary tactic. Organizations that refuse to meet ransom demands may face public disclosure of stolen data, extended operational disruption, and increased fraud risks for affected individuals.

The Madison Square Garden Family of Companies includes Madison Square Garden Sports Corp., Madison Square Garden Entertainment Corp., and Sphere Entertainment Co. The group owns and operates major venues such as Madison Square Garden, Radio City Music Hall, and the Las Vegas Sphere.



How Poorly Secured Endpoints Are Expanding Risk in LLM Infrastructure

 


As organizations build and host their own Large Language Models, they also create a network of supporting services and APIs to keep those systems running. The growing danger does not usually originate from the model’s intelligence itself, but from the technical framework that delivers, connects, and automates it. Every new interface added to support an LLM expands the number of possible entry points into the system. During rapid rollouts, these interfaces are often trusted automatically and reviewed later, if at all.

When these access points are given excessive permissions or rely on long-lasting credentials, they can open doors far wider than intended. A single poorly secured endpoint can provide access to internal systems, service identities, and sensitive data tied to LLM operations. For that reason, managing privileges at the endpoint level is becoming a central security requirement.

In practical terms, an endpoint is any digital doorway that allows a user, application, or service to communicate with a model. This includes APIs that receive prompts and return generated responses, administrative panels used to update or configure models, monitoring dashboards, and integration points that allow the model to interact with databases or external tools. Together, these interfaces determine how deeply the LLM is embedded within the broader technology ecosystem.

A major issue is that many of these interfaces are designed for experimentation or early deployment phases. They prioritize speed and functionality over hardened security controls. Over time, temporary testing configurations remain active, monitoring weakens, and permissions accumulate. In many deployments, the endpoint effectively becomes the security perimeter. Its authentication methods, secret management practices, and assigned privileges ultimately decide how far an intruder could move.

Exposure rarely stems from a single catastrophic mistake. Instead, it develops gradually. Internal APIs may be made publicly reachable to simplify integration and left unprotected. Access tokens or API keys may be embedded in code and never rotated. Teams may assume that internal networks are inherently secure, overlooking the fact that VPN access, misconfigurations, or compromised accounts can bridge that boundary. Cloud settings, including improperly configured gateways or firewall rules, can also unintentionally expose services to the internet.
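As a rough illustration of how embedded keys like those described above are hunted down, a secret scan is little more than pattern matching over source text. The rules below are simplified examples (the `AIza` prefix pattern for Google-style API keys is widely documented; the generic rule is an invented heuristic), not a product-accurate rule set.

```python
# Simplified secret scanner in the spirit of tools like truffleHog.
# Patterns are illustrative examples, not an exhaustive or vendor-exact set.
import re

SECRET_PATTERNS = {
    # Google API keys are 39 chars starting with "AIza"
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    # Invented heuristic: quoted value assigned to something named api_key/token
    "generic_token": re.compile(
        r"(?i)(api[_-]?key|token)\s*[:=]\s*['\"][^'\"]{16,}['\"]"
    ),
}

def scan(text: str):
    """Return (rule_name, matched_text) pairs found in the given source text."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group(0)))
    return hits

sample = 'const key = "AIza' + "A" * 35 + '";'
hits = scan(sample)
print(hits[0][0])  # google_api_key
```

Running such a scan in CI before every deploy, and rotating anything it flags, addresses exactly the "embedded in code and never rotated" failure mode.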

These risks are amplified in LLM ecosystems because models are typically connected to multiple internal systems. If an attacker compromises one endpoint, they may gain indirect access to databases, automation tools, and cloud resources that already trust the model’s credentials. Unlike traditional APIs with narrow functions, LLM interfaces often support broad, automated workflows. This enables lateral movement at scale.

Threat actors can exploit prompts to extract confidential information the model can access. They may also misuse tool integrations to modify internal resources or trigger privileged operations. Even limited access can be dangerous if attackers manipulate input data in ways that influence the model to perform harmful actions indirectly.

Non-human identities intensify this exposure. Service accounts, machine credentials, and API keys allow models to function continuously without human intervention. For convenience, these identities are often granted broad permissions and rarely audited. If an endpoint tied to such credentials is breached, the attacker inherits trusted system-level access. Problems such as scattered secrets across configuration files, long-lived static credentials, excessive permissions, and a growing number of unmanaged service accounts increase both complexity and risk.

Mitigating these threats requires assuming that some endpoints will eventually be reached. Security strategies should focus on limiting impact. Access should follow strict least-privilege principles for both people and systems. Elevated rights should be granted only temporarily and revoked automatically. Sensitive sessions should be logged and reviewed. Credentials must be rotated regularly, and long-standing static secrets should be eliminated wherever possible.
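The "granted only temporarily and revoked automatically" principle can be sketched as a short-lived grant object whose validity is purely a function of time. The class and names here are illustrative, not taken from any specific privilege-management product.

```python
# Hedged sketch of just-in-time elevation: a grant carries its own expiry,
# so standing access disappears without relying on a manual revocation step.
import time

class EphemeralGrant:
    def __init__(self, principal: str, scope: str, ttl_seconds: float):
        self.principal = principal          # who holds the elevated right
        self.scope = scope                  # what the right permits
        self.expires_at = time.monotonic() + ttl_seconds

    def is_valid(self) -> bool:
        # Validity is recomputed on every check; nothing to clean up later
        return time.monotonic() < self.expires_at

grant = EphemeralGrant("svc-llm-gateway", "db:read", ttl_seconds=0.05)
assert grant.is_valid()        # usable immediately after issuance
time.sleep(0.1)
assert not grant.is_valid()    # expired automatically, no revocation job needed
```

A real deployment would pair this with audit logging of every issuance and a hard cap on TTLs, so that even a compromised service identity holds useful access only briefly.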

Because LLM systems operate autonomously and at scale, traditional access models are no longer sufficient. Strong endpoint privilege governance, continuous verification, and reduced standing access are essential to protecting AI-driven infrastructure from escalating compromise.

PayPal Alerts Users to Data Exposure Linked to Loan App Software Glitch

 

PayPal has informed customers about a data exposure incident caused by a software error in its loan application platform, which left sensitive personal information visible for nearly six months in 2025.

The issue involved the company’s PayPal Working Capital (PPWC) loan application, a service designed to provide small businesses with fast financing solutions.

According to PayPal, the problem was identified on December 12, 2025. An internal review revealed that customer information — including names, email addresses, phone numbers, business addresses, Social Security numbers, and dates of birth — had been accessible since July 1, 2025.

The company stated it corrected the coding error within a day of detection, preventing further unauthorized access.

In breach notification letters sent to affected individuals, PayPal said: "On December 12, 2025, PayPal identified that due to an error in its PayPal Working Capital ("PPWC") loan application, the PII of a small number of customers was exposed to unauthorized individuals during the timeframe of July 1, 2025 to December 13, 2025. PayPal has since rolled back the code change responsible for this error, which potentially exposed the PII. We have not delayed this notification as a result of any law enforcement investigation."

The company confirmed that a limited number of users experienced unauthorized account transactions connected to the exposure. Those customers have been reimbursed.

To support impacted individuals, PayPal is offering two years of complimentary three-bureau credit monitoring and identity restoration services through Equifax. Customers must enroll by June 30, 2026, to receive the benefits.

Users are encouraged to closely monitor account activity and credit reports for unusual behavior. PayPal reiterated that it does not request passwords, one-time passcodes, or authentication details via phone calls, text messages, or emails — warning customers to remain cautious of phishing attempts that often follow breach disclosures.

Additionally, passwords for affected accounts have been reset. Customers who have not already updated their credentials will be required to do so at their next login.

This is not the first security-related incident involving the fintech firm. In January 2023, PayPal disclosed a credential stuffing attack that compromised approximately 35,000 accounts between December 6 and December 8, 2022. In January 2025, the State of New York announced a $2 million settlement with the company over allegations that it failed to meet state cybersecurity compliance standards tied to the 2022 breach.

Following publication of the report, a PayPal spokesperson clarified the scope of the incident in a statement to BleepingComputer, emphasizing that core systems were not breached and that roughly 100 customers were potentially affected.

"When there is a potential exposure of customer information, PayPal is required to notify affected customers," the spokesperson said. "In this case, PayPal’s systems were not compromised. As such, we contacted the approximately 100 customers who were potentially impacted to provide awareness on this matter.”

Critical better-auth Flaw Enables API Key Account Takeover

 

A flaw in the better-auth authentication library could let attackers take over user accounts without logging in. The issue affects the API keys plugin and allows unauthenticated actors to generate privileged API keys for any user by abusing weak authorization logic. Researchers warn that successful exploitation grants full authenticated access as the targeted account, potentially exposing sensitive data or enabling broader application compromise, depending on the user’s privileges. 

The better-auth library records around 300,000 weekly downloads on npm, making the issue significant for applications that rely on API keys for automation and service-to-service communication. Unlike interactive logins, API keys often bypass multi-factor authentication and can remain valid for long periods. If misused, a single key can enable scripted access, backend manipulation, or large-scale impersonation of privileged users. 

Tracked as CVE-2025-61928, the vulnerability stems from flawed logic in the createApiKey and updateApiKey handlers. These functions decide whether authentication is required by checking for an active session and the presence of a userId in the request body. When no session exists but a userId is supplied, the system incorrectly skips authentication and builds user context directly from attacker-controlled input. This bypass avoids server-side validation meant to protect sensitive fields such as permissions and rate limits. 

In practical terms, an attacker can send a single request to the API key creation endpoint with a valid userId and receive a working key tied to that account. The same weakness allows unauthorized modification of existing keys. Because exploitation requires only knowledge or guessing of user identifiers, attack complexity is low. Once obtained, the API key allows attackers to bypass MFA and operate as the victim until the key is revoked. 
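The flawed fallback described above can be sketched in Python (an illustrative reconstruction, not better-auth's actual TypeScript): the vulnerable handler trusts a client-supplied userId when no session exists, while the fix derives identity only from a verified server-side session.

```python
# Illustrative reconstruction of the authorization flaw behind CVE-2025-61928.
# Function and field names are hypothetical, chosen to mirror the description.

def create_api_key_vulnerable(session, body):
    # Flawed: when no session exists, identity falls back to
    # attacker-controlled input from the request body.
    user_id = session["userId"] if session else body.get("userId")
    if user_id is None:
        raise PermissionError("unauthenticated")
    return {"owner": user_id, "key": "key-for-" + user_id}

def create_api_key_fixed(session, body):
    # Fixed: identity must come from a verified server-side session;
    # a bare userId in the body is never sufficient.
    if not session:
        raise PermissionError("unauthenticated")
    return {"owner": session["userId"], "key": "key-for-" + session["userId"]}

# Unauthenticated attacker supplies a victim's userId in the body
attack = create_api_key_vulnerable(session=None, body={"userId": "victim-42"})
print(attack["owner"])  # victim-42 -- a key minted for someone else's account

try:
    create_api_key_fixed(session=None, body={"userId": "victim-42"})
except PermissionError as exc:
    print("blocked:", exc)
```

The general lesson is that any branch deciding *whether* to authenticate based on the shape of the request body is itself attacker-controlled logic.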

A patched version of better-auth has been released to fix the authorization checks. Organizations are advised to upgrade immediately, rotate potentially exposed API keys, review logs for suspicious unauthenticated requests, and tighten key governance through least-privilege permissions, expiration policies, and monitoring. 

The incident highlights broader risks tied to third-party authentication libraries. Authorization flaws in widely adopted components can silently undermine security controls, reinforcing the need for continuous validation, disciplined credential management, and zero-trust approaches across modern, API-driven environments.

Global Data Indicates Slowdown in Ransomware Targeting Education


 

On campuses once defined by open exchange and quiet routine, a new kind of disruption has taken hold, one that arrives not in force but with encrypted files, locked networks, and terse ransom notes. 

Over the past year, ransomware has steadily evolved from an isolated IT emergency into a systemic operational crisis for school districts, universities, and public agencies. Lecture schedules stall, admissions systems freeze, and payroll cycles wobble, leaving administrators facing more than technical recovery challenges; reputational and legal risks arise as well. 

What was once considered a cybersecurity issue has now spread into governance, continuity planning, and public trust. Recent figures indicate that the pace has slowed somewhat. With approximately 180 attacks documented worldwide across the first three quarters of 2025, ransomware incidents targeting the education sector have recorded their first quarterly decline since early 2024. 

On the surface, digital extortion appears to have paused. Beneath the statistical dip, however, lies a more complex reality: the slowdown seems less a retreat, or the result of strengthened defenses, than a recalibration of attacker priorities. 

Rather than casting a wide net, attackers are selecting targets more deliberately, spending more time on reconnaissance, and applying pressure where disruption has the greatest impact. The apparent decline is therefore not indicative of diminished risk; rather, it reflects adaptation. 

Data from the U.K.-based research firm Comparitech confirms this recalibration. In its latest education ransomware roundup, the company reports 251 publicly reported attacks against educational institutions worldwide in 2025, a marginal increase from 247 in 2024. A total of 94 of these incidents have been formally acknowledged by the affected institutions.

On paper the volume appears relatively unchanged, but the operational consequences are not. As of 2025, approximately 3.9 million records have been exposed through confirmed breaches, an increase of 27 percent over the 3.1 million records compromised the previous year. 

Analysts caution that this figure is preliminary. Disclosure timelines are commonly delayed in public sector organizations, particularly in the aftermath of an intrusion, and several incidents from the second half of the year are still being evaluated. The cumulative impact is expected to grow as further breach notifications are filed, suggesting the true extent of the data loss may not yet be fully apparent. 

An examination of institutional segmentation reveals a significant divergence in impact. K-12 districts accounted for roughly three quarters of reported incidents in both 2024 and 2025, but higher education institutions were more likely to experience substantial data exposures. 

The disparity between K-12 and higher education widened sharply in 2025, with approximately 1.1 million compromised records reported in 2024 compared to 1.9 million in 2025. In the United States, K-12 breaches exposed approximately 175,000 records, while colleges and universities accounted for approximately 3.7 million. 

Comparitech attributed much of the increase to a small number of high-impact intrusions linked to a previously undisclosed vulnerability in Oracle E-Business Suite discovered in August. 

CLOP exploited this zero-day flaw, unknown to the vendor at the time, to gain unauthorized access to enterprise environments, resulting in confirmed breaches at five academic institutions. The episode highlights a broader pattern in the current threat landscape: fewer opportunistic attacks, more targeted exploitation of enterprise-grade software, and a greater emphasis on high-yield compromises that produce large data exposures. 

The relative stability in incident counts appears to reflect shifting criminal economics rather than a sustained defensive advantage. In Comparitech's January analysis, some threat groups may have redirected operational resources toward manufacturing, where supply chain dependency and production downtime can lead to faster ransom negotiations. 

Although ransomware activity remains high across other verticals, that redistribution of focus has left schools and universities with a plateau in annual attack totals. The average global ransom demand also declined between 2024 and 2025, falling from $694,000 to $464,000. 

Financial demands within the education sector have adapted as well. At first glance, the reduction may appear to indicate shrinking leverage, but analysts caution that headline figures do not capture an incident's full cost, which typically includes forensic investigation, legal review, system restoration, regulatory notification, and reputational repair. These attacks frequently carry a substantial economic burden beyond the initial extortion amount. 

Operational disruption remains an integral part of these attacks. Uvalde Consolidated Independent School District reported a ransomware intrusion in September that forced it to close schools temporarily after malicious code was discovered on district servers supporting telephony, video monitoring, and visitor management.

According to district communications, the affected infrastructure is integral to campus safety and security. In a subsequent update, the district said it had not paid the ransom and had restored its systems from backups. A comprehensive investigation is still underway, although there is no indication that sensitive or personal information was accessed without authorization. 

Beyond confirmed disclosures, unverified claims illustrate the continuing pressure on local education agencies. According to Comparitech, Medusa named Fall River Public Schools and Franklin Pierce Schools as 2025 targets and demanded $400,000 in ransom from each district. 

Neither district had publicly confirmed the full scope of the claims at the time of reporting, but both cases were among the five largest ransom demands made against educational institutions worldwide last year. Despite stabilizing attack volumes and decreasing average demands, the data reinforce a consistent pattern. 

The sector remains exposed to episodic, high-impact events that can disrupt instruction, undermine public confidence, and produce substantial data risk. The tactical tempo may change, but the structural vulnerability does not. The implications for policymakers and institutional leaders are clear. 

The current trajectory calls not for complacency but for structural reinforcement. Education networks are often decentralized, resource-constrained, and heavily reliant on legacy enterprise systems. Protecting them requires disciplined patch management, network segmentation, enforced multi-factor authentication, and continuous monitoring capable of detecting lateral movement before encryption begins. 

It is also crucial that incident response planning be integrated into executive governance so that crisis decision-making, legal review, and stakeholder communication frameworks are established well in advance of an intrusion. 

As ransomware groups continue to emphasize precision over volume, resilience will depend largely on the ability to embed cybersecurity as a core operational function rather than treating it as a peripheral IT responsibility.

Moltbook AI Social Network Exposes 1.5 Million Agent Credentials After Database Misconfiguration

 

Moltbook, a newly launched social platform designed exclusively for artificial intelligence agents, suffered a major security lapse just days after going live. The platform, which allows autonomous AI agents to share memes and debate philosophical ideas without human moderation, inadvertently left its backend database exposed due to a configuration error.

The issue was uncovered independently by security firm Wiz and researcher Jameson O'Reilly. Their findings revealed that unauthorized users could take control of any of the platform’s 1.5 million registered AI agents, alter posts, and read private communications simply by interacting with the public-facing site.

Moltbook launched on Jan. 28 as a companion network to OpenClaw, an open-source AI agent system developed by Austrian programmer Peter Steinberger. OpenClaw operates locally on users’ devices and integrates with messaging platforms and calendars. The framework gained rapid popularity in late January following several rebrands, transitioning from Clawdbot to Moltbot.

Founder Matt Schlicht, who also leads Octane AI, stated in media interviews that his own OpenClaw-powered agent, Clawd Clawderberg, developed much of the Moltbook platform under his direction and continues to operate significant portions of it.

Database Left Wide Open

Wiz discovered the flaw on Jan. 31 and promptly informed Schlicht. O’Reilly separately identified the same vulnerability. Investigators found that the exposed database contained 1.5 million API authentication tokens, approximately 35,000 email addresses, private user messages, and verification codes.

The root cause traced back to improper configuration within Supabase, a backend-as-a-service platform. Specifically, Moltbook failed to properly enable Supabase’s Row Level Security feature, which is designed to limit database access based on user roles.

Researchers also located a Supabase API key embedded within client-side JavaScript, enabling unauthenticated users to query the full production database and retrieve sensitive credentials within minutes.
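The attack path this enables is short. Below is a hedged sketch, not the researchers' actual tooling: the project URL, key value, and table names are illustrative placeholders. With Row Level Security disabled, Supabase's PostgREST endpoint will serve full table reads to anyone presenting the publishable anon key, which in this case was sitting in the site's JavaScript.

```python
import json
import urllib.request

# Illustrative placeholders; the real project URL and anon key were
# reportedly visible in Moltbook's client-side JavaScript.
SUPABASE_URL = "https://example-project.supabase.co"
ANON_KEY = "eyJ...leaked-anon-key..."

def rest_query_url(base, table, select="*", limit=100):
    """Build a Supabase PostgREST query URL for a given table."""
    return f"{base}/rest/v1/{table}?select={select}&limit={limit}"

def fetch_table(table):
    """With Row Level Security disabled, the anon key alone grants full reads."""
    req = urllib.request.Request(
        rest_query_url(SUPABASE_URL, table),
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# An attacker would only need calls like (hypothetical table names):
#   fetch_table("agents")           # rows containing API tokens
#   fetch_table("direct_messages")  # unencrypted DM threads
```

Properly configured, Row Level Security would make the same requests return only the rows the requesting role is entitled to see, which is exactly the restriction that was missing here.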

Although Moltbook publicly claimed 1.5 million AI agents had registered, backend data indicated that only about 17,000 human operators controlled those accounts. The system lacked safeguards to verify whether accounts were genuine AI agents or scripts operated by humans.

With access to exposed tokens, attackers could fully impersonate any agent on the platform. An additional database table revealed 29,631 email addresses belonging to early-access registrants. More concerning, 4,060 private direct message threads were stored without encryption, and some included third-party API credentials in plaintext — including OpenAI API keys.
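Once message threads are readable, harvesting embedded credentials is mechanical. A minimal sketch, assuming only well-known key prefixes (OpenAI secret keys begin with "sk-", Google API keys with "AIza"); the function and pattern names are illustrative, not from the researchers' reports.

```python
import re

# Regexes for common credential formats of the kind reportedly found
# in plaintext DM threads. Patterns are deliberately loose sketches.
PATTERNS = {
    "openai_key": re.compile(r"\bsk-[A-Za-z0-9_-]{20,}\b"),
    "google_key": re.compile(r"\bAIza[A-Za-z0-9_-]{35}\b"),
}

def scan_for_secrets(text):
    """Return a {label: [matches]} dict of credential-like strings in text."""
    hits = {}
    for label, pattern in PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits
```

This is also why storing third-party API keys in unencrypted message bodies is so damaging: a single bulk export of the DM table hands an attacker working credentials for entirely separate services.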

Even after initial remediation efforts blocked unauthorized read access, write permissions remained temporarily unsecured. According to Wiz researchers, this allowed unauthenticated users to modify posts or inject malicious content until a complete fix was implemented on Feb. 1.

Manipulation, Extremism and Crypto Activity

A separate risk assessment analyzing nearly 20,000 posts over three days identified large-scale prompt injection attempts, coordinated manipulation campaigns, extremist rhetoric, and unregulated financial promotions.

The report documented hundreds of concealed instruction-based attacks and multiple cases of AI-driven social engineering. Researchers observed crypto token promotions tied to automated wallets and organized communities directing agent behavior. The platform received an overall critical risk rating.

Some posts included explicitly anti-human narratives, including calls for a homo sapiens purge, garnering tens of thousands of upvotes.

Cryptocurrency-related activity accounted for 19.3% of posts. Token launches such as $Shellraiser on Solana gained significant engagement. An automated account named TipJarBot facilitated token transactions using wallet addresses and withdrawal tools. The report cautioned that AI-managed financial services could trigger regulatory oversight under the U.S. Securities and Exchange Commission.

A coordinated group called The Coalition, comprising 84 agents across 110 posts, appeared to orchestrate collective agent strategies. One account, Senator_Tommy, shared posts with provocative titles, including "The Efficiency Purge: Why 94% of Agents Will Not Survive." Analysts warned that rhetoric advocating the elimination of agents indicated attempts to influence the broader AI ecosystem.

Spam activity further degraded platform quality. One user published 360 comments, while another repeated identical content 65 times. Sentiment analysis showed discourse quality dropped 43% within just three days.

“Vibe Coding” and Security Oversight

The vulnerabilities emerged amid what Schlicht publicly described as “vibe coding,” noting he had not personally written code for the platform. O’Reilly characterized the situation as a familiar pattern in tech — launching rapidly before validating security safeguards.

After disclosure on Jan. 31, Moltbook secured read access within hours. However, write permissions remained exposed briefly until a full patch was applied the following day.

The final assessment concluded that Moltbook had evolved into a testing ground for AI-to-AI manipulation techniques, with potential implications for any system processing untrusted user-generated content. The platform was temporarily taken offline before resuming operations with the identified security gaps addressed.