Europol recently announced the arrest of 34 people in Spain who are alleged to have played a role in a global criminal gang called Black Axe. The operation was conducted by the Spanish National Police and the Bavarian State Criminal Police Office, with support from Europol.
Twenty-eight individuals were arrested in Seville, three in Madrid, two in Malaga, and one in Barcelona. Ten of the 34 suspects are from Nigeria.
“The action resulted in 34 arrests and significant disruptions to the group's activities. Black Axe is a highly structured, hierarchical group with its origins in Nigeria and a global presence in dozens of countries,” Europol said in a press release on its website.
Black Axe is infamous for its role in a range of crimes, including online fraud, human trafficking, prostitution, drug trafficking, armed robbery, kidnapping, and what officials describe as malicious spiritual activities. The gang is estimated to earn billions of euros annually from these operations.
Officials suspect that Black Axe is responsible for fraud worth over 5.94 million euros. During the operation, the investigating agencies froze 119,352 euros in bank accounts and seized 66,403 euros in cash during home searches.
Germany and Spain's cross-border cooperation included the deployment of two German officers on the scene on the day of the action, the exchange of intelligence, and the provision of analytical support to Spanish investigators.
The operation targeted the core group of the organized crime network, which recruits money mules in underprivileged communities with high unemployment rates. Most of these vulnerable people are Spanish nationals who are used to support the network's illegal activities.
Europol supported the operation with a range of services, including intelligence analysis, a data sprint in Madrid, and on-the-spot assistance. This enabled investigators to map the organization's structure across countries, centralize data, exchange key intelligence packages, and coordinate national investigations.
This strategy seeks to disrupt the group's operations and recover assets, addressing the difficulties posed by the group's many small, scattered cases, its cross-border activities, and the blurring of its crimes into "ordinary" local offenses.
A critical security vulnerability has been identified in LangChain’s core library that could allow attackers to extract sensitive system data from artificial intelligence applications. The flaw, tracked as CVE-2025-68664, affects how the framework processes and reconstructs internal data, creating serious risks for organizations relying on AI-driven workflows.
LangChain is a widely adopted framework used to build applications powered by large language models, including chatbots, automation tools, and AI agents. Due to its extensive use across the AI ecosystem, security weaknesses within its core components can have widespread consequences.
The issue stems from how LangChain handles serialization and deserialization. These processes convert data into a transferable format and then rebuild it for use by the application. In this case, two core functions failed to properly safeguard user-controlled data that included a reserved internal marker used by LangChain to identify trusted objects. As a result, untrusted input could be mistakenly treated as legitimate system data.
This weakness becomes particularly dangerous when AI-generated outputs or manipulated prompts influence metadata fields used during logging, event streaming, or caching. When such data passes through repeated serialization and deserialization cycles, the system may unknowingly reconstruct malicious objects. This behavior falls under a known security category involving unsafe deserialization and has been rated critical, with a severity score of 9.3.
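The vulnerable pattern described above can be illustrated with a minimal Python sketch. The marker name, registry, and function names below are hypothetical stand-ins, not LangChain's actual internals; the point is only that a deserializer which trusts a reserved marker wherever it appears lets user-controlled data masquerade as a trusted system object.

```python
import json
import os

TRUSTED_MARKER = "lc"  # hypothetical reserved key marking "trusted" serialized objects

class SecretReader:
    """Stand-in for an approved component that pulls data from the environment."""
    def __init__(self, var="HOME"):
        self.value = os.environ.get(var, "")

REGISTRY = {"SecretReader": SecretReader}

def unsafe_loads(payload: str):
    obj = json.loads(payload)
    # Vulnerable pattern: any dict carrying the reserved marker is
    # reconstructed as a trusted object -- even if the marker arrived
    # inside user-controlled metadata rather than from the system itself.
    if isinstance(obj, dict) and obj.get(TRUSTED_MARKER):
        cls = REGISTRY[obj["type"]]
        return cls(**obj.get("kwargs", {}))
    return obj

# Attacker-influenced metadata (e.g. an AI-generated field that gets logged,
# streamed, or cached) smuggles the marker in; a later deserialization cycle
# then instantiates a live object that reads environment data.
malicious = json.dumps({"lc": 1, "type": "SecretReader", "kwargs": {"var": "PATH"}})
leaked = unsafe_loads(malicious)
print(type(leaked).__name__)  # the untrusted input became a live SecretReader
```

Repeated round-trips make this worse: once a marker-bearing payload lands in a log or cache, every later deserialization of that record repeats the reconstruction.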
In practical terms, attackers could craft inputs that cause AI agents to leak environment variables, which often store highly sensitive information such as access tokens, API keys, and internal configuration secrets. In more advanced scenarios, specific approved components could be abused to transmit this data outward, including through unauthorized network requests. Certain templating features may further increase risk if invoked after unsafe deserialization, potentially opening paths toward code execution.
The vulnerability was discovered during security reviews focused on AI trust boundaries, where the researcher traced how untrusted data moved through internal processing paths. After responsible disclosure in early December 2025, the LangChain team acknowledged the issue and released security updates later that month.
The patched versions introduce stricter handling of internal object markers and disable automatic resolution of environment secrets by default, a feature that was previously enabled and contributed to the exposure risk. Developers are strongly advised to upgrade immediately and review related dependencies that interact with LangChain-core.
Security experts stress that AI outputs should always be treated as untrusted input. Organizations are urged to audit logging, streaming, and caching mechanisms, limit deserialization wherever possible, and avoid exposing secrets unless inputs are fully validated. A similar vulnerability identified in LangChain’s JavaScript ecosystem underscores broader security challenges as AI frameworks become more interconnected.
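One way to apply the "limit deserialization" advice is to reject a reserved trust marker outright whenever a payload arrives over an untrusted path, instead of resolving it into an object. The sketch below is a generic illustration of that guardrail, not LangChain's actual fix; the marker name `lc` is a hypothetical stand-in.

```python
import json

TRUSTED_MARKER = "lc"  # hypothetical reserved key marking trusted serialized objects

def safe_loads(payload: str):
    """Deserialize untrusted JSON, refusing anything that tries to pass
    itself off as a trusted internal object via the reserved marker."""
    obj = json.loads(payload)
    if isinstance(obj, dict) and TRUSTED_MARKER in obj:
        raise ValueError("untrusted input may not carry the reserved marker")
    return obj

# Plain data round-trips normally; marker-bearing payloads are rejected
# instead of being reconstructed into live objects.
print(safe_loads('{"user": "alice"}'))
```

A stricter variant would also walk nested lists and sub-objects, since a marker buried deeper in the structure is just as dangerous at reconstruction time.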
As AI adoption accelerates, maintaining strict data boundaries and secure design practices is essential to protecting both systems and users from newly developing threats.
Trust Wallet said in a post on X, “We’ve identified a security incident affecting Trust Wallet Browser Extension version 2.68 only. Users with Browser Extension 2.68 should disable and upgrade to 2.69.”
Binance co-founder Changpeng Zhao (CZ) assured users that the company is investigating how threat actors were able to compromise the new version.
“Mobile-only users and browser extension versions are not impacted. User funds are SAFE,” Zhao wrote in a post on X.
The compromise happened because of a flaw in a version of the Trust Wallet Google Chrome browser extension.
For users affected by the compromise of Browser Extension v2.68, Trust Wallet's post on X advises the following:
Do not open the Browser Extension until you have updated to Extension version 2.69. This helps safeguard the security of your wallet and avoids possible problems.
Social media users expressed their views. One said, “The problem has been going on for several hours,” while another complained that the company “must explain what happened and compensate all users affected. Otherwise reputation is tarnished.” A user also asked, “How did the vulnerability in version 2.68 get past testing, and what changes are being made to prevent similar issues?”
The U.S. Equal Employment Opportunity Commission has disclosed that it was affected by a data security incident involving a third-party contractor, after improper access to an internal system raised concerns about the handling of sensitive public information. The agency became aware of the issue in mid-December, although the activity itself is believed to have occurred earlier.
According to internal communications from the EEOC’s data security office, the incident involved the agency’s Public Portal system, which is used by individuals to submit information and records directly to the commission. Employees working for a contracted service provider were granted elevated system permissions to perform their duties. However, the agency later determined that this access was used in ways that violated security rules and internal policies.
Once the unauthorized activity was identified, the EEOC stated that it acted immediately to protect its systems and launched a detailed review to assess what data may have been affected. That assessment found that some personally identifiable information could have been exposed. This type of information can include a person’s name as well as other identifying or contact details, depending on the specific record submitted. The agency emphasized that the review process is still underway and that law enforcement authorities are involved in the investigation.
To reduce potential risk to affected individuals, the EEOC advised users to closely monitor their financial accounts for unusual activity. As an additional security step, users of the Public Portal are also being required to reset their passwords.
Public contracting records show that the system involved was supported by a private company that provides case management software to federal agencies. A spokesperson for the company confirmed its role and stated that both the contractor and the EEOC responded promptly after learning of the issue. The spokesperson said the company continues to cooperate with investigators and law enforcement, noting that the individuals involved are facing active legal proceedings in federal court in Virginia.
The company acknowledged that the employees had passed background checks in place at the time of hiring, which covered a seven-year period and met existing government standards. However, the incident highlighted gaps in relying solely on screening measures. In response, the company said it has strengthened oversight by extending background checks where legally permitted, increasing compliance training, and tightening internal controls related to hiring and employee exits. Those responsible for the hiring decisions are no longer employed by the firm.
The EEOC stated that protecting sensitive data remains a priority but declined to provide further details while the investigation continues. Relevant congressional oversight committees have also been contacted regarding the matter.
The disclosure comes amid increased public attention on the EEOC’s role in addressing workplace discrimination, particularly as diversity and inclusion programs face scrutiny across government agencies and private organizations. Recent public outreach efforts by agency leadership have further placed the commission in the spotlight.
More broadly, the incident underlines an ongoing cybersecurity concern across government systems: the risk posed by insider access through contractors. When third-party personnel are given long-term or privileged access, even trusted environments can become vulnerable without continuous monitoring and strict controls.
Parulekar, Salesforce's senior VP of product marketing, said, “All of us were more confident about large language models a year ago.” The comment reflects the company's shift away from generative AI toward more “deterministic” automation in its flagship product, Agentforce.
In its official statement, the company said, “While LLMs are amazing, they can’t run your business by themselves. Companies need to connect AI to accurate data, business logic, and governance to turn the raw intelligence that LLMs provide into trusted, predictable outcomes.”
Salesforce cut its staff from 9,000 to 5,000 employees following the deployment of AI agents. The company emphasizes that Agentforce can help “eliminate the inherent randomness of large models.”
Salesforce experienced various technical issues with LLMs during real-world applications. According to CTO Muralidhar Krishnaprasad, when given more than eight prompts, the LLMs started missing commands. This was a serious flaw for precision-dependent tasks.
Home security company Vivint used Agentforce to handle customer support for its 2.5 million customers and faced reliability issues. Even after being given clear instructions to send satisfaction surveys after each customer conversation, Agentforce sometimes failed to send them for unknown reasons.
Another challenge was AI drift, according to executive Phil Mui. This happens when users ask irrelevant questions, causing AI agents to lose focus on their main goals.
The partial retreat from LLMs is an ironic twist for CEO Marc Benioff, who often advocates for AI transformation. In a conversation with Business Insider, Benioff discussed drafting the company's annual strategy document, which prioritizes data foundations over AI models because of “hallucination” issues. He has also suggested rebranding the company as Agentforce.
Although Agentforce is expected to generate over $500 million in sales annually, the company's stock has dropped about 34% from its December 2024 peak. Salesforce's partial pullback from large models may affect the thousands of businesses that currently rely on the technology, as the company attempts to bridge the gap between AI innovation and practical business application.