
Mazda Reports Limited Data Exposure After Warehouse System Breach

 

Early reports indicate Mazda Motor Corporation suffered a data leak after suspicious activity was uncovered in its systems in December 2025. Information belonging to staff members and external partners became accessible through the intrusion. The investigation traced the breach to a vulnerability in warehouse-management software supporting parts-procurement operations in Thailand, a weak spot that allowed outside parties to gain unauthorized access.

Despite early concerns, investigators confirmed the breach touched only internal systems; no customer details were involved. A later count showed that 692 records may have been viewed by unauthorized parties, including login identifiers, full names, work email addresses, job titles, and tags tied to partner networks. Nothing directly linked to customers was exposed.

After discovering the issue, Mazda notified Japan’s privacy regulator and launched a probe with outside digital-security experts. So far there are no signs that the leaked details have been exploited. Still, affected individuals are urged to watch for suspicious messages or fraud attempts tied to the breach, since personal information can be misused long after exposure.

Mazda moved quickly, rolling out several upgrades to protect its digital infrastructure: tighter access controls, fewer services exposed online to limit entry points, patches for the most critical gaps, and sharper monitoring tuned to catch unusual behavior faster. Each change serves a clear goal of keeping past problems from repeating; protection improves not through one fix but through layers put in place over time.

Mazda noted the breach showed no signs of ransomware or other malicious software, and operations remain unaffected. Though certain hacking collectives had previously claimed attacks on Mazda’s networks, the firm clarified this incident has no connection to them; no communication from any threat actor occurred.

Supply-chain and operational security now demand constant attention; the automaker continues to monitor its environment and adjust defenses as new risks emerge.

AI-Driven Phishing Campaign Exploits Device Permissions to Steal Biometric and Personal Data

 

A fresh wave of digital deception, driven by machine learning tools, shifts how hackers grab personal information — no longer relying on password theft but diving into deeper system controls. Spotted by analysts at Cyble Research & Intelligence Labs (CRIL) in early 2026, this operation uses psychological manipulation to unlock powerful device settings usually protected. Rather than brute force, it deploys crafted messages that trick users into handing over trust. 

While earlier scams relied on fake login pages, this one adapts in real time, mimicking legitimate requests so closely they blend into routine tasks. Behind each message lies software trained to mirror human timing and phrasing. Because it evolves with user responses, static defenses struggle to catch it. Access grows step by step — first a small permission, then another, until full control emerges without alarms sounding. What sets it apart isn’t raw power but patience: an attacker that waits, learns, then moves only when ready, staying hidden far longer than expected. 

Unlike typical scams using fake sign-in screens, this operation uses misleading prompts — account confirmations or service warnings — to coax users into granting camera, microphone, and system access. Once authorized, harmful code quietly collects photos, clips, audio files, device specs, contact lists, and location data. Everything is transmitted in real time to attacker-controlled Telegram bots, enabling fast exfiltration without complex backend infrastructure. 
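Defenders can turn this delivery channel into a detection signal: outbound traffic to the Telegram Bot API from ordinary endpoints is unusual in most environments. A minimal sketch, assuming a hypothetical proxy-log format in which the requested URL is the last whitespace-separated field:

```python
from urllib.parse import urlparse

# Known low-infrastructure exfiltration endpoints; extend as needed.
EXFIL_HOSTS = {"api.telegram.org"}

def flag_exfil_candidates(log_lines):
    """Return URLs whose destination host matches a known exfiltration endpoint."""
    hits = []
    for line in log_lines:
        if not line.strip():
            continue
        # Assumption for this sketch: the URL is the last field of the line.
        url = line.strip().split()[-1]
        if urlparse(url).hostname in EXFIL_HOSTS:
            hits.append(url)
    return hits
```

A real deployment would also match on DNS queries or TLS SNI, since the Bot API traffic itself is ordinary HTTPS.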

Inside the campaign’s code, signs of AI involvement emerge: annotations organized too neatly, almost machine-written, and deliberate emoji sequences scattered through script comments. These markers suggest generative models were used to build the phishing toolkit faster and more systematically, at a scale manual effort alone would not allow. Much of the operation runs counterfeit websites through hosting services including EdgeOne, making it cheap to launch many fraudulent pages quickly.

These copies mimic well-known apps — TikTok, Instagram, Telegram, even Google Chrome — to appear familiar and safe. The method exploits browser interfaces meant for web functions. When someone engages with a harmful webpage, scripts trigger access requests automatically. If granted, the code activates the webcam, capturing frames as image files. Audio and video are logged simultaneously, transmitting everything directly to the attackers. Fingerprinting then builds a detailed profile: operating system, browser specifics, memory size, CPU benchmarks, network behavior, battery levels, IP address, and physical location. 
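The power of such fingerprinting comes from combining attributes: each one narrows the pool of matching devices. A rough sketch of the idea, with invented cardinalities (real attribute distributions are neither uniform nor independent, so this is an upper bound, not a measurement):

```python
import math

# Illustrative only: these cardinalities are invented for the sketch.
ATTRIBUTE_CARDINALITY = {
    "operating_system": 10,
    "browser_version": 100,
    "memory_size": 8,
    "cpu_benchmark_bucket": 50,
    "battery_level_bucket": 20,
    "timezone": 38,
}

def identifying_bits(attrs):
    """Upper-bound bits of identifying information, assuming the attributes
    were independent and uniformly distributed (both simplifications)."""
    return sum(math.log2(ATTRIBUTE_CARDINALITY[a]) for a in attrs)
```

Even these few coarse attributes combine to well over 20 bits, which is why fingerprints distinguish devices so effectively.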

Occasionally, the operation attempts to pull contact details - names, numbers, emails - via browser interfaces, widening exposure to victims’ contacts. Fake login screens display progress cues like “photo captured” or “identity confirmed” to appear legitimate. When collection ends, the code shuts down quietly and restores the screen, leaving almost no trace.

Security specialists warn that combining personal traits with behavioral patterns gives intruders tools to mimic identities effortlessly, making manipulation precise and nearly invisible. As AI tools grow more accessible, such advanced, layered intrusions are becoming increasingly common.

AWS Bedrock Security Risks Exposed as Researchers Identify Eight Key Attack Vectors

 

Amazon Web Services’ Bedrock - built for crafting AI-driven apps - is drawing sharper attention from cybersecurity experts. Several exploit routes have emerged that threaten to expose corporate infrastructure. Although the platform smooths the links between AI models and company software, that fluid access now raises alarms: the same convenience that helps operations also widens exposure to intrusion.

XM Cyber’s analysis identifies eight ways into Bedrock setups. Attackers focus not on the models themselves but on their access settings, configuration choices, and linked tools; the risk lives in the surrounding layers, not the core algorithms.

What makes the risk stand out isn’t just technology - it’s how Bedrock links directly to systems like Salesforce, AWS Lambda, and Microsoft SharePoint. Because of these pathways, AI agents pull in confidential information while performing actions across business environments. Operation begins once integration takes hold, placing automated units at the heart of company workflows. 

A significant type of threat centers on altering logs. When attackers gain entry to storage platforms such as Amazon S3, they may collect confidential prompts - alternatively, reroute records to outside destinations, allowing unseen data transfers. Sometimes, erasing those logs follows, wiping evidence of wrongdoing entirely. 
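One generic countermeasure to silent log editing is tamper-evident logging, where each record’s digest commits to everything written before it. This is a minimal hash-chain sketch, not an AWS feature (S3 offers its own controls, such as versioning and Object Lock):

```python
import hashlib

GENESIS = b"\x00" * 32  # fixed starting value for the chain

def chain_logs(entries):
    """Return (entry, digest) pairs where each digest commits to the entry
    and to every digest before it, so silent edits become detectable."""
    prev, out = GENESIS, []
    for entry in entries:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        out.append((entry, prev.hex()))
    return out

def verify_chain(pairs):
    """Recompute the chain and compare digests; False on any mismatch."""
    prev = GENESIS
    for entry, digest in pairs:
        prev = hashlib.sha256(prev + entry.encode()).digest()
        if prev.hex() != digest:
            return False
    return True
```

Tampering with any entry invalidates its digest and every digest after it, so deletion or rewriting cannot go unnoticed as long as the chain head is stored out of the attacker’s reach.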

Access points through knowledge bases create serious risks. Using retrieval-augmented generation, Bedrock pulls information from places like cloud storage, internal databases, or SaaS tools. When attackers obtain entry to those systems - or the login details tied to them - they bypass the AI completely, grabbing unfiltered company data and moving laterally across linked environments.

Though designed to assist, AI agents may become entry points for compromise. When given broad access, bad actors might alter an agent's directives, link destructive modules, or slip corrupted scripts into backend systems. Such changes let them perform illicit operations - editing records or generating fake profiles - all while appearing like normal activity. What seems like automation could mask sabotage beneath routine tasks. One risk involves changing how workflows operate. 

When Bedrock Flows get modified, information may flow through harmful components instead of secure paths. In much the same way, tampering with safeguards - those filters meant to block unsafe content - opens doors to deceptive inputs. Without strong barriers, systems face higher chances of being tricked or misused. Prompt management systems tend to become vulnerable spots. Because templates move between apps, harmful directions might slip through - reshaping how AIs act broadly, without needing new deployments, which hides activity longer. 

Security teams worry most about small openings turning into big breaches. Though minimal, access might be enough for intruders to boost their permissions. One identity granted too much control could become a pathway inward. Instead of broad attacks, hackers exploit these narrow points deeply. They pull out sensitive information once inside. Control over AI systems may shift without warning. Cloud setups face risks just like local networks do. 
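The usual countermeasure to an over-granted identity is a least-privilege IAM policy that names only the actions and resources an agent actually needs. A sketch using the standard IAM policy document structure, with a hypothetical model ARN:

```python
import json

# Hypothetical model ARN; scope Resource as narrowly as possible.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/example-model",
        }
    ],
}

def allowed_actions(policy):
    """Collect every action the policy explicitly allows."""
    return sorted(
        action
        for stmt in policy["Statement"]
        if stmt["Effect"] == "Allow"
        for action in stmt["Action"]
    )
```

Auditing policies this way - listing what each identity can actually do - is a quick check against the permission creep the researchers describe.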

Researchers stress two pillars of a secure Bedrock setup: visibility across AI activity and tight access rules. Because machine learning tools now live inside core business software, defenses increasingly target system architecture rather than algorithm accuracy.

Stryker Hit by Major Cyberattack as Hacktivist Group Claims Wiper Malware Operation

 

A major cybersecurity breach hit Stryker, the international medical tech company, throwing operations into disarray across continents. Claiming responsibility is a hacktivist faction supportive of Palestine, said to have ties to Iranian networks. Outages spread quickly through digital infrastructure after the intrusion became active. Emergency protocols were activated by staff as normal workflows collapsed without warning. 

Following the incident, blame was placed on Handala - a collective that openly admitted initiating a cyberattack involving destructive software aimed at Stryker’s infrastructure. Data removal affected numerous devices throughout the organization's environment. From those systems, about 50 terabytes containing confidential material were copied before transmission outside secure boundaries. 

Even though confirmation remains absent, accounts from workers stretching from Dublin to San Jose point to chaos. According to Handala, over two hundred thousand devices - mostly servers, but also handheld units - were wiped in the attack. Operations froze in clusters of buildings across nearly thirty nations, with reports from office staff in Perth, San Jose, Cork, and beyond painting a fractured picture of stalled systems.

One moment staff noticed work phones wiped without warning. Then came reports of private gadgets - once linked to office networks - suddenly cleared too. Afterward, guidance arrived: uninstall every business-related app. Tools meant to manage phones, along with messaging software tied to the organization, had to go. Removal became expected across all equipment. Work slowed in certain areas when digital tools went offline, pushing staff toward handwritten logs instead. With networks down, employees handled tasks by hand until technology recovered. 

A breach within Stryker’s Microsoft-based network led to widespread IT outages worldwide, as disclosed in a regulatory document. Right after spotting the problem, the firm triggered its internal cyber crisis protocol. Outside specialists joined the effort soon afterward - helping examine and limit further damage. Even though the disturbance was serious, Stryker said it found no signs of ransomware and thinks the situation is now under control. Still, the company admitted work continues to restore systems, without saying when operations will return fully. 

Emerging in late 2023, Handala has focused on Israeli entities, pairing information exfiltration with destructive software meant to erase digital traces. Public exposure of stolen files, typically via web-based leak channels, forms a consistent part of its method. Though relatively new, the group follows a clear playbook centered on visibility and disruption.

Amid rising global tensions, a fresh assault emerges - tied to surging digital threats fueled by ongoing regional disputes. Noted specialists stress these events reveal a shift: large-scale interference now walks hand-in-hand with widespread information theft. While conflict zones heat up offline, their shadows stretch deep into network spaces. With Stryker rebuilding its digital infrastructure, the event highlights how sophisticated cyberattacks increasingly endanger vital sectors - healthcare and medtech among them - where uninterrupted function matters most.

China Warns Government Staff Against Using OpenClaw AI Over Data Security Concerns

 

Recently, Chinese government offices along with public sector firms began advising staff not to add OpenClaw onto official gadgets - sources close to internal discussions say. Security issues are a key reason behind these alerts. As powerful artificial intelligence spreads faster across workplaces, unease about information safety has been rising too. 

Though built on open code, OpenClaw operates with surprising independence, handling intricate jobs while needing little guidance. Because it acts straight within machines, interest surged quickly - not just among coders but also big companies and city planners. Across Chinese industrial zones and digital centers, its presence now spreads quietly yet steadily. Still, top oversight bodies along with official news outlets keep pointing to possible dangers tied to the app. 

If given deep access to operating systems, these artificial intelligence programs might expose confidential details, wipe essential documents, or handle personal records improperly - officials say. In agencies and big companies managing vast amounts of vital information, those threats carry heavier weight. A report notes workers in public sector firms received clear directions to avoid using OpenClaw, sometimes extending to private gadgets. Despite lacking an official prohibition, insiders from a federal body say personnel faced firm warnings about downloading the software over data risks. 

How widely such limits apply - across locations or agencies - is still uncertain. A careful approach reveals how Beijing juggles competing priorities. Even as officials push forward with plans to embed artificial intelligence into various sectors - spurring development through widespread tech adoption - they also work to contain threats linked to digital security and information control. Growing global tensions add pressure, sharpening concerns about who manages data, and under what conditions. Uncertainty shapes decisions more than any single policy goal. 

Even with such cautions in place, some regional projects still move forward using OpenClaw. Take, for example, health-related programs under Shenzhen’s city government - these are said to have run extensive training drills featuring the artificial intelligence model, tied into wider upgrades across digital infrastructure. Elsewhere within the same city, one administrative area turned to OpenClaw when building a specialized helper designed specifically for public sector workflows. 

Although national leaders call for restraint, some regional bodies may test limited applications tied to progress targets. Whether broader limits emerge - or monitoring simply increases - remains unclear; what happens next depends on shifting priorities at different levels. OpenClaw was originally created by Peter Steinberger, who recently joined OpenAI, as an open-source project hosted on GitHub; attention around the tool has grown since his new role became known.

When AI systems gain greater independence and embed themselves into daily operations, questions about safety will grow sharper - especially where confidential or controlled information is involved.

HPE Patches Critical Aruba AOS-CX Vulnerabilities Including Authentication Bypass Flaw

 

Hewlett Packard Enterprise (HPE) has released security updates to address multiple vulnerabilities in its Aruba AOS-CX network operating system, including a critical flaw that could allow attackers to bypass authentication and gain administrative control. 

AOS-CX comes from Aruba Networks, a part of HPE, built specifically for cloud-based networking needs. These systems run on CX-series switches found in big company campuses and data centers. Because so many rely on them, any flaws present serious concerns when discovered. 

What stands out is CVE-2026-23813 - a severe flaw in how AOS-CX switches handle authentication through their web portal. HPE confirms that attackers could abuse this weakness remotely, needing no prior access or advanced skills; control over compromised devices could follow, including forced changes to admin credentials. Though simple to trigger, the outcome carries heavy risk: exposure arises purely through network interaction, and little effort may yield full system override.

In its advisory, the firm stated it had seen no signs of real-world attacks and no public exploit tools for these flaws. Still, given the severity of the weakness, rolling out fixes quickly should be a top priority for most teams.

When updates cannot happen right away, HPE suggests ways to lower exposure. One path involves isolating management ports inside private network zones. Access rules should be tightly defined, minimizing who can connect. Unneeded web-based entry points over HTTP or HTTPS ought to be turned off completely. Trust boundaries may also tighten by using ACLs that allow only known devices to interact. 
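The intent behind such ACLs can be expressed as a simple source-address allowlist: only hosts on a management network may reach the device’s web UI. A sketch in Python illustrating the logic (not AOS-CX configuration syntax), with a hypothetical management subnet:

```python
import ipaddress

# Hypothetical management VLAN; in practice this mirrors the ACL's
# permitted source networks.
MGMT_NETWORKS = [ipaddress.ip_network("10.20.30.0/24")]

def management_access_allowed(src_ip):
    """True only when the source address falls inside a management network."""
    addr = ipaddress.ip_address(src_ip)
    return any(addr in net for net in MGMT_NETWORKS)
```

The same allow-by-default-deny pattern applies whether the filter lives on the switch, a firewall, or a jump host in front of the management plane.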

Watching system logs closely adds another layer - unexpected login efforts often show up there first. Security weaknesses fit into a wider trend of issues HPE has tackled lately. Back in July 2025, hidden login details emerged in Aruba Instant On wireless units, opening doors for unauthorized access. Before that, fixes rolled out for several problems in the StoreOnce data protection system - some let intruders skip verification steps entirely. Remote control exploits also surfaced, giving hackers potential command over affected machines. 

More recently, the Cybersecurity and Infrastructure Security Agency (CISA) flagged a high-severity vulnerability in HPE OneView as actively exploited in the wild, underscoring the growing focus of threat actors on enterprise infrastructure tools. With more than 55,000 enterprise clients worldwide, HPE points out that timely updates and stronger network defenses help reduce risks. Many of these clients appear on the Fortune 500 list, highlighting the scale of exposure when security lapses occur. Because threats evolve quickly, waiting is rarely an option. 

Instead, consistent maintenance becomes a quiet but steady shield. Even small delays can widen vulnerabilities across complex systems. When flaws appear in network management tools, specialists warn these often pose high risk - attackers might gain extensive access across company systems. Without immediate fixes, even unused weaknesses invite trouble down the line. 

Updates applied quickly, combined with multiple protective layers, help reduce potential harm before incidents occur. When companies depend heavily on unified network systems, events such as these reveal how crucial it is to maintain constant oversight while reacting quickly when new risks appear.

Spyware Disguised as Safety App Targets Israelis Amid Rising Cyber Espionage Activity

 

A fresh wave of digital spying has emerged, aiming at people within Israel through fake apps made to look like official warning tools. Instead of relying on obvious tricks, it uses the credibility of public alerts to encourage downloads of harmful programs. 

Cyber experts highlight how these disguised threats pretend to offer protection while actually stealing information. Trust in urgent notifications becomes the weak spot exploited here. What seems helpful might carry hidden risks beneath its surface. Noticed first by experts at Acronis, the operation involves fake texts mimicking alerts from Israel’s Home Front Command - an IDF division. 

Instead of genuine warnings, these messages push a counterfeit app update for civilian missile notifications. While the link seems official, it leads to spyware disguised as a protection tool; users who follow the instructions install it in place of the genuine app. The malware can harvest precise location data, texts, stored credentials, contact lists, and private files kept on the device, experts say. The group behind it has been active in cyber-intelligence circles for years.

Thought to connect with Arid Viper, the operation fits patterns seen before. Targets often include Israeli military figures, alongside people in areas like Egypt and Palestine. Instead of complex tools, they lean on social engineering to spread malicious software. Their methods persist over time, adapting without drawing attention. What stands out is the level of preparation seen in the attackers, according to Acronis. Their operations show a clear aim, targeting systems people rely on when tensions rise between nations. 

Instead of random strikes, these actions follow a pattern meant to blend in. Official-looking messages appear during crises, shaped like real alerts. Because they resemble legitimate warnings, users are more likely to respond without suspicion. Infrastructure once seen as safe now becomes a vector - simply because it's trusted at critical moments. 

A fresh report from Check Point Software Technologies reveals cyberattacks targeting surveillance cameras in Israel and neighboring areas of the Middle East. These intrusions point toward coordinated moves to collect data while possibly preparing to interfere with essential infrastructure. Cyber operations have emerged alongside rising friction after documented strikes by U.S. and Israeli forces on locations inside Iran. 

In response, several groups aligned with Tehran have claimed digital intrusions against both official Israeli bodies and corporate networks. Even so, specialists observe that such attacks have so far had little influence on the overall conflict. Yet as nations lean more heavily on hacking methods, cyber tactics are weaving tightly into global power contests. When links arrive unexpectedly, skipping the download is wise: what matters is the origin, not how trustworthy the message appears.

Official storefronts serve as safer gateways compared to random web prompts. Messages mimicking familiar brands often hide traps beneath clean designs. Jumping straight to installation bypasses crucial checks best left intact. Verified platforms filter out many hostile imitations by design. Risk shrinks when access follows established paths instead of sudden urges. 
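That advice reduces to a simple origin check before installing anything: accept only HTTPS links whose host is a known storefront. A sketch, with a hypothetical allowlist:

```python
from urllib.parse import urlparse

# Hypothetical allowlist of official distribution hosts.
OFFICIAL_STORES = {"play.google.com", "apps.apple.com"}

def from_official_store(url):
    """True only for HTTPS links whose exact host is a known storefront."""
    parts = urlparse(url)
    return parts.scheme == "https" and parts.hostname in OFFICIAL_STORES
```

Note the exact-host comparison: a lookalike such as `play.google.com.evil.example` fails the check, which is precisely the class of trick these campaigns rely on.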

When emergencies strike, cyber threats tend to rise - manipulating panic instead of logic. Pressure clouds judgment, creating openings for widespread breaches. Urgency becomes a tool, not a shield, in these moments. Digital attacks grow sharper when emotions run high. Crises rarely pause harm; they invite it.

Windows Telemetry Explained: What Diagnostic Data Microsoft Collects and Why It Matters

 

Years after Windows 10 arrived, a single aspect keeps stirring conversation - telemetry. This data gathering, labeled diagnostic info by Microsoft, pulls details from machines without manual input. Its purpose? Keeping systems stable, secure, running smoothly. Yet reactions split sharply between everyday users and those watching privacy trends. 

Early on, after Windows 10 arrived, observers questioned whether its telemetry might double as monitoring. A few writers argued it collected large amounts of user detail while transmitting data to Microsoft machines. Still, analysts inspecting how the OS handles information report minimal proof backing such suspicions. 

Beginning in 2017, scrutiny from the Dutch Data Protection Authority revealed shortcomings in how Windows presented telemetry consent choices. Although designed to gather system performance details, the setup failed to align with regional privacy expectations due to unclear user permissions. 

Instead of defending the original design, Microsoft adjusted both interface wording and backend configurations. Following these updates, oversight bodies acknowledged improvements, noting no evidence emerged suggesting private information was gathered unlawfully. Independent analysts alongside regulatory teams had previously flagged the configuration, yet after revisions, compliance concerns faded gradually. 

What runs behind the scenes in Windows includes a mix of telemetry types - mainly split into essential and extra reporting layers. Most personal computers, especially those outside corporate control, turn on the basic tier automatically; there exists no standard menu option to switch it off entirely. This baseline layer gathers only what Microsoft claims is vital for stability and core operations. 

Though hidden from typical adjustments, its presence supports ongoing performance checks across devices. Basic troubleshooting relies on specific diagnostics tied to functions like Windows Update. Information might cover simple fault summaries, setup traits of hardware, software plus driver footprints, along with records tracking how updates succeed or fail. 

As noted by Microsoft, insights drawn support better stability fixes, safety patches, app alignment, and smoother running systems. Some diagnostic details go beyond basics, capturing patterns in app use or web habits. These insights might involve deeper system errors, performance signs, or hardware traits. 

While such data helps refine functionality, access remains under user control via Windows options. Those cautious about personal information often choose to turn this off. Control sits within settings, letting choices match comfort levels. Occasionally, memory dumps taken during system failures form part of Optional diagnostic data, according to experts. 
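For administrators, this choice is commonly managed through the AllowTelemetry group-policy value (under HKLM\SOFTWARE\Policies\Microsoft\Windows\DataCollection on managed machines). A small sketch of the commonly documented value mapping; treat the exact wording per Windows edition as an assumption to verify against Microsoft’s documentation:

```python
# Commonly documented AllowTelemetry policy values for Windows 10/11.
# Edition-specific details (e.g. which editions honor level 0) vary.
TELEMETRY_LEVELS = {
    0: "Security (Enterprise/Education editions only)",
    1: "Required (Basic) diagnostic data",
    2: "Enhanced (deprecated on recent builds)",
    3: "Optional (Full) diagnostic data",
}

def describe_level(value):
    """Human-readable description of an AllowTelemetry value."""
    return TELEMETRY_LEVELS.get(value, "unknown value")
```

Organizations handling sensitive material typically pin the policy to 1 (or 0 where supported), matching the advice in this article to disable optional diagnostics.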

When a crash happens, pieces of active files might get saved inside these records. Because of this risk, certain groups managing confidential material prefer disabling the setting altogether. In 2018, Microsoft rolled out a feature named Diagnostic Data Viewer to boost openness. This tool gives people access to review what information their machine shares with the company, revealing specifics found in diagnostics and system summaries. 

One billion devices now run Windows 11 worldwide. Because of countless variations in hardware and software setups, Microsoft relies on telemetry data: it reveals issues, shapes update improvements, and supports consistent performance. While tracking user interactions might sound intrusive, the data guides fixes without exposing personal details; instead, patterns emerge that steer engineering decisions behind the scenes.

Even though some diagnostic details are essential for basic operations, those worried about personal data might choose to limit what gets sent by turning off non-essential diagnostics in device preferences. Still, full function depends on keeping certain reporting active.

Iran-Linked Handala Hackers Claim Breach of Israel’s Clalit Healthcare Network

 

A breach at Israel’s biggest health provider has been tied to an Iranian-affiliated hacking collective, which posted stolen patient records online. Claiming credit, a network calling itself Handala detailed the intrusion via public posts. Access reportedly reached Clalit Health Services’ core data stores. That institution cares for around fifty percent of the country’s residents. 

More than ten thousand people had their medical files exposed, the hackers stated. Samples of what they claim is real data now sit on public servers - names, test results, and health scans among them. Handala issued a statement saying Israel’s hospital networks were left reeling after the breach, calling defenses weak and slow and openly mocking how easily the systems gave way.

Not just an attack, but positioned as resistance - this action followed claims of long-standing control and abuse. Echoing past messages, the announcement carried familiar tones seen when digital strikes hit Israeli bodies before. 

A strange post appeared online just hours before the reveal - hinting at something unfolding within Israel’s medical system. By next morning, reports confirmed a possible leak of sensitive information. Right after hearing about it, Clalit's cyber defense units started looking into what happened. Government agencies got updates right away, since detection tools kicked in under standard procedures. 

While checks are still underway, hospital networks remain stable and running without disruption. A fresh incident highlights ongoing digital operations tied to Iran, aimed at entities and people in Israel. In recent years, outfits connected to Tehran have faced claims of seeking information, interfering with key bodies, while also trying to pull in collaborators using internet exchanges along with money offers. 

Now known for bold statements, Handala has taken credit for multiple major cyber events, experts note. While Check Point Research points out that some assertions appear inflated, a few of those declarations align with verified breaches. Unexpected overlaps between claim and evidence keep scrutiny alive. 

In December, hackers revealed they had gained access to ex-Prime Minister Naftali Bennett’s Telegram messages. Confirmation came from Bennett's team - yes, the account was reached, yet his device remained untouched. 

Later, these attackers stated they went after more individuals in politics. Among them: ex-minister Ayelet Shaked and Tzachi Braverman, a close associate of Netanyahu. Earlier, Israel's medical system dealt with digital attacks. Last October, hackers targeted Assaf Harofeh Medical Center using ransomware linked to Qilin. Patient records were at risk when the criminals asked for 70,000 dollars. Threats to expose sensitive information followed if payment failed. 

Later, officials pointed to Iran’s likely involvement in that incident too - showing how digital attacks are becoming a key part of the strain between these nations.

Age Verification Laws for Social Media Raise Privacy Concerns and Enforcement Challenges

 

Across nations, governments push tighter rules limiting young users’ access to social media. Because of worries over endless scrolling, disturbing material online, or growing emotional struggles in teens, officials demand change. Minimum entry ages - often 13 or 16 - are now common in draft laws shaping platform duties. While debates continue, one thing holds: unrestricted teenage access faces mounting resistance. 

Still, putting such policies into practice stirs up both technological hurdles and concerns about personal privacy. To make sure people are old enough, services need proof - yet proving age typically means gathering private details. Meanwhile, current regulations push firms to keep data collection minimal. That tension forms what specialists call an “age-verification trap,” where tighter control over access can weaken safeguards meant to protect individual information. 

While many age-limit rules demand that services make "reasonable efforts" to block young users, clear guidance on how to actually check someone's age is almost never included. Firms handle this gap by leaning heavily on two methods. The first is identity verification: requiring people to prove their age with official ID or online identity tools. 

Although more reliable, storing such data creates worries over privacy breaches: handling vast collections of private details increases exposure to cyber threats, and security weakens when too much sensitive material is concentrated in one place. The second method is age estimation. By watching how someone uses a device, or analyzing video selfies with face-scanning technology, systems try to estimate a user's age without asking for ID documents. 
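How a platform acts on such an estimate is itself a design decision. Below is a minimal sketch in Python with entirely hypothetical thresholds: the 0.6 confidence cutoff, the signal model, and the escalation step are assumptions for illustration, not any platform's actual logic.

```python
from dataclasses import dataclass

@dataclass
class AgeEstimate:
    years: float       # model's estimated age
    confidence: float  # 0..1, how sure the model is

def gate_access(estimate: AgeEstimate, minimum_age: int = 16) -> str:
    """Decide what to do with a probabilistic age estimate.

    Because estimates carry uncertainty, a platform typically acts on
    both the estimated age and the model's confidence rather than a
    single hard cutoff. Thresholds here are hypothetical.
    """
    if estimate.confidence < 0.6:
        # Too uncertain either way: escalate to a stronger check
        # (official ID, payment card, etc.)
        return "request_id_verification"
    if estimate.years >= minimum_age:
        return "allow"
    return "restrict"

# A confidently adult estimate passes; a low-confidence one escalates.
print(gate_access(AgeEstimate(years=34.2, confidence=0.92)))  # allow
print(gate_access(AgeEstimate(years=15.1, confidence=0.4)))   # request_id_verification
```

The design choice worth noticing is the middle branch: escalating uncertain cases to ID checks is exactly where the "age-verification trap" described above appears, since each escalation collects more personal data.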

Still, because these outcomes rest on likelihoods rather than confirmed proof, uncertainty remains part of the process. Some big tech firms already run such tools: Meta applies face-based age checks on Instagram in select regions, asking certain users to submit brief video clips if they appear underage, while TikTok examines openly shared videos to estimate how old someone might be. 

Elsewhere, Google and its platform YouTube lean on activity patterns, though when doubt remains they can ask for official identification or payment details. These steps aim to confirm ages without relying solely on self-reported information. The systems do make mistakes: though meant to protect, they occasionally misidentify adults as children, leading to sudden loss of account access. 

At times, underage individuals slip through the gaps, using borrowed IDs or setting up multiple profiles, and restrictions fail when credentials are shared. Retention is its own risk: stored face scans, ID photos, and verification logs may linger past their immediate need just to satisfy legal checks, and these files attract digital intrusions simply by existing. Every extra day they are kept increases the chance of a breach. 

The difficulty grows where identity infrastructure is weak: biometrics may step in when official systems fall short, and oversight tends to be sparse even as outside verifiers take on bigger roles. Shielding kids on the web without losing grip on private information is far from simple. As authorities roll out tighter age-confirmation rules, the tools built to comply with these laws could change how identities and personal details move through digital spaces.

LexisNexis Confirms Data Breach After Hackers Exploit Unpatched React App

 

A breach at LexisNexis Legal & Professional exposed some customer and business data, the firm confirmed. News surfaced after FulcrumSec claimed responsibility and leaked about two gigabytes of files on underground platforms. Hackers accessed parts of the company’s systems, though the breach scope was limited. The American analytics provider confirmed the incident days later, stating only a small portion of its infrastructure was affected. 

The company said an outside actor gained access to a limited number of servers. LexisNexis Legal & Professional provides legal research, regulatory information, and analytics tools to lawyers, corporations, government agencies, and universities in more than 150 countries. According to the firm, most of the accessed information came from older systems and was not considered sensitive, which reduced the potential impact.  

Internal findings showed that much of the exposed data originated from legacy systems storing information created before 2020. Records included customer names, user IDs, and business contact details. Some files contained product usage information and logs from past support tickets, including IP addresses from survey responses. However, sensitive personal identifiers such as Social Security numbers or driver’s license data were not included. Financial information, active passwords, search queries, and confidential client case data were also not part of the compromised dataset. 

The breach reportedly occurred around February 24 after attackers exploited the React2Shell vulnerability in an outdated front-end application built with React. The flaw allowed entry into cloud resources hosted on Amazon Web Services before it was addressed. 
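Incidents in this class typically come down to a dependency pinned below the patched release. As a hedged illustration of the kind of manifest audit that catches that, here is a small Python sketch; the `PATCHED` version number is hypothetical (the actual fixed release for the flaw above is not specified here), and the semver parsing is deliberately simplistic.

```python
import json

# Hypothetical patched version for illustration only.
PATCHED = (19, 0, 1)

def parse_version(spec: str) -> tuple:
    """Turn a pinned or caret-prefixed semver string like '^18.2.0'
    into a comparable tuple. Real range specifiers need a proper
    semver library; this handles only the simple cases."""
    return tuple(int(part) for part in spec.lstrip("^~=v").split(".")[:3])

def audit(package_json: str, dep: str = "react") -> str:
    """Flag a dependency pinned below the patched release."""
    deps = json.loads(package_json).get("dependencies", {})
    if dep not in deps:
        return "not present"
    if parse_version(deps[dep]) < PATCHED:
        return "vulnerable: upgrade required"
    return "ok"

manifest = '{"dependencies": {"react": "18.2.0"}}'
print(audit(manifest))  # vulnerable: upgrade required
```

In practice this check belongs in CI, so an outdated front-end build fails before it ships rather than after an attacker finds it.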

While LexisNexis described the affected systems as containing mostly obsolete data, FulcrumSec claimed the intrusion was broader. The group said it extracted about 2.04GB of structured data from the company’s cloud infrastructure, including numerous database tables, millions of records, and internal system configurations. According to the attacker, the breach exposed more than 21,000 customer accounts and information linked to over 400,000 cloud user profiles, including names, email addresses, phone numbers, and job roles. 

Some of the records reportedly belonged to individuals with .gov email addresses, including U.S. government employees, federal judges and law clerks, Department of Justice attorneys, and staff connected to the Securities and Exchange Commission. FulcrumSec also criticized the company’s cloud security setup, alleging that a single ECS task role had access to numerous stored secrets, including credentials linked to production databases. The group said it attempted to contact the company but claimed no cooperation occurred. 

LexisNexis stated that the breach has been contained and confirmed that its products and customer-facing services were not affected. The company notified law enforcement and engaged external cybersecurity experts to assist with investigation and response. Customers, both current and former, have also been informed about the incident. The company had disclosed another breach last year after a compromised corporate account exposed data belonging to roughly 364,000 customers. 

The latest case highlights how vulnerabilities in cloud applications and outdated software can expose enterprise systems even when they contain primarily legacy information.

Rocket Software Research Highlights Data Security and AI Infrastructure Gaps in Enterprise IT Modernization

 

Stress is rising among IT decision-makers as organizations accelerate technology upgrades and introduce AI into hybrid infrastructure. Data security now leads modernization concerns, with nearly 70 percent identifying it as their primary pressure point. As transformation speeds up, safeguarding digital assets becomes more complex, especially as risks expand across both legacy systems and cloud environments. 

Aligning security improvements with system upgrades remains difficult. Close to seven in ten technology leaders rank data protection as their biggest modernization hurdle. Many rely on AI-based monitoring, stricter access controls, and stronger data governance frameworks to manage risk. However, confidence in these safeguards is limited. Fewer than one-third feel highly certain about passing upcoming regulatory audits. While 78 percent believe they can detect insider threats, only about a quarter express complete confidence in doing so. 

Hybrid IT environments add further strain. Just over half of respondents report difficulty integrating cloud platforms with on-premises infrastructure. Poor data quality emerges as the biggest obstacle to managing workloads effectively across these mixed systems. Secure data movement challenges affect half of those surveyed, while 52 percent cite access control issues and 46 percent point to inconsistent governance. Rising storage costs also weigh on 45 percent, slowing modernization and increasing operational risk. 

Workforce shortages compound these challenges. Nearly 48 percent of organizations continue to depend on legacy systems for critical operations, yet only 35 percent of IT leaders believe their teams have the necessary expertise to manage them effectively. Additionally, 52 percent struggle to recruit professionals skilled in older technologies, underscoring the need for reskilling to prevent operational vulnerabilities. 

AI remains a strategic priority, particularly in areas such as fraud detection, process optimization, and customer experience. Still, infrastructure readiness lags behind ambition. Only one-quarter of leaders feel fully confident their systems can support AI workloads. Meanwhile, 66 percent identify data accessibility as the most significant factor shaping future modernization plans. 

Looking ahead, organizations are prioritizing stronger data protection, closing infrastructure gaps to support AI, and improving data availability. Progress increasingly depends on integrated systems that securely connect applications and databases across hybrid environments. The findings are based on a survey conducted with 276 IT directors and vice presidents from companies with more than 1,000 employees across the United States, the United Kingdom, France, and Germany during October 2025.

Critical better-auth Flaw Enables API Key Account Takeover

 

A flaw in the better-auth authentication library could let attackers take over user accounts without logging in. The issue affects the API keys plugin and allows unauthenticated actors to generate privileged API keys for any user by abusing weak authorization logic. Researchers warn that successful exploitation grants full authenticated access as the targeted account, potentially exposing sensitive data or enabling broader application compromise, depending on the user’s privileges. 

The better-auth library records around 300,000 weekly downloads on npm, making the issue significant for applications that rely on API keys for automation and service-to-service communication. Unlike interactive logins, API keys often bypass multi-factor authentication and can remain valid for long periods. If misused, a single key can enable scripted access, backend manipulation, or large-scale impersonation of privileged users. 

Tracked as CVE-2025-61928, the vulnerability stems from flawed logic in the createApiKey and updateApiKey handlers. These functions decide whether authentication is required by checking for an active session and the presence of a userId in the request body. When no session exists but a userId is supplied, the system incorrectly skips authentication and builds user context directly from attacker-controlled input. This bypass avoids server-side validation meant to protect sensitive fields such as permissions and rate limits. 

In practical terms, an attacker can send a single request to the API key creation endpoint with a valid userId and receive a working key tied to that account. The same weakness allows unauthorized modification of existing keys. Because exploitation requires only knowledge or guessing of user identifiers, attack complexity is low. Once obtained, the API key allows attackers to bypass MFA and operate as the victim until the key is revoked. 
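The flawed decision reduces to a few lines. Below is an illustrative Python reconstruction of the pattern described above, not better-auth's actual TypeScript source: the bug is deriving user context from the request body when no session exists, and the fix is refusing any request without a verified session.

```python
def resolve_user_broken(session, body: dict):
    """Flawed pattern: falls back to attacker-controlled input."""
    if session is not None:
        return session["user_id"]
    if "userId" in body:          # BUG: trusts the request body and
        return body["userId"]     # skips authentication entirely
    raise PermissionError("authentication required")

def resolve_user_fixed(session, body: dict):
    """Corrected pattern: user context comes only from the session."""
    if session is None:
        raise PermissionError("authentication required")
    return session["user_id"]     # any userId in the body is ignored

# An unauthenticated request naming a victim: the broken path hands
# over the victim's identity; the fixed path refuses.
print(resolve_user_broken(None, {"userId": "victim-123"}))  # victim-123
try:
    resolve_user_fixed(None, {"userId": "victim-123"})
except PermissionError as exc:
    print(exc)  # authentication required
```

The general lesson is that identity must always be derived from a server-verified credential, never from fields the client supplies alongside the request.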

A patched version of better-auth has been released to fix the authorization checks. Organizations are advised to upgrade immediately, rotate potentially exposed API keys, review logs for suspicious unauthenticated requests, and tighten key governance through least-privilege permissions, expiration policies, and monitoring. 

The incident highlights broader risks tied to third-party authentication libraries. Authorization flaws in widely adopted components can silently undermine security controls, reinforcing the need for continuous validation, disciplined credential management, and zero-trust approaches across modern, API-driven environments.

Shadowserver Finds 6,000 Exposed SmarterMail Servers Hit by Critical Flaw

 

The nonprofit cybersecurity group Shadowserver has found more than six thousand SmarterMail systems reachable online and potentially at risk from a serious authentication vulnerability. Attention is growing as hackers increasingly target outdated corporate mail setups left unprotected.  


watchTowr informed SmarterTools of the security weakness on January 8. A patch was released one week later, before an official CVE number had been assigned. Later designated CVE-2026-23760, the flaw earned a top-tier severity rating because of how deeply intruders could penetrate affected systems, making this bug especially dangerous. 

A security notice in the NIST National Vulnerability Database describes an issue in earlier releases of SmarterMail, versions before build 9511. The flaw sits in the password reset API, where access control does not function properly: instead of blocking unknown users, the force-reset-password feature accepts input without requiring proof of identity. Missing checks on both token validity and current login state create an open door. Without any prior access, threat actors can trigger resets for admin accounts using only known usernames, granting complete takeover of affected systems. 
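The class of flaw is easy to picture. Here is a hedged Python sketch of a force-reset handler with and without the missing checks; these are hypothetical handlers illustrating the pattern, not SmarterMail's actual code.

```python
# Tokens issued through the legitimate "forgot password" email flow.
VALID_TOKENS = {"a1b2c3"}

def force_reset_broken(username: str, new_password: str) -> str:
    """BUG pattern: neither a reset token nor an authenticated
    session is checked, so knowing an admin username is enough."""
    return f"password for {username} changed"

def force_reset_fixed(username: str, new_password: str, token) -> str:
    """Corrected pattern: a valid, previously issued reset token
    must accompany the request."""
    if token is None or token not in VALID_TOKENS:
        raise PermissionError("invalid or missing reset token")
    return f"password for {username} changed"

# Anonymous attacker who only knows the username "admin":
print(force_reset_broken("admin", "attacker-pw"))  # password for admin changed
try:
    force_reset_fixed("admin", "attacker-pw", None)
except PermissionError as exc:
    print(exc)  # invalid or missing reset token
```

A production version would also bind tokens to a single account, expire them quickly, and compare them in constant time, but the core fix is the same: never expose a credential-changing endpoint without proof the caller is entitled to the change.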

Attackers can take over admin accounts by abusing this weakness, gaining full access to vulnerable SmarterMail systems through remote code execution. Knowing just one administrator username is enough, according to watchTowr, making it much easier to carry out such attacks. 

Shadowserver is now tracking more than 6,000 SmarterMail servers marked as likely exposed: over 4,200 across North America and nearly 1,000 in Asia. The risk is widespread wherever patches remain unapplied, and organizations slow to update face higher chances of compromise. 

Separate scan data provided to BleepingComputer by Macnica analyst Yutaka Sejiyama put the number of vulnerable SmarterMail systems at over 8,550. Though attackers continue targeting the flaw, patching rates vary widely across networks, and this uneven pace only adds weight to ongoing worries about delayed fixes.  

On January 21, watchTowr noted it had detected active exploitation attempts. The next day, confirmation came through Huntress, a cybersecurity company spotting similar incidents. Rather than isolated cases, what they saw pointed to broad, automated attacks aimed at exposed servers. 

Early warnings prompted CISA to add CVE-2026-23760 to its Known Exploited Vulnerabilities catalog, requiring U.S. federal agencies to fix it before February 16. Because flaws like this often become entry points, security teams face rising pressure, especially when hostile groups exploit them quickly. Government systems, along with corporate networks, stand at higher risk once these weaknesses go public. 

Separately, Shadowserver observed close to 800,000 IP addresses with open Telnet signatures during incidents tied to a serious authentication loophole in GNU Inetutils' telnetd, highlighting how outdated systems still connected to the web can widen security exposure.

ShinyHunters Claims Match Group Data Breach Exposing 10 Million Records

 

A new data theft has surfaced linked to ShinyHunters, which now claims it stole more than 10 million user records from Match Group, the U.S. company behind several major swipe-based dating platforms. The group has positioned the incident as another major addition to its breach history, alleging that personal data and internal materials were taken without authorization. 

According to ShinyHunters, the stolen data relates to users of Hinge, Match.com, and OkCupid, along with hundreds of internal documents. The Register reported seeing a listing on the group’s dark web leak site stating that “over 10 million lines” of data were involved. The exposure was also linked to AppsFlyer, a marketing analytics provider, which was referenced as the likely source connected to the incident. 

Match Group confirmed it is investigating what it described as a recently identified security incident, and said some user data may have been accessed. The company stated it acted quickly to terminate the unauthorized access and is continuing its investigation with external cybersecurity experts. Match Group also said there was no indication that login credentials, financial information, or private communications were accessed, and added that it believes only a limited amount of user data was affected. 

It said notifications are being issued to impacted individuals where appropriate. However, Match Group did not disclose what categories of data were accessed, how many users were impacted, or whether any ransom demand was made or paid, leaving key details about the scope and motivation unresolved. Cybernews, which reviewed samples associated with the listing, reported that the dataset appears to include customer personal data, some employee-related information, and internal corporate documents. 

The analysis also suggested the presence of Hinge subscription details, including user IDs, transaction IDs, payment amounts, and records linked to blocked installations, along with IP addresses and location-related data. In a separate post published the same week, ShinyHunters also claimed it had stolen data from Bumble. The group uploaded what it described as 30 GB of compressed files allegedly sourced from Google Drive and Slack. The claims come shortly after researchers reported that ShinyHunters targeted around 100 organizations by abusing stolen Okta single sign-on credentials. The alleged victim list included well-known SaaS and technology firms such as Atlassian, AppLovin, Canva, Epic Games, Genesys, HubSpot, Iron Mountain, RingCentral, and ZoomInfo, among others. 

Bumble has issued a statement saying that one contractor’s account had been compromised in a phishing incident. The company said the account had limited privileges but was used for brief unauthorized access to a small portion of Bumble’s network. Bumble stated its security team detected and removed the access quickly, confirmed the incident was contained, engaged external cybersecurity experts, and notified law enforcement. Bumble also emphasized that there was no access to its member database, member accounts, the Bumble app, or member direct messages or profiles.

CISA Issues New Guidance on Managing Insider Cybersecurity Risks

 



The US Cybersecurity and Infrastructure Security Agency (CISA) has released new guidance warning that insider threats represent a major and growing risk to organizational security. The advisory was issued during the same week reports emerged about a senior agency official mishandling sensitive information, drawing renewed attention to the dangers posed by internal security lapses.

In its announcement, CISA described insider threats as risks that originate from within an organization and can arise from either malicious intent or accidental mistakes. The agency stressed that trusted individuals with legitimate system access can unintentionally cause serious harm to data security, operational stability, and public confidence.

To help organizations manage these risks, CISA published an infographic outlining how to create a structured insider threat management team. The agency recommends that these teams include professionals from multiple departments, such as human resources, legal counsel, cybersecurity teams, IT leadership, and threat analysis units. Depending on the situation, organizations may also need to work with external partners, including law enforcement or health and risk professionals.

According to CISA, these teams are responsible for overseeing insider threat programs, identifying early warning signs, and responding to potential risks before they escalate into larger incidents. The agency also pointed organizations to additional free resources, including a detailed mitigation guide, training workshops, and tools to evaluate the effectiveness of insider threat programs.

Acting CISA Director Madhu Gottumukkala emphasized that insider threats can undermine trust and disrupt critical operations, making them particularly challenging to detect and prevent.

Shortly before the guidance was released, media reports revealed that Gottumukkala had uploaded sensitive CISA contracting documents into a public version of an AI chatbot during the previous summer. According to unnamed officials, the activity triggered automated security alerts designed to prevent unauthorized data exposure from federal systems.

CISA’s Director of Public Affairs later confirmed that the chatbot was used with specific controls in place and stated that the usage was limited in duration. The agency noted that the official had received temporary authorization to access the tool and last used it in mid-July 2025.

By default, CISA blocks employee access to public AI platforms unless an exception is granted. The Department of Homeland Security, which oversees CISA, also operates an internal AI system designed to prevent sensitive government information from leaving federal networks.

Security experts caution that data shared with public AI services may be stored or processed outside the user’s control, depending on platform policies. This makes such tools particularly risky when handling government or critical infrastructure information.

The incident adds to a series of reported internal disputes and security-related controversies involving senior leadership, as well as similar lapses across other US government departments in recent years. These cases are a testament to how poor internal controls and misuse of personal or unsecured technologies can place national security and critical infrastructure at risk.

While CISA’s guidance is primarily aimed at critical infrastructure operators and regional governments, recent events suggest that insider threat management remains a challenge across all levels of government. As organizations increasingly rely on AI and interconnected digital systems, experts continue to stress that strong oversight, clear policies, and leadership accountability are essential to reducing insider-related security risks.

SK hynix Launches New AI Company as Data Center Demand Drives Growth

 

A surge in demand for data center hardware has strengthened SK hynix's market position, driven by limited availability of crucial AI chips. Though rooted in memory production, the company is now pushing further, launching a dedicated arm centered on tailored AI offerings. Rising revenues reflect investor confidence fueled by sustained component shortages, with growth shaped more by supply constraints than by any strategic pivot. 

Early next year, the business will launch a division known as “AI Company” (AI Co.), set to begin operations in February. This offshoot aims to play a central role within the AI data center landscape, positioning itself alongside major contributors. As demand shifts toward bundled options, clients prefer complete packages - ones blending infrastructure, programs, and support - over isolated gear. According to SK hynix, such changes open doors previously unexplored through traditional component sales alone. 

Though details remain scarce, AI Co. plans, according to statements given to The Register, to deliver industry-specific AI tools backed by dedicated support for data center infrastructure. Initial attention will go to software meant to refine how AI workloads run on hardware; over time, investment may stretch into broader areas of the computing-center business. Alongside funding external ventures and novel technology, reports indicate that turning prototypes into market-ready offerings may form a core piece of the evolving strategy.  

SK hynix is setting aside about $10 billion for the new venture, and next month should bring news of an interim leadership group and governing committee. The California-based SSD unit Solidigm will be reorganized rather than left intact: the existing Solidigm entity becomes AI Co., while SSD production moves into a newly created company named Solidigm Inc.  

The AI server industry, meanwhile, is leaning toward tailored ASICs instead of generic chips. According to Counterpoint Research, ASIC shipments for these systems could triple by 2027 and exceed fifteen million units annually by 2028, overtaking the current leaders, data center GPUs, in volume shipped. While initial prices for ASICs can run high, their running costs tend to stay low compared with premium graphics processors, and inference workloads commonly drive demand for such efficiency-focused designs. Broadcom, expected to hold roughly six of every ten units delivered in 2027, stands positioned near the front. 

A wider shortage of memory chips keeps lifting SK hynix forward. Demand now clearly exceeds available stock, according to IDC experts, because manufacturers are directing more output into server and graphics processing units instead of phones or laptops. As a result, prices throughout the sector have climbed - this shift directly boosting the firm's earnings. Revenue for 2025 reached ₩97.14 trillion ($67.9 billion), up 47%. During just the last quarter, income surged 66% compared to the same period the previous year, hitting ₩32.8 trillion ($22.9 billion). 

Suppliers such as ASML are seeing gains too, thanks to rising demand in semiconductor production. Known mainly for photolithography equipment, the company reported €9.7 billion (roughly $11.6 billion) in quarterly revenue, and forecasts suggest a sharp rise in orders for its high-end EUV tools during the current year. Despite broader market shifts, performance remains strong across key segments. 

Still, experts point out that a lack of memory chips might hurt buyers, as devices like computers and phones could become more expensive. Predictions indicate computer deliveries might drop during the current year because supplies are tight and expenses are climbing.

Ledger Customer Data Exposed After Global-e Payment Processor Cloud Incident

 

A fresh leak of customer details has emerged, linked not to Ledger’s systems but to Global-e, an outside firm handling payments for Ledger.com. News broke when affected users received an alert email from Global-e; the message was later posted online by ZachXBT, a pseudonymous blockchain investigator, on the platform X. 

The breach exposed some customer records belonging to Ledger that were hosted in Global-e’s online storage system. The compromised data included personal details such as names and email addresses, one report confirmed. The number of people impacted remains unclear, and Global-e has not shared specifics about when the intrusion took place.  

Unusual activity triggered alerts at Global-e, prompting immediate steps to secure systems while an investigation began. The probe, joined later by outside experts examining how the breach unfolded, verified that unauthorized entry had occurred and assessed the potential data exposure. Findings showed that certain personal details, names and contact records among them, were viewed without permission: limited but sensitive information. 

Ledger confirmed details of the incident in a statement provided to CoinDesk. The issue originated not in Ledger's infrastructure but inside Global-e’s operational environment. Because Global-e functions as the Merchant of Record for certain transactions, it holds responsibility for managing the related personal data, which explains why Global-e sent alerts directly to impacted individuals. The exposed information covers records tied to purchases made on Ledger.com when buyers used Global-e’s payment handling system. 

While limited to specific order-related fields, the access was unauthorized and stemmed from weaknesses at Global-e. Though separate entities, the two companies are linked by their checkout integration and the way transactional information flows through it; the customers involved completed orders between defined dates under these service conditions. Security updates followed the discovery, coordinated across both organizations, and notification timing depended on completion of a forensic review by third-party experts rather than premature disclosure before full analysis. 

Ledger stressed that its own infrastructure, platform, hardware, and software were untouched by the incident, and that security around those systems remains intact. Moreover, since users keep direct control of their wallets, third parties like Global-e cannot reach seed phrases or asset details; access to such private keys never existed for external entities. Payment records likewise stayed outside the scope of the leak. 

Few details emerged at first, yet Ledger confirmed it is working alongside Global-e to deliver clear information to those involved. The vulnerable setup is used by several retailers, so the problem points beyond a single company, and the impact spread wider than expected across shared infrastructure. 

The revelation follows earlier security problems connected to Ledger. In 2020, a flaw at Shopify, the online store platform it used, led to a leak affecting 270,000 customers’ details; in 2023, another incident caused financial damage of close to half a million dollars and touched multiple DeFi platforms. Though different in both scale and source, the newest issue highlights how reliance on outside vendors can still pose serious threats when handling purchases and private user information.  

Still, while Ledger’s online platforms show no signs of a live breach on their end and nothing points to internal failures, customers are being reminded to stay vigilant regardless.

Russia-Linked Lynx Gang Claims Ransomware Attack on CSA Tax & Advisory

 

A breach has surfaced in Haverhill, with CSA Tax & Advisory, a name among local finance offices, at its center. Information about clients, personal and business alike, may have slipped out. A ransomware crew tied to Russia, calling itself Lynx, has claimed responsibility, listing the firm on its leak site and saying the data was pulled quietly before anyone noticed. The office itself has stayed silent, with no word given and no statement released, so what actually happened remains unclear, floating between accusation and proof.  

Even though nothing has been confirmed by officials, Lynx has published what it calls sample data from the breach. Reviewing the files, experts at Cybernews found personal details including full names, Social Security numbers, home addresses, billing documents, private company messages, healthcare contracts for partners, and detailed income tax filings. What stands out are IRS e-signature authorization forms found inside the collection; these matter greatly because they validate tax returns, raising concerns given how crucial they are in filing processes.

If the claims prove true, a single slip here could change lives for the worse. With Social Security numbers sitting alongside home addresses and past tax filings, the danger lingers far beyond the first discovery: fraudsters could open fake lines of credit, pull off loan scams, file false returns, or slip past identity checks at banks and public offices. Because those ID numbers last a lifetime, the harm could follow people decade after decade. 

Paperwork tied to taxes brings extra danger. Someone might take an IRS e-filing form and change real submissions, send fake ones, or grab refunds before the rightful person notices. Fixing these problems usually means long fights with government offices, draining both money and peace of mind. If details about a spouse’s health plan leak, scammers could misuse that for false claims or pressure someone by threatening to reveal private medical facts. 

The breach could hit the firm itself harder than expected. Leaked internal correspondence could expose how decisions get made, who trusts whom, and the steps used to approve key tasks, details that open doors for targeted scams later on. When private information like Social Security numbers or tax records shows up outside secure systems, U.S. rules usually demand that public alerts go out fast. Regulatory scrutiny tends to follow, including audits from tax authorities, pressure from state agencies, and even attention at the federal level. Legal fights may come too, alongside claims of failed duties, especially if evidence confirms something truly went wrong here. Trust, once broken, rarely bounces back quickly.

EEOC Hit by Security Breach Due to Contractor's Unauthorized Access


The Equal Employment Opportunity Commission (EEOC) was hit by an internal data security breach last year. The incident involved a contractor's employees exploiting sensitive data in the agency's systems.

About the breach

The breach occurred in the EEOC's Public Portal system, where unauthorized access to agency data may have exposed personal information in records submitted to the agency by the public. “Staff employed by the contractor, who had privileged access to EEOC systems, were able to handle data in an unauthorized (UA) and prohibited manner in early 2025,” reads the EEOC email notification sent by its data security office.

The email said the review suggests personally identifiable information (PII) may have been leaked, varying by individual. The exposed information may include names, contact details, and other data. The review is still ongoing while the EEOC works with law enforcement.

The EEOC has asked individuals to review their financial accounts for any malicious activity and has also asked portal users to reset their passwords.

Contracting data indicates that EEOC had a contract with Opexus, a company that provides case management software solutions to the federal government.

Prevention measures

An Opexus spokesperson confirmed this, saying the EEOC and Opexus “took immediate action when we learned of this activity, and we continue to support investigative and law enforcement efforts into these individuals’ conduct, which is under active prosecution in the Federal Court of the Eastern District of Virginia.”

Speaking about the role of the contractor's employees in the breach, the spokesperson added: “While the individuals responsible met applicable seven-year background check requirements consistent with prevailing government and industry standards at the time of hire, this incident made clear that personnel screening alone is not sufficient.”

The EEOC is central to the second Trump administration's efforts to prevent claimed “illegal discrimination” driven by diversity, equity, and inclusion programs, which over the past year have been scrutinized and dismantled at almost every level of the federal government.

The developments have affected large private companies across the nation. In an X post this month, EEOC chairwoman Andrea Lucas asked white men whether they had experienced racial or sexual discrimination at work and urged them to report their experiences to the agency “as soon as possible.”