
All the recent news you need to know

Spotify Flags Unauthorised Access to Music Catalogue

 

Spotify reported that a third party had scraped parts of its music catalogue after a pirate activist group claimed it had released metadata and audio files linked to hundreds of millions of tracks. 

The streaming company said an investigation found that unauthorised users accessed public metadata and used illicit methods to bypass digital rights management controls to obtain some audio files. 

Spotify said it had disabled the accounts involved and introduced additional safeguards. The claims were made by a group calling itself Anna’s Archive, which runs an open source search engine known for indexing pirated books and academic texts. 

In a blog post, the group said it had backed up Spotify’s music catalogue and released metadata covering 256 million tracks and 86 million audio files. 

The group said the data spans music uploaded to Spotify between 2007 and 2025 and represents about 99.6 percent of listens on the platform. Spotify, which hosts more than 100 million tracks and has over 700 million users globally, said the material does not represent its full inventory. 

The company added that it has no indication that private user data was compromised, saying the only user-related information involved was public playlists. The group said the files total just under 300 terabytes and would be distributed via peer-to-peer file-sharing networks. 

It described the release as a preservation effort aimed at safeguarding cultural material. Spotify said it does not believe the audio files have been widely released so far and said it is actively monitoring the situation. 

The company said it is working with industry partners to protect artists and rights holders. Industry observers said the apparent scraping could raise concerns beyond piracy. 

Yoav Zimmerman, chief executive of intellectual property monitoring firm Third Chair, said the data could be attractive to artificial intelligence companies seeking to train music models. Others echoed those concerns, warning that training AI systems on copyrighted material without permission remains common despite legal risks. 

Campaigners have called on governments to require AI developers to disclose training data sources. Copyright disputes between artists and technology companies have intensified as generative AI tools expand. In the UK, artists have criticised proposals that could allow AI firms to use copyrighted material unless rights holders explicitly opt out. 

The government has said it will publish updated policy proposals on AI and copyright next year. Spotify said it remains committed to protecting creators and opposing piracy and that it has strengthened defences against similar attacks.

Eurostar’s AI Chatbot Exposed to Security Flaws, Experts Warn of Growing Cyber Risks

 

Eurostar’s newly launched AI-driven customer support chatbot has come under scrutiny after cybersecurity specialists identified several vulnerabilities that could have exposed the system to serious risks. 

Security researchers from Pen Test Partners found that the chatbot only validated the latest message in a conversation, leaving earlier messages open to manipulation. By altering these older messages, attackers could potentially insert malicious prompts designed to extract system details or, in certain scenarios, attempt to access sensitive information.
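In principle, the mitigation the researchers point to is for the server to treat the entire conversation history as untrusted input and re-verify every message on each request, rather than only the newest one. The sketch below illustrates one way to do that with server-issued message signatures; the message schema, field names, and HMAC scheme are assumptions made for illustration and are not based on Eurostar's actual implementation.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Illustrative message shape; the real chatbot's schema is not public.
interface ChatMessage {
  id: string;
  role: "user" | "assistant";
  content: string;
  signature: string; // HMAC issued by the server when the message was first stored
}

const SERVER_SECRET = process.env.CHAT_HMAC_SECRET ?? "dev-only-secret";

function sign(msg: Omit<ChatMessage, "signature">): string {
  return createHmac("sha256", SERVER_SECRET)
    .update(`${msg.id}:${msg.role}:${msg.content}`)
    .digest("hex");
}

// Verify every message in the submitted history, not just the latest one.
// A client that rewrites an earlier message to smuggle in a malicious prompt
// fails this check before the history ever reaches the model.
function verifyHistory(history: ChatMessage[]): boolean {
  return history.every((msg) => {
    const expected = Buffer.from(sign(msg), "hex");
    const actual = Buffer.from(msg.signature, "hex");
    return expected.length === actual.length && timingSafeEqual(expected, actual);
  });
}
```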

At the time the flaws were uncovered, the risks were limited because Eurostar had not integrated its customer data systems with the chatbot. As a result, there was no immediate threat of customer data being leaked.

The researchers also highlighted additional security gaps, including weak verification of conversation and message IDs, as well as an HTML injection vulnerability that could allow JavaScript to run directly within the chat interface. 
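The HTML injection finding is the familiar consequence of inserting chatbot output into the page as raw markup. A minimal sketch of the standard mitigation appears below; the helper names are hypothetical, and the snippet assumes a plain browser DOM rather than whatever framework Eurostar's chat widget actually uses.

```typescript
// Hypothetical rendering helpers, for illustration only.

// Escape HTML metacharacters so any tags in a message render as inert text
// when the message must be concatenated into an HTML string.
function escapeHtml(text: string): string {
  return text
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// Safer still: assign via textContent, which never parses markup, so an
// injected <script> or <img onerror=...> payload cannot execute.
function renderMessage(container: HTMLElement, message: string): void {
  const bubble = document.createElement("div");
  bubble.className = "chat-bubble";
  bubble.textContent = message;
  container.appendChild(bubble);
}
```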

Pen Test Partners stated they were likely the first to identify these issues, clarifying: “No attempt was made to access other users’ conversations or personal data”. They cautioned, however, that “the same design weaknesses could become far more serious as chatbot functionality expands”.

Eurostar reiterated that customer information remained secure, telling City AM: “The chatbot did not have access to other systems and more importantly no sensitive customer data was at risk. All data is protected by a customer login.”

The incident highlights a broader challenge facing organizations worldwide. As companies rapidly adopt AI-powered tools, expanding cloud-based systems can unintentionally increase attack surfaces, making robust security measures more critical than ever.


New US Proposal Allows Users to Sue AI Companies Over Unauthorised Data Use


US AI developers would be subject to data privacy obligations enforceable in federal court under a wide-ranging legislative proposal unveiled recently by US Senator Marsha Blackburn, R-Tenn. 

About the proposal

Among other provisions, the proposal would create a federal right for users to sue companies that misuse their personal data for AI model training without proper consent. It provides for statutory and punitive damages, attorney fees, and injunctions. 

Blackburn is planning to officially introduce the bill this year to codify President Donald Trump’s push for “one federal rule book” for AI, according to the press release. 

Why the need for AI regulation 

The legislative framework comes on the heels of Trump’s signing of an executive order aimed at blocking “onerous” AI laws at the state level and promoting a national policy framework for the technology.  

To ensure a single, minimally burdensome national standard rather than fifty inconsistent state ones, the directive requires the administration to collaborate with Congress. 

Michael Kratsios, the president's science and technology adviser, and David Sacks, the White House special adviser for AI and cryptocurrency, were instructed by the president to jointly propose federal AI legislation that would supersede any state laws that conflict with administration policy. 

Blackburn stated in the Friday release that rather than advocating for AI amnesty, President Trump correctly urged Congress to enact federal standards and protections to address the patchwork of state laws that have impeded AI advancement.

Key highlights of the proposal:

  • Mandate that regulations defining "minimum reasonable" AI protections be created by the Federal Trade Commission. 
  • Give the U.S. attorney general, state attorneys general, and private parties the authority to sue AI system creators for damages resulting from "unreasonably dangerous or defective product claims."
  • Mandate that sizable, state-of-the-art AI developers put procedures in place to control and reduce "catastrophic" risks associated with their systems and provide reports to the Department of Homeland Security on a regular basis. 
  • Hold platforms accountable for hosting an unauthorized digital replica of a person if they have actual knowledge that the replica was not authorized by the person portrayed.
  • Require quarterly reporting to the Department of Labor of AI-related job effects, such as job displacement and layoffs.

The proposal would preempt state laws regulating the management of catastrophic AI risks. The legislation would also largely preempt state laws on digital replicas in order to establish a single national standard for AI. 

The proposal would not preempt “any generally applicable law, including a body of common law or a scheme of sectoral governance that may address” AI. The bill would take effect 180 days after enactment. 

Romanian Water Authority Hit by BitLocker Ransomware, 1,000 Systems Disrupted

 

Romanian Waters, the country's national water management authority, was targeted by a significant ransomware attack over the weekend, affecting approximately 1,000 computer systems across its headquarters and 10 of its 11 regional offices. The breach disrupted servers running geographic information systems, databases, email, web services, Windows workstations, and domain name servers, but crucially, the operational technology (OT) systems controlling the actual water infrastructure were not impacted.

According to the National Cyber Security Directorate (DNSC), the attackers leveraged the built-in Windows BitLocker security feature to encrypt files on compromised systems and left a ransom note demanding contact within seven days. Despite the widespread disruption to IT infrastructure, the DNSC confirmed that the operation of hydrotechnical assets—such as dams and water treatment plants—remains unaffected, as these are managed through dispatch centers using voice communications and local personnel.

Investigators from multiple Romanian security agencies, including the Romanian Intelligence Service's National Cyberint Center, are actively working to identify the attack vector and contain the incident's fallout. Authorities have not yet attributed the attack to any specific ransomware group or state-backed actor. 

The DNSC also noted that the national cybersecurity system for critical IT infrastructure did not previously protect the water authority's systems, but efforts are underway to integrate them into broader protective measures. The incident follows recent warnings from international agencies, including the FBI, NSA, and CISA, about increased targeting of critical infrastructure by pro-Russia hacktivist groups such as Z-Pentest, Sector16, NoName, and CARR. 

This attack marks another major ransomware event in Romania, following previous breaches at Electrica Group and over 100 hospitals due to similar threats in recent years. Romanian authorities continue to stress that water supply and flood protection activities remain fully operational, and no disruption to public services has occurred as a result of the cyberattack.

University of Phoenix Data Breach Exposes Records of Nearly 3.5 Million Individuals

 

The University of Phoenix has confirmed a major cybersecurity incident that exposed the financial and personal information of nearly 3.5 million current and former students, employees, faculty members, and suppliers. The breach is believed to be linked to the Clop ransomware group, a cybercriminal organization known for large-scale data theft and extortion. The incident adds to a growing number of significant cyberattacks reported in 2025. 

Clop is known for exploiting weaknesses in widely used enterprise software and, rather than locking systems, stealing sensitive data and threatening to publish it unless victims pay a ransom. In this case, attackers took advantage of a previously unknown vulnerability in Oracle Corporation’s E-Business Suite software, which allowed them to access internal systems. 

The breach was discovered on November 21 after the University of Phoenix appeared on Clop’s dark web leak site. Further investigation revealed that unauthorized access may have occurred as early as August 2025. The attackers used the Oracle E-Business Suite flaw to move through university systems and reach databases containing highly sensitive financial and personal records.  

The vulnerability used in the attack became publicly known in November, after reports showed Clop-linked actors had been exploiting it since at least September. During that time, organizations began receiving extortion emails claiming financial and operational data had been stolen from Oracle EBS environments. This closely mirrors the methods used in the University of Phoenix breach. 

The stolen data includes names, contact details, dates of birth, Social Security numbers, and bank account and routing numbers. While the university has not formally named Clop as the attacker, cybersecurity experts believe the group is responsible due to its public claims and known use of Oracle EBS vulnerabilities. 

Paul Bischoff, a consumer privacy advocate at Comparitech, said the incident reflects a broader trend in which Clop has aggressively targeted flaws in enterprise software throughout the year. In response, the University of Phoenix has begun notifying affected individuals and is offering 12 months of free identity protection services, including credit monitoring, dark web surveillance, and up to $1 million in fraud reimbursement. 

The breach ranks among the largest cyber incidents of 2025. Rebecca Moody, head of data research at Comparitech, said it highlights the continued risks organizations face from third-party software vulnerabilities. Security experts say the incident underscores the need for timely patching, proactive monitoring, and stronger defenses, especially in educational institutions that handle large volumes of sensitive data.

San Francisco Power Outage Brings Waymo Robotaxi Services to a Halt

 


A large power outage across San Francisco during the weekend disrupted daily life in the city and temporarily halted the operations of Waymo’s self-driving taxi service. The outage occurred on Saturday afternoon after a fire caused serious damage at a local electrical substation, according to utility provider Pacific Gas and Electric Company. As a result, electricity was cut off for more than 100,000 customers across multiple neighborhoods.

The loss of power affected more than homes and businesses. Several traffic signals across the city stopped functioning, creating confusion and congestion on major roads. During this period, multiple Waymo robotaxis were seen stopping in the middle of streets and intersections. Videos shared online showed the autonomous vehicles remaining stationary with their hazard lights turned on, while human drivers attempted to maneuver around them, leading to traffic bottlenecks in some areas.

Waymo confirmed that it temporarily paused all robotaxi services in the Bay Area as the outage unfolded. The company explained that its autonomous driving system is designed to treat non-working traffic lights as four-way stops, a standard safety approach used by human drivers as well. However, officials said the unusually widespread nature of the outage made conditions more complex than usual. In some cases, Waymo vehicles waited longer than expected at intersections to verify traffic conditions, which contributed to delays during peak congestion.
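For readers unfamiliar with the fallback being described, the behaviour amounts to demoting a dark or unreadable signal to an all-way stop. The toy sketch below illustrates that decision rule only; the types and conditions are invented for this article and have no connection to Waymo's actual planning software.

```typescript
// Toy model of the "dark signal becomes an all-way stop" fallback; purely illustrative.
type SignalState = "green" | "yellow" | "red" | "dark" | "unknown";

interface IntersectionObservation {
  signal: SignalState;
  intersectionClear: boolean; // no conflicting vehicles or pedestrians detected
  isOurTurn: boolean;         // we stopped first among the vehicles now waiting
}

function intersectionAction(obs: IntersectionObservation): "proceed" | "stop" | "wait" {
  if (obs.signal === "green") return "proceed";
  if (obs.signal === "yellow" || obs.signal === "red") return "stop";

  // Dark or unreadable signal: treat the junction as an all-way stop.
  // Come to a full stop first, then proceed only when it is our turn and the way is clear.
  if (obs.isOurTurn && obs.intersectionClear) return "proceed";
  return "wait";
}
```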

City authorities took emergency measures to manage the situation. Police officers, firefighters, and other personnel were deployed to direct traffic manually at critical intersections. Public transportation services were also affected, with some commuter train lines and stations experiencing temporary shutdowns due to the power failure.

Waymo stated that it remained in contact with city officials throughout the disruption and prioritized safety during the incident. The company said most rides that were already in progress were completed successfully, while other vehicles were either safely pulled over or returned to depots once service was suspended.

By Sunday afternoon, PG&E reported that power had been restored to the majority of affected customers, although thousands were still waiting for electricity to return. The utility provider said full restoration was expected by Monday.

Following the restoration of power, Waymo confirmed that its ride-hailing services in San Francisco had resumed. The company also indicated that it would review the incident to improve how its autonomous systems respond during large-scale infrastructure failures.

Waymo operates self-driving taxi services in several U.S. cities, including Los Angeles, Phoenix, and Austin, and plans further expansion. The San Francisco outage has renewed discussions about how autonomous vehicles should adapt during emergencies, particularly when critical urban infrastructure fails.
