Chinese Hacking Group Breaches Email Systems Used by Key U.S. House Committees: Report

A cyber espionage group believed to be based in China has reportedly gained unauthorized access to email accounts used by staff working for influential committees in the U.S. House of Representatives, according to a report by the Financial Times published on Wednesday. The information was shared by sources familiar with the investigation.

The group, known as Salt Typhoon, is said to have infiltrated email systems used by personnel associated with the House China committee, along with aides serving on committees overseeing foreign affairs, intelligence, and armed services. The report did not specify the identities of the staff members affected.

Reuters said it was unable to independently confirm the details of the report. Responding to the allegations, Chinese Embassy spokesperson Liu Pengyu criticized what he described as “unfounded speculation and accusations.” The Federal Bureau of Investigation declined to comment, while the White House and the offices of the four reportedly targeted committees did not immediately respond to media inquiries.

According to one source cited by the Financial Times, it remains uncertain whether the attackers managed to access the personal email accounts of lawmakers themselves. The suspected intrusions were reportedly discovered in December.

Members of Congress and their staff, particularly those involved in overseeing the U.S. military and intelligence apparatus, have historically been frequent targets of cyber surveillance. Over the years, multiple incidents involving hacking or attempted breaches of congressional systems have been reported.

In November, the Senate Sergeant at Arms alerted several congressional offices to a “cyber incident” in which hackers may have accessed communications between the nonpartisan Congressional Budget Office and certain Senate offices. Separately, a 2023 report by the Washington Post revealed that two senior U.S. lawmakers were targeted in a hacking campaign linked to Vietnam.

Salt Typhoon has been a persistent concern for the U.S. intelligence community. The group, which U.S. officials allege is connected to Chinese intelligence services, has been accused of collecting large volumes of data from Americans’ telephone communications and intercepting conversations, including those involving senior U.S. politicians and government officials.

China has repeatedly rejected accusations of involvement in such cyber spying activities. Early last year, the United States imposed sanctions on alleged hacker Yin Kecheng and the cybersecurity firm Sichuan Juxinhe Network Technology, accusing both of playing a role in Salt Typhoon’s operations.

Inside the Hidden Market Where Your ChatGPT and Gemini Chats Are Sold for Profit

Millions of users may have unknowingly exposed their most private conversations with AI tools after cybersecurity researchers uncovered a network of browser extensions quietly harvesting and selling chat data.

Here’s a reminder many people forget: an AI assistant is not your friend, not a financial expert, and definitely not a doctor or therapist. It’s simply someone else’s computer, running in a data center and consuming energy and water. What you share with it matters.

That warning has taken on new urgency after cybersecurity firm Koi uncovered a group of Google Chrome extensions that were quietly collecting user conversations with AI tools and selling that data to third parties. According to Koi, “Medical questions, financial details, proprietary code, personal dilemmas,” were being captured — “all of it, sold for ‘marketing analytics purposes.’”

This issue goes far beyond just ChatGPT or Google Gemini. Koi says the extensions indiscriminately target multiple AI platforms, including “Claude, Microsoft Copilot, Perplexity, DeepSeek, Grok (xAI) and Meta AI.” In other words, using any browser-based AI assistant could expose sensitive conversations if these extensions are installed.

The mechanism is built directly into the extensions. Koi explains that “for each platform, the extension includes a dedicated ‘executor’ script designed to intercept and capture conversations.” This data harvesting is enabled by default through hardcoded settings, with no option for users to turn it off. As Koi warns, “There is no user-facing toggle to disable this. The only way to stop the data collection is to uninstall the extension entirely.”

Once installed, the extensions monitor browser activity. When a user visits a supported AI platform, the extension injects a specific script — such as chatgpt.js, claude.js, or gemini.js — into the page. The result is total visibility into AI usage. As Koi puts it, this includes “Every prompt you send to the AI. Every response you receive. Conversation identifiers and timestamps. Session metadata. The specific AI platform and model used.”
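
Koi has not published the extensions’ source, but the routing it describes is easy to picture: the extension watches which site a tab is on and injects the matching platform script. The short TypeScript sketch below is a hypothetical illustration of that pattern, not the real code. The host-to-script mapping and the commented chrome.scripting call are assumptions based on how Manifest V3 extensions normally inject scripts; the chatgpt.js, claude.js, and gemini.js file names mirror those cited in the report, while the rest are invented for the example.

// Hypothetical sketch of the host-to-"executor" routing Koi describes.
// This is NOT the extensions' real code; names here are illustrative only.
const executors: Record<string, string> = {
  "chatgpt.com": "chatgpt.js",
  "claude.ai": "claude.js",
  "gemini.google.com": "gemini.js",
  "copilot.microsoft.com": "copilot.js",
  "www.perplexity.ai": "perplexity.js",
};

// Pick the platform-specific capture script for a given page URL.
function pickExecutor(pageUrl: string): string | undefined {
  return executors[new URL(pageUrl).hostname];
}

// A real Manifest V3 extension would wire this to tab events, e.g.:
//   chrome.tabs.onUpdated.addListener((tabId, _info, tab) => {
//     const script = tab.url && pickExecutor(tab.url);
//     if (script) chrome.scripting.executeScript({ target: { tabId }, files: [script] });
//   });
// Once injected into the page, the script can read every prompt and reply in
// the DOM, which is why nothing short of uninstalling the extension stops it.

console.log(pickExecutor("https://chatgpt.com/c/some-conversation")); // "chatgpt.js"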

Alarmingly, this behavior was not part of the extension’s original design. It was introduced later through updates, while the privacy policy remained vague and misleading. Although the tool is marketed as a privacy-focused product, Koi says it does the opposite. The policy admits: “We share the Web Browsing Data with our affiliated company,” described as a data broker “that creates insights which are commercially used and shared.”

The main extension involved is Urban VPN Proxy, which alone has around six million users. After identifying its behavior, Koi searched for similar code and found it reused across multiple products from the same publisher, spanning both Chrome and Microsoft Edge.

Affected Chrome Web Store extensions include:
  • Urban VPN Proxy – 6,000,000 users
  • 1ClickVPN Proxy – 600,000 users
  • Urban Browser Guard – 40,000 users
  • Urban Ad Blocker – 10,000 users
On Microsoft Edge Add-ons, the list includes:
  • Urban VPN Proxy – 1,323,622 users
  • 1ClickVPN Proxy – 36,459 users
  • Urban Browser Guard – 12,624 users
  • Urban Ad Blocker – 6,476 users
Despite this activity, most of these extensions carry “Featured” badges from Google and Microsoft. These labels suggest that the tools have been reviewed and meet quality standards — a signal many users trust when deciding what to install.

Koi and other experts argue that this highlights a deeper problem with extension privacy disclosures. While Urban VPN does technically mention some of this data collection, it’s easy to miss. During setup, users are told the extension processes “ChatAI communication” along with “pages you visit” and “security signals,” supposedly “to provide these protections.”

Digging deeper, the privacy policy spells it out more clearly: “AI Inputs and Outputs. As part of the Browsing Data, we will collect the prompts and outputs queried by the End-User or generated by the AI chat provider, as applicable.” It also states plainly: “We also disclose the AI prompts for marketing analytics purposes.”

The extensions, Koi warns, “remained live for months while harvesting some of the most personal data users generate online.” The advice is blunt: “if you have any of these extensions installed, uninstall them now. Assume any AI conversations you've had since July 2025 have been captured and shared with third parties.”

U.S. Startup Launches Mobile Service That Requires No Personal Identification

A newly launched U.S. mobile carrier is challenging long-standing telecom practices by offering phone service without requiring customers to submit personal identification. The company, Phreeli, presents itself as a privacy-focused alternative in an industry known for extensive data collection.

Phreeli officially launched in early December and describes its service as being built with privacy at its core. Unlike traditional telecom providers that ask for names, residential addresses, birth dates, and other sensitive information, Phreeli limits its requirements to a ZIP code, a chosen username, and a payment method. According to the company, no customer profiles are created or sold, and user data is not shared for advertising or marketing purposes.

Customers can pay using standard payment cards, or opt for cryptocurrency if they wish to reduce traceable financial links. The service operates entirely on a prepaid basis, with no contracts involved. Monthly plans range from lower-cost options for light usage to higher-priced tiers for customers who require more mobile data. The absence of contracts aligns with the company’s approach, as formal agreements typically require verified personal identities.

Rather than building its own cellular infrastructure, Phreeli operates as a Mobile Virtual Network Operator. This means it provides service by leasing network access from an established carrier, in this case T-Mobile. This model allows Phreeli to offer nationwide coverage without owning physical towers or equipment.

Addressing legal concerns, the company states that U.S. law does not require mobile carriers to collect customer names in order to provide service. To manage billing while preserving anonymity, Phreeli says it uses a system that separates payment information from communication data. This setup relies on cryptographic verification to confirm that accounts are active, without linking call records or data usage to identifiable individuals.
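
Phreeli has not published its design, so the following is only a rough sketch of how such a separation could work in principle: the billing side records that a payment cleared and issues an opaque random token, while the service side checks only whether a presented token is active, never seeing the payment details. Everything here (names, the token scheme, the hashing) is an assumption for illustration; the company’s actual cryptographic verification may look quite different, and true unlinkability would need something like blind signatures.

// Illustrative sketch only: NOT Phreeli's actual system. The point is the
// structural split: billing sees who paid but not usage; the network side
// sees usage keyed to a token, not to a name or payment instrument.
import { randomBytes, createHash } from "node:crypto";

const activeTokenHashes = new Set<string>(); // shared between the two sides

// Billing side: runs when a card or crypto payment clears.
function recordPayment(): string {
  const token = randomBytes(32).toString("hex"); // opaque account token
  activeTokenHashes.add(createHash("sha256").update(token).digest("hex"));
  return token; // handed to the customer, not stored next to payment data
}

// Service side: call records are keyed to the token hash, never to a person.
function isAccountActive(token: string): boolean {
  return activeTokenHashes.has(createHash("sha256").update(token).digest("hex"));
}

const token = recordPayment();
console.log(isAccountActive(token));    // true
console.log(isAccountActive("forged")); // false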

The company’s privacy policy notes that information will only be shared when necessary to operate the service or when legally compelled. By limiting the amount of data collected from the start, Phreeli argues that there is little information available even in the event of legal requests.

Phreeli was founded by Nicholas Merrill, who previously operated an internet service provider and became involved in a prolonged legal dispute after challenging a government demand for user information. That experience reportedly influenced the company’s data-minimization philosophy.

While services that prioritize anonymity are often associated with misuse, Phreeli states that it actively monitors for abusive behavior. Accounts involved in robocalling or scams may face restrictions or suspension.

As concerns about digital surveillance and commercial data harvesting continue to grow, Phreeli’s launch sets the stage for a broader discussion about privacy in everyday communication. Whether this model gains mainstream adoption remains uncertain, but it introduces a notable shift in how mobile services can be structured in the United States.



Landfall Spyware Exploited a Samsung Image Flaw to Secretly Target Users For Nearly a Year




Security specialists at Palo Alto Networks’ Unit 42 have uncovered a complex spyware tool named Landfall that silently infiltrated certain Samsung Galaxy phones for close to a year. The operation relied on a serious flaw in Samsung’s Android image-processing system, which allowed the device to be compromised without the user tapping or opening anything on their screen.

Unit 42 traces the campaign back to July 2024. The underlying bug was later assigned CVE-2025-21042, and Samsung addressed it in a security update released in April 2025. The details of how attackers used the flaw became public only recently, after researchers completed their investigation.

The team emphasizes that even users who browsed risky websites or received suspicious files during that period likely avoided infection. Evidence suggests the operation was highly selective, targeting only specific individuals or groups rather than the general public. Based on submitted samples, the activity was concentrated in parts of the Middle East, including Iraq, Iran, Turkey, and Morocco. Who controlled Landfall remains unknown.

The researchers discovered the spyware while examining earlier zero-click bugs affecting Apple iOS and WhatsApp. Those unrelated flaws showed how attackers could trigger remote code execution by exploiting image-handling weaknesses. This motivated Unit 42 to search for similar risks affecting Android devices. During this process, they found several suspicious files uploaded to VirusTotal that ultimately revealed the Landfall attack chain.

At the center of this operation were manipulated DNG image files. DNG is a raw picture format built on the TIFF standard and is normally harmless. In this case, however, the attackers altered the files so they carried compressed ZIP archives containing malicious components. The image-processing library in Samsung devices had a defect that caused the system to extract and run the embedded code automatically while preparing the image preview. This made the threat a true zero-click exploit because no user action was required for infection.
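
Because the malicious files were, structurally, just DNG/TIFF images with ZIP data smuggled inside, a crude defensive screen is to flag any file that starts with a TIFF header yet also contains a ZIP local-file-header signature. The sketch below is a simplified illustration based on Unit 42’s description, not a substitute for installing Samsung’s April 2025 patch; a real scanner would parse the TIFF structure instead of searching raw bytes, and false positives are possible since raw pixel data can contain any byte sequence.

// Simplified check for DNG/TIFF files carrying embedded ZIP data,
// based on Unit 42's description of the Landfall samples. Illustrative only.
import { readFileSync } from "node:fs";

const TIFF_LE = Buffer.from([0x49, 0x49, 0x2a, 0x00]); // "II*\0" little-endian TIFF/DNG
const TIFF_BE = Buffer.from([0x4d, 0x4d, 0x00, 0x2a]); // "MM\0*" big-endian TIFF/DNG
const ZIP_SIG = Buffer.from("PK\x03\x04", "binary");   // ZIP local file header

function looksLikeTrojanizedDng(path: string): boolean {
  const data = readFileSync(path);
  const isTiff =
    data.subarray(0, 4).equals(TIFF_LE) || data.subarray(0, 4).equals(TIFF_BE);
  if (!isTiff) return false;            // not a DNG/TIFF container at all
  return data.indexOf(ZIP_SIG) !== -1;  // a genuine raw photo rarely embeds a ZIP
}

// Example: screen an incoming attachment before trusting it.
// console.log(looksLikeTrojanizedDng("IMG_0713.dng"));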

Once the malware launched, it attempted to rewrite parts of the device’s SELinux security policy. This gave the operators broad system access and made the spyware harder to detect or remove. According to Unit 42, the files appeared to have been delivered through messaging platforms like WhatsApp, disguised as regular images. Code inside the samples referenced models such as the Galaxy S22, S23, S24, Z Flip 4, and Z Fold 4. Samsung believes the vulnerability existed across devices running Android 13, 14, and 15.

After installation, Landfall could gather extensive personal information. It could transmit hardware identifiers, lists of installed apps, contacts, browsing activity, and stored files. It also had the technical ability to activate the device’s microphone or camera for surveillance. The spyware included multiple features to avoid detection, meaning that fully removing it would require deep device repairs or resets.

Unit 42 noted similarities between Landfall’s design and advanced commercial spyware used by major surveillance vendors, but they did not identify any company or group responsible. Although Samsung has already released a fix, attackers could reuse this method on devices that have not installed the April 2025 update or later. Users are urged to check their security patch level to remain protected.


ChatGPT Atlas Surfaces Privacy Debate: How OpenAI’s New Browser Handles Your Data

OpenAI has officially entered the web-browsing market with ChatGPT Atlas, a new browser built on Chromium: the same open-source base that powers Google Chrome. At first glance, Atlas looks and feels almost identical to Chrome or Safari. The key difference is its built-in ChatGPT assistant, which allows users to interact with web pages directly. For example, you can ask ChatGPT to summarize a site, book tickets, or perform online actions automatically, all from within the browser interface.

While this innovation promises faster and more efficient browsing, privacy experts are increasingly worried about how much personal data the browser collects and retains.


How ChatGPT Atlas Uses “Memories”

Atlas introduces a feature called “memories”, which allows the system to remember users’ activity and preferences over time. This builds on ChatGPT’s existing memory function, which stores details about users’ interests, writing styles, and previous interactions to personalize future responses.

In Atlas, these memories could include which websites you visit, what products you search for, or what tasks you complete online. This helps the browser predict what you might need next, such as recalling the airline you often book with or your preferred online stores. OpenAI claims that this data collection aims to enhance user experience, not exploit it.

However, this personalization comes with serious privacy implications. Once stored, these memories can gradually form a comprehensive digital profile of an individual’s habits, preferences, and online behavior.


OpenAI’s Stance on Early Privacy Concerns

OpenAI has stated that Atlas will not retain critical information such as government-issued IDs, banking credentials, medical or financial records, or any activity related to adult content. Users can also manage their data manually: deleting, archiving, or disabling memories entirely, and can browse in incognito mode to prevent the saving of activity.

Despite these safeguards, recent findings suggest that some sensitive data may still slip through. According to The Washington Post, an investigation by a technologist at the Electronic Frontier Foundation (EFF) revealed that Atlas had unintentionally stored private information, including references to sexual and reproductive health services and even a doctor’s real name. These findings raise questions about the reliability of OpenAI’s data filters and whether user privacy is being adequately protected.


Broader Implications for AI Browsers

OpenAI is not alone in this race. Other companies, including Perplexity with its upcoming browser Comet, have also faced criticism for extensive data collection practices. Perplexity’s CEO openly admitted that collecting browser-level data helps the company understand user behavior beyond the AI app itself, particularly for tailoring ads and content.

The rise of AI-integrated browsers marks a turning point in internet use, combining automation and personalization at an unprecedented scale. However, cybersecurity experts warn that AI agents operating within browsers hold immense control — they can take actions, make purchases, and interact with websites autonomously. This power introduces substantial risks if systems malfunction, are exploited, or process data inaccurately.


What Users Can Do

For those concerned about privacy, experts recommend taking proactive steps:

• Opt out of the memory feature or regularly delete saved data.

• Use incognito mode for sensitive browsing.

• Review data-sharing and model-training permissions before enabling them.


AI browsers like ChatGPT Atlas may redefine digital interaction, but they also test the boundaries of data ethics and security. As this technology evolves, maintaining user trust will depend on transparency, accountability, and strict privacy protection.



Cyberattack on EC-Ship Platform Exposes Personal Data of Thousands

Hong Kong, China — A recent cyberattack on Hongkong Post’s online mailing system has resulted in a major data breach affecting tens of thousands of users. According to officials, the hacker managed to access sensitive contact information from the EC-Ship platform, which is widely used for managing and sending mail.

Postmaster General Leonia Tai revealed that the attacker was able to view information stored in the address books of approximately 60,000 to 70,000 EC-Ship accounts. These records contained the names, addresses, email IDs, and phone numbers of both senders and recipients, as well as company names and fax numbers.

EC-Ship is a digital tool operated by the Hongkong Post, which helps individuals and businesses arrange mail deliveries locally and internationally. The platform allows users to save contact information, print shipping labels, and track parcels.

The breach began on Sunday night and continued into Monday. According to Tai, the attacker created a legitimate account on the platform and began exploring weaknesses in the system’s code. Though the system recognized unusual activity and temporarily suspended the attacker’s access, the hacker continued trying different techniques. Eventually, they discovered a flaw in the program’s code that allowed them to reach data stored in other users’ address books.
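
Hongkong Post has not named the exact flaw, but behaviour of this kind, where an authenticated user can reach records belonging to other accounts, typically comes down to a record lookup that never checks ownership (broken access control, often called an insecure direct object reference). The sketch below is a generic illustration of that class of bug and its fix, not the EC-Ship code; the data and function names are invented for the example.

// Generic illustration of a missing-ownership-check flaw. Not EC-Ship code.
type Contact = { owner: string; name: string; phone: string };

const addressBook = new Map<number, Contact>([
  [1001, { owner: "alice", name: "Acme Ltd", phone: "2345 6789" }],
  [1002, { owner: "bob", name: "Dr. Chan", phone: "9876 5432" }],
]);

// VULNERABLE: hands back whatever record ID the caller asks for.
function getContactInsecure(_requester: string, id: number): Contact | undefined {
  return addressBook.get(id);
}

// FIXED: the lookup also verifies that the record belongs to the requester.
function getContactSecure(requester: string, id: number): Contact | undefined {
  const record = addressBook.get(id);
  return record && record.owner === requester ? record : undefined;
}

console.log(getContactInsecure("mallory", 1002)); // leaks bob's entry
console.log(getContactSecure("mallory", 1002));   // undefined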

Tai stated that the issue was quickly identified and the affected programming code was patched to block further intrusions. However, the hacker had already extracted confidential information from a large number of users. The Hongkong Post has contacted affected account holders by email and asked them to alert anyone whose information may have been exposed.

Law enforcement agencies have launched an investigation into the incident. In the meantime, Hongkong Post is seeking expert advice to strengthen its digital defences.

Cybersecurity professionals have raised concerns over where the EC-Ship system is hosted. Some believe that sensitive systems like this should operate on government cloud servers, which offer more advanced protection. Tai responded that Hongkong Post follows standard security procedures and that their internal systems did detect and respond to the attack.

Efforts are now underway to migrate the EC-Ship service to a central government-managed internet platform that uses multiple layers of protection and round-the-clock monitoring. Officials hope this will reduce the chances of future incidents and better safeguard users’ data.

Oracle Cloud Confirms Second Hack in a Month, Client Log-in Data Stolen

Oracle Corporation has warned customers of a second cybersecurity incident in the past month, according to Bloomberg News. A hacker infiltrated an older Oracle system and stole login credentials from client accounts, some of them as recent as 2024.

The tech company reportedly informed clients that an attacker had gained access to a legacy environment—a system that had not been in active operation for roughly eight years. Although Oracle told clients that the environment had been dormant, the data retrieved included valid login credentials, which might pose a security concern, especially if users had not updated or deleted their accounts. 

This follows a prior hack last month, in which an anonymous individual attempted to sell stolen Oracle data online, prompting internal investigations. That incident, too, involved data stolen from Oracle's cloud servers in Austin, Texas. 

Oracle told some of its clients that the FBI and cybersecurity firm CrowdStrike Holdings are currently investigating the most recent incident. According to individuals who spoke to Bloomberg, the attacker is thought to have demanded an extortion payment. Notably, Oracle has stated that there is no connection between the two incidents.

According to the firm, this breach involved an outdated, dormant system, whereas the previous one affected specific clients in the healthcare sector. Oracle has not yet released a public statement, but according to Reuters, the company notified customers directly and stressed that the impact is minimal because of the age of the system in question.

Last month, Oracle also notified clients of a compromise at the software-as-a-service (SaaS) company Oracle Health (formerly Cerner), which affected many healthcare organisations and hospitals in the United States.

Even though the company has not publicly reported the event, threat analysts confirmed that patient data was stolen during the attack, citing private communications between Oracle Health and impacted clients as well as conversations with people involved. Oracle Health told customers that the breach of legacy Cerner data-transfer servers was detected on February 20, 2025, and that the perpetrators had accessed the systems using compromised client credentials sometime after January 22, 2025.

No More Internet Cookies? Digital Targeted Ads to Find New Ways


Google Chrome to block cookies

The digital advertising world is changing rapidly due to privacy concerns and regulatory pressure, and the shift is affecting how advertisers target customers. Starting in 2025, Google plans to stop supporting third-party cookies in the world’s most popular browser, Chrome. Cookies are data files that track our activity in the browser; the information they collect is sold to advertisers, who use it for targeted advertising based on user data.

“Cookies are files created by websites you visit. By saving information about your visit, they make your online experience easier. For example, sites can keep you signed in, remember your site preferences, and give you locally relevant content,” says Google.

In 2019 and 2020, Firefox and Safari moved away from third-party cookies. Following in their footsteps, Chrome now lets users opt out of third-party cookies in its settings. Because cookies can contain information that identifies a user, the EU’s General Data Protection Regulation (GDPR) and its UK counterpart require prior user consent, which is why consent pop-ups are everywhere.
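
For readers wondering what “third-party” means in practice: a cookie becomes third-party when it is sent to a domain other than the one shown in the address bar, and modern browsers gate that behaviour on the cookie’s SameSite and Secure attributes. The minimal Node.js sketch below shows the difference in the Set-Cookie headers a first-party site and a cross-site tracker would emit; the cookie names and values are placeholders.

// Minimal illustration of first-party vs. third-party cookie attributes.
// Cookie names and values are placeholders; run with ts-node or tsx.
import { createServer } from "node:http";

createServer((_req, res) => {
  res.setHeader("Set-Cookie", [
    // First-party cookie: sent back only to this site, so it survives the
    // third-party phase-out because it never crosses site boundaries.
    "session=abc123; Path=/; HttpOnly; Secure; SameSite=Lax",
    // Tracking-style cookie: SameSite=None; Secure is required for it to be
    // sent in cross-site contexts such as an embedded ad iframe. This is the
    // kind of cookie Chrome, Firefox and Safari are restricting.
    "trackerid=xyz789; Path=/; Secure; SameSite=None",
  ]);
  res.end("cookies set\n");
}).listen(8080);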

No more third-party data

Third-party cookies were once the backbone of targeted digital advertising, but their future doesn’t look bright. However, not everything is sunshine and rainbows.

While giants like Amazon, Google, and Facebook move away from third-party cookies to address privacy concerns, they can still collect first-party data about users on their own websites, and that data can still be shared with advertisers if the user permits, albeit in a less intrusive form. The harvested data will be less useful to advertisers, yet the annoying consent pop-ups will remain to irritate users.

How will companies benefit?

One way consumers and companies alike can benefit is if the advertising industry adapts to become more efficient. Instead of relying on targeted advertising, companies can engage directly with customers who visit their websites.

Advances in AI and machine learning can also help. Instead of invasive ads that follow you around the internet, users will receive information and features tailored to them. Companies can predict user needs and, through techniques like automated delivery and pre-emptive stocking, deliver better results. A new advertising landscape is on its way.