In a digital world where personal privacy is increasingly at risk, it has come to light that the U.S. government has been quietly purchasing airline passenger information.
A recent report by Wired revealed that the U.S. Customs and Border Protection (CBP), which operates under the Department of Homeland Security (DHS), has been buying large amounts of flight data from the Airlines Reporting Corporation (ARC). This organization, which handles airline ticketing systems and works closely with travel agencies, reportedly provided CBP with sensitive passenger details such as names, full travel routes, and payment information.
ARC plays a critical role in managing airfare transactions worldwide, with about 240 airlines using its services. These include some of the biggest names in air travel, both in the U.S. and internationally.
Documents reviewed by Wired suggest that this agreement between CBP and ARC began in June 2024 and is still active. The data collection reportedly includes more than a billion flight records, covering trips already taken as well as future travel plans. Importantly, this data is not limited to U.S. citizens but includes travelers from around the globe.
What has raised serious concerns is that this information is being shared in bulk with U.S. government agencies, which can then use it to track individuals' travel patterns and payment methods. According to Wired, the contract even required government agencies to conceal the source of the data.
It’s important to note that the issue of airline passenger data being shared with the government was first highlighted in June 2024 by Frommer's, which referenced a related deal involving Immigration and Customs Enforcement (ICE). This earlier case was investigated by The Lever.
According to the privacy assessment reports reviewed, most of the data being purchased by CBP relates to tickets booked through third-party platforms like Expedia or other travel websites. There is no public confirmation yet on whether tickets bought directly from airline websites are also being shared through other means.
The U.S. government has reportedly justified this data collection as part of efforts to assist law enforcement in identifying individuals of interest based on their domestic air travel records.
When contacted by news organizations, including USA Today, neither ARC nor CBP provided an official response regarding these reports.
The revelations have sparked public debate around digital privacy and the growing practice of companies selling consumer data to government bodies. The full scale of these practices, and whether more such agreements exist, remains unclear at this time.
Google is currently being investigated in Europe over concerns about how the search giant has used personal data to train its generative AI tools. The investigation is led by Ireland's Data Protection Commission (DPC), which enforces the European Union's strict data protection laws. It will establish whether Google followed the required legal process, such as conducting a Data Protection Impact Assessment (DPIA), before using people's personal information to develop its AI models.
Data Collection for AI Training Causes Concerns
Generative AI technologies such as Google's Gemini have made headlines because they can produce false information and leak personal data. This raises the question of whether Google's AI training methods, which necessarily involve enormous amounts of data, comply with the GDPR, the EU regulation that protects individuals' privacy and rights when their personal data is processed.
At the heart of the probe is whether Google should have carried out a DPIA, a Data Protection Impact Assessment, which evaluates the risks that data processing activities pose to individuals' privacy rights. A DPIA is required precisely because companies like Google process vast amounts of personal data to create AI models. The investigation focuses specifically on how Google has used its PaLM 2 model, which powers various AI features such as chatbots and search enhancements.
Fines Over Privacy Breaches
If the DPC finds that Google did not comply with the GDPR, the company could face fines of up to 4% of its annual global revenue. Since Google generates hundreds of billions of dollars in revenue each year, such a fine could amount to billions of dollars.
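To put that cap in perspective, here is a quick back-of-the-envelope calculation. The revenue figure below is hypothetical, chosen only to illustrate the scale:

```python
# Rough illustration of GDPR fine exposure.
# The GDPR caps administrative fines at 4% of annual global turnover.
annual_revenue_usd = 300e9  # hypothetical: $300 billion in global revenue

max_fine = 0.04 * annual_revenue_usd
print(f"Maximum GDPR fine: ${max_fine / 1e9:.0f} billion")  # → $12 billion
```

Even a fraction of the statutory maximum would dwarf most regulatory penalties tech companies have faced to date.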
Other tech companies, including OpenAI and Meta, have faced similar privacy questions about their data practices when developing AI, reflecting broader concerns over how personal data is processed in the fast-moving field of artificial intelligence.
Google's Response to the Investigation
Google has so far declined to answer questions about the specific sources of data used to train its generative AI tools. A company spokesperson said Google remains committed to GDPR compliance and will continue cooperating with the DPC throughout the investigation. The company maintains it has done nothing illegal, and an investigation does not by itself imply wrongdoing; the inquiry is part of a broader effort to ensure that technology companies are accountable for how personal information is used.
Data Protection in the AI Era
The DPC's questioning of Google is part of a broader effort by EU regulators to ensure that generative AI technologies adhere to the bloc's high data-privacy standards. As more companies embed AI into their operations, concerns over how personal information is used continue to grow. The GDPR has been among the most important tools for protecting citizens against data misuse, especially in cases involving sensitive or personal data.
In recent years, other tech companies have come under scrutiny for their data practices in AI development. OpenAI, the developer of ChatGPT, and Elon Musk's X (formerly Twitter) have both faced investigations and complaints under the GDPR. This reflects the growing tension between technological advancement and the protection of privacy.
The Future of AI and Data Privacy
Companies developing AI technologies must strike a balance between innovation and privacy. While innovation has brought numerous benefits, from better search capabilities to more efficient processes, it has also exposed risks in cases where personal data is handled carelessly.
Moving forward, regulators including the DPC will be closely tracking how companies like Google handle personal data. The outcome should produce clearer rules on what uses of personal information are permissible in AI development, better protecting individuals' rights and freedoms in the digital age.
Ultimately, the outcome of this investigation may shape how AI technologies are designed and deployed in the European Union, and it will certainly inform tech businesses around the world.
One particular area of interest is Chinese-made EVs, which dominate the global market. This blog post delves into the privacy and security risks associated with these vehicles, drawing insights from a recent investigation.
In 2022, Tor Indstøy purchased a Chinese electric vehicle for $69,000 to accommodate his growing family.
Indstøy had an ulterior motive for purchasing the ES8, a luxury SUV from Shanghai-based NIO Inc. The Norwegian cybersecurity specialist wanted to investigate the EV and see how much data it collects and transmits back to China.
He co-founded Project Lion Cage with several industry acquaintances to examine his SUV and release the findings.
Since its inception in July 2023, Indstøy and his crew have provided nearly a dozen status reports. These have largely consisted of them attempting to comprehend the enormously complex vehicle and the operation of its numerous components.
The project aims to answer critical questions about data privacy and security in EVs.
Electric cars are not mere transportation devices; they are rolling data centers. Unlike their gas-powered counterparts, EVs rely heavily on electronic components—roughly 2,000 to 3,000 chips per vehicle.
These chips control everything from battery management to infotainment systems. Each chip can collect and transmit data, creating a vast information flow network within the vehicle.
However, studying EVs is also a challenge. Traditional cybersecurity tools designed for PCs and servers fall short when dealing with the intricate architecture of electric cars. Researchers like Indstøy face unique challenges as they navigate this uncharted territory.
Indstøy and his team have identified potential areas of concern for the NIO ES8, but no major revelations have been made.
One example is how data gets into and out of the vehicle. According to the researchers, over 90% of the car's communications went to China, carrying data ranging from simple voice commands to the vehicle's geographic location. Other destinations included Germany, the United States, the Netherlands, and Switzerland.
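An analysis like this typically starts from a packet capture, mapping each connection's destination and tallying traffic by country. The sketch below is purely illustrative, with invented connection data standing in for what a real capture plus GeoIP lookup would produce:

```python
from collections import Counter

# Hypothetical connection log: (destination country, bytes sent).
# A real analysis would derive these tuples from a packet capture
# by resolving each destination IP with a GeoIP database.
connections = [
    ("CN", 52_000), ("CN", 180_000), ("DE", 4_000),
    ("US", 2_500), ("CN", 96_000), ("NL", 1_200),
]

bytes_by_country = Counter()
for country, nbytes in connections:
    bytes_by_country[country] += nbytes

total = sum(bytes_by_country.values())
for country, nbytes in bytes_by_country.most_common():
    print(f"{country}: {nbytes / total:.1%} of traffic")
```

With the toy numbers above, China dominates the tally, mirroring the pattern the researchers report, though the real figures come from inspecting the vehicle's actual network traffic.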
Indstøy suggests that the ambiguity of some communications could be a source of concern. For example, the researchers discovered that the car was regularly downloading a single, unencrypted file from a nio.com internet address, but they have yet to determine its purpose.
China’s dominance in the EV market raises geopolitical concerns. With nearly 60% of global EV sales happening in China, the data collected by these vehicles becomes a strategic asset.
Governments worry about potential espionage, especially given the close ties between Chinese companies and the state. The Biden administration’s cautious approach to Chinese-made EVs reflects these concerns.
Automatic Content Recognition (ACR) is the invisible eye that tracks everything you watch on your smart TV. Whether it’s a gripping drama, a cooking show, or a late-night talk show, your TV is quietly analyzing it all. ACR identifies content from over-the-air broadcasts, streaming services, DVDs, Blu-ray discs, and internet sources. It’s like having a digital detective in your living room, noting every scene change and commercial break.
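Under the hood, ACR systems generally work by computing compact fingerprints of the audio or video on screen and matching them against a reference database of known content. The toy sketch below illustrates the matching idea only; the frame data and content names are invented, and real ACR uses perceptual fingerprints that survive compression, scaling, and noise rather than exact hashes:

```python
import hashlib

def fingerprint(frame_pixels: bytes) -> str:
    # Toy stand-in for a perceptual hash: real ACR fingerprints are
    # robust to re-encoding; an exact hash like this is not.
    return hashlib.sha256(frame_pixels).hexdigest()[:16]

# Reference database mapping fingerprints to known content (invented).
reference_db = {
    fingerprint(b"frame-from-cooking-show"): "Cooking Show S01E03",
    fingerprint(b"frame-from-shoe-ad"): "Shoe Commercial #4411",
}

def identify(frame_pixels: bytes) -> str:
    # Look up what is on screen; unknown frames simply don't match.
    return reference_db.get(fingerprint(frame_pixels), "unknown")

print(identify(b"frame-from-shoe-ad"))     # matches the known ad
print(identify(b"frame-from-home-video"))  # no match -> "unknown"
```

The privacy issue is the second half of the pipeline: once a match is found, the TV can report *what* you watched and *when*, building the viewing history described below.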
Ever notice how ads seem eerily relevant to your interests? That’s because of Advertisement Identification (AdID). When you watch a TV commercial, it’s not just about the product being sold; it’s about the unique code embedded within it. AdID deciphers these codes, linking them to your viewing history. Suddenly, those shoe ads after binge-watching a fashion series make sense—they’re tailored to you.
Manufacturers and tech companies profit from your data. They analyze your habits, preferences, and even your emotional reactions to specific scenes. This information fuels targeted advertising, which generates revenue. While it’s not inherently evil, the lack of transparency can leave you feeling like a pawn in a digital chess game.
Turn Off ACR: Visit your TV settings and disable ACR. By doing so, you prevent your TV from constantly analyzing what’s on your screen. Remember, convenience comes at a cost—weigh the benefits against your privacy.
AdID Management: Reset your AdID periodically. This wipes out ad-related data and restricts targeted ad tracking. Dig into your TV’s settings to find this option.
Voice Control vs. Privacy: Voice control is handy, but it also means your TV is always listening. If privacy matters more, disable voice services like Amazon Alexa, Google Assistant, or Apple Siri. Sacrifice voice commands for peace of mind.
Different smart TV brands have varying privacy settings. Here’s a quick guide:
Amazon Fire TV: Navigate to Settings > Preferences > Privacy Settings. Disable “Interest-based Ads” and “Data Monitoring.”
Google TV: Head to Settings > Device Preferences > Reset Ad ID. Also, explore the “Privacy” section for additional controls.
Roku: Visit Settings > Privacy > Advertising. Opt out of personalized ads and reset your Ad ID.
LG, Samsung, Sony, and Vizio: These brands offer similar options. Look for settings related to ACR, AdID, and voice control.
Your smart TV isn’t just a screen; it’s a gateway to your personal data. Be informed, take control, and strike a balance. Enjoy your favorite shows, but remember that every episode you watch leaves a digital footprint. Protect your privacy—it’s the best show you’ll ever stream.