Adonne Washington, policy counsel at the Future of Privacy Forum, said: "These privacy policies are written in a way to ensure that whatever is happening in the car, if there's an inference that can be made, they're still ensuring that there's protection and that they're compliant with different state laws." The agreements account for technological advances that may occur while you own the car. According to Washington, tools designed to do one thing may soon be able to do more, so manufacturers must keep this in mind.
So it seems logical that a car manufacturer's privacy policy would cover every conceivable form of data, to protect the company legally if it ever falls into a particular data-collection category. Nissan's privacy policy, for instance, lists "sexual orientation, sexual activity, precise geolocation, health diagnosis data, and genetic information" as forms of personal data it may gather.
By disclosing these categories up front, companies ensure you can't sue if, for example, a car mistakenly captures you having sex in the back seat. Nissan said in a statement that this is why its privacy policy remains so broad. The company claims it "does not knowingly collect or disclose customer information on sexual activity or sexual orientation," but includes those terms in its policy because "some U.S. state laws require us to account for inadvertent data we have or could infer but do not request or use."
Aside from covering all legal bases, there's no way of knowing why these companies would want such sensitive information on their drivers, or what they'd do with it. Even a car that isn't "smart" can gather a great deal of information about the driver if it's equipped with Bluetooth, USB, or recording capabilities.
The scarcity of non-connected cars, paired with the absence of full disclosure about how driver data is used, leaves customers little choice but to trust that what they share is being used ethically, or that at least some of the categories listed in these troubling privacy policies, such as Nissan's inclusion of "genetic information," exist solely to cover potential liability. The options are essentially to read each of these policies and pick the least severe, to buy a very old, probably fuel-inefficient automobile with no smart technology, or to not own a car at all. On that last point, only around 8% of American households don't own a car, and usually not because they live in a walkable area with good public transportation, but because they can't afford one.
Customers are constrained by the current state of legal contract comprehension, while companies, driven to limit risk, keep padding these (often unread) agreements with ever more intrusive categories of information. Many experts would tell you that federal regulation is the only real option here. There have been occasional instances of state privacy laws being used to customers' benefit, as in Massachusetts and California, but for the most part, drivers have no idea they should be upset. And even if they are outraged, there is little they can do except buy a car anyway.
The study examined the 40 most downloaded apps on Google Play, 20 free and 20 paid, and found that nearly 80% of them disclose misleading or false information.
The Mozilla researchers reported the following findings:
Google launched the data privacy section of its Play Store listings last year. The section was introduced so that developers could provide a "complete and accurate declaration" of the information their apps collect by filling out Google's Data Safety form.
Mozilla claims that these self-reported privacy labels may not accurately reflect what user data is actually being collected, owing to loopholes in the form's honor-based system, such as ambiguous definitions of "collection" and "sharing" and no requirement that apps report data shared with "service providers."
Regarding Google’s Data Safety labels, Jen Caltrider, project lead at Mozilla, says: “Consumers care about privacy and want to make smart decisions when they download apps. Google’s Data Safety labels are supposed to help them do that[…]Unfortunately, they don’t. Instead, I’m worried they do more harm than good.”
In one instance from the report, Mozilla notes that TikTok and Twitter both state in their Data Safety forms that they do not share any user data with third parties, despite saying in their respective privacy policies that they do. “When I see Data Safety labels stating that apps like Twitter or TikTok don’t share data with third parties it makes me angry because it is completely untrue. Of course, Twitter and TikTok share data with third parties[…]Consumers deserve better. Google must do better,” says Caltrider.
In response, Google has dismissed Mozilla’s study, calling its grading methodology flawed. “This report conflates company-wide privacy policies that are meant to cover a variety of products and services with individual Data safety labels, which inform users about the data that a specific app collects[…]The arbitrary grades Mozilla Foundation assigned to apps are not a helpful measure of the safety or accuracy of labels given the flawed methodology and lack of substantiating information,” says a Google spokesperson.
Apple has drawn similar criticism for its developer-submitted privacy labels. A 2021 report from The Washington Post found that many iOS apps likewise disclosed misleading information, with several falsely claiming that they did not collect, share, or track user data.
To address these issues, Mozilla suggests that Apple and Google adopt a single, standardized data privacy system across their platforms. It also urges major tech firms to shoulder more responsibility and take enforcement action against apps that fail to give accurate information about data sharing. “Google Play Store’s misleading Data Safety labels give users a false sense of security[…]It’s time we have honest data safety labels to help us better protect our privacy,” says Caltrider.
A Barcelona-based surveillance vendor called Variston IT is believed to deploy spyware on victims' devices by exploiting various zero-day flaws in Google Chrome, Mozilla Firefox, and Windows, some of which date back to December 2018.
Google Threat Analysis Group (TAG) researchers Clement Lecigne and Benoit Sevens said "their Heliconia framework exploits n-day vulnerabilities in Chrome, Firefox, and Microsoft Defender, and provides all the tools necessary to deploy a payload to a target device."
Variston's bare-bones website claims that the company provides tailor-made security solutions to its customers, makes custom security patches for various types of proprietary systems, and assists law enforcement agencies in the discovery of digital information, among other services.
Google said "the growth of the spyware industry puts users at risk and makes the Internet less safe, and while surveillance technology may be legal under national or international laws, they are often used in harmful ways to conduct digital espionage against a range of groups. These abuses represent a serious risk to online safety which is why Google and TAG will continue to take action against, and publish research about, the commercial spyware industry."
The vulnerabilities, which Google, Microsoft, and Mozilla fixed in 2021 and early 2022, are said to have been used as zero-days to help customers deploy whatever malware they wanted on targeted systems.
Heliconia consists of three components, called Noise, Soft, and Files, which are responsible for deploying exploits against vulnerabilities in Chrome, Windows Defender, and Firefox, respectively.
Noise is designed to exploit a security flaw in Chrome's V8 JavaScript engine that was fixed in August 2021, along with an unknown sandbox-escape method known as "chrome-sbx-gen," to allow the final payload (also called an "agent") to be deployed on select devices.
But the attack works only if the victim visits a malicious webpage designed to trap them, which then triggers the first-stage exploit.
Google says it learned about the Heliconia attack framework from an anonymous submission to its Chrome bug reporting program. It added that it currently sees no evidence of active exploitation, suggesting the toolset has been shelved or has evolved further.
As TAG put it: "Although the vulnerabilities are now patched, we assess it is likely the exploits were used as 0-days before they were fixed."
Heliconia Noise: a web framework for deploying an exploit for a Chrome renderer bug, followed by a sandbox escape.
Heliconia Soft: a web framework that deploys a PDF containing a Windows Defender exploit.
Heliconia Files: a set of Firefox exploits for Linux and Windows.
An investigation into mental health and prayer apps has revealed a troubling disregard for user security and privacy. Last Monday, Mozilla published the findings of new research into these kinds of apps, which mostly deal with sensitive issues like depression, anxiety, mental health awareness, PTSD, and domestic violence, as well as religion-based services. According to Mozilla's latest "Privacy Not Included" guide, even though these apps handle deeply personal information, they routinely share data, allow weak passwords, target vulnerable users with ads, and feature poorly written, vague privacy policies.