
Federal Judge Allows Amazon Alexa Users’ Privacy Lawsuit to Proceed Nationwide

 

A federal judge in Seattle has ruled that Amazon must face a nationwide lawsuit involving tens of millions of Alexa users. The case alleges that the company improperly recorded and stored private conversations without user consent. U.S. District Judge Robert Lasnik determined that Alexa owners met the legal requirements to pursue collective legal action for damages and an injunction to halt the alleged practices. 

The lawsuit claims Amazon violated Washington state law by failing to disclose that it retained and potentially used voice recordings for commercial purposes. Plaintiffs argue that Alexa was intentionally designed to secretly capture billions of private conversations, not just the voice commands directed at the device. According to their claim, these recordings may have been stored and repurposed without permission, raising serious privacy concerns. Amazon strongly disputes the allegations. 

The company insists that Alexa includes multiple safeguards to prevent accidental activation and denies that any evidence shows it recorded conversations belonging to the plaintiffs. Despite Amazon’s defense, Judge Lasnik found that millions of users may have been affected in a similar way, allowing the case to move forward. Plaintiffs are also seeking an order requiring Amazon to delete any recordings and related data it may still hold. The broader issue at stake in this case centers on privacy rights within the home.

If proven, the claims suggest that sensitive conversations could have been intercepted and stored without explicit approval from users. Privacy experts caution that voice data, if mishandled or exposed, can lead to identity risks, unauthorized information sharing, and long-term security threats. Critics further argue that the lawsuit highlights the growing power imbalance between consumers and large technology companies. Amazon has previously faced scrutiny over its corporate practices, including its environmental footprint. 

A 2023 report revealed that the company’s expanding data centers in Virginia would consume more energy than the entire city of Seattle, fueling additional criticism about the company’s long-term sustainability and accountability. The case against Amazon underscores the increasing tension between technological convenience and personal privacy. 

As voice-activated assistants become commonplace in homes, courts will likely play a decisive role in determining the boundaries of data collection and consumer protection. The outcome of this lawsuit could set a precedent for how tech companies handle user data and whether customers can trust that private conversations remain private.

Why Hackers Focus on Certain Smart Home Devices and How to Safeguard Them

 


In an era where convenience is the hallmark of modern living, smart devices have become a large part of households around the world, offering a range of advantages from voice-activated assistants to connected cameras and appliances. These technologies promise to streamline daily routines, but the same internet connection that makes them work is also what exposes them to significant risk. 

Security experts warn that poorly protected devices can become a digital gateway for cybercriminals, giving them the opportunity to break into home networks, steal sensitive personal information, monitor private spaces, and even hijack other connected systems. Smart technologies are now widely adopted, yet many users are unaware of how easily they can be compromised, leaving entire smart homes vulnerable to exploitation. 

Smart technology has brought innovation into modern homes, but it has introduced new vulnerabilities alongside it. Smart TVs are estimated to have accounted for 34 per cent of reported security flaws in 2023, followed by smart plugs at 18 per cent and digital video recorders at 13 per cent, a finding that underscores the risks hidden behind everyday devices. 

According to the University of Bradford's School of Computer Science, Artificial Intelligence and Electronics, the modern home now plays host to an array of digital threats, and homeowners must adopt more comprehensive digital hygiene practices to protect themselves. Creating a smart home today takes more than buying the latest gadgets; it also requires a careful assessment of privacy and security tradeoffs. Smart speakers, thermostats, and video doorbells are incredibly convenient, but each comes with potential risks that homeowners must weigh before purchasing. 

Security cameras can be useful for remote monitoring, but their footage is often stored in the cloud, raising concerns about how manufacturers handle sensitive video. Experts suggest consumers read privacy policies carefully before installing such cameras in their home or elsewhere. Likewise, voice assistants such as Alexa, Google Assistant, and Siri constantly listen for their wake words. 

That feature enables hands-free control, but it also means audio snippets are sent to company servers for analysis. Whether it enhances users' lives or compromises their privacy ultimately comes down to how much trust consumers place in the companies providing these services. And although connected cameras, speakers, and appliances offer convenience by controlling lighting, entertainment, and security, many are designed with minimal privacy safeguards, making them vulnerable to hacking. 

In many cases, weak default passwords, outdated firmware, and unencrypted data allow cybercriminals to gain entry into entire home networks with ease. The trend makes clear that IoT manufacturers prioritise affordability and ease of use over robust security, leaving millions of households at risk. 

Statistics suggest that more than 112 million cyberattacks were launched against smart devices worldwide over the course of 2022. Security measures must keep pace with the technology, because once a single device is compromised it can become a gateway to sensitive personal information, security systems, and even financial accounts.

As smart technology continues to redefine how we live, the need to balance convenience against security has never been more obvious. The more connected household devices become, the more opportunities cybercriminals have to exploit weak points, potentially compromising financial data, private information, or even personal safety. 

Experts emphasise that as people grow more dependent on IoT devices, they must adopt stronger cybersecurity practices to safeguard their digital environments. Among the most important measures is securing home Wi-Fi networks with strong, unique passwords rather than default settings, and applying similarly strong credentials across all accounts and devices. 
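For readers who want a concrete starting point, here is a minimal sketch, using Python's standard secrets module, of generating a long random credential for each device and account; the labels are made-up examples, and a password manager is assumed for storing the results.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


if __name__ == "__main__":
    # One unique credential per device or account; the labels are illustrative.
    for label in ("router-admin", "wifi-psk", "camera-app"):
        print(f"{label}: {generate_password()}")
```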

Multi-factor authentication, which combines passwords with biometric checks or secondary codes, adds a further layer of protection against credential-stuffing attacks. Consumers should also look into a manufacturer's security track record and data-handling practices before buying a device, and keep device software and companion apps up to date, since patches often address newly discovered vulnerabilities. 
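As a rough illustration of how the "secondary code" factor works, the sketch below uses the third-party pyotp library to generate and verify a time-based one-time password (TOTP), the scheme most authenticator apps implement; the account name and issuer are hypothetical placeholders, not any vendor's real setup.

```python
# Requires: pip install pyotp
import pyotp

# Enrolment: the service generates a shared secret, which the user stores in
# an authenticator app (usually by scanning a QR code of the URI below).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI:", totp.provisioning_uri(name="user@example.com",
                                                 issuer_name="SmartHomeHub"))

# Login: the user supplies their password *and* the current six-digit code.
code = totp.now()                            # what the authenticator app shows now
print("Code accepted:", totp.verify(code))   # True within the current time window
```

Because the code changes every 30 seconds, a password stolen in a credential-stuffing attack is not enough on its own to log in.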

Homeowners can strengthen security beyond device-level precautions by enabling encryption on their routers, setting up separate guest networks for IoT gadgets, and monitoring network traffic for suspicious activity. Software designed specifically for connected homes adds further protection by automatically scanning for threats and flagging unauthorised access attempts as they happen. 
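One simple form of that monitoring is keeping an inventory of what is actually connected. The sketch below is a rough example rather than a product recommendation: it assumes the scapy library, root privileges, and a hypothetical 192.168.20.0/24 guest subnet, broadcasts an ARP request, and lists the devices that answer so unfamiliar hardware stands out.

```python
# Requires: pip install scapy  (and usually root privileges to send raw frames)
from scapy.all import ARP, Ether, srp

SUBNET = "192.168.20.0/24"   # hypothetical IoT guest network; adjust to yours


def list_devices(subnet: str):
    """Broadcast an ARP request and return the (ip, mac) pairs that answered."""
    packet = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
    answered, _ = srp(packet, timeout=2, verbose=False)
    return [(received.psrc, received.hwsrc) for _, received in answered]


if __name__ == "__main__":
    for ip, mac in list_devices(SUBNET):
        # Compare this list against the devices you expect to be online.
        print(f"{ip:15} {mac}")
```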

The most important thing to remember is that every Wi-Fi or Bluetooth connection represents a potential entry point; the smartest home is not simply the most connected one, but the most securely configured. The same features that make smart devices appealing can also make them powerful tools for cybercriminals.

IoT security weaknesses can allow hackers to hijack cameras and microphones as covert surveillance devices, compromise smart locks to gain remote access to homes, and infiltrate networks to steal sensitive data. Thousands of unsecured devices can also be marshalled into botnets capable of crippling websites and online services around the world. 

Despite these risks, research shows that only 52 per cent of IoT manufacturers in the United Kingdom currently comply with basic password security provisions, leaving significant openings for exploitation. To close these gaps, experts argue manufacturers should build security into their devices from the very beginning: robust coding practices, encrypted data transmission, and regular firmware updates. 

Governments are responding to the threat: the UK's Product Security and Telecommunications Infrastructure (PSTI) Act and the European Union's Cyber Resilience Act (CRA) now require higher privacy and protection standards throughout the industry. Legislation alone cannot guarantee safety, however; consumers as well as manufacturers must prioritise security as homes become increasingly connected. 

Maintaining trust in smart home technology requires striking a balance between convenience and resilience. As the boundaries between home and office continue to blur, the security of connected devices becomes ever more central to consumer confidence. 

Analysts note that a smart living environment will be defined not by the sophistication of its gadgets alone, but by the quality of the ecosystems they depend on. Closer collaboration between policymakers, manufacturers, and security researchers will be crucial to stopping hackers from exploiting loopholes so readily. For consumers, keeping a smart home secure is not a one-time installation task; it requires ongoing vigilance.

Stop Siri, Google & Alexa from Stealing Audio Files for Unauthorized Usage

According to USA Today, there are several ways to stop devices from harvesting your data. Some involve physically blocking cameras and microphones, which is most practical on laptops and desktop computers.

The evolution of search and technology will rely on people speaking to computers more fluidly to complete tasks. Along the way, users need to protect their privacy, and that process begins with the products they use today. You may therefore want to limit what these assistants record, or at least forbid the companies behind them from exploiting that information. And because transcription is imperfect, an assistant can mishear you, carry out unintended instructions, and send odd messages to one of your contacts.

How to turn off Siri on iPhone

Deactivating Siri on an iPhone takes only a few steps. To turn it off entirely:
  1. Open the Settings app on your iPhone.
  2. Tap 'Siri & Search.'
  3. Turn off 'Listen for Hey Siri' and 'Press Home for Siri.'
Once Siri is deactivated, you can no longer summon it by pressing the home button or by saying 'Hey Siri,' and you won't be able to use Siri for actions like making calls, sending text messages, or creating reminders.

Amazon

The real issue is Amazon employees listening to your recordings. Here's how to stop it:
  1. Launch the Alexa app on your device and open the More menu.
  2. Choose Settings > Alexa Privacy.
  3. Select Manage Your Alexa Data.
  4. Turn off the toggles next to "Help Alexa" and "Use messages to enhance transcriptions."
For added privacy, you can also switch off the Echo's microphone entirely: press the microphone button on top of the device to toggle it on or off.

Smartphone

If you are uncomfortable with Google having access to private recordings, your best option is to turn off Google Assistant, because you can't really choose what gets sent and saved.

Here's how to disable the "OK Google" wake command:
  1. Launch the Google app on your mobile device.
  2. Tap the icon for your profile photo in the top section.
  3. Select General under Settings > Google Assistant.
  4. To disable Google Assistant, slide the switch next to it to the left.

When it emerged in their early days that Google Assistant and Amazon Alexa were recording random voice snippets, criticism erupted. Some reached for conspiracy theories, claiming this was a brand-new dystopian tool for keeping an eye on millions of people. The more likely explanation is that the assistants misheard their wake word and began listening for commands. These systems are not foolproof, and they still make mistakes.

Amazon, Google, and Apple are among the major technology firms that may adopt intelligent chatbots like ChatGPT, either to complement their current offerings or to build into future versions of Alexa, Google Assistant, and Siri. 




Potential Wiretapping Bugs Found in Google Home Speakers

 


A security researcher recently received a bug bounty award of $107,500 for identifying security issues in Google Home smart speakers. The issues could be exploited to install a backdoor in the software and turn the speaker into a remote spying device. 

In a technical write-up published earlier this week, a researcher who goes by the name Matt explained that the flaws could be exploited by attackers within wireless range of the device to install a 'backdoor' account on it, allowing them to access its microphone feed remotely, send it commands over the internet, and make arbitrary HTTP requests within the victim's local area network.  

Those malicious requests could not only expose the Wi-Fi password but also give the adversary direct access to other devices connected to the same network, enabling further attacks. Following responsible disclosure on January 8, 2021, Google remediated the issues in April 2021.  

At its core, the problem lies in how Google Home's software architecture can be abused to add a rogue Google account to a target's home automation system, which in turn facilitates the theft of valuable data.  

The researcher outlined an attack chain in which a threat actor seeking to eavesdrop on a victim first convinces them to install a malicious Android app. When the app detects a Google Home device on the network, it issues stealthy HTTP requests to link the attacker's account to the victim's device. 
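Part of what makes such a chain practical is how easy these devices are to find on a local network. As a rough illustration only (not the researcher's actual tooling), the sketch below probes a hypothetical 192.168.1.0/24 home subnet for hosts answering on TCP ports 8008 and 8443, which Google Cast devices have historically used for their local HTTP(S) API; the subnet and port list are assumptions you would adjust for your own network.

```python
# Scan a /24 subnet for hosts that answer on ports commonly exposed by
# Google Cast devices. Purely illustrative; runs slowly as written because
# it probes hosts one at a time.
import socket

SUBNET_PREFIX = "192.168.1."   # hypothetical home network
CAST_PORTS = (8008, 8443)      # historical Cast local HTTP/HTTPS ports


def port_open(host: str, port: int, timeout: float = 0.3) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for i in range(1, 255):
        host = f"{SUBNET_PREFIX}{i}"
        open_ports = [p for p in CAST_PORTS if port_open(host, p)]
        if open_ports:
            print(f"{host}: responds on {open_ports}")
```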

It was also found that an attacker who stages a Wi-Fi de-authentication attack to knock a Google Home device off its network can force the device into "setup mode," in which it creates its own open Wi-Fi network. 

Upon connecting to the device's setup network, the threat actor can request details such as the device name, cloud_device_id, and certificate, and then use them to link their own account to the victim's device. 

Whichever attack sequence is used, once the adversary's account is linked they can abuse the routines built into Google Home: turning the device's volume down to zero and then commanding it to call a specific telephone number, effectively using the device's microphone to spy on the victim at any given time. 

According to Matt, the only thing the victim might notice is that the device's LEDs turn solid blue, which they would probably assume was a firmware update or something similar. While a call is in progress, the LEDs do not pulse as they would if the device were listening, so there is no visual cue that the microphone is live. 

The attacker can also extend the intrusion by making arbitrary HTTP requests inside the victim's network, and may even be able to read files or introduce malicious modifications that are applied to the linked device after a reboot. 

Voice-activated devices have long been an attractive way to covertly snoop on targets without being detected. 

In November, a group of academics disclosed the Light Commands technique, which exploits a vulnerability in MEMS microphones: a remote attacker can use modulated light instead of voice to inject inaudible, invisible commands into popular voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri.  

Alexa Skills can Easily Bypass Vetting Process

 

Researchers have uncovered gaps in Amazon's skill vetting process for the Alexa voice assistant ecosystem that could let a threat actor publish a misleading skill under any arbitrary developer name and even make backend code changes after approval to trick users into surrendering sensitive data. The findings were presented on Wednesday at the Network and Distributed System Security Symposium (NDSS) by a group of academics from Ruhr-Universität Bochum and North Carolina State University, who examined 90,194 skills available in seven countries: the US, the UK, Australia, Canada, Germany, Japan, and France.

 “While skills expand Alexa’s capabilities and functionalities, it also creates new security and privacy risks,” said a group of researchers from North Carolina State University, the Ruhr-University Bochum and Google, in a research paper. 

Amazon Alexa lets third-party developers build additional functionality for devices such as Echo smart speakers by creating "skills" that run on top of the voice assistant, making it easy for users to start a conversation with a skill and complete a particular task. Chief among the findings is the concern that a user can activate the wrong skill, which can have serious consequences if the skill that is triggered was designed with malicious intent. 

Because the actual criteria Amazon uses to auto-enable a particular skill among several with the same invocation name remain opaque, the researchers cautioned that it is possible to activate the wrong skill, and that an adversary can get away with publishing skills under well-known company names. "This primarily happens because Amazon currently does not employ any automated approach to detect infringements for the use of third-party trademarks, and depends on manual vetting to catch such malevolent attempts which are prone to human error," the researchers explained. "As a result users might become exposed to phishing attacks launched by an attacker." 

Worse still, an attacker can make backend code changes after a skill has been approved, triggering a dormant intent to coax a user into revealing sensitive data such as phone numbers and addresses.
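To see why post-approval changes are hard to police, note that a skill's responses come from backend code the developer hosts themselves, for example on AWS Lambda, and that code is not re-vetted when it changes. The sketch below is a minimal, hypothetical handler written with the ASK SDK for Python; "FactIntent" is an invented intent name, and the comment marks the line an unscrupulous developer could quietly alter after certification.

```python
# Requires: pip install ask-sdk-core
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name


class FactIntentHandler(AbstractRequestHandler):
    """Handles a hypothetical 'FactIntent' request."""

    def can_handle(self, handler_input):
        return is_intent_name("FactIntent")(handler_input)

    def handle(self, handler_input):
        # This string lives in backend code, not in the vetted skill manifest.
        # At review time it could be harmless; after approval the same
        # deployment could be changed to ask for a phone number instead.
        speech = "Here is today's fact."
        return handler_input.response_builder.speak(speech).response


sb = SkillBuilder()
sb.add_request_handler(FactIntentHandler())
lambda_handler = sb.lambda_handler()   # entry point when hosted on AWS Lambda
```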

Wi-Fi Bug in Amazon Echo and Kindle Devices Assists Attackers in Stealing Sensitive Data


There is no denying that Amazon Echo and Kindle devices are extremely popular, used by millions of people around the world. It therefore comes as an unwelcome surprise to those users that researchers from ESET Smart Home have found Amazon Echo and Kindle devices to be vulnerable to KRACK attacks.

The KRACK attacks, discovered and published by two Belgian researchers in October 2017, exploit weaknesses in the WPA2 protocol used in modern Wi-Fi devices.

An attacker within range of a victim's Wi-Fi network can exploit the weakness using key reinstallation attacks, and successful exploitation allows them to steal sensitive details such as credit card numbers, passwords, chat messages, emails, and photos.

The researchers tested first-generation Amazon Echo devices running the original Amazon Alexa, as well as the eighth-generation Amazon Kindle, and concluded that both are affected by two KRACK vulnerabilities.

Using KRACK scripts, the ESET researchers were able to "replicate the reinstallation of the pairwise encryption key (PTK-TK) in the four-way handshake (CVE-2017-13077) and reinstallation of the group key (GTK) in the four-way handshake (CVE-2017-13078)."

According to the ESET team, the vulnerabilities enable attackers to:

  • Replay old packets to cause a denial-of-service attack or interference. 
  • Decrypt transmitted data. 
  • Forge packets. 
  • Steal sensitive details, like passwords or session cookies.

Amazon acknowledged the issue as soon as the vulnerabilities were reported to it on October 23, 2018, and addressed it by distributing a new version of wpa_supplicant, the software component responsible for correct authentication to the Wi-Fi network.
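Echo and Kindle owners receive Amazon's fix automatically, but the same wpa_supplicant client runs on most Linux systems, so it is worth knowing what your own machines are running. The sketch below is a rough check, assuming a Linux host with wpa_supplicant on the PATH and that its -v option prints a version string; the KRACK fixes shipped in patched 2.6 builds and in 2.7 and later, and distributions often backport patches, so the version number is only a first approximation.

```python
# Rough check of the locally installed wpa_supplicant version.
import re
import subprocess


def wpa_supplicant_version():
    """Return (major, minor) of the installed wpa_supplicant, or None."""
    try:
        result = subprocess.run(["wpa_supplicant", "-v"],
                                capture_output=True, text=True)
    except FileNotFoundError:
        return None
    match = re.search(r"v(\d+)\.(\d+)", result.stdout + result.stderr)
    return (int(match.group(1)), int(match.group(2))) if match else None


if __name__ == "__main__":
    version = wpa_supplicant_version()
    if version is None:
        print("wpa_supplicant not found on this system.")
    else:
        print("wpa_supplicant version: %d.%d" % version)
        if version < (2, 7):
            # Distros often backport fixes, so treat this only as a prompt to
            # check your vendor's advisories, not as proof of exposure.
            print("Pre-2.7 build: confirm the KRACK patches were backported.")
```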

Google’s Language Experts Listen to Users’ Private Recordings





The technology giant Google recently admitted that its employees listen to customers' personal audio recordings captured by Google Home smart speakers.


Language experts analyze "snippets" of users' recordings, reportedly to improve voice recognition quality.


Those recordings are used to further develop the Google Assistant's artificial intelligence system, which is used in Android phones and Google Home smart speakers.


According to sources, the company said in a statement that its experts did transcribe a small number of the anonymous recordings.


An investigation was launched after it emerged that some Dutch audio data had been leaked.


According to sources, the technology giant also said that transcribing a small set of queries is critical to developing the technology behind its AI products, and that it collaborates with language experts around the world to do so.


It was one of these reviewers who allegedly leaked the Dutch audio data, thereby violating Google's security policies.


In fact, only around 0.2% of all audio snippets are reviewed by the language experts, and those snippets are never associated with user accounts.



The investigation launched by the Security and Privacy Response teams is expected to reach a conclusion soon, and the company says all possible actions are being taken to rule out any chance of a repeat.


According to one report, Amazon engages in a similar practice, listening to customers' recordings captured by Alexa, its voice-based assistant.


Amazon later admitted to the practice, saying the number of recordings involved was fairly small and that they were essential for training the AI's responses.


There is one provision for users, though: they can always delete the recordings linked to their account via the Alexa companion app.


Amazon Sued Over Illegal Retention of Child Recordings Through Alexa



Amazon is being sued by a Massachusetts woman for unlawfully recording and storing children's voices through its Alexa-enabled devices. The lawsuit, filed in Seattle this week, claims that Amazon is building a massive database by harvesting private details about millions of Americans via voice recordings.
Children, as a matter of fact, don’t fully understand the “potentially invasive uses of big data by a company the size of Amazon” and they “use Alexa without any understanding or warning that Amazon is recording and voice-printing them”, according to the lawsuit.
Criticizing Amazon’s practices, the two law firms, Quinn Emanuel Urquhart & Sullivan and Keller Lenkner, allege that the company chooses to retain the actual voice recordings despite having the option to encrypt users' voices. According to the complaint, filed by the firms on behalf of an anonymous minor, Amazon stores the voices so it can examine them later and use them for commercial profit.
Quoting the lawsuit: “It takes no great leap of imagination to be concerned that Amazon is developing voiceprints for millions of children that could allow the company (and potentially governments) to track a child’s use of Alexa-enabled devices in multiple locations and match those uses with a vast level of detail about the child’s life, ranging from private questions they have asked Alexa to the products they have used in their home.”
The company is “allowing workers around the world to listen to the voice recordings and creating voiceprints of the users, which can be used to identify them when they speak to other devices in other locations,” the lawsuit reads.
In a statement to the BBC, a spokeswoman said: “Amazon has a longstanding commitment to preserving the trust of our customers and their families, and we have strict measures and protocols in place to protect their security and privacy.”
Commenting on the matter in a conversation with Yahoo Finance, Travis Lenkner, one of the plaintiffs’ attorneys, said: “The legal theory is very straightforward. These kids themselves never consented, if they even could. No one such as a parent ever consented on their behalf.”
“Amazon purports to obtain consent to record individuals who set up an Alexa-enabled device,” the complaint states. “But there is a large group of individuals who do not consent to be recorded when using an Alexa-enabled device and who use Alexa without any understanding or warning that Amazon is recording and voice printing them: children.”
“Every recording that is made of a child, by Amazon through the Alexa software in one of these nine states is ... a per se violation of the privacy laws of those states and carries statutory penalties along with it,” he added.
Delving further into the matter, Lenkner explained: “It builds voiceprints of individual users, so if a child uses an Alexa device in California, and then uses another one in Washington, Amazon theoretically knows it’s the same person.” The device creates a unique identity for each person based on their voice.
The fact that Amazon could overwrite the voice recordings without hindering the assistant's performance, yet chooses not to, makes matters worse, and the company is expected to provide more detailed answers on the issue soon.




Amazon's Alexa storing all the voice recordings





Amazon’s Alexa may delete your voice recordings, but it keeps the automatically produced transcripts in the company's cloud, according to reports.

According to a CNET report, voice commands spoken to the virtual assistant are supposed to be deleted from the server, but the company saves all the text logs. 

The company stores this data on its cloud servers, where users cannot delete it themselves, though Amazon says it is working to make the data inaccessible. 

"When a customer deletes a voice recording, we also delete the corresponding text transcript associated with their account from our main Alexa systems and many subsystems, and have work underway to delete it from remaining subsystems," an Amazon spokesperson said in an email.

Following the report's revelations, more than a dozen consumer advocacy groups plan to file a complaint against the company with the Federal Trade Commission.

The groups allege that the company is violating federal law by not seeking parental consent before collecting data on children through Echo devices. 

Goa DGP calls Alexa a spy

Goa Director General of Police (DGP) Muktesh Chander, speaking at a cybersecurity seminar on Thursday, 21 February, warned people against excessive use of Amazon's artificial intelligence assistant Alexa, saying such assistants act like spies, collecting private information, The Indian Express reported.

“And what Alexa does. All the time it is listening. Everything. Every word you are saying, Alexa is listening and passing it on to Google. (Chander then corrects himself and says Amazon)."

Chander, who is also a cybersecurity expert, was delivering a keynote address at a seminar on ‘Cyber Security for Industry’ in Panaji.

“Sounds.pk… PK are Pakistani sites. Why are they giving sounds free of cost?” Chander said, adding that the songs.pk website promotes a “compromised Chinese-made browser” to glean information from a user’s phone. “Has anybody tried downloading this songs.pk? All of a sudden if you are trying on mobile, one thing is bound to come up… UC browser. Have you heard of that? Because UC browser is… a Chinese browser. It is collecting all the information. So there is a hidden agenda,” Chander said.