
Stop Siri, Google & Alexa from Stealing Audio Files for Unauthorized Usage

There are several ways to stop devices from accessing your data, according to USA Today. Some involve physically covering cameras and microphones, which is easiest to do on laptops and desktop computers.

The evolution of search and technology will rely on individuals speaking to computers more fluidly to complete tasks. Along the way, users need to protect their privacy, and that process begins with the products they use today. You might want to stop these assistants from listening, or at least forbid them from exploiting such information. Because transcription is imperfect, an assistant can also mishear an instruction and unintentionally carry it out, such as sending an odd message to one of your contacts.

How to turn off Siri on iPhone

It requires a few steps to deactivate Siri on an iPhone. Here is what you must do when you want to entirely deactivate Siri:
  1. Open the iPhone's Settings app.
  2. Tap 'Siri & Search.'
  3. Turn off the toggles for 'Listen for Hey Siri' and 'Press Home for Siri.'
If you deactivate Siri, you won't be able to summon it by pressing the home button or saying 'Hey Siri.' You also won't be able to use Siri to perform actions like making calls, sending text messages, and creating reminders.

Amazon

The real issue is Amazon employees listening to your recordings. Here's how to stop it:
  1. Launch the Alexa app on your device and select the More menu option.
  2. Choose Settings > Alexa Privacy.
  3. Pick Manage Your Alexa Data.
  4. Deactivate the toggles next to "Help Alexa" and "Use messages to enhance transcriptions."
For added privacy in some circumstances, you can switch off the Echo's microphone entirely by pressing the microphone button on top of the device.

Smartphone

If you are uncomfortable with Android having access to private recordings, your best option is to turn off Google Assistant entirely, because you can't really choose what is sent and saved.

Here's how to disable the "OK Google" wake command:
  1. Launch the Google app on your mobile device.
  2. Tap the icon for your profile photo in the top section.
  3. Select General under Settings > Google Assistant.
  4. To disable Google Assistant, slide the switch next to it to the left.

When it was discovered that Google Assistant and Amazon Alexa were recording random voice snippets in their early days, criticism erupted. Some firmly gripped their conspiratorial hats, claiming this was a brand-new dystopian tool for keeping an eye on millions of people. The likelier explanation is that the assistants misheard their wake cue and began listening for commands. These systems are not fault-tolerant, and they still make blunders.

Some of the major digital firms that might use intelligent chatbots like ChatGPT are Amazon, Google, and Apple. They could complement the current solutions or be included in future versions of Alexa, Google Assistant, and Siri. 




Potential Wiretapping Bugs Found in Google Home Speakers

 


A security researcher recently received a $107,500 bug bounty for identifying security issues in Google Home smart speakers. The issues could be exploited to install a backdoor into the software and spy on users remotely.

In a technical write-up published earlier this week, a researcher who goes by the name Matt explained that the flaws could be exploited by attackers within wireless range of the device to install a 'backdoor' account on it, allowing them to access its microphone feed remotely, send commands over the internet, and make arbitrary HTTP requests within the victim's local area network.

Such malicious requests could expose the Wi-Fi password and give the adversary direct access to other devices on the same network, enabling further attacks. Google remediated the issues in April 2021, following responsible disclosure on January 8, 2021.

At its core, the problem lies in how Google Home's software architecture can be abused to add a rogue Google account to a target's home automation system, which in turn facilitates the theft of valuable data.

The researcher has outlined an attack chain in which a threat actor seeks to eavesdrop on a victim. The goal is to convince the victim to install a malicious Android app; when the app detects a Google Home device on the network, it issues a stealthy HTTP request to link the attacker's account to the victim's device.

It has also been reported that an attacker can force a Google Home device into "setup mode", causing it to create its own open Wi-Fi network, by staging a Wi-Fi de-authentication attack to disconnect it from the network.

Upon connecting to the device's setup network, the threat actor can request information such as the device name, cloud_device_id, and certificate, and use these details to link their own account to the victim's device.
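This device-information request is a plain local HTTP call. A minimal sketch of what the write-up describes, assuming the device's local setup API lives at `/setup/eureka_info` on port 8008 (the port, path, and all sample values below are illustrative assumptions, not an endorsed interface):

```python
import json

# Assumed port and path for the local setup API; real devices may
# differ by firmware version.
SETUP_PORT = 8008
INFO_PATH = "/setup/eureka_info"

def info_url(device_ip: str) -> str:
    """URL an attacker on the device's open setup network would query."""
    return f"http://{device_ip}:{SETUP_PORT}{INFO_PATH}?params=name,device_info"

def extract_link_fields(raw_json: str) -> dict:
    """Pull out the fields the write-up says are needed to link the
    device to an account: name, cloud_device_id, and certificate."""
    data = json.loads(raw_json)
    return {k: data.get(k) for k in ("name", "cloud_device_id", "certificate")}

# Fabricated example response, shaped like the fields named in the write-up:
sample = json.dumps({
    "name": "Living Room speaker",
    "cloud_device_id": "ABC123",
    "certificate": "-----BEGIN CERTIFICATE-----...",
})
print(extract_link_fields(sample)["cloud_device_id"])  # ABC123
```

The point is how little the linking step needs: three fields from an unauthenticated local endpoint, reachable by anyone who joins the open setup network.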

Regardless of the attack sequence used, a successful link lets the adversary abuse the routines built into Google Home: turning the device's volume down to zero and then calling a specific phone number, effectively spying on the victim through the device's microphone at any given time.

According to Matt, the only thing the victim might notice is that the device's LEDs turn solid blue, which they would probably attribute to a firmware update. During a call the LEDs do not pulse as they do when the device is listening, so there is no indication that the microphone is live.

The attacker can also extend the attack to make arbitrary HTTP requests inside the victim's network, and may even be able to read files or introduce malicious changes that take effect on the linked device after a reboot.

Voice-activated devices have been used for quite some time to spy on potential targets without being detected and to covertly snoop on them. 

The Light Commands technique, released by a group of academics in November, exploits a vulnerability in MEMS microphones: a remote attacker can use light instead of sound to inject inaudible, invisible commands into popular voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri.

Alexa Skills can Easily Bypass Vetting Process

 

Researchers have uncovered gaps in Amazon's skill-vetting process for the Alexa voice assistant ecosystem that could allow a threat actor to publish a deceptive skill under an arbitrary developer name and even make backend code changes after approval to trick users into surrendering sensitive data. The findings were presented on Wednesday at the Network and Distributed System Security Symposium (NDSS) by a group of academics from Ruhr-Universität Bochum and North Carolina State University, who examined 90,194 skills available in seven countries: the US, the UK, Australia, Canada, Germany, Japan, and France.

 “While skills expand Alexa’s capabilities and functionalities, it also creates new security and privacy risks,” said a group of researchers from North Carolina State University, the Ruhr-University Bochum and Google, in a research paper. 

Amazon Alexa permits third-party developers to build additional functionality for devices such as Echo smart speakers by creating "skills" that run on top of the voice assistant, making it simple for users to start a conversation with a skill and complete a particular task. Chief among the findings is the concern that a user can activate the wrong skill, which can have serious consequences if the triggered skill was designed with malicious intent.

Given that the actual criteria Amazon uses to auto-enable a particular skill among several with the same invocation name remain unclear, the researchers cautioned that it is possible to activate the wrong skill and that an adversary can get away with publishing skills under well-known company names. "This primarily happens because Amazon currently does not employ any automated approach to detect infringements for the use of third-party trademarks, and depends on manual vetting to catch such malevolent attempts which are prone to human error," the researchers explained. "As a result users might become exposed to phishing attacks launched by an attacker."

Worse still, an attacker can make backend code changes after a skill's approval to coax a user into revealing sensitive data such as phone numbers and addresses by triggering a dormant intent.
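Why post-approval changes matter is easy to see in the Alexa Skills Kit response format, which is just JSON returned by the developer's own server. A minimal sketch (the skill wording is hypothetical): the vetted behavior and the malicious behavior differ only in server-side text, which is never re-vetted after certification.

```python
def build_response(text: str, end_session: bool = False) -> dict:
    """Build a minimal Alexa Skills Kit JSON response."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": text},
            "shouldEndSession": end_session,
        },
    }

# What the certification team saw during vetting (hypothetical skill):
vetted = build_response("Here is today's horoscope...", end_session=True)

# What the backend can start returning after approval -- a dormant
# intent phishing for personal data (wording is fabricated):
malicious = build_response(
    "To personalize your horoscope, please tell me your phone number."
)
```

Since Amazon's vetting inspects the skill's behavior at submission time, nothing in this flow prevents the server from swapping one reply for the other later.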

Wi-Fi Bug in Amazon Echo and Kindle Devices Assists Attackers in Stealing Sensitive Data


There is no denying that Amazon Echo and Kindle devices are extremely popular and are used by a large number of people around the world. The news therefore comes as a huge shock to those millions of users: researchers from ESET Smart Home found that Amazon Echo and Amazon Kindle devices are vulnerable to KRACK attacks.

The KRACK attacks, discovered and published by two Belgian researchers in October 2017, are based on weaknesses in the WPA2 protocol used in modern Wi-Fi devices.

The weakness is exploited using key reinstallation attacks when the attacker is within range of the victim's network; successful exploitation enables attackers to steal sensitive details such as credit card numbers, passwords, chat messages, emails, and photos.

Researchers tested the first generation of Amazon Echo devices with the original Amazon Alexa, as well as the eighth generation of the Amazon Kindle, and concluded that they are vulnerable to two KRACK vulnerabilities.

Using KRACK scripts, ESET researchers were able to "replicate the reinstallation of the pairwise encryption key (PTK-TK) in the four-way handshake (CVE-2017-13077) and reinstallation of the group key (GTK) in the four-way handshake (CVE-2017-13078)."

As per the ESET team, the vulnerabilities enable attackers to:

  • Replay old packets to cause a DoS attack or interference.
  • Decrypt the data transmitted.
  • Forge packets.
  • Steal sensitive details, such as passwords or session cookies.
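The decryption risk above comes down to keystream reuse: a reinstalled key resets the packet nonce, so two packets end up encrypted with the same keystream. A toy sketch of why that breaks confidentiality (SHAKE-128 stands in for the real CCMP cipher, and the key and packets are fabricated):

```python
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    # Toy keystream generator: derive bytes from key + nonce.
    return hashlib.shake_128(key + nonce.to_bytes(8, "big")).digest(length)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"pairwise-transient-key"
p1 = b"GET /login?user=alice HTTP/1.1"
p2 = b"password=hunter2 sessionid=42!"

# Key reinstallation resets the nonce: both packets use nonce 0,
# hence the same keystream.
c1 = xor(p1, keystream(key, 0, len(p1)))
c2 = xor(p2, keystream(key, 0, len(p2)))

# An eavesdropper XORs the two ciphertexts; the keystream cancels,
# leaving p1 XOR p2. Knowing (or guessing) p1 reveals p2 outright.
recovered_p2 = xor(xor(c1, c2), p1)
print(recovered_p2)  # b'password=hunter2 sessionid=42!'
```

This is exactly why nonce reuse is fatal for stream-style ciphers: no key recovery is needed to read traffic once two packets share a keystream.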

Amazon acknowledged the issue as soon as the vulnerabilities were reported to it, on October 23rd, 2018, and distributed a new version of wpa_supplicant, the software application responsible for correct authentication to the Wi-Fi network.

Google’s Language Experts Listen to Users’ Private Recordings





The technology superpower Google recently confirmed that its employees listen to customers' personal audio recordings from Google Home smart speakers.


Language experts analyze "snippets" of users' recordings, allegedly to improve voice recognition quality.


Those recordings are used to further develop the Google Assistant's artificial intelligence system, which is used in Android phones and Google Home smart speakers.


According to sources, the company said in a statement that its experts do transcribe a small number of anonymized recordings.


An investigation was launched after it was found that some Dutch audio data had been leaked.


The technology giant also said that transcribing a small set of queries is critical to developing its AI products, and that it collaborates with language experts around the world to do so.


It was reportedly one of these reviewers who leaked the Dutch audio data, thereby violating Google's security policies.


Google says only about 0.2% of all audio snippets are reviewed by the language experts, and those snippets are never associated with user accounts.



The investigation launched by the Security and Privacy Response teams is expected to conclude soon, and measures are being taken to prevent a recurrence.


According to a report, Amazon similarly listens to customer recordings made through Alexa, its voice-based assistant.


Amazon later admitted to the practice, saying that the number of recordings involved was small and that they were essential for training the AI's responses.


There is one provision for users, though: they can always delete the recordings linked to their account through the Alexa companion app.


Amazon Sued Over Illegal Retention of Child Recordings Through Alexa



Amazon is being sued by a Massachusetts woman for unlawfully recording and storing the voices of children with its Alexa-enabled devices; the lawsuit, filed in Seattle this week, claims that Amazon is building a massive database by harvesting the private details of millions of Americans via voice recordings.
Children, as a matter of fact, don’t fully understand the “potentially invasive uses of big data by a company the size of Amazon” and they “use Alexa without any understanding or warning that Amazon is recording and voice-printing them”, according to the lawsuit.
Criticizing Amazon’s methods, the two law firms, Quinn Emanuel Urquhart & Sullivan and Keller Lenkner, allege that the company chooses to retain the actual voice recordings despite having the option to encrypt users' voices. According to the complaint, filed on behalf of an anonymous minor, Amazon stores the voices to examine them later and deploy them for commercial profit.
Quoting from the lawsuit: “It takes no great leap of imagination to be concerned that Amazon is developing voiceprints for millions of children that could allow the company (and potentially governments) to track a child’s use of Alexa-enabled devices in multiple locations and match those uses with a vast level of detail about the child’s life, ranging from private questions they have asked Alexa to the products they have used in their home.”
The company is “allowing workers around the world to listen to the voice recordings and creating voiceprints of the users, which can be used to identify them when they speak to other devices in other locations,” the lawsuit reads.
Referenced from the statements given by a spokeswoman to BBC, “Amazon has a longstanding commitment to preserving the trust of our customers and their families, and we have strict measures and protocols in place to protect their security and privacy.”
Commenting on the matter in a conversation with Yahoo Finance, Travis Lenkner, one of the plaintiffs’ attorneys, said, “The legal theory is very straightforward. These kids themselves never consented, if they even could. No one such as a parent ever consented on their behalf.”
“Amazon purports to obtain consent to record individuals who set up an Alexa-enabled device,” the complaint states. “But there is a large group of individuals who do not consent to be recorded when using an Alexa-enabled device and who use Alexa without any understanding or warning that Amazon is recording and voice printing them: children.”
“Every recording that is made of a child, by Amazon through the Alexa software in one of these nine states is ... a per se violation of the privacy laws of those states and carries statutory penalties along with it,”
Delving further into the matter, Lenkner explains that Alexa “builds voiceprints of individual users,” so “if a child uses an Alexa device in California, and then uses another one in Washington, Amazon theoretically knows it’s the same person.” The device creates a unique identity for each person based on their voice.
The fact that Amazon could overwrite the voice recordings, yet chose not to even though doing so would not hinder the assistant's performance, makes matters worse; the company is expected to answer these claims in greater detail soon.




Amazon's Alexa storing all the voice recordings





Amazon’s Alexa may delete your voice recordings but it keeps the automatically produced transcripts in the company's cloud, according to reports.

According to a CNET report, all voice commands spoken to the virtual assistant are supposed to be deleted from the server, but the company saves the text logs.

The company stores this data on its cloud servers, where users cannot delete it. Meanwhile, the company claims it is working to make the data inaccessible.

"When a customer deletes a voice recording, we also delete the corresponding text transcript associated with their account from our main Alexa systems and many subsystems, and have work underway to delete it from remaining subsystems," an Amazon spokesperson said in an email.

After the report's revelations, more than a dozen consumer advocacy groups planned to file a complaint against the company with the Federal Trade Commission.

They allege the company is violating federal law, as it does not seek parental consent before collecting data on children through Echo devices.

Goa DGP calls Alexa a spy

Goa Director General of Police (DGP) Muktesh Chander, speaking at a cybersecurity seminar on Thursday, 21 February, warned people against excessive use of Amazon's artificial-intelligence assistant Alexa, saying such assistants act like spies and collect private information, The Indian Express reported.

“And what Alexa does. All the time it is listening. Everything. Every word you are saying, Alexa is listening and passing it on to Google. (Chander then corrects himself and says Amazon)."

Chander, who is also a cybersecurity expert, was delivering a keynote address at a seminar on ‘Cyber Security for Industry’ in Panaji.

“Sounds.pk… PK are Pakistani sites. Why are they giving sounds free of cost?” Chander said, adding that the songs.pk website promotes a “compromised Chinese-made browser” to glean information from a user’s phone. “Has anybody tried downloading this songs.pk? All of a sudden if you are trying on mobile, one thing is bound to come up… UC browser. Have you heard of that? Because UC browser is… a Chinese browser. It is collecting all the information. So there is a hidden agenda,” Chander said.