AI Eavesdrops on Keystrokes with 95% Accuracy

An advanced artificial intelligence (AI) model recently demonstrated an unsettling ability to eavesdrop on keystrokes with an accuracy rate of 95%, sending shockwaves through the data security field. This new threat, detailed in research covered by notable media outlets, highlights how vulnerable private data remains in the digital age.

Cybersecurity researchers have developed a deep learning model that intercepts and interprets keystrokes by listening to the sound each key press makes. Using this audio-based technique, the model can translate acoustic signals into text with striking precision, leaving users exposed to unauthorized data access.
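The first stage of such a pipeline is splitting a recording into individual key presses. Below is a minimal sketch of energy-based segmentation using only NumPy; the amplitude threshold and window length are illustrative assumptions, not values from the research:

```python
import numpy as np

def isolate_keystrokes(audio, sample_rate, threshold=0.1, window_ms=100):
    """Split a recording into fixed-length windows around energy bursts.

    Each key press produces a short burst of acoustic energy; this
    detector finds samples whose amplitude exceeds a threshold and
    cuts out one fixed-length window per burst.
    """
    window = int(sample_rate * window_ms / 1000)
    peaks = np.flatnonzero(np.abs(audio) > threshold)
    segments, last_end = [], -1
    for p in peaks:
        if p <= last_end:           # still inside the previous window
            continue
        start = max(p - window // 4, 0)
        end = min(start + window, len(audio))
        segments.append(audio[start:end])
        last_end = end
    return segments

# Synthetic demo: one second of silence with two loud "clicks".
sr = 16_000
audio = np.zeros(sr)
audio[2000:2100] = 0.5    # first keystroke burst
audio[10000:10100] = 0.5  # second keystroke burst
print(len(isolate_keystrokes(audio, sr)))  # → 2
```

Real recordings would need a more robust onset detector, but the principle (thresholding on short-time energy) is the same.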

According to the findings published in the research, the AI model was tested in controlled environments where various individuals typed on a keyboard. The model successfully decoded the typed text with an accuracy of 95%. This raises significant concerns about the potential for cybercriminals to exploit this technology for malicious purposes, such as stealing passwords, sensitive documents, and other confidential information.

Dr. Amanda Martinez, a prominent cybersecurity researcher, expressed her apprehensions about this breakthrough: "The ability of AI to listen to keystrokes opens up a new avenue for cyberattacks. It not only underscores the need for robust encryption and multi-factor authentication but also highlights the urgency to develop countermeasures against such invasive techniques."

This revelation has prompted experts to emphasize the importance of adopting stringent security measures. Regularly updating and patching software, using encrypted communication channels, and employing acoustic noise generators are some strategies recommended to mitigate the risks associated with this novel threat.

While this technology demonstrates the potential of deep learning and AI innovation, it also underscores the importance of striking a balance between advancement and security. The cybersecurity sector must stay ahead of potential risks and vulnerabilities as AI develops.

It is the responsibility of individuals, corporations, and governments to work together to bolster their defenses against new hazards as the digital landscape changes. The discovery that an AI model can listen in on keystrokes is a sobering reminder that the pursuit of technological innovation requires constant vigilance to protect the confidentiality of sensitive data.


With 95% Accuracy, New Acoustic Attack Can Steal Data from Keystrokes


Researchers at UK universities have recently developed a deep learning model designed to extract information from keyboard keystrokes recorded with a microphone, achieving 95% accuracy.

The prediction accuracy decreased to 93% when Zoom recordings were used to train the sound classification algorithm, a result that is still remarkably high and a record for that medium.

Such an attack has a significantly adverse impact on the users’ data security since it is capable of exposing users' passwords, conversations, messages, and other sensitive information to nefarious outsiders.

Compared with other side-channel attacks, which require specific conditions and are subject to data-rate and distance constraints, these acoustic attacks are easier to mount because of the prevalence of devices equipped with high-quality microphones.

This makes sound-based side-channel attacks achievable and far more hazardous than previously thought, especially given the rapid advances in machine learning.

Listening to Keystrokes

The attack begins by recording keystrokes on the victim's keyboard, since that audio is what the prediction algorithm works on. Recording can be done via a nearby microphone or through the microphone on the target's phone, which may have been compromised by malware.

Alternatively, keystrokes can be recorded over a Zoom call, in which a rogue meeting participant correlates the messages typed by the target with the audio recording of that person.

The researchers acquired training data by pressing 36 keys on a modern MacBook Pro, 25 times each, and recording the sound produced by each press.

The spectrogram images were used to train the image classifier CoAtNet, and it took some trial and error with the epoch count, learning rate, and data-splitting parameters to achieve the best prediction accuracy.
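The spectrogram step can be sketched with a plain NumPy short-time Fourier transform. The frame size and hop length below are illustrative choices, not the paper's parameters; the resulting 2-D log-magnitude array is the kind of "image" an image classifier such as CoAtNet would consume:

```python
import numpy as np

def log_spectrogram(segment, n_fft=256, hop=64):
    """Turn a keystroke audio segment into a log-magnitude spectrogram.

    Frames the signal, applies a Hann window, takes the FFT of each
    frame, and stacks the log magnitudes into a 2-D array with
    frequency on one axis and time on the other.
    """
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(segment) - n_fft + 1, hop):
        frame = segment[start:start + n_fft] * window
        spectrum = np.abs(np.fft.rfft(frame))
        frames.append(np.log1p(spectrum))
    return np.stack(frames, axis=1)   # shape: (freq bins, time frames)

# A 100 ms segment at 16 kHz yields a small spectrogram "image".
sr = 16_000
segment = np.random.default_rng(0).normal(size=sr // 10)  # 1600 samples
spec = log_spectrogram(segment)
print(spec.shape)  # → (129, 22)
```

In practice a library such as librosa or torchaudio would compute mel-scaled spectrograms, but the underlying transform is the same.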

The researchers' tests used the same laptop (whose keyboard design has appeared in all Apple laptops over the past two years), an iPhone 13 mini positioned 17 cm from the target, and Zoom.

The CoAtNet classifier achieved 95% accuracy on the smartphone recordings and 93% on the content captured via Zoom. Skype, on the other hand, produced a comparatively lower accuracy of 91.7%.

Possible Security Measures

To protect against such side-channel attacks, users are advised to alter their typing style or to generate passwords from randomized keys.
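A randomized password can be generated with Python's standard `secrets` module. Random characters deny the attacker the linguistic regularities that could otherwise help correct occasional misclassified keystrokes; the length and alphabet below are illustrative:

```python
import secrets
import string

def random_password(length=16):
    """Generate a password using a cryptographically secure RNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(len(random_password()))  # → 16
```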

Another safety measure is using software that plays back fake keystroke sounds or white noise, or software-based keystroke audio filters.

Moreover, since the attack model proved highly effective even against a very quiet keyboard, adding sound dampeners to mechanical keyboards or switching to membrane keyboards is unlikely to help.

Finally, using password managers to avoid typing sensitive information manually, and using biometric authentication wherever possible, also help mitigate the risk.