
The Sounds of Your Keyboard Can Be Heard by AI, Stealing Passwords and Private Information

by admin

Our webcams and microphones were the first targets, but now even our keyboards are at risk. Simply by typing, computer users risk having their private messages, passwords, and credit card details stolen.

According to a new study published by a group of academics from British universities, artificial intelligence can recognise keystrokes from their sound alone, with an accuracy rate of 95%. Given how quickly AI is evolving, these attacks are only likely to become more sophisticated.

How the Attack Works:

The research paper explores the area of "acoustic side channel attacks," in which a malicious party uses a secondary device, such as a mobile phone placed next to a laptop or an unmuted microphone on a video conferencing platform like Zoom, to record the sound of the victim's typing.

The recorded audio is then analysed by a deep-learning model trained to recognise the distinct acoustic pattern of each keystroke, revealing the content of the typed text.
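To make the idea concrete, here is a toy sketch of classifying a keystroke from its sound. The paper's actual attack trains a deep-learning model on spectrograms of real recordings; this sketch instead fakes each key as a tone with a distinct dominant frequency (a purely hypothetical stand-in for a key's acoustic signature) and classifies by comparing spectral power at each candidate frequency. Every key, frequency, and parameter below is an illustrative assumption, not a value from the study.

```python
import math
import random

SAMPLE_RATE = 16_000  # samples per second
DURATION = 0.05       # 50 ms per keystroke
# Hypothetical per-key "acoustic signatures": each key rings at a
# different dominant frequency (a toy stand-in for the subtle spectral
# differences a real deep-learning model would learn).
KEY_FREQS = {"a": 900.0, "s": 1100.0, "d": 1300.0}

def record_keystroke(key, noise=0.1, rng=random.Random(0)):
    """Simulate the microphone capture of one keystroke."""
    n = int(SAMPLE_RATE * DURATION)
    f = KEY_FREQS[key]
    return [math.sin(2 * math.pi * f * t / SAMPLE_RATE)
            + noise * (rng.random() - 0.5) for t in range(n)]

def goertzel_power(samples, freq):
    """Spectral power at a single frequency (Goertzel algorithm)."""
    coeff = 2 * math.cos(2 * math.pi * freq / SAMPLE_RATE)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def classify(samples):
    """Guess which key was pressed from its recorded sound."""
    return max(KEY_FREQS, key=lambda k: goertzel_power(samples, KEY_FREQS[k]))
```

In this toy setting, `classify(record_keystroke("a"))` recovers `"a"`; the real attack faces overlapping, noisy signatures and needs a trained neural network rather than a frequency comparison.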

With this approach, the researchers identified keystrokes made on a MacBook Pro just by listening through a nearby mobile phone, with a striking success rate of 95%. Accuracy remained alarmingly high (93%) when the typing was captured from a recorded Zoom call.

How to Prevent These Attacks:

One way to defend against such attacks is to use stronger passwords that mix upper and lower case, making it harder for the AI to recognise every keystroke correctly. Also bear in mind that complete words in a password are easier to decipher than a random assortment of numbers, letters, and other symbols, so avoid them.
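The advice above, a random assortment of mixed-case letters, digits, and symbols, can be followed with a few lines of Python's standard library. This is a minimal sketch, not a full password-manager replacement:

```python
import secrets
import string

# Mixed-case letters, digits, and punctuation: no dictionary words or
# predictable structure for an acoustic model to latch onto.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=16):
    # secrets draws from a cryptographically secure random source,
    # unlike the predictable `random` module.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

A call like `random_password()` yields a fresh 16-character string each time, with no word structure to exploit.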

Additionally, you can enable further safeguards such as two-factor authentication and biometric verification. However, the paper warns that as AI develops, it may eventually be able to defeat a variety of other security prompts as well.
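Two-factor authentication blunts this attack because a stolen password alone is not enough: the attacker would also need a short-lived code. As an illustration, the standard time-based one-time password scheme (TOTP, RFC 6238) can be sketched with Python's standard library; the base32 secret used in the test below is the published RFC test value, not a real key.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, interval=30, now=None):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of whole intervals since the epoch.
    counter = int((time.time() if now is None else now) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at a digest-derived offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code changes every 30 seconds, an eavesdropper who reconstructs your typed password from audio still cannot log in without also capturing a current code in that window.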

