New acoustic attack steals data from keystrokes with 95% accuracy
A team of researchers from British universities has trained a deep learning model that can steal data from keyboard keystrokes recorded using a microphone with an accuracy of 95%.
You have to train it on a per-device + per-room basis, and you shouldn't be giving everything access to your microphone anyway.
I was just thinking, streamers might have to be careful actually — you can often both see and hear when they’re typing, so if you correlated the two you could train a key audio → key press mapping model. And then if they type a password for something, even if it’s off-screen from their stream, the audio might clue you in on what they’re typing.
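A rough sketch of what that correlation step could look like, assuming you've already cut the VOD audio into equal-length clips around each keystroke and labeled them from the on-screen typing. Every file name here is made up, and this uses a plain classifier rather than the deep learning model from the paper:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def features(clip, rate):
    """Flattened log-spectrogram of one keystroke clip."""
    if clip.ndim > 1:
        clip = clip.mean(axis=1)              # mix stereo down to mono
    _, _, spec = spectrogram(clip, fs=rate, nperseg=256, noverlap=128)
    return np.log(spec + 1e-10).ravel()

# labels.csv is hypothetical; each line looks like "clips/press_0001.wav,a"
X, y = [], []
for line in open("labels.csv"):
    path, key = line.strip().split(",")
    rate, clip = wavfile.read(path)
    X.append(features(clip.astype(float), rate))
    y.append(key)

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), test_size=0.2)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Even a simple model like this would tell you pretty quickly whether one streamer's per-key sounds are separable at all.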
That could hypothetically be avoided by distorting the streamed audio just a tiny bit
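Something like this, as a very hand-wavy sketch; the parameters are guesses, and whether a little noise plus time jitter actually breaks a keystroke classifier is untested here:

```python
import numpy as np

def smudge(block, rate, noise_db=-45.0, max_jitter_ms=2.0):
    """Add faint white noise and a small random time offset to one outgoing audio block."""
    rms = np.sqrt(np.mean(block ** 2)) + 1e-12
    noise = np.random.randn(len(block)) * rms * 10 ** (noise_db / 20)
    shift = np.random.randint(0, int(rate * max_jitter_ms / 1000) + 1)
    return np.roll(block, shift) + noise

# e.g. run each 1024-sample block of the outgoing mic audio through smudge() before encoding
```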
Yeah. Or just use a password wallet.
Could be a fun honey pot.
laughs in custom multi-layer orthogonal layout with one-of-a-kind enclosure & artisan keycaps
A very widespread implication of this: if you're on a call with a bad actor, you're on speakerphone, and you enter your password while talking to them, they could potentially get that password or other sensitive information you typed.
Assuming it really is that accurate, a real-world attack could go something like this. Call someone and social-engineer them in a way that causes them to type their login credentials, payment information, whatever, into the proper place for them. They will likely do this without a second thought, because "well, I'm signing into the actual place that uses those credentials, not a link someone sent me, so it's all good! I even typed in the address myself, so I'm sure there's no URL trickery!" Then attempt to extract what they typed. Lots of people, especially when taking calls or voice conference meetings from their desk, prefer not to hold the phone to their ear or use a headset mic, and instead just use their normal laptop mic or an external desktop one. And most people stop talking when they're focused on typing, which makes it even easier. Hell, if you manage to reach, say, the IT/server department of a major company and play your cards right, you might even be able to catch them entering a root password for a remotely accessible system.
It's very convoluted, but yeah, it could work.
Tangentially related: did you know that it's technically also possible to reconstruct sound via smartphone accelerometers, and there are no restrictions on which apps can use them? Have fun with this info (:
Thanks, I hate it.
SpyApp is spying in the background
User thinks "why is my battery draining so fast?"
Opens battery settings
Oh, this app shouldn't be running right now
Restricts SpyApp's battery permissions
Are you saying that a cellphone accelerometer can be used as a microphone? That sounds… interesting. Do you have a source?
I'm not the person you're replying to, but if the accelerometer is sensitive enough, the vibration from the voice will be picked up by it.
Since the sounds we make when talking are periodic, it's probably easier to track that periodicity and reconstruct the sound from there.
This is all my (un)educated guess.
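To make that guess concrete: a phone accelerometer typically samples at only a few hundred Hz, so it can't capture full speech, but the fundamental of a voiced sound (roughly 85-255 Hz) can still show up as a clear spectral peak. A toy sketch with faked sensor data, since reading a real accelerometer is platform-specific:

```python
import numpy as np

rate = 500.0                                  # assumed accelerometer sampling rate (Hz)
t = np.arange(0, 1.0, 1 / rate)
f0 = 140.0                                    # pretend pitch of a voice vibrating the phone
accel = 0.005 * np.sin(2 * np.pi * f0 * t) + 0.005 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(accel * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / rate)
band = (freqs > 80) & (freqs < 255)           # typical voiced-speech pitch range
print("dominant periodicity:", freqs[band][np.argmax(spectrum[band])], "Hz")
```

Recovering intelligible speech from that is a much bigger leap, but the periodicity really is there.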
From the article:
The researchers gathered training data by pressing 36 keys on a modern MacBook Pro 25 times each and recording the sound produced by each press.
In their experiments, the researchers used the same laptop, whose keyboard has been used in all Apple laptops for the past two years, an iPhone 13 mini placed 17cm away from the target, and Zoom.
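If you wanted to reproduce that data-collection step, keypresses are short energy bursts, so slicing a session recording into per-press clips can be done with simple envelope thresholding. A rough sketch, with a made-up file name and guessed thresholds:

```python
import numpy as np
from scipy.io import wavfile

rate, audio = wavfile.read("keypress_session.wav")
audio = audio.astype(float)
if audio.ndim > 1:
    audio = audio.mean(axis=1)                # mix to mono

win = int(0.01 * rate)                        # 10 ms energy windows
energy = np.array([np.sum(audio[i:i + win] ** 2) for i in range(0, len(audio) - win, win)])
threshold = energy.mean() + 3 * energy.std()

hits = np.flatnonzero(energy > threshold)
onsets = hits[np.insert(np.diff(hits) > 5, 0, True)] * win    # merge windows from the same press
print(f"found {len(onsets)} candidate keystrokes")
clips = [audio[o : o + int(0.2 * rate)] for o in onsets]      # ~200 ms clip per press
```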
Now they should do this under real usage and see if they get anywhere close to 95% accuracy. Phones are usually in pockets, people listen to music, not everyone has a MacBook.
I think it will be difficult for the average person to use this attack effectively, but I think this will become some sort of government spy thing for sure.
I’m just going to play a keyboard ASMR video while I type. Problem solved.
Phreaking for the modern era.
It looks like they only tested one keyboard, from a MacBook. I'd be curious whether other keyboard styles are as susceptible to the attack. It also doesn't say how many people's typing they listened to. I know mine changes depending on my mood or how excited I am about something; I'm sure that would affect it.
My wife types with her fists when I’m trying to have Zoom meetings.
That "95%" figure has about as much credibility as a car's MPG rating, and comes from test conditions that are just as extremely specific.
Assuming this doesn't only work on English words, this is actually really terrifying.
I have to assume it could be modified to work on any language. You just have to know the keyboard layout for the language in question so you know what to listen for. Languages with a lot of accents, like French, might be slightly more complicated, but I seriously doubt it couldn't be done. I'm honestly not sure how the keyboard is set up for something like Chinese, with so very many characters, but again: if this can be done, that can be done with some dedication and know-how.
There are several different ways of inputting Chinese, but generally they all map 2~6 keystrokes to one or multiple Chinese characters, and then the user chooses one. I’d imagine it wouldn’t be much harder.
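Toy illustration of the point (not a real IME, just a few hand-picked entries): a pinyin-style input method is basically a lookup from the keystroke sequence to candidate characters, so an attacker who recovers the keystrokes already gets most of the text even without seeing which candidate was picked.

```python
# Hypothetical, tiny candidate table; real IMEs rank thousands of entries.
candidates = {
    "ni":    ["你", "尼", "泥"],
    "hao":   ["好", "号", "毫"],
    "nihao": ["你好"],
}

typed = "nihao"                              # keystroke sequence recovered from audio
print(candidates.get(typed, ["?"])[0])       # prints 你好 if the user took the first candidate
```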
Can we, one day, have research, a project, where DL, AI, LLMs (whatever the f*ck you call it) solve real and useful problems?
I swear, these techs are boring as f.