Comment by dharmab
2 years ago
An eavesdropper cannot see the content of your keystrokes, but (prior to this feature) they could see when each keystroke was sent. If you know the target's typing patterns, you could use that timing data to recover the content. You could collect the target's typing patterns by getting them to type into a website you control using a JavaScript-enabled browser, or from an audio recording of their typing. (Some online streamers have recently been hacked using AI models trained to steal their passwords from the sound of them typing on their keyboards.)
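To make the timing channel concrete, here is a minimal, purely hypothetical sketch: the intervals and the digraph timing model below are made-up numbers, not real data, but they show how packet timestamps alone can rank candidate inputs once you have a model of how long the target takes between specific key pairs.

```python
# Purely illustrative sketch of the timing side channel; the intervals
# and the digraph model below are made-up numbers, not real data.
# Observation: we only see *when* each keystroke packet was sent.
import statistics

# Inter-keystroke intervals (seconds) recovered from packet timestamps.
observed = [0.11, 0.24, 0.09, 0.21]

# Hypothetical per-target typing model: mean interval for each key pair.
digraph_model = {
    ("p", "a"): 0.12, ("a", "s"): 0.25, ("s", "s"): 0.08, ("s", "w"): 0.22,
    ("l", "e"): 0.18, ("e", "m"): 0.15, ("m", "o"): 0.19, ("o", "n"): 0.14,
}

def score(word: str) -> float:
    """Lower is better: mean squared error between the observed intervals
    and the intervals this target usually takes for the word's digraphs."""
    pairs = list(zip(word, word[1:]))
    if len(pairs) != len(observed):
        return float("inf")
    return statistics.mean(
        (digraph_model.get(p, 0.20) - o) ** 2 for p, o in zip(pairs, observed)
    )

candidates = ["passw", "lemon"]
print(sorted(candidates, key=score))  # 'passw' fits the observed timing better
```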
> Some online streamers have been hacked as of late using AI models trained to steal their passwords using the sounds of them typing on their keyboards
do you have any sources for that?
I've only seen this mentioned in recent research results, but no real-world exploitation reports.
https://www.bleepingcomputer.com/news/security/new-acoustic-...
Years ago, when I saw a paper on that topic, I tried recording my own keyboard and trained an ML model to classify keystrokes. I used an SVM, to give you an idea of how long ago this was.
I got to 90% accuracy extremely quickly. The "guessed" keystrokes had errors but they were close enough to tell exactly what I was typing.
If I could do that as an amateur, in a few hours of coding, with no advanced signal processing and the first SVM architecture I tried, it must be a relatively easy signal to learn and classify.
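A rough sketch of that kind of pipeline (the setup is illustrative rather than the original experiment, and synthetic audio stands in for real recordings so it runs standalone): compute a spectral fingerprint for each isolated keystroke clip and train a multi-class SVM to label which key was pressed.

```python
# Illustrative keystroke-audio classification sketch (assumed setup, not
# the original experiment). Synthetic clips stand in for real recordings.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
SAMPLE_RATE = 8000
WINDOW = 512             # samples per keystroke clip
KEYS = list("abcdefgh")  # pretend alphabet of 8 keys

def synthetic_keystroke(key_idx: int) -> np.ndarray:
    """Stand-in for a real recording: each key gets its own dominant
    'click' frequency plus noise. Real data would come from a microphone."""
    t = np.arange(WINDOW) / SAMPLE_RATE
    freq = 600 + 150 * key_idx
    return np.sin(2 * np.pi * freq * t) * np.hanning(WINDOW) + 0.3 * rng.standard_normal(WINDOW)

def features(clip: np.ndarray) -> np.ndarray:
    """Normalized magnitude spectrum of the clip -- a crude spectral fingerprint."""
    spectrum = np.abs(np.fft.rfft(clip))
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

X = np.array([features(synthetic_keystroke(i)) for i in range(len(KEYS)) for _ in range(60)])
y = np.array([i for i in range(len(KEYS)) for _ in range(60)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=10.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```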
Also, if the goal was to guess a password you wouldn't necessarily need it to be really accurate. Just narrowing the search space could get you close enough that a brute force attack could do the rest.
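A back-of-the-envelope illustration of that point, with assumed numbers: even an imperfect classifier that narrows each keystroke to a handful of candidate keys collapses the brute-force search space dramatically.

```python
# Assumed numbers for illustration: a 10-character password over
# printable ASCII, with a classifier that narrows each position to
# its top 3 candidate keys.
PASSWORD_LEN = 10
FULL_ALPHABET = 95       # printable ASCII characters
TOP_K = 3                # candidates per position after classification

blind = FULL_ALPHABET ** PASSWORD_LEN
narrowed = TOP_K ** PASSWORD_LEN

print(f"blind brute force:  {blind:.3e} guesses")     # ~5.99e19
print(f"with top-3 per key: {narrowed} guesses")      # 59049
print(f"reduction factor:   {blind / narrowed:.3e}")
```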
https://github.com/ggerganov/kbd-audio
It's quite good at decoding my own typing, although I am quite an aggressive typist and that may help. I haven't tried it on others, though (honest, officer).
I gave that a bunch of tries over the last half hour, with longer and longer training data, and it never got better than random chance.
I didn't find an article about actual hacks carried out with that technique, but here's an HN discussion [1] from this month about a paper on the topic.
From that discussion, it sounds like you need to train on data captured from the actual target: the same physical keyboard, in the same physical space, with the same typist.
Pretty wild, even with those specific conditions. I'd be very interested to know whether people have actually been attacked in the wild with this, and whether attackers were able to generalize it to just the make and model of a keyboard, or gather enough training data from a stream.
[1]: https://news.ycombinator.com/item?id=37013704