Today’s AI models are advanced enough to have a pretty troubling impact on privacy. A few examples I’ve collected recently...
Stealing data from keystrokes recorded over a Zoom call. Researchers trained a deep learning model to recover what was typed from the sound of keystrokes, reaching 95% accuracy from a nearby phone's microphone and 93% when the audio was recorded over Zoom. https://www.bleepingcomputer.com/news/security/new-acoustic-attack-steals-data-from-keystrokes-with-95-percent-accuracy/
Decoding secret keys from card readers using videos of their power LED. This method extracts a device’s secret keys from a distance by filming its power LED, as demonstrated against a smart card reader and a Samsung Galaxy S8. https://www.nassiben.com/video-based-crypta
Extracting user locations by analysing SMS timings. This paper shows that, after training a machine learning model on SMS delivery timings, a sender can determine the recipient’s location with up to 96% accuracy across countries. https://arxiv.org/abs/2306.07695
WiFi routers used to produce 3D images of humans. Researchers at Carnegie Mellon have found a new way to locate and map people in a room using ordinary WiFi transmitters. Or, x-ray vision into people’s homes. https://vpnoverview.com/news/wifi-routers-used-to-produce-3d-images-of-humans/
Connecting CCTV footage with social media posts. A creepy example of how easy it is to identify people using security camera footage and social media posts. https://driesdepoorter.be/thefollower/
Using stylometry to find people’s alternate accounts. This Hacker News user demonstrates how easily you can link people’s anonymous accounts just by comparing writing styles. https://news.ycombinator.com/item?id=33755016
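To get a feel for why this works, here is a toy sketch of stylometric comparison. This is not the method from the Hacker News thread, just a minimal illustration: represent each text as character trigram frequencies and pick the candidate account whose writing is most similar by cosine distance. Real stylometry systems use much richer features (function words, punctuation habits, sentence structure).

```python
# Toy stylometry sketch (illustrative only, not the linked user's method):
# fingerprint texts by character trigram frequencies, compare by cosine
# similarity, and pick the most similar candidate account.
from collections import Counter
from math import sqrt

def trigram_profile(text: str) -> Counter:
    """Frequency counts of lowercase character trigrams."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse frequency vectors."""
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def most_similar(known_text: str, candidates: dict[str, str]) -> str:
    """Return the candidate whose writing most resembles known_text."""
    profile = trigram_profile(known_text)
    return max(
        candidates,
        key=lambda name: cosine_similarity(profile, trigram_profile(candidates[name])),
    )
```

Even this crude fingerprint can separate distinctive writing styles when given enough text, which is exactly what makes scaled-up versions of the technique so effective at de-anonymisation.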
The common thread here seems to be AI making previously useless data useful. While these examples are all concerning, I’m sure there are many other datasets AI can make sense of in ways that will be overwhelmingly positive for us.