Researchers Come Up With Better Idea To Prevent AirTag Stalking (arstechnica.com)
An anonymous reader quotes a report from Ars Technica: Apple's AirTags are meant to help you effortlessly find your keys or track your luggage. But the same features that make them easy to deploy and inconspicuous in your daily life have also allowed them to be abused as a sinister tracking tool that domestic abusers and criminals can use to stalk their targets. Over the past year, Apple has taken protective steps to notify iPhone and Android users if an AirTag is in their vicinity for a significant amount of time without the presence of its owner's iPhone, which could indicate that an AirTag has been planted to secretly track their location. Apple hasn't said exactly how long this time interval is, but to create the much-needed alert system, Apple made some crucial changes to the location privacy design the company originally developed a few years ago for its "Find My" device tracking feature. Researchers from Johns Hopkins University and the University of California, San Diego, say, though, that they've developed (PDF) a cryptographic scheme to bridge the gap -- prioritizing detection of potentially malicious AirTags while also preserving maximum privacy for AirTag users. [...]
The solution [Johns Hopkins cryptographer Matt Green] and his fellow researchers came up with leans on two established areas of cryptography, which the group worked to implement in a streamlined, efficient way so the system could reasonably run in the background on mobile devices without being disruptive. The first element is "secret sharing," which allows the creation of systems that can't reveal anything about a "secret" unless enough separate puzzle pieces come together; once the conditions are right, the system can reconstruct the secret. In the case of AirTags, the "secret" is the true, static identity of the device underlying the public identifier that frequently changes for privacy purposes. Secret sharing was conceptually useful because it let the researchers build a mechanism in which a device like a smartphone can determine that it is being followed by an AirTag with a constantly rotating public identifier only if it receives enough of a certain type of ping over time. At that point, the suspicious AirTag's anonymity falls away, and the system can determine that the tag has been in close proximity for a concerning amount of time.
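To make the "enough puzzle pieces reconstruct the secret" idea concrete, here is a minimal sketch of Shamir-style secret sharing over a small prime field. This is the classic textbook construction, not the researchers' actual scheme; the field size, threshold, and share values below are illustrative assumptions.

```python
# Shamir secret sharing sketch: any k of n shares reconstruct the secret;
# fewer than k reveal nothing. Illustrative parameters only.
import random

P = 2**13 - 1  # small prime field (8191), far too small for real use

def make_shares(secret, k, n):
    """Hide `secret` in a random degree-(k-1) polynomial; shares are
    evaluations of that polynomial at x = 1..n (never at x = 0)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term,
    which is the secret. Needs at least k distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(1234, k=3, n=5)
assert reconstruct(shares[:3]) == 1234   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 1234  # a different 3 also work
```

In the AirTag setting, the rotating broadcasts would play the role of shares: a phone that overhears only a few learns nothing, while one followed long enough accumulates the threshold needed to recover the tag's stable identity.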
Green notes, though, that a limitation of secret-sharing algorithms is that they aren't very good at sorting and parsing inputs when deluged with puzzle pieces from many different puzzles -- exactly the scenario that occurs in the real world, where AirTags and Find My devices constantly encounter one another. With this in mind, the researchers employed a second concept known as "error correction coding," which is specifically designed to sort signal from noise and preserve the durability of signals even if they acquire some errors or corruptions. "Secret sharing and error correction coding have a lot of overlap," Green says. "The trick was to find a way to implement it all that would be fast, and where a phone would be able to reassemble all the puzzle pieces when needed while all of this is running quietly in the background." The researchers published (PDF) their first paper in September and submitted it to Apple. More recently, they notified the industry consortium about the proposal.
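As a toy illustration of what "preserving a signal despite errors" means, here is a Hamming(7,4) encoder/decoder, one of the simplest error-correcting codes: it corrects any single flipped bit in a 7-bit codeword. This is a teaching example only; practical systems (and schemes with the secret-sharing overlap Green describes) typically use far more powerful polynomial codes such as Reed-Solomon.

```python
# Hamming(7,4) sketch: 4 data bits + 3 parity bits; any single bit
# flip is located and corrected via the parity "syndrome".

def encode(d):
    """Encode 4 data bits into a 7-bit codeword (parity at positions 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def decode(c):
    """Recover the 4 data bits, correcting at most one flipped bit."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3   # syndrome = 1-based position of the error
    c = list(c)
    if pos:
        c[pos - 1] ^= 1          # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

word = encode([1, 0, 1, 1])
word[3] ^= 1                      # corrupt one bit "in transit"
assert decode(word) == [1, 0, 1, 1]
```

The connection Green points to: Shamir shares are evaluations of a polynomial, and Reed-Solomon codewords are too, so machinery that reconstructs a polynomial from noisy or mixed-up evaluations can also pick the right "puzzle pieces" out of a flood of unrelated ones.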