Here’s an intriguing security scenario for you. Imagine you use your right hand to unlock your iPhone or tablet and you wear your fitness tracker or smartwatch on your right wrist. Now imagine that someone who really wants to get access to information on your mobile devices is somehow able to use the motion-sensing data in your wrist device to see what PIN you use to unlock your mobile devices. This scenario can actually happen, according to this research paper “Friend or Foe?: Your Wearable Devices Reveal Your Personal PIN.” Specifically:
In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user’s fine-grained hand movements, which enable attackers to reproduce the trajectories of the user’s hand and further to recover the secret key entries.
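The core idea behind "reproducing trajectories" is dead reckoning: integrating acceleration once gives velocity, and integrating again gives position. Here's a minimal sketch of that principle (not the paper's actual algorithm; the sample rate and data are hypothetical, and a real attack would also have to handle gravity, sensor bias, and drift):

```python
def displacement_from_accel(samples, dt):
    """Dead-reckon displacement from wrist accelerometer samples.

    samples: list of (ax, ay, az) acceleration tuples in m/s^2,
    already gravity-compensated; dt: seconds between samples.
    Returns the final (x, y, z) displacement in metres.
    """
    vel = [0.0, 0.0, 0.0]
    pos = [0.0, 0.0, 0.0]
    for a in samples:
        for i in range(3):
            vel[i] += a[i] * dt   # integrate acceleration -> velocity
            pos[i] += vel[i] * dt  # integrate velocity -> position
    return tuple(pos)

# A constant 1 m/s^2 push along x for 1 s, sampled at 100 Hz,
# dead-reckons to roughly 0.5 m of hand travel:
d = displacement_from_accel([(1.0, 0.0, 0.0)] * 100, 0.01)
```

Even this toy version shows why mm-level resolution is plausible: at a 100 Hz sample rate, each integration step covers a fraction of a millimetre of hand motion.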
So basically, not only can sensor data detect when you’re being a couch potato, it can leak your secrets. Thanks a lot, Fitbit!
Don’t tear the wearable from your wrist just yet. Instead, let’s assess the likelihood of such an attack based on its scale, difficulty and consequences.
For sensor data to be used to deduce the PIN, it must either be extracted from the wearable or intercepted as it is transmitted off the device for legitimate reasons. The second path is unlikely: raw sensor data is generally not sent off the device, but analyzed locally, with only calculated quantities, e.g., step counts, transmitted. The first path implies compromising the wearable itself through something like malware, and then sending the sensor data to the attacker. Malware could be distributed to a large number of users, allowing sensor data to be collected, and attackers could then correlate specific sensor data with a particular user. This seems non-trivial, but it is necessary if the data is to be used to unlock phones or tablets.
A more likely scenario would be a targeted attack where a particular user is chosen. For this, both of the following need to be true:
- The wearable can be compromised (so sensor data can be collected).
- The hacker has access to the phone or tablet (so sensor data can be applied).
But if the attacker has physical access to the phone, there are easier ways to extract the PIN, such as shoulder surfing or reading the smudge pattern left by oily fingerprints, both of which are far simpler than also installing malware on a separate wearable.
Moreover, while unlocking the phone with the stolen PIN will give the hacker access to sensitive information on that phone, it’s unlikely to enable access to sensitive applications accessed from that phone. This is because any sensitive application (not Facebook) likely mandates a short session time, so that when it’s launched, there will be an authentication prompt — one that the hacker armed only with the PIN will fail.
The unlocked phone could, however, serve as a second factor for an application session initiated from some other device. But the burden of the first password authentication remains the same for the hacker. The challenge actually gets worse, since the attack now requires:
- Knowledge of the user’s password
- Ability to compromise the wearable
- Physical access to the mobile device used for 2FA
Additionally, authentication systems are increasingly sensitive to the context in which a device is used to access applications, not just the mere possession of an unlocked phone, or even the password. For example, just having the device may be insufficient if it’s being used from an anomalous location, or if the operations being performed are inconsistent with the valid user’s history.
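Such context-sensitive checks are often implemented as risk scoring: each anomalous signal adds weight, and crossing a threshold triggers step-up authentication. A hypothetical illustration (the signals, weights, and threshold are all invented; a real system would use many more signals, often with a learned model):

```python
# Invented risk weights for a few contextual signals.
RISK_WEIGHTS = {
    "new_location": 2,       # request from a place the user never visits
    "unusual_operation": 3,  # action inconsistent with the user's history
    "new_device": 2,         # hardware never seen for this account
    "impossible_travel": 4,  # too far from the last login, too soon
}

def risk_score(ctx):
    """Sum the weights of all risk signals flagged in `ctx` (a dict of booleans)."""
    return sum(w for sig, w in RISK_WEIGHTS.items() if ctx.get(sig))

def require_step_up(ctx, threshold=3):
    """Demand additional authentication when the risk score crosses the threshold."""
    return risk_score(ctx) >= threshold

# A stolen PIN used from an anomalous location for an unusual operation
# trips the threshold, so possession of the unlocked phone is not enough:
require_step_up({"new_location": True, "unusual_operation": True})
```

The point for our attacker: even a perfectly recovered PIN fails when the surrounding context of the access attempt looks wrong.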
And of course, the hack requires that the mechanism for unlocking the phone involves some physical movement by the valid user, e.g., the hand moving as the PIN is entered. The growing capabilities of phones and tablets for biometric authentication — either for local unlock or, via the FIDO Alliance specifications, for server authentication — would completely mitigate the attack. Applying my finger to the Touch ID sensor on my iPhone provides no useful movement data that could be used to retroactively determine the template.
Even if not particularly viable, the hack is interesting because it highlights the risk of data from a user’s activities and actions, collected by an IoT thing for a valid application (e.g., steps), being used for nefarious purposes. Imagine a patient’s EKG, collected by an implanted sensor for analysis of a heart condition, being used as a biometric to impersonate that user in an authentication. Ultimately, we mitigate this risk by ensuring that 1) such attacks cannot be achieved at large scale (as a stolen database of passwords enables), and 2) compromise of one such factor is insufficient to impersonate the valid user.
In conclusion, this attack should not give you much cause for concern. That said, if your fitness tracker was an unwelcome gift from a spouse sending not-so-subtle hints like, “Wow, Jane’s husband sure looks great” and “Those jeans used to be looser on you,” it might provide you a plausible justification for letting it sit uncharged in your bedside table where it belongs.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.