
Security Researcher Publishes How-To Guide To Crack Android Full Disk Encryption (thehackernews.com)

An anonymous reader writes: Google first enabled Full Disk Encryption (FDE) by default with Android 5.0 Lollipop, in an effort to prevent criminals and government agencies from gaining unauthorized access to a user's data. FDE encrypts all data on the device before it is written to disk, using a key protected by the user's authentication code; once encrypted, the data can only be decrypted when the user enters his or her password. However, security researcher Gal Beniamini has discovered weaknesses in this protection. He published a step-by-step guide showing how to defeat the encryption on Android devices powered by Qualcomm Snapdragon processors, with the exploit's source code posted on GitHub. The disk encryption key on these devices is not derived from the user's password alone: Android binds it to a hardware-backed 2048-bit RSA key (KeyMaster) that is supposed to never leave TrustZone, the protected execution mode Snapdragon chips use for critical functions like encryption and biometric scanning. Beniamini discovered that it is possible to exploit a security flaw and retrieve those keys from TrustZone. Qualcomm runs a small kernel in TrustZone to provide a Trusted Execution Environment known as the Qualcomm Secure Execution Environment (QSEE), which allows small apps to run inside QSEE, isolated from the main Android OS. Beniamini details how an attacker can exploit an Android kernel security flaw to load his own QSEE app inside this secure environment, then abuse a privilege-escalation flaw to hijack the complete QSEE space, including the keys generated for full disk encryption. The researcher also notes that Qualcomm or OEMs could comply with government or law-enforcement demands to break the FDE: "Since the key is available to TrustZone, Qualcomm and OEMs [Original Equipment Manufacturers] could simply create and sign a TrustZone image which extracts the KeyMaster keys and flash it to the target device," Beniamini wrote. "This would allow law enforcement to easily brute force the FDE password off the device using the leaked keys."
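
Beniamini's write-up describes the key-derivation chain on these devices as, roughly: scrypt over the password, a signature with the TrustZone-bound KeyMaster RSA key, then scrypt again. Below is a minimal Python sketch of that structure; the keymaster_sign stand-in and all parameters are illustrative assumptions for the sketch, not Qualcomm's actual code.

```python
import hashlib

def keymaster_sign(data: bytes) -> bytes:
    # Stand-in for the hardware step: really an RSA signature computed
    # with the 2048-bit KeyMaster key that never leaves TrustZone.
    # Faked here with a keyed hash so the sketch runs.
    return hashlib.sha256(b"device-bound-secret" + data).digest()

def derive_kek(password: bytes, salt: bytes) -> bytes:
    # 1. Stretch the user's password.
    ik = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1,
                        maxmem=2**26, dklen=32)
    # 2. Bind the result to the device. Normally this step cannot be
    #    performed off-device; by leaking the KeyMaster key, the exploit
    #    removes exactly that protection.
    sig = keymaster_sign(ik)
    # 3. Stretch again to get the key that unwraps the disk key.
    return hashlib.scrypt(sig, salt=salt, n=2**14, r=8, p=1,
                          maxmem=2**26, dklen=32)

print(derive_kek(b"1234", b"\x00" * 16).hex())
```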
  • by NotInHere ( 3654617 ) on Friday July 01, 2016 @07:50PM (#52430777)

    From reading TFA, I conclude that you still need unlocked access to the phone? So if somebody gets hold of your turned-off phone, they can't use it.

    • by AmiMoJo ( 196126 )

      Yep, it's an attack on a running system. The encryption key is derived from your password and stored in secure memory for use. If someone can get hold of your unlocked device they can potentially access that memory.

      Similar to attacks on encryption on running PCs etc, only harder because PCs don't usually have secure memory to hold the key in.

  • by Anonymous Coward
    Don't use TrustZone; store the stuff as a proper LUKS volume and stop using "trusted" proprietary crap. People complain about how these hypervisor sideband processors are not open or auditable and are always dismissed as paranoid fools by naysayers (it can't POSSIBLY happen!), then this kind of thing happens and the hand-wavers desperately try to do some damage control so they won't be wrong somehow. It's the same story with Intel ME. Can we just admit already that having supervisory-processor-within-a-processor stuff is inherently insecure?
    • "Trusted" stuff cannot be trusted. It's called "trusted" because its maker trusts it to keep you from doing anything its maker didn't intend you to.

      Why the fuck should I trust something like that?

      • That's a nice, specific little clarification of the larger life rule: never trust anyone who says "trust me".

      • by swillden ( 191260 ) <shawn-ds@willden.org> on Friday July 01, 2016 @08:43PM (#52430969) Journal

        "Trusted" stuff cannot be trusted. It's called "trusted" because its maker trusts it to keep you from doing anything its maker didn't intend you to.

        No, the only thing that makes it "trusted" is that it's small, and isolated. Those characteristics reduce its attack surface and reduce the number of bugs it has, on average.

        I'll grant that the primary purpose of TrustZone in Android devices, historically, has been DRM, which is absolutely something the maker doesn't want you to muck with. That's not the case with Keymaster. If you want to know what it does, there's a full open source reference implementation in AOSP. That's not the implementation used in Qualcomm devices; they wrote their own and it's closed, but it matches the reference implementation's behavior as closely as the engineers involved could make it. Some other devices do use the code from AOSP.

        • No, the only thing that makes it "trusted" is that it's small, and isolated. Those characteristics reduce its attack surface and reduce the number of bugs it has, on average.

          No, what makes it "trusted" is that ARM and/or the chip vendor repeatedly tell you it is in press releases. There have been several attacks on TrustZone devices which take advantage of the fact that it's often very poorly implemented, and much less secure than the non-TrustZone stuff. The classic example of this was the rooting of a Motorola phone which hacked the insecure TrustZone, then attacked the secure non-TrustZone stuff from inside the "trusted" environment. It was the presence of TrustZone that made the attack possible in the first place.

          • No, what makes it "trusted" is that ARM and/or the chip vendor repeatedly tell you it is in press releases. There have been several attacks on TrustZone devices which take advantage of the fact that it's often very poorly implemented, and much less secure than the non-TrustZone stuff.

            Utter nonsense. Go take a look at the CVE reports of vulnerabilities in the Linux kernel, and compare them to the CVE reports of TrustZone OS vulnerabilities. The kernel is two orders of magnitude worse. That's not because it's bad, or TZ code is good... I'll readily concede that kernel code is almost certainly higher in quality than closed-source TZ OSes. But it's also vastly larger and more complex, with so much more attack surface.

      • You shouldn't. In this case "trusted" means that the processor/ram/rom/code can be trusted to not be altered by YOU. Only the corporations that paid the cpu manufacturer for access to it can 'trust' it...

    • Don't use TrustZone; store the stuff as a proper LUKS volume and stop using "trusted" proprietary crap

      TrustZone isn't proprietary; it's part of the standard ARM feature set. In many cases (not all[*]) the software running in it is. As for using a LUKS volume, the problem with that is that it's no more secure than the Linux kernel, which is popped by another security researcher every week. The kernel is so large and complex that hardening it is really difficult.

      Can we just admit already that having supervisory-processor-within-a-processor stuff is inherently insecure?

      No, because it's not.

      The purpose of having TrustZone isn't that it's inherently more secure. It's still just software, and subject to all of the same kinds of bugs as any other software. Its value is that it's small and isolated, which keeps the attack surface down.

      • by sjames ( 1099 )

        There is an important distinction though. If I do not control the code running in that processor (or trust zone), it *IS* inherently insecure for me. If I do control it, it may improve security or it might be neutral WRT security.

        • There is an important distinction though. If I do not control the code running in that processor (or trust zone), it *IS* inherently insecure for me. If I do control it, it may improve security or it might be neutral WRT security.

          There's lots of code that you do not control running in every computing device you use. If you insist on that exceedingly strict definition, you should just avoid them entirely. Also, your assertion that if you do control it it's either neutral or positive is patently false. It is entirely possible to exercise control in ways that decrease your security. It's pretty common, for example, for people who root their Android devices to believe that they're improving their device security by taking control over it, when in practice they've often weakened it.

          • by sjames ( 1099 )

            There's not actually a lot on the machines I am using. That doesn't mean that I have written or even audited every line myself, just that it is open for me to do so.

            Of course, in embedded devices the code is less open for my audit or modification, but those devices also don't have much exposure to my personal data.

            • You're using some very unusual machines, then. Or you're mistaken about how many binary firmware blobs there are.
              • by sjames ( 1099 )

                There was even less when I was working on coreboot.

                Keep in mind, the BIOS is a nasty pile of hair but it IS open to audit.

                The ME, OTOH loads an encrypted and signed blob.

                 • Just to be clear, I don't like or support the presence of opaque binaries in systems. One of the things I'm trying to achieve in the Android ecosystem is to move OEMs to using open source code in TrustZone. There is tremendous security value in having a trusted execution environment which is small and isolated, because making full consumer OSes secure is simply impossible, but having the code in that TEE be closed doesn't add anything to the security proposition, and arguably reduces it. That's why Google has published an open source reference implementation in AOSP and is pushing OEMs to adopt it.

                  • by sjames ( 1099 )

                    In a cellphone, the radio is the worst of the bunch. It has the ability to snoop on the main CPU and controls the boot process. That's why rooting many Android phones involves re-flashing the radio. An actual free and open radio probably isn't permitted by the FCC and other nations' equivalents around the world, but I would like to see it demoted to just another component with a narrow interface to the main system. Preferably it would require the CPU to feed it digitized audio rather than having a direct connection to the microphone.

                    • In a cellphone, the radio is the worst of the bunch. It has the ability to snoop on the main CPU and controls the boot process.

                      I've heard that said by many people, but as far as I can tell by examining the architecture of the devices I've worked on, it's not true. The baseband radio does have DMA access, like most every peripheral, but it doesn't have any special sort of access to the CPU.

                    • by sjames ( 1099 )

                      If there isn't an IOMMU, DMA access is enough to overwrite any code in the system, and so control the CPU.

                    • There is an IOMMU. Among other things, TrustZone relies on it.
                    • by sjames ( 1099 )

                      I should be more specific: an IOMMU exclusively controlled by the CPU. In this case, the secure side necessarily controls it (or it wouldn't be secure), not the user side. This leaves the user side at the secure side's mercy. A narrow interface would leave the radio secure, but would also make the user side secure from the radio.

                      I should be more specific: an IOMMU exclusively controlled by the CPU. In this case, the secure side necessarily controls it (or it wouldn't be secure), not the user side. This leaves the user side at the secure side's mercy. A narrow interface would leave the radio secure, but would also make the user side secure from the radio.

                      The IOMMU is controlled by the CPU, by which I mean the AP (application processor). The non-secure mode software (EL0 and EL1) controls most of its configuration, including allocating DMA pages for the baseband, though it can't override the configuration provided by the secure mode (EL3). That's my understanding, anyway. Most of my work is at a slightly higher level.
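
For illustration, here is a toy model of the arrangement being described (purely illustrative; programming a real SMMU looks nothing like this): the secure world carves out pages the normal world cannot map, and DMA outside mapped pages is refused.

```python
class ToyIOMMU:
    """Toy model of an IOMMU: a device may only DMA into pages that were
    explicitly mapped for it. Pages locked by the secure world cannot be
    mapped by the normal world at all."""

    def __init__(self):
        self.allowed = set()   # pages the normal world mapped for the device
        self.locked = set()    # pages the secure world carved out entirely

    def secure_lock(self, page):
        self.locked.add(page)
        self.allowed.discard(page)

    def map_for_device(self, page):
        if page in self.locked:
            raise PermissionError("secure-world page, mapping refused")
        self.allowed.add(page)

    def dma_write(self, page, data):
        if page not in self.allowed:
            raise PermissionError(f"DMA to unmapped page {page:#x} blocked")
        print(f"device wrote {data!r} to page {page:#x}")

iommu = ToyIOMMU()
iommu.secure_lock(0x10)           # TrustZone memory
iommu.map_for_device(0x20)        # DMA buffer allocated by the kernel
iommu.dma_write(0x20, b"packet")  # allowed
try:
    iommu.dma_write(0x10, b"evil")  # baseband can't scribble on TZ memory
except PermissionError as e:
    print(e)
```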

                    • by sjames ( 1099 )

                      The difference may just be the phones we're dealing with. Do the ones you've worked with/on do locked bootloader/network? The ones I've seen do, and use the processor in the radio to lock it down. It may be the ones you know of simply don't do that. It could also be a matter of the generation of phone. My knowledge is a couple years out of date at this point.

                      There is still the question of the radio though. Does it have independent access to the microphone or does it depend on the AP for that? Can it access memory on its own, or only through the IOMMU?

    • by mlts ( 1038732 )

      IIRC, CyanogenMod doesn't touch TZ... it just uses dm-crypt, prompts for the passphrase that unlocks the /data volume key, then goes from there.

      • CyanogenMod still uses TZ/QSEE on phones where that is used in the original firmware.

        However, that never reduces security, only increases it. If your passphrase is long enough not to be at risk of a brute force, then this attack does not affect you.

    • The problem is not "trusting" the proprietary crap, the problem is trusting it to improve security in any measurable way.

      Android full disk encryption is just as secure as LUKS (in fact, under the hood it's dm-crypt just like LUKS; the key derivation is just different). This doesn't break the FDE. You still need the passphrase. What this does break is the "you need the hardware to access the FDE, and we're going to impose additional non-provable restrictions so that you can keep using your 4-digit PIN and it still can't be brute forced off the device" guarantee.

  • He published a step-by-step guide showing how to defeat the encryption on Android devices powered by Qualcomm Snapdragon processors.

    I do not think Qualcomm or Google will be impressed.

    My advice to him: "Get a lawyer, quick!"

  • An exploit of the Widevine DRM subsystem running in TrustZone. The second exploit is possible because you don't control the keys to the kingdom.
  • by BitterOak ( 537666 ) on Friday July 01, 2016 @08:18PM (#52430859)
    I read the article and it looks like this exploit merely allows offline brute forcing of the password. Now, of course, many people choose short passwords on their portable devices, but if you choose a password with sufficient entropy (at least 100 bits, or better yet, 128) you should be safe from this attack. Note: that would require a fairly long and random alphanumeric password.
  • For devices that get regular updates the Qualcomm TrustZone bug has already been fixed. It went out in the January 2016 updates: https://source.android.com/sec... [android.com]. Check the patchlevel date on your device.

    Of course the other part, the fact that someone could compel Qualcomm to sign TrustZone software images that intentionally compromise security, is still the case, and likely will be for some time. That's a threat model that hadn't been considered important until recently, and it's one that's not easy to mitigate. That's not restricted to Qualcomm, either. In every device with a trusted execution environment, there's some organization that holds the signing keys needed to load firmware in that environment, generally (but not always) the SoC vendor.
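
As an aside, the patch level is exposed as a system property on Android 6.0 and later; a small sketch that reads it over adb (assumes adb is on PATH and a device is connected with USB debugging enabled):

```python
import subprocess

# Read the security patch level (Android 6.0+) from a connected device.
result = subprocess.run(
    ["adb", "shell", "getprop", "ro.build.version.security_patch"],
    capture_output=True, text=True, check=True,
)
patch_level = result.stdout.strip()  # e.g. "2016-01-01"
print("patch level:", patch_level)
print("includes the TrustZone fix:", patch_level >= "2016-01-01")
```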

    • by hawguy ( 1600213 )

      For devices that get regular updates the Qualcomm TrustZone bug has already been fixed. It went out in the January 2016 updates: https://source.android.com/sec... [android.com]. Check the patchlevel date on your device.

      Of course the other part, the fact that someone could compel Qualcomm to sign TrustZone software images that intentionally compromise security, is still the case, and likely will be for some time. That's a threat model that hadn't been considered important until recently, and it's one that's not easy to mitigate. That's not restricted to Qualcomm, either. In every device with a trusted execution environment, there's some organization that holds the signing keys needed to load firmware in that environment, generally (but not always) the SoC vendor.

      Can't this hole be closed by designing the trusted firmware so it requires that the passcode be entered before it will accept a firmware update?

      • Can't this hole be closed by designing the trusted firmware so it requires that the passcode be entered before it will accept a firmware update?

        Yes, but it's tricky, particularly because it's not that hard to open the device and write an updated firmware blob to flash directly, bypassing any software-based gatekeeping checks. There are ways to address that, but they raise subtle issues in turn, and ways to address those, but they raise more issues. At root, the existence of Replay Protected Memory Blocks in eMMC controllers makes it feasible to have small amounts of non-volatile storage that an attacker probably can't write to, which I believe makes this sort of gatekeeping workable.
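
A toy sketch of how such a gate could be structured; the RPMB counter, passcode check, and flash write are all simulated stand-ins here, not a real vendor API:

```python
# Toy sketch of passcode-gated, rollback-protected firmware updates.
# Real RPMB access is an authenticated command to the eMMC controller.
from dataclasses import dataclass

RPMB_COUNTER = 1          # simulated monotonic counter stored in RPMB
STORED_PASSCODE = "1234"  # simulated user credential check

@dataclass
class Image:
    version: int
    signed: bool

def try_firmware_update(image: Image, passcode: str) -> None:
    global RPMB_COUNTER
    if passcode != STORED_PASSCODE:
        raise PermissionError("refused: user passcode required")
    if not image.signed:
        raise PermissionError("refused: bad signature")
    if image.version <= RPMB_COUNTER:
        raise PermissionError("refused: rollback attempt")
    print(f"flashing firmware v{image.version}")
    RPMB_COUNTER = image.version  # attacker can't rewind the real counter

try_firmware_update(Image(version=2, signed=True), "1234")      # succeeds
try:
    try_firmware_update(Image(version=1, signed=True), "1234")  # rollback
except PermissionError as e:
    print(e)
```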

    • Oh yeah Grandma: "check the patchlevel of your device". Android is such a clusterfuck.
      • Oh yeah Grandma: "check the patchlevel of your device".

        Well, the right answer is to buy a device that has a published support policy that includes monthly security patches. Then you don't have to check the patchlevel (which actually isn't tough, even for Grandma: It's just a date and she's perfectly capable of understanding it), because you know it's good. Maybe someday consumers will demand that of all Android devices. At present, they don't, and manufacturers give them what they want.

        • Re: (Score:1, Offtopic)

          Oh yeah grandma: "buy a device that has a published support policy". Christ. No wonder Apple is selling so many iPhones.
          • Oh yeah grandma: "buy a device that has a published support policy". Christ. No wonder Apple is selling so many iPhones.

            Well, she knows to look at the warranty when she buys a car.

            • Well, she knows to look at the warranty when she buys a car.

              Does she? And if she does, will she? Or will she just ask them a couple of questions about it like how long is it and what does it cover, and then move on? Legal contracts are difficult for anyone to understand; is your grandmother a lawyer?

              To be fair, Apple's legalese is plenty long.

          • by Anonymous Coward

            No, it's not JUST buy a device with a patch policy you trust, it is buy a device from a vendor you trust, with a patch policy you trust, and also go and do a bunch of research into the vendors to figure out which ones you can trust, and way MORE research to figure out the patch policy you can trust.

             Or just buy an iPhone and skip the degree in this field that you don't have any interest in.

  • All encryption can be eventually cracked. More at 11.
  • The original Android implementation of dm-crypt generates a -random- key in Android 5.0, which is the key used for encryption. It then encrypts that key with the user's passphrase/PIN/whatever. This implementation is pretty secure, because other than active RAM attacks, when the phone starts up, it will ask for the decryption key for /data, and no key, no decryption.

    From what I have looked at with CyanogenMod, this ROM uses the "old fashioned way". No TrustZone, no fancy footwork with keys... just a relatively simple prompt for the passphrase at the phone startup so /data can be mounted and used.
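
That "old fashioned way" is essentially the LUKS model: a random master key encrypts the volume, and the passphrase only wraps the master key. A minimal sketch (requires the third-party cryptography package; the cipher and KDF choices here are illustrative, not CyanogenMod's actual code):

```python
# dm-crypt-style key wrapping: a random master key encrypts the disk;
# the user's passphrase only wraps that master key.
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = os.urandom(32)        # random DEK: actually encrypts /data
salt, nonce = os.urandom(16), os.urandom(12)

def kek_from(passphrase: bytes) -> bytes:
    # Stretch the passphrase into a key-encryption key.
    return hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1,
                          maxmem=2**26, dklen=32)

wrapped = AESGCM(kek_from(b"correct horse")).encrypt(nonce, master_key, None)

# At boot: only the right passphrase unwraps the master key.
unwrapped = AESGCM(kek_from(b"correct horse")).decrypt(nonce, wrapped, None)
assert unwrapped == master_key
```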

    • "CyanogenMod (...) uses the "old fashioned way". No TrustZone, no fancy footwork with keys... just a relatively simple prompt for the passphrase at the phone startup so /data can be mounted and used."

      I'm using CM13 (Android 6/Marshmallow), which uses FDE, yet can boot without a password prompt, just like stock Android 6. It will present the lock screen though. Leaving aside the security of a booted and connected phone with only the lock screen as protection, the only way to do this seems to be to use some kind of default password for the encryption.

  • Seems like the solution would be to allow users to wipe the manufacturer key and install their own. Although they probably don't want to do this with all their signed bootloaders and code.
  • >" The researcher also said Qualcomm or OEMs can comply with government or law enforcement agencies to break the FDE: "Since the key is available to TrustZone, Qualcomm and OEMs could simply create and sign a TrustZone image which extracts the KeyMaster keys and flash it to the target device," Beniamini wrote. "This would allow law enforcement to easily brute force the FDE password off the device using the leaked keys." "

    Which also means ANYONE it was leaked to can do the same thing: criminals, spies, malicious insiders.

  • This was made public knowledge before the FBI could classify it as a national security secret.

  • From the article in The Hacker News, "Once getting hold of this key, an attacker could perform a brute-force attack to grab the user password, PIN or lock, cracking Android's full disk encryption."

    Why not just get the user password by brute force attack to begin with since it will unlock the encrypted files for you? I know I'm missing something. If you go through this process only to wind up having to mount a brute force attack, what is gained?

    • Why not just get the user password by brute force attack to begin with since it will unlock the encrypted files for you?

      Because the user's password won't let you unlock the encrypted files, at least not on its own. The password works in conjunction with a much stronger key randomly generated by the device itself. To decrypt the files you need both the device key and the user's password. The key is normally only available to software running inside the TrustZone, which limits the number and frequency of authentication attempts. This exploit allows an attacker to retrieve the device-specific key from the TrustZone, but they still have to brute force the user's password. The difference is that the brute force can now run off the device, at full speed, with no limit on the number of attempts.
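
Concretely, once the device-bound key has been extracted, the brute force reduces to an unthrottled off-device loop like this toy version (derive_kek is a fast stand-in for the real scrypt/KeyMaster chain):

```python
import hashlib

SALT = b"\x00" * 16

def derive_kek(pin: str, leaked_device_key: bytes) -> bytes:
    # Stand-in for the real chain (scrypt -> KeyMaster sign -> scrypt);
    # with the TrustZone key leaked, every step can run off-device.
    return hashlib.pbkdf2_hmac("sha256", pin.encode() + leaked_device_key,
                               SALT, 1000)

leaked = b"device key extracted via the QSEE exploit"
target = derive_kek("4071", leaked)  # value the on-disk footer would match

# Nothing rate-limits an off-device loop; a 4-digit PIN falls in seconds.
for guess in (f"{i:04d}" for i in range(10000)):
    if derive_kek(guess, leaked) == target:
        print("PIN recovered:", guess)
        break
```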
