
Academics Claim Google Android 2FA Is Breakable (theregister.co.uk) 48

totalcaos writes: Attackers who control the [browser on the] PC of a user consuming Google services (Gmail, Google+, etc.) can surreptitiously push and activate apps on the user's mobile device, bypassing SMS-based two-factor authentication (2FA) via the phone. How Anywhere Computing Just Killed Your Phone-Based Two-Factor Authentication is a paper that explains the wider issues of phone-based 2FA. Herbert Bos, professor of systems and security at Vrije Universiteit Amsterdam, who co-authored the mobile security paper with two PhD students, disclosed the vulnerability to Google, but they "still [refuse] to fix it."


Comments Filter:
  • duplicate (Score:5, Informative)

    by Anonymous Coward on Monday April 11, 2016 @03:47AM (#51882855)
  • by Todd Knarr ( 15451 ) on Monday April 11, 2016 @04:05AM (#51882905) Homepage

    Fix should be simple: when an app's installed remotely from the browser, queue the installation and put up a notification asking the user to confirm the installation. Installation doesn't proceed until the user responds affirmatively to the prompt (if they respond negatively, the installation's de-queued). The authors are right, though, that the more tightly you integrate the browser-based services with the phone the less you can depend on the separation of the two for security. What's different here is that it's showing that tight integration between Google's services and the phone affects vendors other than Google.
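The queue-and-confirm fix described above can be sketched as a tiny state machine. This is a hypothetical illustration, not a real Android API; the class and method names are mine:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InstallQueue:
    """Sketch of the confirm-before-install scheme: a remote install
    only queues the package; nothing installs until the user confirms."""
    pending: List[str] = field(default_factory=list)
    installed: List[str] = field(default_factory=list)

    def request(self, package: str) -> None:
        # A browser-initiated install queues the package and (in a real
        # system) raises a notification; nothing is installed yet.
        self.pending.append(package)

    def confirm(self, package: str) -> None:
        # Installation proceeds only after the user responds affirmatively.
        self.pending.remove(package)
        self.installed.append(package)

    def deny(self, package: str) -> None:
        # A negative response de-queues the request.
        self.pending.remove(package)
```

The point is that the compromised browser can still enqueue requests, but the second device becomes an active participant rather than a silent install target.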

    • Fix should be simple: when an app's installed remotely from the browser, queue the installation and put up a notification asking the user to confirm the installation.

      No, the fix already exists. If you're worried about someone accessing your google play account from a web browser, turn on 2-factor authentication. 2-factor authentication is still there, it just needs to be done through the web browser instead of the phone.

Personally, I need to be able to control/access my phone remotely if it ever gets lost/stolen. So keep it the way it is, and be sure to have 2-factor authentication turned on already, because doing a halfway measure like what you're suggesting will only dec

I think you miss the premise of the paper. The PC that you use to access Google services is under the control of an attacker. Therefore, even if 2FA is turned on, it will not actually be requested when the account is accessed from the computer in question.
        • by chill ( 34294 ) on Monday April 11, 2016 @06:50AM (#51883225) Journal

If your main PC, the one used to control your Google accounts (including permissions), is under the control of bad actors, you're screwed either way.

          They could always just turn off 2FA from the PC.

This paper is akin to bitching that if someone got hold of my phone in my home, where location-based trust keeps the phone unlocked, a bad actor could install stuff.

          Duh!

          It is next to impossible to ensure security if the bad guys have control of the actual hardware.

          P.S. -- You misunderstood the premise of the person you were replying to. They are saying turn on 2FA for accessing your Google accounts ON THE PC. That way you need control of not only the PC, but the phone as well to essentially get control. Perfect? No. A much bigger hurdle? Yes.

          • by Anonymous Coward

Still nope. The 2FA at risk covers all of the supplemental accounts that might use it, i.e. banking or brokerage accounts, shopping accounts, online services, etc. A clear path from your email (which may or may not be well protected) that lets you sidestep any other account's 2FA is the keys to the castle. You can do anything you want to that person: drain their bank account, take over websites or social media accounts, other email accounts, anything. This exploit is in the wild and being used

          • by mlts ( 1038732 )

            How is this different from someone who manages to get a RAT on a victim's computer and control iTunes, installing/buying/removing apps at will? iOS is pretty much "vulnerable" to the same thing.

            • by chill ( 34294 )

              It is essentially the same thing. Unless you absolutely trust nothing, this is a possibility. The problem comes because there is a limited amount of trust based on things like other device ownership, location and paired association.

I like the convenience I get from my phone not locking when I'm at home, or when it is paired via Bluetooth to my car radio. I made the conscious choice to weaken the security and not require manual unlocking in those situations.

              Because of my home PC being the main control point

            • by tlhIngan ( 30335 )

              How is this different from someone who manages to get a RAT on a victim's computer and control iTunes, installing/buying/removing apps at will? iOS is pretty much "vulnerable" to the same thing.

Except you can't do that unless the phone is present - either on the same WiFi (with WiFi sync enabled) or via cable.

              You can buy apps, but they cannot be remotely installed. The only way is if the user is syncing with iTunes to then wait for the phone to be connected and then alter the sync settings to sync your app

        • by msauve ( 701917 ) on Monday April 11, 2016 @07:03AM (#51883267)
I think you missed the point of the GP - Google also supports 2FA for the PC web browser, which requires you to have the phone in order to complete the sign-on. The authors say they "assume that the attacker already has control over the victim's PC," but that's not quite right. They assume that they not only have the PC, but a running browser which the user left logged into Google services. The paper just glosses over this.

Simply having access to someone's PC and Google credentials is not enough if they have turned 2FA on for the web; they would also need the phone to complete the sign-on on the PC. If the attacker controls both factors (name/password and phone), it is not a failure of 2FA; that's exactly how 2FA is intended to work. And, if you're going to base a claim on such a poor premise, why not simply premise it on the attacker having the phone itself, already logged into Google services, which makes the whole thing even easier?

          This is a very poor paper. Having started with that faulty premise, they go on through a bunch of stuff which simply doesn't matter. Perhaps I'll write a similar paper about how water is wet. I'll also point out that the paper also claims a similar vulnerability for Apple's iOS, which the summary ignores. That seems pointedly biased.
          • If they have access to the PC, all they have to do is wait for the user to log in to Google themselves, then use that session for their purposes. No need at all to wait for an unattended browser.

          • The authors say they "assume that the attacker already has control over the victim's PC," but that's not right. They assume that they not only have the PC, but a running browser which the user left logged into Google services

            If the user is using much Google stuff, then that's pretty easy. Just wait until they log into gmail and open an invisible tab with the same cookie. And if that asks for authentication then pop up a thing in the gmail tab saying 'for extra security, Google needs you to authenticate again'. Now you've compromised their Google account and you can install the 2FA trojans for everything else.

The idea is that it won't matter whether you have 2FA turned on or not: you've already authenticated to your Google account, and your compromised browser can then use its access to that account to push the malicious app to your phone and activate it. 2FA can't protect you from that, because the malicious code starts after you've handled all the steps of the authentication for it.
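The point above can be made concrete with a toy sketch of server-side session logic. Everything here is hypothetical (not Google's actual implementation); it only illustrates that the second factor is checked once, at login, and never re-requested for later actions performed under that session:

```python
# In-memory session store; a real service would use signed cookies etc.
sessions = {}

def login(user, password, otp_ok):
    # The second factor (otp_ok) is verified exactly once, at login time.
    if password == "correct" and otp_ok:
        token = f"session-for-{user}"
        sessions[token] = user
        return token
    return None

def push_install(token, package):
    # Later requests, like a remote app install, are authorized by the
    # session token alone; no second factor is asked for again.
    user = sessions.get(token)
    if user is None:
        return "denied"
    return f"installing {package} for {user}"
```

Once malware rides an already-authenticated session, `push_install` succeeds regardless of how strong the login-time 2FA was.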

    • Comment removed based on user account deletion
  • by Errol backfiring ( 1280012 ) on Monday April 11, 2016 @04:26AM (#51882951) Journal
    The second link is to a google doc, which is a possible attack vector according to the submission. Should I visit this link with my android phone? Or is someone really not thinking clearly?
  • by Anonymous Coward

    Is that the exploit?

  • by trawg ( 308495 ) on Monday April 11, 2016 @04:40AM (#51882977) Homepage

I glanced through some of the Android parts of the paper; it describes these as 'practical attacks', but it also opens with "we assume that a victim's PC has been compromised, allowing an attacker to perform Man-in-the-Browser (MitB) attacks", so the immediate risk appears to be on the low side. Unless your PC is pwned, but of course if that's the case, you're in trouble already.

    For Android, the paper describes a mechanism by which a malicious app can be published to the Google Play store, then silently installed and activated through a Google Chrome plugin trojan (installed as part of the PC pwnage). There are more [interesting] details about how that process works and circumvents some existing Google tricks intended to stop it (e.g., static analysis of apps).

    At this point, the app can now intercept SMS tokens that are sent to you as part of 2FA.

I was mostly interested to see if there were vulnerabilities in the Google Authenticator mechanism/implementation; it seems that this is not the case. It basically just takes advantage of the fact that Google offers a way to skip Google Authenticator by using an SMS instead, although I guess this requires that your Google account is set up with a phone number (which may or may not be a requirement?).

    The end of the paper notes that "Google believes that our proposed attack is not feasible in practice". I feel like eventually we'll see a bunch of common trojans that are set up to mess with 2FA. I kind of think that this is a pretty involved process with a lot of room for things to go wrong (for the attackers) so how effective it is remains to be seen. (I also wonder with Android M if the permissions model is different enough so that the SMS reading permission needs to be invoked on a per-app basis? But that might be work-aroundable anyway.)
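For reference, the Google Authenticator mechanism discussed above is standard TOTP (RFC 6238): an HMAC-SHA1 over a big-endian count of 30-second time steps, dynamically truncated to a short decimal code. A minimal sketch in Python (the function name and demo secret are mine, not from the paper):

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    RFC 4226 dynamic truncation to the last `digits` decimal digits."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if for_time is None else for_time) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks a 4-byte window
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890"
# (base32: GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ), T = 59 s, 8 digits
# -> "94287082".
```

Note that nothing here depends on SMS or the network; the weakness the paper exploits is the SMS fallback around this mechanism, not the TOTP math itself.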

That's what I thought as well at first, and I immediately deleted my main mobile phone number from the 2FA settings. But I think the Authenticator app itself might be vulnerable as well. I was able to take a screenshot, and I would assume that there are apps that could do the same. It would then just need some OCR and the Authenticator would be broken as well. However, it may be the case that Android does not allow other apps to take screenshots, and that there is no way to grant them permission to do so.
    • "I was mostly interested to see if there were vulnerabilities in the Google Authenticator mechanism/implementation; it seems that this is not the case."

      For the standalone authenticator, the security largely comes from the security of the built-in hash function. If that's broken, so are a lot of things.

      Barring that, the goal would be to extract the secrets from the app. Google likes to use time-based, which means that a cloned token won't get out of sync, and won't be detected (though it does protect again

Apparently the older RSA-and-imitators fobs were a bit shaky [iacr.org]; so there would be real reason for concern if anyone is still making authentication 'apps' based on it (recording the token's output every 60 seconds for a couple of months to a year, depending on your luck, would be a serious pain in the ass with a hardware token in someone else's possession; but becomes much more viable if the phone OS allows (whether by design or failure) your application to access the display output of another application. Thoug
      • by AmiMoJo ( 196126 )

        What the paper doesn't mention is that their app installed via Google Play shows up like any other app you remotely installed, i.e. there is a notification message that an app was installed.

        This attack isn't very practical against anyone with much of a clue. Their browser has to be compromised, they have to be using SMS for 2FA rather than an app like Google Authenticator (others are available, including open source), and they have to ignore warning messages. The iOS version is a little bit easier to exploi

    • Given that basically everyone with something even resembling an 'ecosystem' is pushing as hard as they can to have users logging into the same services, under the same account, on their phones, computers, tablets, etc. the only real fix is going to involve dropping the pitiful little farce that 'something on your phone' is actually a 'second factor' in any useful sense.

      Phones(whether running software implementations of authentication fobs, or using SMS) have never been ideal for the job(software fobs on
  • These so-called "academics" are a stain on higher learning.

    They're wasting their time trying to learn things.

    Thinking?!?!?!

    That's a waste of time. You could wind up upsetting someone if you dare to do THAT heinous activity!

    They really should be spending all their time worrying about not microagressing anyone, because that's what's really important to true academics!

All security or encryption should be treated as breakable, so the real question is not whether it is breakable, but how much effort is needed to break it. This should also be put in the context of who we are really trying to protect the data from.

  • Code isn't secure.
  • Uhh... (Score:4, Interesting)

    by The MAZZTer ( 911996 ) <megazzt AT gmail DOT com> on Monday April 11, 2016 @07:27AM (#51883347) Homepage

Nobody's beaten Google's 2FA. Remote install does not REQUIRE 2FA. If Google should decide it does, they can throw up a prompt for a code when you go to do a remote install, and suddenly the "vulnerability" is gone. I agree with the article insofar as Google might well want to do this. Right now Google uses 2FA only for login and for protecting account security settings.

It's important to note that an attacker would already have to be logged in as the user. If a user keeps themselves logged in on an insecure PC an attacker can use, there's only so much Google can do... the article doesn't really mention that the attacker also has access to much of the user's Google services and data in addition to remote install. It brings to mind the "It rather involved being on the other side of this airtight hatchway" class of "vulnerability" that Raymond Chen named after a quote from The Hitchhiker's Guide.

In addition, there are a couple of problems I can see that aren't addressed in the link. First of all, AFAIK, other than through a glitch in a really old version of Android, a newly installed app cannot run any code until the first time the user launches it. Then it is allowed to install background services and whatever, but not before. So if you manage to silently install an app which the user never sees or runs, you've defeated yourself. Secondly, this can only be used to install apps from Google Play, from which Google can take down malicious apps as they are reported.

This is nothing important. Go get an old copy of Windows 95 and use a hex editor to modify command.com so that key commands like DIR no longer work as they should, then make additional changes to maintain the file size, and you will have a command.com that passes all security checks. Now extrapolate: use this approach to modify SSD firmware, and you can hack any computer in the world, since firmware that passes verification now carries a virus that redirects Windows Update to a hacked server
