
Mozilla To Support Public Key Pinning In Firefox 32

Posted by Soulskill
from the pin-the-key-on-the-fox dept.
Trailrunner7 writes: Mozilla is planning to add support for public-key pinning to its Firefox browser in an upcoming version. In version 32, which will be the next stable version of the browser, Firefox will have key pins for a long list of sites, including many of Mozilla's own sites, all of the sites pinned in Google Chrome, and several Twitter sites. Public-key pinning has emerged as an important defense against a variety of attacks, especially man-in-the-middle attacks and the issuance of fraudulent certificates. The function essentially ties a public key, or set of keys, issued by known-good certificate authorities to a given domain. If a user's browser encounters a site presenting a certificate that isn't in the set of pinned public keys for that domain, the browser will reject the connection. The idea is to prevent attackers from using fake certificates to intercept secure traffic between a user and the target site.
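The check itself is easy to sketch: a pin is a hash of a certificate's Subject Public Key Info (SPKI), and a connection is accepted only if at least one key in the presented chain hashes to a pinned value. A toy illustration in Python (the key bytes and pins below are made up, not real pins):

```python
import base64
import hashlib

def spki_pin(spki_der: bytes) -> str:
    """Base64-encoded SHA-256 digest of a DER-encoded SubjectPublicKeyInfo."""
    return base64.b64encode(hashlib.sha256(spki_der).digest()).decode()

def chain_is_pinned(chain_spkis, pinned_hashes):
    """Accept the chain only if some key in it matches a pinned hash."""
    return any(spki_pin(spki) in pinned_hashes for spki in chain_spkis)

# Hypothetical key material, for illustration only.
good_key = b"fake-spki-bytes-for-example.com"
pins = {spki_pin(good_key)}

print(chain_is_pinned([good_key], pins))         # True: key is pinned
print(chain_is_pinned([b"attacker-key"], pins))  # False: reject connection
```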
  • This is a good idea, but I bet it will not work well on corporate networks that do MITM attacks: every cert will be wrong. This same thing happens if you use the SSL Observatory add-on. This clearly shows how the public key infrastructure implementation is completely flawed.

    • by MobyDisk (75490) on Friday August 29, 2014 @04:54PM (#47787055) Homepage

      Sorry! I'm totally wrong! The corporate MITM will work just fine once it is updated:

      The UA will not be able to detect and thwart a MITM attacking the UA's first connection to the host. (However, the requirement that the MITM provide an X.509 certificate chain that can pass the UA's validation requirements, without error, mitigates this risk somewhat.) Worse, such a MITM can inject its own PKP header into the HTTP stream, and pin the UA to its own keys. To avoid post facto detection, the attacker would have to be in a position to intercept all future requests to the host from that UA.

    • by robmv (855035)

      The default is:

        1. Allow User MITM (pinning not enforced if the trust anchor is a user inserted CA, default)

      So CAs inserted by corporate networks will be allowed; pinning is only enforced for chains anchored in CAs shipped by Mozilla.
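      For the curious, this behavior is controlled by an about:config preference, security.cert_pinning.enforcement_level (values as I understand them; check your build):

```
// 0 = pinning disabled
// 1 = pinning enforced, except when the trust anchor is a user-installed CA (the default)
// 2 = strict: pinning enforced even for user-installed CAs
security.cert_pinning.enforcement_level = 1
```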

    • by mrops (927562)

      And that is the intention: I don't want a MITM attack by my company or anyone else.

      • by BitZtream (692029)

        Then perhaps you should browse personal sites on your own dime, not the company network.

        • by 0123456 (636235)

          Then perhaps you should browse personal sites on your own dime, not the company network.

          Then what's the problem? Mozilla will no longer let employees do that.

    • by Anonymous Coward

      Without commenting on whether, and in what circumstances, it's wise for companies to operate MITM firewalls:

      It seems to me that this change should, in fact, make such firewalls more secure, because it'll encourage web server admins to start using PKP, which will allow the firewall to better verify the remote server identity. (Whether the creators of the firewall software will actually implement this feature is of course another question, but anyway...)

      In fact, PKP itself will be more effective if it's impl

    • by Lennie (16154)

      You mean what corporate networks are doing is wrong. That is the biggest flaw.

      They should move to a model of a proxy configured in the browser. The browser can then trust the proxy.

      • by MobyDisk (75490)

        I am unclear on all this, but "the browser then can trust the proxy" seems to mean the same thing as the MITM. The proxy issues a cert, and the browser has to trust that cert. It is a form of MITM attack, except that you know and (supposedly) trust the MITM.

  • Good idea (Score:3, Insightful)

    by Anonymous Coward on Friday August 29, 2014 @04:54PM (#47787061)

    Let's patch an inherently broken system with another inherently broken system that does not scale and will cause a whole new range of unwanted side-effects and problems.

  • Please... (Score:5, Insightful)

    by Anonymous Coward on Friday August 29, 2014 @04:58PM (#47787081)

    Whatever public-key pinning is. How about a stable 64-bit version for Windows, and actually fixing the bugs in their software (yeah, Thunderbird too) that have been open for *years*, instead of wasting time on a mobile OS that nobody uses and on features that aren't really relevant? Hell, just working on the things that are broken might fix the issues they're pushing through as new features.

    • Re: (Score:1, Troll)

      by ysth (1368415)

      You lost me at "Windows".

    • by hairyfeet (841228)
      Try Pale Moon [palemoon.org], friend. It's based on FF so you can keep your plugins, it has a native 64-bit build, and, best part, NO STUPID NEW UI; in fact the devs have stated they will NOT be going to the new UI, PERIOD. It's fast and stable, and works so well that I've started using it as my default browser, even over my beloved Comodo Dragon, because it's even snappier. Just a really great browser all around.
    • Re:Please... (Score:5, Interesting)

      by tlhIngan (30335) <.slashdot. .at. .worf.net.> on Saturday August 30, 2014 @03:46AM (#47789863)

      How about a stable 64-bit version for Windows,

      There were stable builds for Windows. The problem was that people needed plugins which weren't available (because a 64-bit browser can't run 32-bit plugins without a thunk layer). Chrome did it because Chrome ships with its plugins recompiled for 64-bit (because Google has the source code to Flash and all that).

      It's the same reason why Microsoft actively discourages use of the 64-bit version of Office.

      Though, other than being "64-bit", is there a real reason for having a 64-bit browser?

  • by Kethinov (636034) on Friday August 29, 2014 @05:18PM (#47787167) Homepage Journal

    When will Firefox support killing CPU-hogging tabs individually?

    That's the only killer feature from Chrome I'm waiting for to switch back to Firefox.

    In Chrome, if I've got 50 tabs open (not uncommon) and one of them starts spiking my CPU, I can pull open Activity Monitor (on OS X) and kill the "Google Chrome Helper" that's eating all the CPU.

    That kills the one tab that was the problem, not the whole browser. And lets me reload it when I actually care about that tab again.

    I haven't found a similar way to imitate this workflow in Firefox.

    The whole noscript / flashblock / adblock / etc approach hasn't worked. Tried it with Firefox, still had constant CPU issues after whitelisting sites I need JS or Flash turned on for, still had no way to kill runaway processes individually.

    • by Anonymous Coward on Friday August 29, 2014 @05:25PM (#47787209)

      They are too busy ruining the user interface and removing customization features to actually copy any of the good features of Chrome.

      • Yup, including simple stuff like changing the address bar default search URL. Now one has to install an add-on to do so, instead of just going to about:config. GnomeZilla, the monster that feeds on usability.
    • Re: (Score:3, Informative)

      by Etzos (3726819)
      Probably sometime after electrolysis (e10s) lands. That's probably going to take a while because there's a lot to do between now and when it will be deemed release-ready (add-on compatibility, switching some internal components over to e10s-friendly versions, memory checks, and various other odds and ends).

      If it's flash or other plugins that were causing the CPU usage then recent versions of Firefox already have that covered. Plugins can be set to click to activate (so it will only run on sites you en
      • by Lennie (16154)

        The electrolysis project is scheduled to land in the stable release at the end of this year. Whether it will be enabled by default this year, I don't know. My gut feeling is they'll do so early next year.

  • ... will have air gapping and sneakernet.
    My salute to FF -- you are not the problem, but you are not the solution either.

  • by diamondmagic (877411) on Friday August 29, 2014 @06:32PM (#47787559) Homepage

    Why does the list have to be hardcoded? Why not pull the records from DNSSEC... there's a whole specification for this, RFC6698 [ietf.org]
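    For reference, RFC 6698 (DANE) puts the pin in DNS itself as a TLSA record, protected by DNSSEC, rather than in a hardcoded list or an HTTP header. A hypothetical record for port 443 on example.com might look like this (the hash is a placeholder, not a real certificate digest):

```
; usage=3 (DANE-EE: match the server's certificate directly),
; selector=1 (SubjectPublicKeyInfo), matching-type=1 (SHA-256)
_443._tcp.example.com. IN TLSA 3 1 1 d2abde240d7cd3ee6b4b28c54df034b97983a1d16e8a410e4561cb106618e971
```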

    • by Anonymous Coward

      Because that would make sense?

    • by Nimey (114278)

      Because hardly anyone uses DNSSEC, ISPs included.

    • by Anonymous Coward

      If you are in a MITM position already, you may be able to MITM the DNSSEC responses to suit your needs. At the very least, you must hardcode the DNSSEC trust anchor to be certain you know who you are talking to. This also creates an additional dependency and adds latency (what if the DNSSEC lookup doesn't respond? How long do you wait?).

      • by KiloByte (825081)

        Uhm no, you can't MITM DNSSEC, you can't do anything except a denial of service unless you control one of three entities:

        • ICANN
        • that particular TLD
        • the registrar your victim uses

        That is, unless someone is stupid enough to trust some external DNS server, but no reasonable DNSSEC client would use a dumb stub resolver this way.
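        To make the trust-chain point concrete, here is a toy model of DNSSEC delegation (heavily simplified: real DNSSEC also verifies RRSIG signatures made with each zone's key; this only checks that each child key matches the DS digest published in the already-trusted parent zone):

```python
import hashlib

def ds_digest(zone_key: bytes) -> str:
    """Simplified DS record: a SHA-256 digest of a child zone's key."""
    return hashlib.sha256(zone_key).hexdigest()

def validate_chain(chain):
    """chain: list of (zone_key, ds_from_parent) pairs, root-first.
    A forged key at any level fails, because its digest can't match the
    DS record held in the parent zone the resolver already trusts."""
    return all(ds_digest(key) == ds for key, ds in chain)

# Hypothetical keys along the root -> .com -> example.com path.
com_key, example_key = b"com-zone-key", b"example-zone-key"
chain = [(com_key, ds_digest(com_key)), (example_key, ds_digest(example_key))]
print(validate_chain(chain))  # True

# An attacker controlling neither ICANN nor .com can't forge example.com:
forged = [(com_key, ds_digest(com_key)), (b"evil-key", ds_digest(example_key))]
print(validate_chain(forged))  # False
```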

        • by AmiMoJo (196126) *

          Many of the registrars ARE controlled by the enemy which these days is usually the state. In some places the government just forces them to issue dodgy certificates, in others GCHQ or the NSA just hacks them.

          • by Rich0 (548339)

            Many of the registrars ARE controlled by the enemy which these days is usually the state. In some places the government just forces them to issue dodgy certificates, in others GCHQ or the NSA just hacks them.

            Keep in mind that you would have to have control of the registrar that issued the domain. With SSL today anybody on the trusted CA list can impersonate any website anywhere. With DNSSEC Verisign certainly could impersonate .com websites, and Iran certainly could impersonate .ir websites, but neither party could impersonate the other's websites. That is a BIG reduction in the vulnerability space, even if it isn't perfect.

            If the NSA really does have everybody under their thumbs, then face it, you aren't goi

            • by AmiMoJo (196126) *

              We can't judge every security improvement solely on whether it solves "the NSA is out to get me."

              The NSA and GCHQ are out to break the internet, so I'm afraid that is the benchmark which we have to use. They spy on everyone, and spoof sites like Slashdot to deliver malware. They prefer to hack other people's servers, i.e. your computer, and use them to attack their more specific targets.

              While Iran might find it hard to impersonate .com sites, I bet that the NSA/GCHQ can impersonate .ir sites. That is a major concern for everyone, not just Iranians, because they are known to hack infrastructure providers in Europe and pretty much any other part of the world for this very purpose.

              • by Rich0 (548339)

                We can't judge every security improvement solely on whether it solves "the NSA is out to get me."

                The NSA and GCHQ are out to break the internet, so I'm afraid that is the benchmark which we have to use. They spy on everyone, and spoof sites like Slashdot to deliver malware. They prefer to hack other people's servers, i.e. your computer, and use them to attack their more specific targets.

                So, what solution do you actually advocate then? Right now the NSA/GCHQ can still break the internet, and so can about a million other people. Oh, and everybody gets to pay $100/yr or so for every webserver they run, and virtual hosting is a pain.

                DNSSEC is a lot better than what we have now. Moving to it doesn't prevent us from moving to something even better assuming that somebody figures out what it is.

                While Iran might find it hard to impersonate .com sites, I bet that the NSA/GCHQ can impersonate .ir sites. That is a major concern for everyone, not just Iranians, because they are known to hack infrastructure providers in Europe and pretty much any other part of the world for this very purpose.

                The only solution to that is to have ICANN be under the control of somebody that everybody can trust.

        • I'm a bit rusty on DNSSEC so I went to look it up to see if that were true.

          DNSSEC works by digitally signing records for DNS lookup using public-key cryptography. The correct DNSKEY record is authenticated via a chain of trust,

          So, no, you can MITM it in the exact same way you can MITM SSL. It uses a chain of trust with a trusted authority installed on each client, just like SSL, and just like SSL, whatever country hosts the root key for a TLD is subject to subpoena and global MITM.

          ICANN

          Or, whoever hacks ICANN, o

          • by KiloByte (825081)

            The .cn TLD can be MITMed by the Chinese government, yes. That's why you need to host your Chinese-dissident page in a TLD of any country that hates China (i.e., almost any of them). Same for a site that reveals wrongdoings of the NSA. Any point other than ICANN can be avoided by simply choosing a different TLD, and ICANN itself can be secured by pinning TLD keys.

            This goes in sharp contrast with the CA cartel model, where you need to trust the sum (rather than alternative) of 400+ entities, some of which

  • by Anonymous Coward

    ...Pale Moon for ever!!!!

    http://www.palemoon.org/ [palemoon.org]

  • Usually certificates have an arbitrarily high cost, expire yearly, and need to be reissued when you add a subdomain (and "wildcard" certificates are usually very expensive). I can see trouble for all but a few domains, which will register certificates for decades, maybe because they have their own CA.
    • by Lennie (16154)

      What probably happens is that a big site says: we use CA 1 and CA 2.

      Then it uses CA 1. If CA 1 later becomes a problem, it switches to certificates from CA 2 that it has already prepared and has ready for use.
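      In HPKP terms, that strategy means listing both CAs' key hashes in the header, with the unused one serving as the backup pin the spec requires. A hypothetical header (the base64 values are placeholders, not real pins):

```
Public-Key-Pins: pin-sha256="primaryCAKeyHashBase64PlaceholderAAAAAAAAAA=";
                 pin-sha256="backupCAKeyHashBase64PlaceholderBBBBBBBBBBB=";
                 max-age=5184000; includeSubDomains
```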

  • So they have given up on certificates alone, haven't they?
  • We've had some source code theft recently at my job, so we have an SSL MITM proxy that generates a work SSL cert for everything. At first I hated it, but it is a work comp, and they provide a dirty LAN, so just bring your device if you want to browse your mail.

    But this would break Google searches for me. I wouldn't be able to look at any Google site: no Google searches, no Wikipedia, no Stack Overflow on my work comp with this. Make this hard to find, something no normal person would be able to find, only geeks
