
How Far Have We Come With HTTPS? Google Turns On the Spotlight (networkworld.com) 84

alphadogg writes from an article on NetworkWorld: HTTPS is widely considered one of the keys to a safer Internet, but only if it's broadly implemented. Aiming to shed some light on how much progress has been made so far, Google on Tuesday launched a new section of its transparency report dedicated to encryption. Included in the new section is data highlighting the progress of encryption efforts both at Google and on popular third-party sites. "Our aim with this project is to hold ourselves accountable and encourage others to encrypt so we can make the Web even safer for everyone," wrote HTTPS evangelists Rutledge Chin Feman and Tim Willis on the Google Security Blog.
This discussion has been archived. No new comments can be posted.

  • Congrats Slashdot! (Score:5, Insightful)

    by gQuigs ( 913879 ) on Wednesday March 16, 2016 @03:22PM (#51710131) Homepage

    This is the first time one of these stories has come up and Slashdot has had HTTPS enabled!

    • by BeauHD ( 4450103 ) Works for Slashdot
      Perfect timing indeed! ;)
    • by whipslash ( 4433507 ) Works for Slashdot on Wednesday March 16, 2016 @03:45PM (#51710277) Homepage Journal
      Would ya look at that
    • by darkain ( 749283 )

      Holy shit, you're right! The cert for this site is only 2 days old. I didn't even notice until you mentioned it!

    • by iggymanz ( 596061 ) on Wednesday March 16, 2016 @03:56PM (#51710381)

      SSL Labs gives an A rating

      Protocols
      TLS 1.2 Yes
      TLS 1.1 Yes
      TLS 1.0 Yes
      SSL 3 No
      SSL 2 No

      Protocol Details
      DROWN (experimental) No, server keys and hostname not seen elsewhere with SSLv2
      (1) For a better understanding of this test, please read this longer explanation
      (2) Key usage data kindly provided by the Censys network search engine; original DROWN test here
      (3) Censys data is only indicative of possible key and certificate reuse; possibly out-of-date and not complete
      Secure Renegotiation Supported
      Secure Client-Initiated Renegotiation Yes
      Insecure Client-Initiated Renegotiation No
      BEAST attack Not mitigated server-side (more info) TLS 1.0: 0xc014
      POODLE (SSLv3) No, SSL 3 not supported (more info)
      POODLE (TLS) Inconclusive (Timeout) (more info)
      Downgrade attack prevention Yes, TLS_FALLBACK_SCSV supported (more info)
      SSL/TLS compression No
      RC4 No
      Heartbeat (extension) No
      Heartbleed (vulnerability) No (more info)
      OpenSSL CCS vuln. (CVE-2014-0224) No (more info)
      Forward Secrecy With modern browsers (more info)
      ALPN No
      NPN No
      Session resumption (caching) No (IDs assigned but not accepted)
      Session resumption (tickets) No
      OCSP stapling No
      Strict Transport Security (HSTS) No
      HSTS Preloading Not in: Chrome Edge Firefox IE Tor
      Public Key Pinning (HPKP) No
      Public Key Pinning Report-Only No
      Long handshake intolerance No
      TLS extension intolerance No
      TLS version intolerance No
      Incorrect SNI alerts No
      Uses common DH primes No, DHE suites not supported
      DH public server param (Ys) reuse No, DHE suites not supported
      SSL 2 handshake compatibility Yes

      Miscellaneous
      Test date Wed, 16 Mar 2016 19:51:39 UTC
      Test duration 125.443 seconds
      HTTP status code 200
      HTTP server signature Apache/2.2.3 (CentOS)
      Server hostname star.slashdot.org
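The protocol rows of that report can be approximated in code. Here's a minimal sketch (not the SSL Labs methodology, just an illustration) of a Python `ssl` client context configured the way the report grades: SSL 2/3 disabled, TLS compression off.

```python
import ssl

def make_strict_context() -> ssl.SSLContext:
    """Client context matching the grade-A profile above:
    SSL 2/3 disabled, TLS compression off (mitigates CRIME)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # TLS-only, verifies certs
    ctx.options |= ssl.OP_NO_SSLv2 | ssl.OP_NO_SSLv3
    ctx.options |= ssl.OP_NO_COMPRESSION
    return ctx

ctx = make_strict_context()
# PROTOCOL_TLS_CLIENT also enables hostname checking by default
assert ctx.check_hostname
```

To probe a live host the way SSL Labs does, you'd wrap a socket with this context against, e.g., star.slashdot.org:443 and see whether the handshake succeeds.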

      • by Lennie ( 16154 ) on Wednesday March 16, 2016 @06:29PM (#51711449)

        You know what is good about HTTPS these days:

        - HTTP/2 over HTTPS is faster than HTTP/1.x without HTTPS, and it's getting easier to deploy. For example, the H2O webserver ( https://h2o.examp1e.net/ [examp1e.net] ) can be used as a proxy; it comes with a built-in SSL/TLS library for easier deployment and supports session replication.

        HTTPS itself is becoming easier to deploy and manage:

        - HTTPS no longer needs a dedicated IP address (older browsers/operating systems had problems with SNI, the HTTPS equivalent of 'virtual hosts'):
        https://en.wikipedia.org/wiki/... [wikipedia.org]

        - certificates are available for free, with an automatic request and renewal system, so you can automate it and stop messing around -> with the Let's Encrypt beta: https://letsencrypt.org/ [letsencrypt.org] and, for example, with acmetool: https://hlandau.github.io/acme... [github.io].

        There are finally ways to fight the silly CA-system, not completely, but things are improving.

        For regular visitors to a site, you can add headers that will prevent another CA from issuing a rogue certificate for your site.
        https://developer.mozilla.org/... [mozilla.org]
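For reference, the headers being alluded to are HSTS and HPKP. A hypothetical nginx sketch (the pin values are placeholders, not real hashes):

```nginx
# Hypothetical values -- a real deployment must pin its actual key hashes
# and keep a backup pin, or risk locking out all returning visitors.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
add_header Public-Key-Pins 'pin-sha256="PRIMARY_KEY_HASH_BASE64="; pin-sha256="BACKUP_KEY_HASH_BASE64="; max-age=5184000' always;
```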

        • by Anonymous Coward

          That method of certificate pinning assumes that it will not be tampered with in transit on first use. Yes, it works similarly to SSH, but unless the web visitor verifies the certificate signature via an out-of-band communications channel, it's trivial for a hostile entity to intercept the page on the first request and replace that header with its own, then continue to MITM the site without the user ever encountering a rogue-certificate error.

          The only sane solution at the moment is eithe

          • by Lennie ( 16154 )

            Obviously on first visit the CA system still applies, so the certificate was issued based on some verification process. That is itself a form of out-of-band communication channel - the most used one on the Internet right now. This is just an improvement.

            What a lot of attackers want is to avoid detection, and with this system in place the risk of detection becomes much higher.

            Anyway, you can also get your site added to the lists that are included in browsers. Chrome and Firefox use that too (

          • HSTS and HPKP provide cookies that, while far more inconvenient to abuse, are enormously more insidious. You can't really block third-party HSTS/HPKP cookies, making them session-only defeats their stated purpose, and the usual cookie-handling extensions don't manage them either.

            Thus, anyone who cares about privacy needs to disable both HSTS and HPKP, which is sad, as such people tend to be those most at risk of state-level attacks.

            So we indeed urgently need DANE. For now, both Chrome and Firefox have it effec

            • by Lennie ( 16154 )

              How are they cookies? How does the server learn what the client/browser knows?

              The client/browser doesn't send what it knows to the server, and AFAIK there is no JavaScript API or similar to check it from within the page.

                • You get only 1 bit per hostname but can use any number of them. You then make queries to http://bit0.example.org/ [example.org], http://bit1.example.org/ [example.org], http://bit2.example.org/ [example.org] and so on, recording which succeed and which fail. For HSTS you query http and have https either not work or return a different answer; for HPKP you query https and have the test servers use certificates signed by a valid CA that don't match the pin.

                You don't even need javascript to read the answers, both to display something to the user (pie
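The bit-per-hostname encoding described above can be sketched as follows. The bitN.example.org hostnames are hypothetical; a real tracker would set HSTS (or a mismatching HPKP pin) on each hostname whose bit is 1, then probe them later:

```python
def id_to_bits(user_id, width=32):
    """Bit n is True iff bitN.example.org should get an HSTS entry."""
    return [bool(user_id >> n & 1) for n in range(width)]

def bits_to_id(bits):
    """Reassemble the ID from the probe results (True = HTTPS forced)."""
    return sum(1 << n for n, hit in enumerate(bits) if hit)

# Round-trip: 32 probe hostnames recover a 32-bit visitor ID.
assert bits_to_id(id_to_bits(0xDEADBEEF)) == 0xDEADBEEF
```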

                • by Lennie ( 16154 )

                  I understand your point now. But that's a lot of trouble to go through, and it would take a lot of requests to identify a single user. There are much easier and stealthier ways to do that.

    • Unless you're reading on mobile; then it defaults to "m.slashdot.org" over plain HTTP.

      • Mobile slashdot has a LOT of issues.

        It's definitely easier to just give up on logged-in browsing over a mobile device and post as AC. I know I do.

    • On the other hand, the old DICEy overlords at Sourceforge can't even keep their https working, at least for prdownloads.sourceforge.net [sourceforge.net].

  • by xxxJonBoyxxx ( 565205 ) on Wednesday March 16, 2016 @03:24PM (#51710147)
    Better link to "other sites" report: https://www.google.com/transparencyreport/https/grid/?hl=en </karmawhoring>

    Notice that Apple, Bing and Microsoft are all knocked for NOT running a "Modern TLS Config" and NOT using HTTPS by default. (I actually had to check that for myself - it was hard to believe that major companies are still NOT doing HTTPS by default - I enforce this even on my little podunk sites - but it was true!)
    • Forbes.com:
      Site works on HTTPS Y
      Modern TLS Config N
      Default HTTPS N

      SNRK...
      wait, "Other Top Sites" ?!

    • Re: (Score:2, Insightful)

      by Hentes ( 2461350 )

      Why is it better to force users to use HTTPS instead of letting them choose? I hate this modern trend where removing features is considered "progress".

      • Re: (Score:2, Insightful)

        by NatasRevol ( 731260 )

        Adding security is now removing features?

        Fat, drunk and stupid is no way to go through life, son.

        • by DavidRawling ( 864446 ) on Wednesday March 16, 2016 @06:34PM (#51711477)

          And because we need to ~double the amount of data used by all the hamster forums, cat videos and aircraft curation guides, especially when a lot of the world's users are on slow or data-limited connections?

          Look. I get that it's good to ensure that there's no injected content, and that you know you're connected to the site you want - but that's only true for 1% of the population. The rest of the world wouldn't know the difference between https://www.example.com/member... [example.com] and https://www.example.com.member... [example.com.members]. Both "secure" because they're HTTPS, right?

          Factor in all the browsers deciding that privately-signed sites are worse than plain HTTP, that no one needs to actually SEE the protocol or the URL, and that all the certs are issued by a cabal of companies who just see the benefit of charging for a NUMBER while barely doing validation... but sure. "Adding security". Right.

  • by Anonymous Coward

    Well, not just one problem, but it's a big one, and typically overlooked. It's nice we now encrypt everything, but we're still relying on commercial parties to "verify identity" and to protect us... from anyone they don't take money from. That's a big one, it's fundamental, and it doesn't scale. Worse, any currently in use alternative, like relying on governments, isn't better.

    • >> we're still relying on commercial parties to "verify identity"...it doesn't scale

      Actually, it DOES scale, which is generally why HTTPS (used to provide "one-way" authentication of server identity) thrives on top of its CA-signed X.509 certificates. (As opposed to a point-to-point world in which each user would have to individually trust the credential of every site they visit, like most SSH or PGP implementations.)
    • Re: (Score:2, Insightful)

      by Cramer ( 69040 )

      Exactly. It's just more theater. 99% of those "cheap" ssl certificates are not validated AT ALL. People are blindly putting trust into a system that has none.

      And on top of this, SSL is an exceptionally expensive computation. It takes rather expensive dedicated hardware to handle at any meaningful rate. (Go play with the openssl speed tool to see for yourself.)

      • Re: (Score:2, Informative)

        by Anonymous Coward

        It doesn't matter that the public-key part of SSL is computationally expensive, because you don't use it to encrypt the entire session. Instead, you use it to exchange symmetric encryption keys in a secure manner, and *those* keys (and algorithms) are what encrypt the session. Even an old Xeon 5160 could encrypt 100-150 MB/s with AES-128, which is enough to keep up with a 1Gbps NIC.

        Modern CPUs are even faster [calomel.org]. Those are per-core numbers, so multiply by 4/8/12/16 (however many cores you have).
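The split that comment describes (one expensive public-key handshake, then a cheap symmetric cipher for the bulk data) can be illustrated with a toy Diffie-Hellman exchange. The parameters and the XOR "cipher" here are deliberately simplistic stand-ins for real DH groups and AES, not anything to deploy:

```python
import hashlib
import secrets

# Toy parameters: a Mersenne prime, NOT a vetted DH group.
P = 2**127 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)      # modular exponentiation: the costly step

def shared_key(my_priv, their_pub):
    secret = pow(their_pub, my_priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

def xor_stream(key, data):
    """Stand-in for AES: cheap symmetric transform keyed by the shared secret."""
    pad = hashlib.sha256(key).digest()
    return bytes(b ^ pad[i % len(pad)] for i, b in enumerate(data))

# One expensive handshake...
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()
key = shared_key(a_priv, b_pub)
assert key == shared_key(b_priv, a_pub)   # both sides derive the same key

# ...then fast symmetric encryption for the rest of the session.
ciphertext = xor_stream(key, b"bulk session data")
assert xor_stream(key, ciphertext) == b"bulk session data"
```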

  • by ickleberry ( 864871 ) <web@pineapple.vg> on Wednesday March 16, 2016 @03:27PM (#51710167) Homepage
    HTTPS isn't that safe. Any agency that can coerce one of the numerous CAs can snoop traffic quite easily. Of course, Eric Schmidt is an avid fan of the surveillance society, so that's why they weren't going to back anything less centralized than CA-based HTTPS
    • Coercing a CA might be revealing, though; wouldn't it be easier to coerce a particular server owner?

      • by Cramer ( 69040 )

        Or more likely (as in they-are-already-doing-this)... record terabytes of encrypted traffic until they can seize the server (and thus the certificate's private key). Once they have the server key, they can decrypt every session that was ever encrypted with it.

        • That's what we have Perfect Forward Secrecy for.
          • by Cramer ( 69040 )

            Doesn't apply. With the server key, you can do the exact same math as the server to generate the exact same master secret and decode the entire stream.

            (I've been building systems to do this for over a decade. Sure, it's easier if you happen to be the load-balancer -- i.e. "server" -- in the equation, but out-of-band monitoring of SSL is Very. Real.)

      • Can you tell anyone if you get a secret FISA order? No. So how could that be revealing?

        • You would suck at counter-intelligence. The actions of an agency based on a CA-level compromise would be different from those based on compromising individual corporations' servers.

    • by Shawn Willden ( 2914343 ) on Wednesday March 16, 2016 @06:08PM (#51711315)

      HTTPS isn't that safe. Any agency that can coerce one of the numerous CA's can snoop traffic quite easily.

      While your concerns are real, I think they're overstated.

      A coerced CA cert does allow MITM attacks, but it has to be used very carefully and on a targeted basis, because if it's used too broadly it will be noticed. A TLS MITM attack is very noticeable to anyone who is looking. Google Chrome has caught a few subverted CAs now, thanks to certificate pinning of intermediates for Google, Verisign, GeoTrust and some others. Firefox pins large numbers of intermediates, for lots of domains. I think other browsers are also getting into it.

      Of course Eric Schmidt is an avid fan of the surveillance society so thats why they weren't going to back anything less centralised than CA-based HTTPS

      Nice cheap shot. In fact, Google has a couple of significant projects to address the shortcomings of the CA system. One is to increase pinning, but that's kind of a hack. The other is the Certificate Transparency [certificat...arency.org] project, which aims to ensure that any certificate produced by any CA for any domain is visible to the owner of that domain. If that succeeds, covert certificate issuance will be impossible.

      At bottom, the problem with the CA system isn't centralization; it's more complicated than that. The CA system is decentralized in the sense that there are many CAs... but that makes every one of them a single point of failure. In some ways we'd be safer with a truly centralized CA system, because then we'd have one single point of failure rather than a few hundred. The semi-decentralized system we have is pretty decent if we can enable the world to easily recognize improperly issued certificates. Certificate Transparency is one good way to do that. I'm also a fan of the Convergence [wikiwand.com] system, but as an addition to the existing CA system rather than a replacement.

      In any case, although the CA system has some issues, and we have seen a handful of cases where they've been exploited, by and large it works very well, securing more connections and more data than anything else ever has. We'd be foolish to replace it, but augmenting it to address the problems is a good idea.

    • by Anonymous Coward

      That's why we need DANE [wikipedia.org], which makes it more difficult for rogue agencies (or rogue CAs) to issue certificates for a particular domain.

      Certificate pinning could also work, but it doesn't scale unless it's a distributed lookup service.
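For context, DANE works by publishing the certificate (or key) hash in DNS, protected by DNSSEC, as a TLSA record (RFC 6698). A hypothetical zone-file entry, with a placeholder digest:

```
; "3 1 1" = DANE-EE: match the SHA-256 of the end-entity SubjectPublicKeyInfo.
; The hex digest below is a placeholder, not a real key hash.
_443._tcp.www.example.org. IN TLSA 3 1 1 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```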

    • by houghi ( 78078 )

      OK, since you seem to be the go-to person on this matter, why don't you tell us what the solution is, how to implement it, what sites already use it, and what is being done to make it a standard.

    • HTTPS is not perfect, but refusing to use it is a serious case of letting the perfect be the enemy of the good. Furthermore, Google has been one of the main driving forces behind the introduction of HTTP key pinning, which makes it much harder to perform MITM attacks successfully and much more likely that an organization attempting a MITM attack will be noticed.

  • by nimbius ( 983462 ) on Wednesday March 16, 2016 @03:54PM (#51710363) Homepage
    Google, er, Alphabet didn't really get a rock in their shoe about encryption until Edward Snowden, so let's take a moment to thank their engineering consultant in Russia. Once it was revealed that the US government had placed secret taps on links between Google datacenters, rendering their endpoint-to-service encryption meaningless, the company began working to make the internet a living hell for the surveillance state.

    Google owns a browser, so that browser adopted SSL, then TLS, as a mandatory connection parameter for Google services. A pretty good share of Mozilla, at least in braintrust, is funded by Google, so getting Firefox to endorse and enforce encrypted channels to Google and other web services was child's play. After a year, the titty got tougher and HSTS was the norm on some of the largest web content providers on the internet. SSL had been completely phased out for TLS 1.2 and strong AES-256 crypto, in light of a more concerned review of open-source encryption after the government's audacious claim that it could break in excess of 80% of encrypted traffic. LibreSSL hit the scene after some disclosures, DEFCON changed its status with US law enforcement to "it's complicated," and now, in this foul year of our lord 2016, Uncle Sam is up the proverbial shit's creek with Apple in what will be either a resounding defeat or a knock-down, drag-out, multi-decade battle at the hands of a corporation with more income and assets than most foreign nations. But Slashdot, it's only getting good.

    Google has set its sights on the next generation of encryption, elliptic curves, and is looking to be a trend-setter. The primes used by most current elliptic-curve standards are cooked up by NIST, and NIST has in the past been implicated in weakening encryption as part of US security policy. NIST curves are still used in Chrome, but Chrome supports Dan Bernstein's Curve25519 alternative, which isn't intentionally gimped for Uncle Sam. Look for Google --and the internet-- to begin using this and other "safe curves" as standards to replace secp384r1 and prime256v1. Devices and services in the year of the hoverboard now ship with encryption as a requirement, not an option, and features to disable it in browsers have been quietly retired. And perhaps the most resounding confirmation of the internet's collective "fuck you" against the carte-blanche collection of data on citizens has come from outside the US: a majority of VPN and encrypted-storage providers do not reside within the immediate jurisdiction of Washington.
    • by Shawn Willden ( 2914343 ) on Wednesday March 16, 2016 @06:55PM (#51711581)

      Google, er, Alphabet didn't really get a rock in their shoe about encryption until Edward Snowden

      While I think we should all be very grateful for Snowden's revelations, that's not true. Google was really serious about encrypting everything long before Snowden's revelations.

      For example, Gmail was the first major webmail service to provide users the option of only using SSL, back in 2008 or so. Google turned on SSL for web searches in 2011. The design of SPDY, later adopted by the IETF as HTTP/2, started around 2010 and from the beginning had no unencrypted mode (though the IETF insisted on adding one).

      Once it was revealed that the US government had placed secret taps on links between google datacenters

      Google actually started work on encrypting all of those data center links long before Snowden's information came out, though Snowden definitely did light a fire under the project, causing it to get fully deployed very quickly. Snowden probably also had a lot to do with Google's decision to completely disable non-TLS traffic for many of their services (IIRC it was 2014 when gmail and search went TLS-only).

      Google owns a browser, so that browser adopted SSL, then TLS, as a mandatory connection parameter for Google services.

      Chrome supported SSL and TLS before Snowden, and ownership of Chrome had nothing to do with making encryption mandatory for Google services, which was done in a browser-agnostic way. Chrome did provide a platform for Google to experiment with other improvements, though, such as certificate pinning, SPDY and QUIC. SPDY and QUIC are mostly about performance, but as I mentioned above Google built encryption into them from the ground up and never even bothered with unencrypted versions.

      after a year, the titty got tougher and HSTS was the norm on some of the largest web content providers on the internet.

      HSTS also predated Snowden, and Google even started using it for some services before Snowden. But, yes, again Snowden spurred much wider adoption. Which is awesome.

      But Slashdot, it's only getting good.

      Indeed. All new Internet protocols and standards now specifically address anti-surveillance in their designs, and lots of academic research is focused on new technologies to make surveillance hard. This is actually an even bigger change than the TLS push, etc., indicates. Prior to Snowden, preventing surveillance was not a design goal. If it happened, it happened by accident. No more. It's now a design goal for much of the tech industry.

  • What? Did I miss something? Has hell frozen over?


    (walks away mumbling something about UTF...)
  • Of course, that's the ONLY reason why they'd serve over https, right?
  • Oh yeah, Google? Really want HTTPS everywhere? How about start signing some SSL certs for free to put the mofreaking CA mob out of business?
  • by Anonymous Coward

    As in the subject line: why the heck does /. think it should be using it? There is absolutely nothing going on here for me, as a reader and occasional poster, to warrant it.

    I'm definitely starting to dislike all those "let's go with the times" sites who think that their informative articles, jokes and/or cat videos need to be sent over HTTPS only. Where is my choice in the matter?

    Interception (MITM) attacks are only useful if either side is doing something that needs to be kept secret, and where its value

"Intelligence without character is a dangerous thing." -- G. Steinem

Working...