Cloudflare, Google Chrome, and Firefox Add HTTP/3 Support (zdnet.com) 48

HTTP/3, the next major iteration of the HTTP protocol, is getting a big boost today with support added in Cloudflare, Google Chrome, and Mozilla Firefox. From a report: Starting today, Cloudflare announced that customers will be able to enable an option in their dashboards and turn on HTTP/3 support for their domains. That means that whenever users visit a Cloudflare-hosted website from an HTTP/3-capable client, the connection will automatically upgrade to the new protocol, rather than being handled via older versions. On the browser side, Chrome Canary added support for HTTP/3 earlier this month. Users can enable it by using the Chrome command-line flags of "--enable-quic --quic-version=h3-23". In addition, Mozilla too announced it would roll out support for HTTP/3. The browser maker is scheduled to ship HTTP/3 in an upcoming Firefox Nightly version later this fall.
  • by BAReFO0t ( 6240524 ) on Thursday September 26, 2019 @01:39PM (#59239880)

    Given that both Mozilla and Google are apparently run by Xzibit...

    What about IP over HTTP?
    No, better, ... *sockets* over HTTP! ... oh, wait?

    (I'm gonna standardize PCIe over HTTP/JSON, I swear!)

  • Benefits of HTTP/3? (Score:5, Informative)

    by Anonymaus Coward ( 6165324 ) on Thursday September 26, 2019 @01:48PM (#59239910)

    The summary fails to mention anything about what HTTP/3 does [afasterweb.com].

    * HTTP/3 moves from TCP to QUIC. QUIC runs on UDP.
    * No head-of-line blocking of all streams when a multiplexed connection has a failure in one stream.
    * Faster connection setup on high latency networks
    * Smoother transitions between networks - switching from cellular data to WiFi just plain works.
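
    As background the summary skips: an HTTP/3-capable client typically learns that a server supports it from the Alt-Svc response header (advertising "h3-23" at the time of this story). A minimal sketch of parsing such a header; the header value below is a made-up example:

```python
def parse_alt_svc(value):
    """Parse an Alt-Svc header value into (protocol, authority, params) tuples."""
    entries = []
    for entry in value.split(","):
        parts = [p.strip() for p in entry.strip().split(";")]
        proto, _, authority = parts[0].partition("=")
        params = dict(p.split("=", 1) for p in parts[1:])
        entries.append((proto, authority.strip('"'), params))
    return entries

# A server advertising HTTP/3 draft 23 alongside HTTP/2:
print(parse_alt_svc('h3-23=":443"; ma=86400, h2=":443"'))
# → [('h3-23', ':443', {'ma': '86400'}), ('h2', ':443', {})]
```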

    • Wow (Score:5, Insightful)

      by the_skywise ( 189793 ) on Thursday September 26, 2019 @02:05PM (#59239972)

      Better transitions between networks: Instead of requiring an IP address for the source and destination of each request, QUIC uses a unique connection ID to ensure that all packets get delivered to the right place. The benefit of using these connection IDs instead of IP addresses is that these IDs will stay the same even if you switch networks in the middle of a connection. For instance, if your phone is connected to a local wifi network, and then it switches connection to use LTE, the change in IP address will not affect the QUIC connection. If you’re in the middle of a download, that download can continue even if you switch networks. This is currently not the case with HTTP/2.

      Impressive... and with new privacy implications.
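
      The parent's point can be sketched as a toy demultiplexer: the server routes datagrams by an ID carried in the packet rather than by the source address, so a mid-connection address change breaks nothing. Illustration only, with a made-up packet format, not the real QUIC wire format:

```python
# Toy demultiplexer: route datagrams by a connection ID carried in the
# payload rather than by the (source IP, port) pair. The packet format
# here ("<conn_id>:<data>") is invented, purely for illustration.
connections = {}

def handle_datagram(src_addr, payload):
    conn_id, _, data = payload.partition(b":")
    conn = connections.setdefault(conn_id, {"addrs": set(), "data": []})
    conn["addrs"].add(src_addr)   # the client's address may change mid-connection
    conn["data"].append(data)
    return conn

handle_datagram(("192.0.2.1", 5000), b"abc123:hello")
# Same connection ID arriving from a new address (e.g. after a WiFi-to-LTE switch):
conn = handle_datagram(("198.51.100.7", 6000), b"abc123:world")
```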

      • Re:Wow (Score:5, Informative)

        by BeerFartMoron ( 624900 ) on Thursday September 26, 2019 @02:41PM (#59240060)

        Impressive... and with new privacy implications.

        Privacy implication for whom? I wasn't sure, so I did a quick check:

        How Google’s QUIC Protocol Impacts Network Security and Reporting [fastvue.co] Basically says QUIC is currently bad for NetOps because it's all encrypted and NetOps can't look inside. Recommendation is to disable it until your firewall learns how to inspect it properly. Sounds good if you are a customer.

        Why is Google’s QUIC Leaving Network Operators in the Dark? [owmobility.com] QUIC is bad for NetOps. "QUIC poses a problem for mobile network operators (MNOs) and their subscribers. The modern security measures that are integrated with QUIC are encryption based. And because it is encrypted, MNOs can’t see the traffic that is flowing on their networks. In a nutshell, mobile networks are 'going dark'."

        The Impact on Network Security Through Encrypted Protocols – QUIC [cisco.com] Cisco says QUIC is encrypted and bad for NetOps. Shocking.

        A QUIC Look at Web Tracking [uni-hamburg.de] Finally, something wrong for customers. "[QUIC] design contains violations of privacy best practices through which a tracker can passively and uniquely identify clients across several connections. These tracking mechanisms can achieve reduced delays and bandwidth requirements compared to conventional browser fingerprinting or HTTP cookies. This allows them to be applied in resource- or time-constrained scenarios such as real-time biddings in online advertising." Ouch, real-time bidding on the ads I get to see. Thanks Google, you still suck.

        Sounds like QUIC is bad for everyone at some level, or great for everyone as long as you are Google.

        P.S. Very nice BlackHat presentation: HTTP/2 & QUIC: TEACHING GOOD PROTOCOLS TO DO BAD THINGS [blackhat.com]

      • Correct. They're trying to fix a flaw in all traffic, with a sledgehammer.

      • QUIC uses a unique connection ID to ensure that all packets get delivered to the right place.

        QUIC uses UDP. Isn't UDP a best-effort protocol with no error checking? How do you ensure that all UDP packets get delivered at all, let alone to the right place?

        • Re: (Score:3, Interesting)

          QUIC uses a unique connection ID to ensure that all packets get delivered to the right place.

          QUIC uses UDP. Isn't UDP a best-effort protocol with no error checking? How do you ensure that all UDP packets get delivered at all, let alone to the right place?

          QUIC handles all connection quality issues itself, and so only needs UDP for transport. QUIC takes over the role of TCP and TLS.

          https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/46403.pdf [googleusercontent.com]

        • Re:Wow (Score:4, Informative)

          by Zocalo ( 252965 ) on Thursday September 26, 2019 @03:28PM (#59240226) Homepage
          TCP maintains state in the protocol so, in theory, any transmission errors should be handled by the IP stack, but this incurs additional transmission overhead. UDP does away with that, meaning a much greater share of the available bandwidth is taken up by data rather than protocol headers, but you need to do all your error checking in the application stack. Generally, that's done with a simple checksum and sequence number on each data "block", allowing a client to request a retransfer of any overdue/bad blocks.

          If you've got a reliable network then UDP can be much faster - viz. TFTP vs. FTP - so it can make a lot of sense when you've got lots of data to stream/transfer (like media files). The flipside is that it can get really unpleasant if your network isn't very stable and you don't have a small enough block size when you need to recover and re-transmit the last block. In the worst case, every single block may fail multiple times before it goes through, but that'll only happen with large block sizes and a very high packet loss rate.
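
          The checksum/sequence-number scheme described above can be shown in miniature. This is a toy stop-and-wait simulation (no real sockets; loss is faked with a seeded RNG), just to show the shape of the per-block retransmission an application must supply itself on top of UDP:

```python
import random

def transfer(blocks, loss_rate=0.5, seed=0):
    """Toy stop-and-wait transfer over a lossy datagram channel.

    Each block carries a sequence number; anything that doesn't get through
    (simulated here with a seeded RNG, not real networking) is retransmitted
    until the receiver has it. In-order reassembly is the application's job.
    """
    rng = random.Random(seed)
    received = {}
    attempts = 0
    for seq, data in enumerate(blocks):
        while seq not in received:
            attempts += 1                  # one (re)transmission of this block
            if rng.random() >= loss_rate:  # datagram survived the channel
                received[seq] = data       # receiver stores it and acks
    return [received[i] for i in range(len(blocks))], attempts

data, attempts = transfer([b"a", b"b", b"c"])
```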
          • Re:Wow (Score:5, Informative)

            by WaffleMonster ( 969671 ) on Thursday September 26, 2019 @03:56PM (#59240364)

            TCP maintains state in the protocol so, in theory, any transmission errors should be handled by the IP stack, but this incurs additional overhead in transmission.

            TCP provides a reliable ordered bidirectional data stream. The overhead is in doing what is necessary to provide this service over IP.

            UDP does away with that, meaning a much greater amount of the available bandwidth being taken up with data rather than protocol headers

            Fundamentally it is not about space consumed by headers. The overhead is in doing what is necessary to provide the underlying guarantees themselves. This primarily takes the form of latency and the resources needed to manage state and buffers.

            UDP only provides datagrams; it doesn't provide ordered delivery or reliability guarantees. The benefit of UDP over TCP is only realized by applications able to tolerate those weaker guarantees. Most websites are not among those applications.

            A common example of UDP is real-time voice and video communication where perfect lossless transmission is less important than minimizing latency.

            If you've got a reliable network then UDP can be much faster - viz. TFTP vs. FTP - so it can make a lot of sense when you've got lots of data to stream/transfer (like media files).

            This simply is not true. Modern TCP stacks are highly efficient at bulk data transmission and will use dynamic window scaling to achieve essentially the same outcome over time for large bulk data.

            • Reading through what people here have said about QUIC, and interpreting it through my current filter regarding Google's business practices, I am going to conclude that QUIC was (internally to Google) primarily designed to give Google better tracking of individuals over a longer period of time, but (to the public) is being justified as a potential marginal improvement for a particular edge case: browsing the web while moving across different networks.

              I mean seriously, why move off of TCP

              • by Agripa ( 139780 )

                I have the same suspicion fueled by Google's unsavory and sometimes technically idiotic business practices.

                But to be fair, the existing protocol SCTP was not viable because of its lack of support across NAT boundaries. Offsetting this, however, is that this was apparently recognized as a problem, and SCTP can also be run over UDP.

            • by Agripa ( 139780 )

              If you've got a reliable network then UDP can be much faster - viz. TFTP vs. FTP - so it can make a lot of sense when you've got lots of data to stream/transfer (like media files).

              This simply is not true. Modern TCP stacks are highly efficient at bulk data transmission and will use dynamic window scaling to achieve essentially the same outcome over time for large bulk data.

              If TCP were that efficient, then there would have been no need for UDT.

              My own experience has been that TCP is worse than a properly designed UDP data transfer protocol under adverse conditions; TCP just performs poorly compared to UDP as packet loss increases. Maybe this is a result of poor tuning of the TCP stack or interference between the endpoints but since this is largely outside of the user's control, the cause is irrelevant.

              Over reliable connections, the difference in performance is small until bandw

        • Re:Wow (Score:4, Informative)

          by Retired ICS ( 6159680 ) on Thursday September 26, 2019 @05:31PM (#59240720)

          The same way TCP does, with ACKs of the window edge. It is nothing more than TCP implemented over UDP with some anti-privacy super-tracking features added so that the inventor (Google) can make more money by stealing shit that does not belong to them and selling it to whomever for whatever price the market can bear.

        • by Agripa ( 139780 )

          QUIC uses UDP. Isn't UDP a best-effort protocol with no error checking? How do you ensure that all UDP packets get delivered at all, let alone to the right place?

          Error detection and correction are implemented at a higher level, which would be the presentation or application layer in the OSI model. Many protocols, like Micro Transport Protocol and OpenVPN, work this way, and it has some significant advantages over TCP, including immunity to RST attacks because UDP is stateless.

          I wonder why they did not use an existing advanced protocol like SCTP instead of inventing a new one.

      • by nadass ( 3963991 )

        Better transitions between networks Instead of requiring an IP address for the source and destination of each request, QUIC uses a unique connection ID to ensure that all packets get delivered to the right place. The benefit of using these connection IDs instead of IP addresses is these IDs will stay the same even if you switch networks in the middle of a connection.

        For instance, if your phone is connected to a local wifi network, and then it switches connection to use LTE, the change in IP address will not affect the QUIC connection. If you’re in the middle of a download, that download can continue even if you switch networks. This is currently not the case with HTTP/2.

        Impressive... and with new privacy implications.

        The easiest way for browsers to address this sticky-ID situation is to generate a new UUID for every browser session, as long as the network transport protocol libraries do not manage the sticky IDs in the first place! If they do, then an alternate QUIC implementation should strip out the enforcement of the sticky IDs and instead allow something higher up the stack (i.e. the browser) to dictate the UUID to use for the given session.

        • The easiest way for browsers to address this sticky-ID situation is to generate a new UUID for every browser session, as long as the network transport protocol libraries do not manage the sticky IDs in the first place! If they do, then an alternate QUIC implementation should strip out the enforcement of the sticky IDs and instead allow something higher up the stack (i.e. the browser) to dictate the UUID to use for the given session.

          The privacy issues are actually addressed in the drafts. Basically, the way it works is that you ask the server for new connection IDs in advance, which can then be used for subsequent switchovers. I suspect reality will end up being quite a bit different, like those amazing IPv6 privacy extensions in Windows that have been broken for a dozen years.
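
          A sketch of the scheme the drafts describe, as I understand it (the names and structure here are hypothetical, not the real QUIC API): the server supplies spare connection IDs ahead of time, and the client switches to an unused one when it migrates, so the old and new paths can't be linked by the ID alone.

```python
import secrets

class MigratableConnection:
    """Toy sketch of connection-ID rotation on path migration.

    Invented names, not the real QUIC implementation: the server issues
    spare IDs in advance (in real QUIC, via NEW_CONNECTION_ID frames),
    and the client switches to an unused one when its network path
    changes, so an on-path observer can't link old and new paths by ID.
    """
    def __init__(self):
        self.spare_ids = [secrets.token_hex(8) for _ in range(4)]
        self.current_id = self.spare_ids.pop()

    def migrate(self):
        old = self.current_id
        self.current_id = self.spare_ids.pop()  # fresh, never-before-seen ID
        return old, self.current_id

conn = MigratableConnection()
old, new = conn.migrate()  # e.g. on switching from WiFi to LTE
```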

      • For instance, if your phone is connected to a local wifi network, and then it switches connection to use LTE, the change in IP address will not affect the QUIC connection.

        Oh, so a built-in ability to switch connections to a different IP address? There's absolutely no way ever this could be abused. Not ever.

        • Since the communicating device needs to be able to encrypt and decrypt with a shared key that only it and the server know, that is correct ... It is not something easily abused.
          • by tepples ( 727027 )

            The server can know through the shared key the client is the same client that previously connected. This makes the shared key a tracking cookie.

            • I can see now that the GP was implying that was an issue. I didn't pick up on it because it's a stupid concern that ignores the fact that switching from IP to IP was never a method for defeating tracking to begin with. It's the equivalent of saying, "Great, a computer that doesn't crash every 10 minutes ... I guess it didn't occur to the morons who came up with that idea that hackers will have more time to try and break in now!"
    • 3 is greater than 2; that is all I need to know.

      For the most part this isn't changing HTML, where most of the development happens, but the HTTP side. The big thing I can see is that lately most programming languages for web development (Python, Node.JS, etc.) are no longer commonly plugins to web server software, like Apache with PHP and IIS with .NET, but have self-contained web servers as part of the language. So a lot of "New Code" would need to be upgraded to work with HTTP/3, while the Old PHP and A

    • Seems like ditching the megabytes of javascript and tracking code would do wonders to speed things up.

    • The summary fails to mention anything about what HTTP/3 does.

      That's because it is harmful. None of these features are worth the damage caused by aggressive congestion algorithms.

      No head-of-line blocking of all streams when a multiplexed connection has a failure in one stream.

      This is why browsers use multiple connections.

      Faster connection setup on high latency networks

      Round-trip times are the same as when employing the relevant TCP and TLS extensions, and just as dangerous and worthless.

      Smoother transitions between networks - switching from cellular data to WiFi just plain works.

      I can't imagine a scenario in which this "feature" is worth the risks from end user, application or network perspectives.

      Today I have a web application and change links. It continues uninterrupted as if nothing happened. Today I download a

  • Http: Https: httpss:?? LOL
  • Contrary to popular opinion, not everything is run behind CF.

  • Long live HTTP/1.1 (Score:5, Insightful)

    by belg4mit ( 152620 ) on Thursday September 26, 2019 @09:24PM (#59241292) Homepage

    Screw your binary bullshit Google, sometimes it's nice to telnet to port 80 to get a real view of what's happening.
