Most of the Web Really Sucks If You Have a Slow Connection (danluu.com) 325

Dan Luu, hardware/software engineer at Microsoft, writes in a blog post: While it's easy to blame page authors because there's a lot of low-hanging fruit on the page side, there's just as much low-hanging fruit on the browser side. Why does my browser open up 6 TCP connections to try to download six images at once when I'm on a slow satellite connection? That just guarantees that all six images will time out! I can sometimes get some images to load by refreshing the page a few times (and waiting ten minutes each time), but why shouldn't the browser handle retries for me? If you think about it for a few minutes, there are a lot of optimizations that browsers could do for people on slow connections, but because they don't, the best current solution for users appears to be: use w3m when you can, and then switch to a browser with ad-blocking when that doesn't work. But why should users have to use two entirely different programs, one of which has a text-based interface only computer nerds will find palatable?
  • by Opportunist ( 166417 ) on Friday February 10, 2017 @09:44AM (#53838981)

    No idea how your connection speed adds anything to this.

    • by stooo ( 2202012 )

      Most of the web sucks, but with a slow connection you really need a long time to get to that point of understanding.
      An ad blocker can help you reach that conclusion faster, though.

  • Comment removed (Score:5, Interesting)

    by account_deleted ( 4530225 ) on Friday February 10, 2017 @09:45AM (#53838989)
    Comment removed based on user account deletion
    • Lynx? You really should upgrade to eLinks. Lynx is superior for piping to a speech converter; on a real terminal, eLinks formats the page a lot better.

    • by dbIII ( 701233 )
      I still use lynx from time to time and used to use it to get nvidia driver downloads. I can't use it for that purpose, or for a lot of other pages now, because there are things you just can't get to beyond the first page if all you have is text.
      I don't know how blind people cope, because they use text-mode web browsers as well. There must be a lot of the web they can't get to, despite web design 101 from the early days of the net insisting that you must give blind people some way to navigate your site.
  • This is one of the most obvious statements I have ever read on /., or anywhere on the Internet. As long as I have a fast enough connection, of course.
  • by DontBeAMoran ( 4843879 ) on Friday February 10, 2017 @09:48AM (#53839017)

    Everything Internet-related really sucks if you have a slow connection.

    • Umm, no. (Score:5, Informative)

      by Viol8 ( 599362 ) on Friday February 10, 2017 @10:32AM (#53839389) Homepage

      SSH and telnet work just fine over a slow connection, and so does email as long as it doesn't have a load of attachments, as do other protocols such as gopher. People did manage to use the internet over dial-up before HTTP/HTML came along and sucked up as much bandwidth as it could!

      • by trg83 ( 555416 )
        Depends on whether slow means high latency or low bandwidth. SSH and telnet are miserable on a high-latency link regardless of its bandwidth.
  • Timeout (Score:5, Interesting)

    by religionofpeas ( 4511805 ) on Friday February 10, 2017 @09:48AM (#53839019)

    Why does my browser open up 6 TCP connections to try to download six images at once when I'm on a slow satellite connection? That just guarantees that all six images will time out!

    The problem is not opening 6 connections, or failure to retry, but a timeout that's too short.
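
    The retry handling the summary asks for is easy enough to script yourself. A minimal sketch of what a browser *could* do on a flaky link, in Python with only the standard library; the attempt count and timeouts are made-up numbers, not anything a real browser ships:

    ```python
    import time
    import urllib.request

    def fetch_with_retries(url, attempts=5, base_timeout=30):
        """Fetch a URL, doubling the timeout on each failed attempt."""
        for attempt in range(attempts):
            try:
                timeout = base_timeout * 2 ** attempt
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except OSError:
                # URLError and socket timeouts are both OSError subclasses;
                # back off briefly, then retry with a longer timeout.
                time.sleep(2 ** attempt)
        raise TimeoutError(f"gave up on {url} after {attempts} attempts")
    ```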

    • by Octorian ( 14086 )

      And it's interesting to see how different browsers handle this.

      I have a cable modem with a web admin interface that's *extremely* slow to respond to any requests. It works fine via Firefox, if I'm very patient. It's totally unusable via Chrome.
      (It's Comcast's high-end "wireless business gateway" device, and something I'm basically stuck with if I want my current service package)

      • Which one? I'm on Comcast Business and while the modem's UI isn't particularly zippy, it's not terribly slow either. I don't use its wifi though; I have my own router and gateway.

    • The problem is that HTTP is a shitty protocol. It uses a unique TCP connection for every request. For each page of text and every image, HTTP opens a new TCP connection and tears it down after the transfer. This causes a lot of latency, partly because TCP is designed to start slow and ramp up to the available bandwidth, and partly because of the extra signalling for new TCP handshakes, authentication tokens, encryption renegotiation, etc. As a result, your web browser spends way more time than it should

      • For each page of text and every image, HTTP requests a new TCP connection and tears it down after transfer.

        Which version of HTTP are you using? Even HTTP 1.1 has keep-alive.

      • That page is about HTTP 1.0.

        1.1 (currently the dominant version) allows connection keep-alive and pipelining, which were supposed to solve those issues. Unfortunately pipelining has its own problem: one slow request (a large amount of data, a slow CGI script, etc.) can block the whole pipeline. So AFAICT most clients use connection keep-alive but not pipelining.

        2.0 allows multiple simultaneous requests on the same TCP connection, but has the downside of being much more complicated to implement.
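
        To make the keep-alive point concrete: a hedged sketch using Python's standard http.client, which keeps one TCP (and TLS) connection open across sequential requests; the host and paths are placeholders. Without keep-alive, every GET would pay a fresh TCP and TLS handshake.

        ```python
        import http.client

        # One TCP connection, several sequential requests: HTTP/1.1 keeps
        # the socket open unless either side sends "Connection: close".
        conn = http.client.HTTPSConnection("example.com", timeout=60)
        for path in ("/", "/a.png", "/b.png"):  # hypothetical resources
            conn.request("GET", path)
            resp = conn.getresponse()
            body = resp.read()  # drain the body before reusing the socket
            print(path, resp.status, len(body))
        conn.close()
        ```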

  • by Anonymous Coward

    Tell that to all the people with slow connections, whose operating system is constantly phoning home to a dozen different servers and hijacking their bandwidth to spread itself on a P2P network.

  • by jfdavis668 ( 1414919 ) on Friday February 10, 2017 @09:51AM (#53839047)
    I have the same problem with smartphone apps. If you don't have the highest LTE connection possible, the app is a pain to use. Go to a rural area, and you may not even get it to open. Web sites are the same way. They give developers super fast connections, and they develop applications that require that speed. They don't put them on slow networks and test to see if they are even useful on a basic level.
    • My transit prediction webapp, TransSee [transsee.ca], should work fine on slow networks. Avoiding all that bandwidth-sucking just means more work for me as the creator.

    • by Anonymous Coward on Friday February 10, 2017 @10:15AM (#53839243)

      I remember WordPerfect 6. All we, as users and support techs, asked was that they not make it slow with new features and debris.

      They failed. Why?

      We found out in one of the responses from the dev team, via support: "All our developers use it too, and they have no problems with performance. Perhaps you need to upgrade your computer?"

      Well, we also found out that the dev team had 486DX2-66 machines with 16MB RAM, in an age when most attorneys' assistants were using 386 machines with maybe 4MB RAM and Windows 3.0/3.1. Yep, hanging on. A busy assistant would reboot twice a day. Had the WP dev team been testing even minimally on mainstream workstations, they would have known, and had a better answer, like "that's the price for state-of-the-art features, so suck it up and upgrade."

      How is this related to smartphone apps and network performance?

      First, the issue has nothing to do with user satisfaction. It is about 0. app infrastructure and 1. marketing.

      These apps that require a server infrastructure aren't going to be built to serve 'slow' users. Open connections spanning minutes are expensive. Adapting to network speed by stretching timeouts and optimizing data flow doesn't make any money for the advertisers or the app itself; the devs are focused on new features, gadgets, and monetizing the product. The rural market is literally worthless to them.

      And since the apps are marketed to majorities, smaller populations out in the slow network space have no voice. And no money.

      Lightweight apps like Twitter (how much data DOES it take to send a 140 character message?) and SMS manage, but even maps fail spectacularly when they are used in 2G network spaces, and there are still a lot of those.

      My complaint is that in a metro area, during lunch, the congestion is so bad I lose data entirely. LTE 4G is available, as well as UMTS, but when I lose it I don't even get 2G - they just drop connections. Reboot my phone and get service for a few minutes, and then it's gone again. It's just 5500+ employees going out for lunch in a square mile. I have asked, and no carrier is immune to this. And building out capacity there isn't going to happen, because the congestion occurs for 3 hours a day, Monday-Friday. Nope, not worth it. When voice starts failing, maybe.

      Surprisingly, the PSTN had minimum service levels dictated by state legislation, and while the penalties were often minimal, there is NOTHING like this in cell service that I am aware of. AMPS and NAMPS didn't really have them, and CDMA/GSM apparently did not. Now that cell service is the only service for many, SLAs for service would be useful. I doubt they will be proposed, enacted, or enforced.

      And app performance will *never* be enforceable. Too many variables. I hear ya, rural performance is terrible, and every claim that it will be improved hinges on the cost benefit analysis. That will not change for a while, even with Band 12/700MHz/600MHz spectrum being deployed. Nope.

      Now gigabit LTE and the next generation give us hope that the cell companies will build out and try to displace the incumbent wired ISPs, but I'm dreaming here. Or am I? Since rural America doesn't have wired ISPs, it may be overlooked during the transition to 'copious bandwidth'. Glad I don't live in the woods any more, though if I didn't have a great job I would move there and disconnect.

  • Most web browser engines are open source. Go and modify one or many of them to handle slow connections better.

    • by rudy_wayne ( 414635 ) on Friday February 10, 2017 @10:02AM (#53839143)

      Most web browser engines are open source. Go and modify one or many of them to handle slow connections better.

      And after you get done with that, you can take your car's engine apart and redesign it to get 400 miles per gallon.

    • The question is, do you want your browser doing a speed test, or an OS with an API that the browsers can pick up and adjust accordingly? I usually have Chrome and Safari open. Chrome eats up less memory and auto-connects to my Android tablet, but Safari is needed to connect to my phone.

      Since they won't build a bridge between the two I have to link them the hard way.

      But bandwidth and speed tests should be at a much lower level than the web browser. That way other apps can adjust accordingly. Also I always first

  • Comment removed based on user account deletion
    • Providers don't offer a proxy because an HTTPS proxy requires each user of each browser on each device to add the proxy's root certificate to the certificate store used by the profile associated with that user, browser, and device. Non-technical users are unlikely to successfully complete this process for both the OS-wide certificate set and the separate NSS set used by Firefox.

  • by JohnM4 ( 1709336 ) on Friday February 10, 2017 @09:55AM (#53839091)

    You can configure this setting in Firefox. It doesn't look like Chrome has a similar configuration.

    http://kb.mozillazine.org/Abou... [mozillazine.org]
    network.http.max-persistent-connections-per-server - default = 6

    Try setting this to 1.

    Source:
    https://support.mozilla.org/t5... [mozilla.org]

  • Follow the money (Score:5, Interesting)

    by Larry_Dillon ( 20347 ) <dillon.larry@gmailTWAIN.com minus author> on Friday February 10, 2017 @09:56AM (#53839103) Homepage

    A lot of what drives modern internet design is e-commerce. If you're on a slow connection, you probably don't have much money to spend, so why should anyone care? Or so the thinking goes....

    • I do have money to spend and I maxed out my connection with dual 3Mbit connections into a load balancer. That is the best I could get, period. By the way, I only live about 15 miles from Portland, OR. Internet coverage in this country is very uneven.

      • by trg83 ( 555416 )
        15 miles? Are there no terrestrial (surface) RF-based connectivity solutions available to you?
      • Of course there are exceptions, but in aggregate, faster internet = more disposable income.

        Though I agree with the article that most web pages are needlessly bloated. I'm not a web developer, but I've seen a few demos of web sites where the customer comes to the developer's shop and is shown a demo of the new site. The page is hosted locally and many elements are cached on the developer's computer, so of course it's much faster than what the average home user is going to experience. The customer rarely under

    • by Calydor ( 739835 )

      I am on a 448/96 kbps ADSL connection which is the fastest possible available to the house that I own.

      So which should be the biggest indicator of my available money to spend? My connection speed or owning the house I live in?

      • by green1 ( 322787 )

        A website can figure out the speed of your connection; it doesn't know whether you own your home.

  • by Red_Chaos1 ( 95148 ) on Friday February 10, 2017 @09:57AM (#53839109)

    Even with fast Internet connections, websites are so bloated with ancillary scripts and tracking code and cross-linking to 20 different advertising and content servers that you get stuck waiting no matter what. CDNs helped, but you're still hostage to some advertising company's one slow server because it's not on that CDN.

    • Unless you run a local DNS server that spoofs a lot of those and points them to something local that responds "instantly". I use a Pi for this at home; with 1.5Mb DSL it was a big saver, and even though I've finally been able to upgrade to 6Mb it is still a saver.

    • It's even worse if you are using your ISP's DNS servers. Each request involves a high-latency hostname lookup before the HTTP portion can even start. And for some reason, CDNs use a different hostname for every client's files, so the DNS lookup is never cached.
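
      You can measure that lookup cost yourself; here's a small Python sketch (standard library only) that times resolution through whatever resolver the OS is configured with. On a high-latency link the first lookup can dominate a small asset's total fetch time, and a per-client CDN hostname forces that first lookup every time:

      ```python
      import socket
      import time

      def lookup_ms(host):
          """Time one name resolution via the OS-configured resolver."""
          start = time.monotonic()
          socket.getaddrinfo(host, 443)
          return (time.monotonic() - start) * 1000

      # The repeat lookup is usually much faster if anything on the path
      # caches; a unique-per-client CDN hostname never gets that benefit.
      for host in ("example.com", "example.com", "www.example.org"):
          print(host, f"{lookup_ms(host):.1f} ms")
      ```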

    • Honestly, this is why I run NoScript. I really don't give a shit about privacy and all that, but being able to save hours of waiting every month because I'm not loading some bloated advertising shitshow? Priceless.
  • Web page designers (Score:5, Insightful)

    by Grand Facade ( 35180 ) on Friday February 10, 2017 @09:58AM (#53839115)

    all show which side their bread is buttered on.

    The advertising content loads first, the page rebuilds/rearranges itself 6 or 8 times, and finally the content you want to see becomes visible or stabilizes enough to click a link.

    I think some pages are designed to do this on purpose.
    You get a glimpse of the content you are looking for and click a link just as the page rebuilds itself, the link has changed to an ad, and the cash register rings on a click-through.

    • You think the page designers make the decisions to add the advertising & tracking pixels? No.

      Those decisions are made by the people who pay them. Some of us cranks sit in meetings and point out the drawbacks to their incestuous, ignorant greed, but again, we don't have the final decision.

      • You think the page designers make the decisions to add the advertising & tracking pixels? No.

        Those decisions are made by the people who pay them. Some of us cranks sit in meetings and point out the drawbacks to their incestuous, ignorant greed, but again, we don't have the final decision.

        Yet those developers don't seem to be able to come up with alternative ideas to "monetise" that web site - a necessary evil, as otherwise no-one is going to pay their salaries.

    • Ad networks in general try to make it "easy" for web designers by giving them just a snippet of JavaScript to include. Then you can't pre-define the dimensions of that content block, which makes it block rendering until it loads.

    • I hate pages that look like they stopped loading and decide to do one last refresh AFTER I've started scrolling. That and images that don't load until you get near them. All that does is rearrange text while you're trying to read.

    • all show which side their bread is buttered on.

      The advertising content loads first, the page rebuilds/rearranges itself 6 or 8 times, and finally the content you want to see becomes visible or stabilizes enough to click a link.

      I think some pages are designed to do this on purpose.
      You get a glimpse of the content you are looking for and click a link just as the page rebuilds itself, the link has changed to an ad, and the cash register rings on a click-through.

      I was getting ready to say fuck that shit. I'm out in the boonies on a HughesNet metered connection and we go over our bandwidth every month. What I was going to do is record the domains hosting any ads about herbal viagra, why a celeb doesn't talk about an offspring, pictures being banned or embarrassing, dingus.tv, or the "dear HughesNet subscriber" popunder, and then edit the hosts file so they would resolve to my router, whose web server would send back a nice clean 404 error. Somehow Win10 isn't allowing me to

  • Are designed for faster connections than yours, where doing things asynchronously rather than synchronously is much preferred (though not, it appears, by the developers I tend to deal with).
  • by Lisandro ( 799651 ) on Friday February 10, 2017 @10:04AM (#53839153)

    It had a number of options where you could set the number of connections, globally and per page. And yes, it was useful during the early days of dialup.

    But anyway, most of the web sucks in this regard. Period. Sites are horribly designed these days - a gazillion JS dependencies, unnecessarily large images which get scaled, zero concern for mobile devices, etc.

    • by sirber ( 891722 )
      You can disable JavaScript per site (with Chrome at least). It helps a lot on some pages, like slashdot.org.
      • A shitload of sites simply stop working if you disable JS. We're long past the days of content + presentation.

    • unnecessarily large images which get scaled

      Understatement. I've seen 20MP images scaled down to under 200 pixels high. And worse, they're straight off the camera and not even recompressed for the web. 6MB images like this really eat through my mobile data, as I try to stay under 1GB.
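
      Recompressing for the web is a three-liner on the server side. A sketch using the third-party Pillow library; the file names and target size are hypothetical:

      ```python
      from PIL import Image  # third-party Pillow, not the standard library

      im = Image.open("camera_original.jpg")  # hypothetical ~20 MP source
      im.thumbnail((400, 400))  # shrink in place, preserving aspect ratio
      im.convert("RGB").save("web_version.jpg", "JPEG",
                             quality=80, optimize=True)
      # Tens of KB instead of ~6 MB, indistinguishable at 200 pixels high.
      ```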

  • by 0100010001010011 ( 652467 ) on Friday February 10, 2017 @10:07AM (#53839179)

    The digital divide [wikipedia.org] in the US became most evident (to me) in this last election cycle.

    If you look at the page weights of 'conservative' vs. 'liberal' news sites, the former are much smaller and tailored to people even on dial-up, in large part because they know their demographic. Rural internet in the US flat-out sucks. We have counties in my state, not more than 3 hours outside of Chicago, where dial-up is still a viable option.

    Drudge Report [drudgereport.com] loads amazingly fast. Huffington Post [huffingtonpost.com] does not. Drudge was 1.13 MB in size, with 44% of that images [imgix.com]. (The site I used to analyze them finished with Drudge's 14 assets long before Huffington Post stalled at 220/222 assets.)

    The art of optimization seems to have disappeared. It made a small resurgence when web developers tried to optimize for the mobile web, but it doesn't look like most developers ever tried that hard.

    It's a closed feedback loop. Developers live in places with fast Internet, test in places with fast Internet and then don't understand what it's like anywhere else. Students on college campuses live with gigabit internet and Internet2 connections to peer universities. They move to cities that Comcast pays attention to.

    The best suggestion I have: Turn off images, configure the browser not to thread connections, and get involved in local government to get faster internet to your area.
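
    If you want to weigh a page yourself, here's a rough sketch in Python (standard library only). It counts only the HTML plus the <img> and <script> assets referenced directly in it, so it undercounts pages that pull in CSS, fonts, or JS-injected resources; the URL is a placeholder:

    ```python
    import urllib.request
    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class AssetCollector(HTMLParser):
        """Collect src URLs of images and scripts referenced by a page."""
        def __init__(self):
            super().__init__()
            self.assets = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("img", "script") and attrs.get("src"):
                self.assets.append(attrs["src"])

    def page_weight(url):
        with urllib.request.urlopen(url, timeout=60) as resp:
            html = resp.read()
        parser = AssetCollector()
        parser.feed(html.decode("utf-8", errors="replace"))
        total = len(html)
        for asset in parser.assets:
            try:
                with urllib.request.urlopen(urljoin(url, asset),
                                            timeout=60) as resp:
                    total += len(resp.read())
            except OSError:
                pass  # unreachable asset: count what we can
        return total, len(parser.assets)

    size, n_assets = page_weight("https://example.com/")
    print(f"{size / 1e6:.2f} MB across {n_assets + 1} fetches")
    ```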

    • by Lumpy ( 12016 )

      The art of optimization seems to have disappeared...

      Because shiny and swoopy is more important than performance

      • It's not that it has disappeared. But if I have to choose between making my website shiny and making it small, 9 times out of 10 I'll prioritise the UX for users with high bandwidth (HiDPI icons, etc.).

    • by green1 ( 322787 )

      The art of optimization seems to have disappeared. It made a small resurgence when web developers tried to optimize for the mobile web, but it doesn't look like most developers ever tried that hard.

      They never tried to "optimize" for the mobile web, they just removed half the content while leaving all the ads and tracking in place. It means that the mobile version ALWAYS sucks in comparison to the full version, even when viewed on a mobile device.

      I have NEVER seen a single "mobile" web page that was better on my phone than the full version of the same site. I never, under any circumstances, ever want to see the mobile version of any web page.

  • Important fix (Score:5, Insightful)

    by Lumpy ( 12016 ) on Friday February 10, 2017 @10:09AM (#53839207) Homepage

    Be sure to run ad blockers; stripping out adverts makes a big difference.

    But even Slashdot is a big fat bloated pig. No reason at all to load everything plus a giant pile of JS.

    • by Calydor ( 739835 )

      Slashdot even gets crazy timeouts if you're lagging TOO much. It won't load all comments, maybe won't even load the main page or stories... And looking at this site, it really should be just friggin' text.

    • An ad blocker and NoScript on Linux with Firefox, and my internet is blazingly fast.

    • by TheStickBoy ( 246518 ) on Friday February 10, 2017 @11:22AM (#53839857)

      Be sure to run ad blockers; stripping out adverts makes a big difference.

      But even Slashdot is a big fat bloated pig. No reason at all to load everything plus a giant pile of JS.

      I highly recommend https://alterslash.org [alterslash.org]; it removes all the bloat from slashdot.org. Thanks to Jonathan Hedley!

  • Websites that have to load 50+ JavaScript ads before displaying the page can easily overwhelm a slow processor even with a fast cable connection.
  • Get an extension for Chrome or Firefox that blocks images unless you ask for them. Turn off JavaScript unless the site needs it.
  • When one of the tactics for those on slow connections is to employ ad-blocking technology, one doesn't have to look far to see the impact on bandwidth. Couple that with the fucking telemetry bullshit infecting software these days, and the problem is obvious in both directions.

    I find it odd that our society will label physically tracking everything a human does a crime, but virtually tracking and selling everything a human does is called capitalism.

  • I think Opera came up with an ideal compromise. A middle-man proxy that did filtering and compression. Fast pipe on their end, slow on yours.

  • by feenberg ( 201582 ) on Friday February 10, 2017 @10:34AM (#53839403)

    There is a hilarious (and sad) commentary on website bloat at http://idlewords.com/talks/web... [idlewords.com] that shows truly outrageous examples of this sin.

  • Users like myself who are on a metered connection also suffer, but in a slightly different way. All those pesky videos that (partially) download on page load when you're just trying to read a news article. The bandwidth is plenty fast, which actually exacerbates the problem. How is it that Chrome provides a switch to turn off images, but not videos?

    Another commenter here talked about pipelining. Again, this comes back to a lack of proper browsing features in our de-facto web browser, Chrome. We want power o

  • Easy solutions (Score:4, Informative)

    by guruevi ( 827432 ) on Friday February 10, 2017 @11:14AM (#53839805)

    Most browsers open 6 connections because most connections these days can handle it. If you live in a time warp (like most of the US) with tech that's 20 years old, then you have to use 20-year-old tricks of the trade.

    "Back in the day" we had 56k at home or, for the rich folk, 128k (ISDN), double ISDN if you were really lucky. We had the same problems: the 'web' was getting fancy with things like Flash, video and high-def images, because all the 'work' was being done at places with access through an institution that had at least fractional T1s, and 100Mbps home copper/fiber connections for $10 were being promised as less than a decade away by the ISPs, which were then repeatedly subsidized by governments to do just that.

    How did we do it:
    a) Set up your own DNS caching servers
    b) Set up your own HTTP/HTTPS proxy caching servers with giant caches (a toy sketch follows below)
    c) Proper QoS to make sure certain traffic had priority over others, small packets and small buffers
    d) Set up your own servers (such as IMAP/SMTP, gaming, etc.) and have them sync during times of lesser activity. These days, with a bit of API tinkering, even social media can be done that way.

    We had LAN parties with hundreds of computers behind a 1Mbps cable modem, no problem.
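
    Item (b) above can be prototyped in a few dozen lines. A toy sketch in Python: a plain-HTTP forward proxy with an unbounded in-memory cache, no cache validation, and no CONNECT/TLS support; strictly an illustration, not something you'd deploy:

    ```python
    import http.server
    import socketserver
    import urllib.request

    CACHE = {}  # url -> (content_type, body); unbounded, never invalidated

    class CachingProxy(http.server.BaseHTTPRequestHandler):
        def do_GET(self):
            url = self.path  # a forward proxy receives the absolute URL here
            if url not in CACHE:
                with urllib.request.urlopen(url, timeout=60) as upstream:
                    ctype = upstream.getheader("Content-Type",
                                               "application/octet-stream")
                    CACHE[url] = (ctype, upstream.read())
            ctype, body = CACHE[url]
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    # Point the browser's HTTP proxy setting at this machine, port 3128.
    with socketserver.ThreadingTCPServer(("", 3128), CachingProxy) as proxy:
        proxy.serve_forever()
    ```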

  • It's a shame Opera Software abandoned their old Presto engine for Chromium instead - Opera (through version 12) had a setting for number of connections per site and total number of connections. And ad blocking, and on and on.
  • by account_deleted ( 4530225 ) on Friday February 10, 2017 @11:48AM (#53840097)
    Comment removed based on user account deletion
  • 1. People are egocentric. Everyone thinks (by default) that everyone else is in the exact same relationship with their environment as they are. I've been in corporate IT for many years (in the past) and have seen a large number of major initiatives designed from the point of view of the HQ folks who are on top of the data center, network-wise. You know, millisecond latency to the servers. Then the app is deployed out to the remote masses, and... it sucks. So the app developers demand to know when the

  • Why does my browser open up 6 TCP connections to try to download six images at once

    The website could easily serve those six images as one image and use CSS to display them individually.
    So it's not only a browser issue.

    Actually, it's a protocol issue.
    I believe this sort of thing is being changed for HTTP/2, with multiplexing.
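
    The sprite trick is just pasting the images onto one canvas and letting CSS background-position pick each one out. A sketch using the third-party Pillow library; the icon file names are hypothetical:

    ```python
    from PIL import Image  # third-party Pillow

    paths = ["icon_home.png", "icon_search.png", "icon_cart.png"]
    icons = [Image.open(p) for p in paths]

    # Lay the icons out horizontally on a single transparent canvas.
    sheet = Image.new("RGBA", (sum(i.width for i in icons),
                               max(i.height for i in icons)))
    x = 0
    for icon in icons:
        sheet.paste(icon, (x, 0))  # each icon gets its own x-offset
        x += icon.width
    sheet.save("sprite.png")
    # CSS then selects one icon per element, e.g.
    # background: url(sprite.png) no-repeat -32px 0;
    ```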
