MIT Creates Algorithm That Speeds Up Page Load Time By 34% (softpedia.com) 169

An anonymous reader writes: MIT researchers have created an algorithm that analyzes web pages and creates dependency graphs for all network resources that need to be loaded (CSS, JS, images, etc.). The algorithm, called Polaris, will be presented this week at the USENIX Symposium on Networked Systems Design and Implementation conference, and is said to be able to cut down page load times by 34%, on average. The larger and more resources a web page contains, the better the algorithm's efficiency gets -- which should be useful on today's JavaScript-heavy sites.
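
The summary only says that Polaris builds a dependency graph of a page's resources; the actual measurement and scheduling machinery is described in the paper. As a rough illustration of the idea (not Polaris itself), the sketch below models resources and their prerequisites and derives a fetch order with a topological sort. The `Resource` type and `fetchOrder` function are invented for this example.

```typescript
// Illustrative sketch: model each page resource as a node, record which
// resources must be fetched/evaluated before it, and derive a fetch order
// that respects those dependencies (Kahn's algorithm). Not Polaris itself.

interface Resource {
  url: string;
  dependsOn: string[]; // URLs that must be loaded first
}

function fetchOrder(resources: Resource[]): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();

  for (const r of resources) {
    if (!indegree.has(r.url)) indegree.set(r.url, 0);
    for (const dep of r.dependsOn) {
      indegree.set(r.url, (indegree.get(r.url) ?? 0) + 1);
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(r.url);
      if (!indegree.has(dep)) indegree.set(dep, 0);
    }
  }

  // Start with resources that have no prerequisites (e.g. the HTML itself).
  const ready = Array.from(indegree.entries())
    .filter(([, d]) => d === 0)
    .map(([url]) => url);
  const order: string[] = [];

  while (ready.length > 0) {
    const url = ready.shift()!;
    order.push(url);
    for (const next of dependents.get(url) ?? []) {
      const remaining = indegree.get(next)! - 1;
      indegree.set(next, remaining);
      if (remaining === 0) ready.push(next);
    }
  }
  return order; // everything the client could request up front, in dependency order
}

// Example: a script that depends on a library, which depends on the HTML.
console.log(fetchOrder([
  { url: "/index.html", dependsOn: [] },
  { url: "/lib.js", dependsOn: ["/index.html"] },
  { url: "/app.js", dependsOn: ["/lib.js"] },
  { url: "/style.css", dependsOn: ["/index.html"] },
]));
```

With a complete graph like this in hand, a client can request everything it will eventually need instead of discovering dependencies one round-trip at a time, which is the saving the researchers describe.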

  • So fast (Score:5, Funny)

    by SeaFox ( 739806 ) on Wednesday March 09, 2016 @05:39PM (#51668759)

    I got first post!

    • Great, now we'll have even more first posts for every page on the Internet. As if seven plus or minus three weren't enough.
      • Re:So fast (Score:4, Funny)

        by davester666 ( 731373 ) on Thursday March 10, 2016 @01:39AM (#51670377) Journal

        In other news, most websites announced they are upgrading to new larger graphics and javascript libraries, as a necessary first step in ensuring their pages don't load too fast.

        • Their next project should be an algorithm that converts pages full of "scripting junk" into plain, clean HTML that can be read by any browser.
        • You're absolutely right, even though I suspect you were half-joking. We all know by now that programs tend to expand to use the available resources and there's little reason to think this will be any different. If we're loading pages "34% faster" then we can soon expect those pages to be filled with 34% more ads!
  • by Anonymous Coward on Wednesday March 09, 2016 @05:40PM (#51668767)

    Now, the next logical step is to have this algorithm analyze the actual scripts and figure out a way to convince the various malwares that they've been loaded satisfactorily even though they haven't. That way you could avoid downloading almost 99% of modern web pages.

    • by Anonymous Coward on Wednesday March 09, 2016 @06:18PM (#51668949)

      Now, the next logical step is to have this algorithm analyze the actual scripts and figure out a way to convince the various malwares that they've been loaded satisfactorily even though they haven't. That way you could avoid downloading almost 99% of modern web pages.

      I run the NoScript extension, so I already get all of those benefits without any need for fancy page analysis.

    • No, the next step is to kill Javascript which has now become a cancer that is destroying the Internet.

      • No, the next step is to kill Javascript which has now become a cancer that is destroying the Internet.

        I'd agree, but sadly, a huge number of sites won't work at all without Javascript. Even sadder, I actually need to use some of those sites.

        And when I say "need", I mean "need": they're not optional for me; I have to use them for work or work-related stuff.

        To be honest, I like some of the functionality that JavaScript provides (Ajax, responsive menus, etc.), but yeah, it's wormed its way into even the most basic functions of many sites these days; a lot of sites won't even load a page without it.

      • by khelms ( 772692 )
        Wait. I thought Flash was the cancer that was destroying the Internet.
  • by jimbob6 ( 3996847 ) on Wednesday March 09, 2016 @05:41PM (#51668773)
    I have something kinda like that; it's called NoScript.
    • To see a big difference, try all of these together: Disconnect + AdBlock + ScriptBlock + FlashBlock + Vanilla (a cookie blocker and manager).
      • by cheater512 ( 783349 ) <nick@nickstallman.net> on Wednesday March 09, 2016 @06:14PM (#51668917) Homepage

        The big difference being nothing working.

  • by fisted ( 2295862 ) on Wednesday March 09, 2016 @05:41PM (#51668775)

    It would still be cooler if there were no 'dependency graph' of dynamically loaded resources behind my every HTTP request.

    • by Mouldy ( 1322581 )
      This page loaded 488KB of data. It took my browser 25.5 seconds* to download it all. What you're suggesting is all of that page data be included in a single response? So the browser would have to wait 25.5 seconds before it could even start rendering the page, and it would then be difficult for the browser to cache content that is shared across multiple pages? Compare that to the current dependency structure, where the DOM was loaded in 2.41 seconds and the page was considered loaded at 5.91 seconds. (A small sketch of measuring these timings follows at the end of this thread.)
      • by fisted ( 2295862 )

        What you're suggesting is all of that page data be included in a single response?

        No. Please think harder.

        • by Mouldy ( 1322581 )

          What you're suggesting is all of that page data be included in a single response?

          No. Please think harder.

          Care to enlighten? Or are you just going to keep to short comments with no real content in them and leave people guessing what you mean?

          • by fisted ( 2295862 )

            My comment was short because I didn't feel like continuing to read yours after

            It took my browser 25.5 seconds* to download it all [whereas with a single response] the browser would have to wait 25.5 seconds before it could even start rendering the page?

            because it is so obviously flawed reasoning that I had to assume you're trolling.
            You can certainly have a full answer:

            This page loaded 488KB of data. It took my browser 25.5 seconds* to download it all.

            So we agree there's a problem. You do realize (since you're going to mention it) that most of this time is spent traversing the "dependency graph" to pull external resources.

            What you're suggesting is all of that page data be included in a single response?

            I wasn't suggesting anything in particular, but if I were, I'd suggest doing away with most of the junk entirely.

            So the browser would have to wait 25.5 seconds before it could even start rendering the page?

            I'm startled about how you cou

    • by arth1 ( 260657 )

      It would still be cooler if there were no 'dependency graph' of dynamically loaded resources behind my every HTTP request.

      Yeah, TANSTAAFL.

      Also, always be wary when you see weasel words like "up to". It's a euphemism for "less than". The overall benefit can even be negative while still satisfying the "up to" claim.
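
For anyone curious about the timing numbers quoted a few comments up (DOM ready at 2.41 s, fully loaded at 5.91 s), they come straight from the browser. Here is a minimal sketch using the standard Navigation Timing and Resource Timing APIs, run from a page script or the browser console; it is only a rough way to see how much of a load is spent on dependent resources.

```typescript
// Rough look at where page load time goes (milliseconds since navigation start).
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
console.log("DOMContentLoaded:", Math.round(nav.domContentLoadedEventEnd), "ms");
console.log("load event:      ", Math.round(nav.loadEventEnd), "ms");

// Every dependent resource (CSS, JS, images, ...) the page pulled in.
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
console.log("dependent resources fetched:", resources.length);

// How late the last dependency finished -- a crude measure of how much
// the "dependency graph" stretched out the load.
const lastFinished = Math.max(0, ...resources.map(r => r.responseEnd));
console.log("last dependency finished:", Math.round(lastFinished), "ms");
```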

  • by markdavis ( 642305 ) on Wednesday March 09, 2016 @05:41PM (#51668777)

    They would all load a lot freaking faster if they would stop designing them with multiple, stupid, scrolling, 20 megapixel background images and dozens of megabytes of irritating javascript "special effects". Just saying.

    • by AHuxley ( 892839 )
      A lot of different ideas have been tried, from Microsoft Chrome https://en.wikipedia.org/wiki/... [wikipedia.org], which allowed the user's own computer to produce rich content.
      Add more gzip https://en.wikipedia.org/wiki/... [wikipedia.org]?
      Give the user the site's text or images first to get them looking, then load in the ads? Or load the ads first, then present the full page?
      The problem is that all the trackers, ads, and super cookies need to be connected. Giving the user a bit of quick up-front content to then allow ads to load is fun. Keep the use
      • A better solution is just to bake all the advertising and trackers right into the browser itself so that it doesn't need to keep redownloading it for every site.

      • What can a site do? Run a script to detect an ad blocker? Suggest a monthly payment and block the page from that user, or request that the ad blocker be removed?

        Wired http://www.wired.com/ [wired.com] has started doing that, and I've stopped visiting their site, even though I whitelisted them so I could do it for free. Screw them ...

        On the other hand, Stack Overflow https://stackoverflow.com/ [stackoverflow.com] has stated publicly that they are fine with ad blockers. Their reasoning is that if you're running one, you don't want ads, and wouldn't click on any if you saw them.

    • Came here to say that. "Designers" and most people who create websites are concerned with style, not with how the site functions. Toss in the fact that they are all on well-equipped machines and fast networks, so everything loads quickly for them; they never see the problem, so it never gets fixed. It would be great if they even just optimized the images for size!

      About ten years ago I was on the maintenance group for a bunch of government websites and there was one site on Cold Fusion. It was slow as

  • by xxxJonBoyxxx ( 565205 ) on Wednesday March 09, 2016 @05:41PM (#51668779)
    >> Algorithm That Speeds Up Page Load Time By 34%

    It's called AdBlocker
    • Ad blockers do more than 34%, especially if you block tracking rather than just visible stuff.
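
At its core, the request-level blocking described above comes down to matching outgoing URLs against a blocklist. The toy sketch below only illustrates that idea; real blockers use large curated filter lists and browser extension APIs, and the hostnames here are placeholders.

```typescript
// Toy illustration of what an ad/tracker blocker does at its core:
// match each outgoing request URL against a blocklist of hostnames.
// Not how any particular extension is implemented.

const blockedHosts = new Set([
  "ads.example.com",      // placeholder entries
  "tracker.example.net",
]);

function shouldBlock(requestUrl: string): boolean {
  try {
    const host = new URL(requestUrl).hostname;
    // Block the listed host itself and any of its subdomains.
    return [...blockedHosts].some(b => host === b || host.endsWith("." + b));
  } catch {
    return false; // not a valid URL; let it through
  }
}

console.log(shouldBlock("https://ads.example.com/banner.js"));    // true
console.log(shouldBlock("https://www.example.org/article.html")); // false
```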

  • Use a static HTML generator to eliminate the overhead of a CMS. (A minimal generation sketch follows at the end of this thread.)
    • by kuzb ( 724081 )

      ...which is exactly what every popular CMS does...

      • ...which is exactly what every popular CMS does...

        The Joomla! CMS took six seconds or longer to load itself before displaying a dynamically-generated page on one of my websites. After I converted the website to static pages, each page loaded in less than five seconds. More tweaking is needed to reduce the load times. The average Internet user has the attention span of a goldfish (i.e., six seconds or less).

      • They may support it but not every site generates static pages and shows them. Many create their content dynamically even if it would be more efficient to use static pages.

    • One place I was at was looking at implementing a new CMS, and at one meeting we were discussing the options for the architecture of the system. I was from the maintenance group and was there to provide feedback because we would be looking after it long term. I was in favour of having the system generate static HTML pages whenever a change happened (something the software supported and that we had seen another department implement), because it would reduce the hardware required for serving the site. Well, we had th

      • Also, if the CMS goes down, the site is still live.

        I got tired of hackers beating down the doors of the CMS and occasionally crashing the website. After I converted the website to static pages, the hackers went away because there was nothing to hack.
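
The "regenerate static pages whenever content changes" approach mentioned in this thread can be very small. Below is a minimal sketch (Node/TypeScript), not the workflow of any particular CMS; the `pages` array, output directory, and page fields are all stand-ins for whatever the real system stores.

```typescript
// Minimal sketch of "generate static HTML when content changes": render each
// content record to a plain .html file that any web server can serve directly,
// with no CMS in the request path.

import { mkdirSync, writeFileSync } from "fs";
import { join } from "path";

interface Page {
  slug: string;
  title: string;
  body: string; // already-rendered HTML fragment
}

// Stand-in for whatever the CMS actually stores.
const pages: Page[] = [
  { slug: "index", title: "Home", body: "<p>Welcome.</p>" },
  { slug: "about", title: "About", body: "<p>About this site.</p>" },
];

function render(page: Page): string {
  return `<!doctype html>
<html>
  <head><meta charset="utf-8"><title>${page.title}</title></head>
  <body><h1>${page.title}</h1>${page.body}</body>
</html>`;
}

mkdirSync("public", { recursive: true });
for (const page of pages) {
  writeFileSync(join("public", `${page.slug}.html`), render(page));
}
```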

  • by ihtoit ( 3393327 ) on Wednesday March 09, 2016 @05:46PM (#51668799)

    disable javascript.

  • Vulcanising and HTTP/2 Push are the way to go.
    Although I do wonder whether this method would then still have a chance of improving a site's performance.
    Personally, I'd say well (and automatically) curated HTTP/2 Push plus automated minifying and compression are probably the best approach overall.
    I doubt this method could improve things much more if that were in place.

    But I could be wrong.

    Does anyone have experience with HTTP/2 push and perhaps some insights to offer?
    Please comment below. Thanks. (A small push sketch follows at the end of this thread.)

    • by darkain ( 749283 )

      Here is a good example: https://http2.akamai.com/demo [akamai.com]

    • by AmiMoJo ( 196126 )

      Google has been working on a new compression scheme where the dictionary is fixed and stored in the browser. It makes sense since a lot of HTML and Javascript is highly repetitive and would likely be selected for inclusion in the dictionary by gzip anyway. You can even optimize Javascript to be more compressible under this scheme.
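
On the HTTP/2 push question above: here is a minimal sketch using Node's built-in http2 module. The certificate/key file names are placeholders, error handling is kept to a minimum, and whether push actually helps depends heavily on what the client already has cached.

```typescript
// Minimal HTTP/2 server push sketch with Node's built-in http2 module.
// "localhost-key.pem" / "localhost-cert.pem" are placeholder files.

import { createSecureServer, constants } from "http2";
import { readFileSync } from "fs";

const server = createSecureServer({
  key: readFileSync("localhost-key.pem"),
  cert: readFileSync("localhost-cert.pem"),
});

server.on("stream", (stream, headers) => {
  if (headers[constants.HTTP2_HEADER_PATH] === "/") {
    // Push the stylesheet alongside the HTML so the browser does not have
    // to discover it by parsing the page first.
    stream.pushStream({ ":path": "/style.css" }, (err, pushStream) => {
      if (err) return;
      pushStream.respond({ ":status": 200, "content-type": "text/css" });
      pushStream.end("body { font-family: sans-serif; }");
    });

    stream.respond({ ":status": 200, "content-type": "text/html" });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>Hello</h1>');
  } else {
    stream.respond({ ":status": 404 });
    stream.end();
  }
});

server.listen(8443);
```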

  • Without JavaScript enabled, you might want to turn on Classic Discussion System in your preferences instead.
  • by smooth wombat ( 796938 ) on Wednesday March 09, 2016 @05:56PM (#51668845) Journal

    Not needing 14 scripts to post a comment, not having 8 other scripts clogging the pipes for one advertisement, 6 more for tracking you, and multiple other scripts for whatever reason.

    Nor having a giant, moving graphic as the base part of your page which can't be turned off, menus which bounce up or down when you hover your mouse over them, or needing to have the latest and greatest browser so you don't miss out on the latest and greatest "features" of a site.

    But no, finding an algorithm to speed web page loading is what we should concentrate on.

  • Reading TFA (Score:5, Interesting)

    by darkain ( 749283 ) on Wednesday March 09, 2016 @06:38PM (#51669079) Homepage

    I went and actually read TFA. It seems all they've done is create a bastardized version of a less efficient SPDY/HTTP2 protocol fetching system. Essentially, they're trying to solve a problem that is already solved, but the existing solution is already faster, more efficient, and more well thought out in general.

    • by johannesg ( 664142 ) on Thursday March 10, 2016 @02:05AM (#51670431)

      > I went and actually read TFA

      Thank you brother. Your sacrifice is appreciated by all.

    • Re: (Score:2, Funny)

      by Anonymous Coward

      I went and actually read TFA. It seems all they've done is create a bastardized version of a less efficient SPDY/HTTP2 protocol fetching system. Essentially, they're trying to solve a problem that is already solved, but the existing solution is already faster, more efficient, and more well thought out in general.

      When they get their degrees from MIT, they're already well-prepared to go to work on systemd.

  • Identify the advertisements client side and don't load them. Speeds up loading and rendering pages a lot.

  • I guess the browser-side performance isn't so much what they're talking about (rather, it's reducing network round-trips), but still, I have always wondered why we're still sending XML and JS, plain or gzipped, rather than compact binary formats.

    EXI is a W3C standard; it's more compact than gzipped XML and it's more than a dozen times faster to parse.

    Rather than coding in JS all the time, lots of people are using JavaScript as an intermediate representation or bytecode. This is tremendously inefficient. (A toy text-vs.-binary size comparison is sketched at the end of this thread.)

    T

    • All I can say from my own experience as a web developer is that I'd much rather have a human readable format (JSON) than a binary format. It is invaluable when debugging problems.

      • by jensend ( 71114 )

        For binary XML, as well as for the various fledgling binary json or binary yaml formats, the binary representation can be quickly converted to a plaintext one that has basically only minor formatting differences from the original. (I was about to say "to a human-readable one that..." but that's a stretch for a lot of XML.)

        An AST / IR / bytecode is decompilable; e.g. from what I understand LLVM can do a good job of translating its IR back to C. Obviously a lot more information that could help with human comp
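
To make the text-versus-binary point above concrete, here is a toy comparison of one record encoded as JSON text and as a hand-rolled fixed binary layout. Real formats such as EXI or the binary JSON variants are far more general; the record and its field layout are invented for this example.

```typescript
// Toy illustration of text vs. binary encoding of the same record.

interface Sample {
  id: number;          // unsigned 32-bit
  temperature: number; // 32-bit float
  active: boolean;
}

const sample: Sample = { id: 123456, temperature: 21.5, active: true };

// Text encoding: JSON.
const jsonBytes = new TextEncoder().encode(JSON.stringify(sample));

// Binary encoding: 4 bytes id + 4 bytes float + 1 byte flag = 9 bytes.
function encodeBinary(s: Sample): Uint8Array {
  const buf = new ArrayBuffer(9);
  const view = new DataView(buf);
  view.setUint32(0, s.id);
  view.setFloat32(4, s.temperature);
  view.setUint8(8, s.active ? 1 : 0);
  return new Uint8Array(buf);
}

function decodeBinary(bytes: Uint8Array): Sample {
  const view = new DataView(bytes.buffer, bytes.byteOffset, bytes.byteLength);
  return {
    id: view.getUint32(0),
    temperature: view.getFloat32(4),
    active: view.getUint8(8) === 1,
  };
}

console.log("JSON bytes:  ", jsonBytes.length);            // ~46
console.log("binary bytes:", encodeBinary(sample).length); // 9
console.log(decodeBinary(encodeBinary(sample)));
```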

  • Sweet, now I can bog down my sites with more eye candy, and the users won't notice a slowdown.
  • ``The larger and more resources a web page contains, the better the algorithm's efficiency gets -- which should be useful on today's JavaScript-heavy sites.''

    Browsers don't have enough trouble properly dealing with all the JavaScript that web sites shove down our Internet connections now. How nice that you've found a way for web sites to lard up their pages with even more of the stuff.

  • Not (apparently) having learned anything from the switch to digital TV broadcasting (where the higher bandwidth was not used for better quality, but was co-opted to shovel out more channels of low-quality shit), this "34% faster" algorithm will simply result in web coders programming at least 34% more crap ads and scripts into web pages.
