
Googlebot and Document.Write (180 comments)

With JavaScript/AJAX being used to place dynamic content in pages, I was wondering how Google indexed web page content that was placed in a page using the JavaScript "document.write" method. I created a page with six unique words in it. Two were in the plain HTML; two were in a script within the page document; and two were in a script that was externally sourced from a different server. The page appeared in the Google index late last night and I just wrote up the results.
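
For anyone who wants to reproduce the setup, a test page along those lines would look roughly like this (the nonsense words and the external script URL below are placeholders, not the ones actually used in the experiment):

  <html>
  <head><title>Indexing test</title></head>
  <body>
    <!-- Pair one: plain HTML, visible to anything that parses the markup -->
    <p>placeholderwordone placeholderwordtwo</p>

    <!-- Pair two: written by a script embedded in the page itself -->
    <script type="text/javascript">
      document.write("<p>placeholderwordthree placeholderwordfour</p>");
    </script>

    <!-- Pair three: written by a script fetched from a different server; -->
    <!-- words.js would contain a single document.write() call for the last two words -->
    <script type="text/javascript" src="http://other.example.com/words.js"></script>
  </body>
  </html>
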
This discussion has been archived. No new comments can be posted.

  • by Whiney Mac Fanboy ( 963289 ) * <whineymacfanboy@gmail.com> on Monday March 12, 2007 @12:09AM (#18312800) Homepage Journal
    An alert came in in the late evening of March 10th for "zonkdogfology", one of the words in the first pair

    zonkdogfology is a real word:

    zonk-dog-fol-o-gy [zohnk-dog-ful-uh-jee]
    noun, plural -gies.

    1. the name given to articles from zonk where the summary makes no sense whatsoever.
    Serious question now - is the author of the article worried that the ensuing slashdot discussion will mention all his other nonsense words? I've no doubt slashdotters will find & mention the other words here, polluting google's index....
    • by Anonymous Coward on Monday March 12, 2007 @12:26AM (#18312902)

      zonkdogfology is a real word:

      It's a perfectly cromulent word, and its use embiggens all of us.

  • The Results: (Score:5, Informative)

    by XanC ( 644172 ) on Monday March 12, 2007 @12:12AM (#18312812)
    Save a click: No, Google does not "see" text inserted by Javascript.
    • Re:The Results: (Score:5, Informative)

      by temojen ( 678985 ) on Monday March 12, 2007 @12:27AM (#18312904) Journal
      And rightly so. You should be hiding & un-hiding or inserting elements using the DOM, never using document.write (which F's up your DOM tree).
        • by XanC ( 644172 ) on Monday March 12, 2007 @12:58AM (#18313033)

          If you're using document.write, you're writing directly into the document stream, which only works in text/html, not an XHTML MIME type, because there's no way to guarantee the document will continue to be valid.

          In this day and age, document.write should never be used, in favor of the more verbose but more future-proof document.createElement and document.createTextNode notation.
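
          For anyone who hasn't made the switch, the DOM equivalent of a document.write call looks roughly like this (the container id is made up for the example):

            // Instead of: document.write("<p>Some new text</p>");
            var p = document.createElement("p");
            p.appendChild(document.createTextNode("Some new text"));
            // "container" is whatever element you want to append into.
            document.getElementById("container").appendChild(p);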

          • by jesser ( 77961 ) on Monday March 12, 2007 @01:40AM (#18313193) Homepage Journal
            Perhaps more importantly, document.write can't be used to modify a page that has already loaded, limiting its usefulness for AJAX-style features.
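
            The difference is easy to see: once the document has finished loading, a call to document.write implicitly re-opens the document and wipes it out, while DOM insertion just adds to what's already there. A minimal illustration:

              window.onload = function () {
                // document.write("too late");  // would replace the entire loaded page

                // DOM insertion appends to the existing page instead:
                var note = document.createElement("p");
                note.appendChild(document.createTextNode("added after load"));
                document.body.appendChild(note);
              };
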
          • What if you need to insert large amounts of HTML into a page? What if the HTML you want to insert isn't laid out as a perfect, XML-compliant document? I realize that in most cases document.createElement is the better of the two methods, but it isn't always possible to avoid document.write. There are some instances where it is unavoidable.
          • because there's no way to guarantee the document will continue to be valid.


            Except that the programmer might know what they're doing. But I guess we're getting past the point of trusting people more than machines ;)

            Not that it's wrong to have failsafes in place, and not that XHTML isn't fine without document.write, but this "validity guarantee" argument is a little worrying.
            • Except that the programmer might know what they're doing. But I guess we're getting past the point of trusting people more than machines ;) Not that it's wrong to have failsafes in place, and not that XHTML isn't fine without document.write, but this "validity guarantee" argument is a little worrying.

              That's right up there with saying it's a little worrisome that an invalid cast in a strictly typed language generates a compiler error. After all, can't we trust humans to know what they're doing?

            • Re: (Score:3, Insightful)

              by ultranova ( 717540 )

              Except that the programmer might know what they're doing. But I guess we're getting past the point of trusting people more than machines ;)

              Based on all the segfaults, blue screens of death, X-Window crashes, Firefox crashes, code insertion bugs et cetera I've seen, I'd say that no, in general programmers don't know what they're doing, and certainly shouldn't be trusted to not fuck it up. The less raw access to any resource - be it memory or document stream - they are given, the better.

              • That's fine up to a point, but there should be a way around these limitations. In C, it's all too easy to screw things up with null pointers etc., but if we didn't have those low-level features, a lot of important software would be impossible to write.

                I'm not saying that Javascript should ENCOURAGE low-level access to the document, but to flatly deny those things is to falsely limit a language. Languages, after all, are supposed to allow you to express ANYTHING.
                • That's fine up to a point, but there should be a way around these limitations.

                  No. If there's a way around these limitations, then most programmers will simply turn them off because they are experts and know what they're doing. And then the user has to suffer the consequences of the expert's ego and laziness.

                  No, bounds checking needs to be mandatory, not voluntary; otherwise it goes unused and the problems continue.

                  In C, it's all too easy to screw things up with null pointers etc., but if we didn't h

          • by jcuervo ( 715139 )
            Hmm. I just do <span> or <div> and document.getElementById('whatever').innerHTML = "...";

            Am I wrong?

          • by suv4x4 ( 956391 )

            If you're using document.write, you're writing directly into the document stream, which only works in text/html, not an XHTML MIME type, because there's no way to guarantee the document will continue to be valid.

            In this day and age, document.write should never be used, in favor of the more verbose but more future-proof document.createElement and document.createTextNode notation.


            element.innerHTML works even on XHTML MIME documents however (Firefox, Opera etc), and there's no significant hurdle to support doc
      • by XanC ( 644172 )
        I doubt Google will notice DOM-created elements, either. But the author should re-test with that. And I would suggest that he post the result only if it turns out Google can see that, because we all assume it can't.
    • by kale77in ( 703316 ) on Monday March 12, 2007 @01:30AM (#18313159) Homepage

      I think the actual experiment here is:

      • Create a 6-odd-paragraph page saying what everybody already knows.
      • Slashdot it, by suggesting something newsworthy is there.
      • Pack the page with Google ads.
      • Profit.

      I look forward to the follow-up piece which details the financial results.

      • Exactly, this is the typical sort of fluff that Digg seems to love. As far as I know, Slashdot had avoided this particular type of adword blog post crap until now.
        • by caluml ( 551744 )
          But with the Firehose, Slashdot will now start using the "wisdom" of crowds to produce the same pap that Digg does.
          Shall we all migrate to Technocrat? It has decent stories.
        • Re: (Score:3, Insightful)

          by dr.badass ( 25287 )
          As far as I know, Slashdot had avoided this particular type of adword blog post crap until now

          It used to be that the web as a whole avoided this crap. Now, it's so easy to make stupid amounts of money from stupid content that a huge percentage of what gets submitted exists only for the money -- it's like socially-acceptable spam. Digg is by far the worst confluence of this kind of crap, but the problem is web-wide, and damn near impossible to avoid.
          • by Raenex ( 947668 )
            The sad thing is I bet the vast majority of crap like this earns enough to buy lunch or something. There's a lot of people running around trying to get rich doing this, but Google is the real winner.
        • by ColaMan ( 37550 )
          As far as I know, Slashdot had avoided this particular type of adword blog post crap until now.

          Two words:
          Roland Piquepaille.
      • by Restil ( 31903 )
        I've noticed something with regards to my own site and the few google ads I have placed on the back pages. On those occasions when I get heavy traffic from a link on a popular tech site, my average click ratio goes way down. Slashdot users aren't going to pages to search for products to buy, so it's highly unlikely more than a very few will ever click on any ads, if any at all. Now if the article was promoting a product that the average geek would be interested in, and there were ads on the page for that
  • by sdugoten2 ( 449392 ) on Monday March 12, 2007 @12:14AM (#18312832)
    The Google Pigeon [google.com] is smart enough to read through Document.write. Duh!
  • by AnonymousCactus ( 810364 ) on Monday March 12, 2007 @12:18AM (#18312866)

    Google needs to consider script if they want high-quality results. Besides the obvious fact that they'll miss content supplied by dynamic page elements, they could also sacrifice page quality. Page-rank and the like will get them very far, but an easy way to spam the search engines would be to have pages on a whole host of topics that immediately get rewritten as ads for Viagra as soon as they're downloaded by a Javascript-aware browser. It would be interesting to know the extent to which they correct for this.

    Of course, there are much more subtle ways of changing content once it's been put out there. One might imagine a script that waits 10 seconds and then removes all relevant content and displays Viagra instead. Who knew web search would be restricted by the halting problem? I wonder how far Google goes...
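
    The delayed rewrite really is that cheap to pull off, which is what makes it hard to police without actually executing the script; something like the following (ad text invented, obviously) is all it takes:

      // The crawler indexes the original, on-topic page text. Ten seconds
      // after load, a JavaScript-aware browser sees something else entirely.
      window.setTimeout(function () {
        document.body.innerHTML = "<h1>Cheap Viagra here</h1>";
      }, 10000);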

    • Page-rank and the like will get them very far, but an easy way to spam the search engines would be to have pages on a whole host of topics that immediately get rewritten as ads for Viagra as soon as they're downloaded by a Javascript-aware browser.

      Of course, there are much more subtle ways of changing content once it's been put out there. One might imagine a script that waits 10 seconds and then removes all relevant content and displays Viagra instead.

      Google tends to nuke those sites from orbit once it disc

    • by gregmac ( 629064 ) on Monday March 12, 2007 @02:16AM (#18313339) Homepage
      You have to also remember, though, that often the content generated dynamically is going to be of no use to a search engine; it will often be user-specific - there's obviously some reason it's being generated that way.

      And if pages are designed using AJAX and dynamic rendering just for the sake of using AJAX and dynamic rendering... well, they deserve what they get :)
    • by jrumney ( 197329 )
      Google should index the static content, but run/analyse the Javascript and throw out any pages where the user-visible content changes drastically. To be 100% effective though, they'd have to fake the IE or Firefox User-Agent, and use IP addresses from an ISP's dynamically assigned range for their crawling, which some people might see as evil.
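
      The "changes drastically" test itself is simple enough to sketch, assuming you already have the visible text before and after the scripts have run (producing that second snapshot is the expensive part):

        // Fraction of the static page's words that survive once scripts run.
        // A very low score suggests the page was rewritten into something else.
        function overlapScore(staticText, scriptedText) {
          var before = staticText.toLowerCase().split(/\s+/);
          var words = scriptedText.toLowerCase().split(/\s+/);
          var seen = {};
          for (var i = 0; i < words.length; i++) {
            seen["w:" + words[i]] = true;   // prefix keys to avoid Object.prototype clashes
          }
          var kept = 0;
          for (var j = 0; j < before.length; j++) {
            if (seen["w:" + before[j]]) kept++;
          }
          return before.length ? kept / before.length : 1;
        }

        // e.g. treat overlapScore(staticText, scriptedText) < 0.2 as suspicious
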
    • Rightfully so for google news. Is there any way to configure google news only to show links to articles on certain sites? Or to blacklist certain sites? I really hate those "news" sites that put javascript on every seventh word, so that if you hover over the word, it shows a little pop-up div type ad. It's especially annoying because I like to highlight text as I read it, because I find it easier. I wish google would run all the JS in a page, and lower the ranking if it contained too many ads.
  • by Anonymous Coward
    It should be pretty obvious that no search engine should interpret javascript, let alone remotely sourced javascript. I was actually hoping this guy would prove me wrong, but to my disappointment this was just another mostly pointless blog post.
    • by Jake73 ( 306340 )
      Yeah, I was kinda shocked, really. I always wondered how people with bad blogs were able to break into the mainstream and gather regular readers. I guess they just try like hell to get picked up on Slashdot/Digg/etc with some worthless blog post.
      • by kv9 ( 697238 )

        Yeah, I was kinda shocked, really. I always wondered how people with bad blogs were able to break into the mainstream and gather regular readers. I guess they just try like hell to get picked up on Slashdot/Digg/etc with some worthless blog post.

        well that too, but in general it's even easier. just aim low and hope for the best. it's not very hard to appeal to the mainstream. shit, it's the largest audience out there.

  • by JAB Creations ( 999510 ) on Monday March 12, 2007 @12:22AM (#18312878) Homepage
    Check your access log to see if Google actually requested the external JavaScript file. If it didn't, there would be no reason to assume Google is interested in non-(X)HTML-based content.
    • Re: (Score:3, Informative)

      I have actually seen some reports [google.com] of a "new" Googlebot requesting the CSS and Javascript. The rumour I heard was that it was using the Gecko rendering engine or something along those lines. This was some time ago. I'm not sure what ever became of this.
  • by The Amazing Fish Boy ( 863897 ) on Monday March 12, 2007 @12:29AM (#18312919) Homepage Journal
    FTFA:

    Why was I interested? Well, with all the "Web 2.0" technologies that rely on JavaScript (in the form of AJAX) to populate a page with content, it's important to know how it's treated to determine if the content is searchable.
    Good. I am glad it doesn't work. Google's crawler should never support Javascript.

    The model for websites is supposed to work something like this:
    • (X)HTML holds the content
    • CSS styles that content
    • Javascript enhances that content (e.g. provides auto-fill for a textbox)

    In other words, your web page should work for any browser that supports HTML. It should work regardless of whether CSS and/or Javascript is enabled.

    So why would Google's crawler look at the Javascript? Javascript is supposed to enhance content, not add it.

    Now, that's not saying many people don't (incorrectly) use Javascript to add content to their pages. But maybe when they find out search engines aren't indexing them, they'll change their practices.
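
    A small example of what "enhance, don't add" means in practice: the content is ordinary markup that every crawler and browser gets, and the script only changes how it is presented (the id is invented for the example):

      <div id="details">
        <p>All of this text lives in the HTML, so it is indexed and readable everywhere.</p>
      </div>
      <script type="text/javascript">
        // With JavaScript on, collapse the section behind a toggle link;
        // with it off, the full text simply stays visible.
        var details = document.getElementById("details");
        var toggle = document.createElement("a");
        toggle.href = "#";
        toggle.appendChild(document.createTextNode("Show details"));
        toggle.onclick = function () {
          details.style.display = (details.style.display === "none") ? "" : "none";
          return false;
        };
        details.parentNode.insertBefore(toggle, details);
        details.style.display = "none";
      </script>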

    The only problem I can see is with scam sites, where they might put content in the HTML, then remove/add to it with Javascript so the crawler sees something different than the end-user does. I think they already do this with CSS, either by hiding sections or by making the text the same color as the background. Does anyone know how Google deals with CSS that does this?
    • Re: (Score:3, Informative)

      by doormat ( 63648 )
      I seem to remember reading a while ago about some search engine using intelligence to ignore hidden text (text with the same or a similar color as the background). Of course, the easy workaround for that is to use an image for your background, and then that may fool the bot, but who knows, they could code to accommodate that too.

      Regardless, I'm pretty sure you'd get banned from the search engines for using such tactics.
      • by zobier ( 585066 )

        I seem to remember reading a while ago about some search engine using intelligence to ignore hidden text (text with the same or a similar color as the background). Of course, the easy workaround for that is to use an image for your background, and then that may fool the bot, but who knows, they could code to accommodate that too.
        You could use OCR to detect that (and to index images used for text content).
    • Re: (Score:3, Insightful)

      by cgenman ( 325138 )
      In other words, your web page should work for any browser that supports HTML. It should work regardless of whether CSS and/or Javascript is enabled.

      Define "work". A web page without formatting is going to be useless to anyone who isn't a part-time web developer. To them, it's just going to be one big, messy looking freak out... akin to a television show whose cable descrambler broke. Sure all the "information" is there, somewhere, but in such a horrible format that a human being can't use it.

      Web pages ar
      • Re: (Score:3, Insightful)

        by WNight ( 23683 )
        I don't know about you, but I write my web pages so that when the style goes away, the page still views in a basic 1996 kind of style. Put the content first and your index bars and ads last, then use CSS to position them visually where you want them. This way, if a blind user or someone without style sheets sees the site, it at least reads in order.
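
        For example, the markup can carry the article first and the chrome last, with a couple of CSS rules putting the menu where sighted visitors expect it (the ids are invented for the snippet):

          <style type="text/css">
            /* Visually the menu sits on the left; in the source it comes last. */
            #content { margin-left: 12em; }
            #menu    { position: absolute; top: 0; left: 0; width: 11em; }
          </style>

          <div id="content">Article text comes first, so it reads first without styles.</div>
          <div id="menu">Navigation and ads come last in the markup.</div>
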
        • by caluml ( 551744 )
          View, Page Style, No Style in Firefox will show you what your page looks like to browsers/spiders.
      • Re: (Score:3, Insightful)

        Define "work". A web page without formatting is going to be useless to anyone who isn't a part-time web developer.

        How's this? Disable CSS on Slashdot. First you get the top menu, then some options to skip to the menu, the content, etc. Then you get the menu, then the content. It's very easy to use it that way.

        To them, it's just going to be one big, messy looking freak out... akin to a television show whose cable descrambler broke. Sure all the "information" is there, somewhere, but in such a horrible

    • Re: (Score:3, Insightful)

      by Animats ( 122034 )

      The model for websites is supposed to work something like this:

      If only. Turn off JavaScript and try these sites:

    • The old model is dying. Simple web pages are on the way out. Web applications are the future.

      A search engine that indexes web applications is more useful to me than one that can not.

      Google realizes that, and you don't.
    • The model should really be

      DOM holds the content (whether HTML/XHTML/XML or plain text; static/dynamic or mixed)
      CSS styles that content
      Javascript enhances that content (e.g. provides auto-fill for a textbox)

      Google should be indexing the DOM and its contents, not the code in the file. That's like indexing the English dictionary and saying you've indexed the English language.

      Websites are going to be more and more dynamic. Content is going to be added directly to the page from an amalgamation of sources with t
    • by suv4x4 ( 956391 )
      The only problem I can see is with scam sites, where they might put content in the HTML, then remove/add to it with Javascript so the crawler sees something different than the end-user does. I think they already do this with CSS, either by hiding sections or by making the text the same color as the background. Does anyone know how Google deals with CSS that does this?

      Google has a bot that understands CSS and JavaScript, based roughly on the Mozilla source code (wondered why they hire so many Firefox develop
  • Accessibility? (Score:2, Informative)

    The bottom line is your web sites should probably degrade nicely enough when JavaScript is not enabled. It might not flow as nicely, and the user may have to submit more forms, but the core functionality should still work and the core content should still be available.

    DDA / Section 508 / WCAG - the no-JavaScript clause makes for a lot of extra work, but it's one that can't be avoided on the (commercial) web application I architect. (Friggin sharks with laser beams for eyes making lawsuits and all.)
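
    The usual way to satisfy the no-JavaScript clause is to let the plain form submission do the real work and only layer the script on top when it's available; roughly like this (the URL, ids, and markup are placeholders):

      <form id="search" action="/search" method="get">
        <input type="text" name="q">
        <input type="submit" value="Search">
      </form>
      <div id="results"></div>
      <script type="text/javascript">
        // Without JavaScript the form submits and reloads the page as normal.
        // With it, we fetch the results in the background and stay on the page.
        document.getElementById("search").onsubmit = function () {
          var xhr = new XMLHttpRequest();
          xhr.open("GET", "/search?q=" + encodeURIComponent(this.elements["q"].value), true);
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              document.getElementById("results").innerHTML = xhr.responseText;
            }
          };
          xhr.send(null);
          return false; // cancel the normal submission
        };
      </script>
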
  • document.write() is executed as the page loads. Most AJAX-style implementations rely on either the innerHTML property or creating nodes through the DOM. Testing those would tell us much more than testing document.write().
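
    Those are the two insertion paths a follow-up test would need to cover; the same fetched string can land in the page either way (container ids invented for the example):

      var text = "content fetched over XMLHttpRequest";

      // Route one: innerHTML, the quick way most AJAX code takes.
      document.getElementById("boxOne").innerHTML = "<p>" + text + "</p>";

      // Route two: explicit DOM node creation.
      var p = document.createElement("p");
      p.appendChild(document.createTextNode(text));
      document.getElementById("boxTwo").appendChild(p);
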
  • So, some friends and I have been bantering back and forth about how Google treats content that has been inserted into a page using Javascript. So I decided to do an experiment. This page has six nonsense words. Two are hardcoded into the page via straight HTML. Two are inserted via Javascript, but the script is part of the page HTML. The last two are inserted via Javascript, but the script is on a remote server. The purpose of the test is to see three things... * The time lapse between when the words
  • I predict that from now on, zonkdogfology will be a common tag for all articles that relate to google search...
  • by Animats ( 122034 ) on Monday March 12, 2007 @02:49AM (#18313481) Homepage

    I'd thought Google would be doing that by now. I've been implementing something that has to read arbitrary web pages (see SiteTruth [sitetruth.com]) and extract data, and I've been considering how to deal with JavaScript effectively.

    Conceptually, it's not that hard. You need a skeleton of a browser: one that can load pages, run JavaScript, and build the document tree like a browser, but doesn't actually draw anything. You load the page, run the initial onload JavaScript, then look at the document tree as it exists at that point. Firefox could probably be coerced into doing this job.
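
    Assuming such a skeleton browser hands you the post-onload document (building that part is the real work, and none of it is sketched here), pulling out the indexable text is the easy half:

      // Walk the live document tree and collect the text a user would see,
      // skipping script and style elements. "doc" is the document object the
      // headless engine exposes after the onload handlers have run.
      function collectText(node, out) {
        if (node.nodeType === 3) {                 // text node
          out.push(node.nodeValue);
        } else if (node.nodeType === 1) {          // element node
          var tag = node.nodeName.toLowerCase();
          if (tag !== "script" && tag !== "style") {
            for (var child = node.firstChild; child; child = child.nextSibling) {
              collectText(child, out);
            }
          }
        }
      }

      function extractIndexableText(doc) {
        var parts = [];
        collectText(doc.body, parts);
        return parts.join(" ").replace(/\s+/g, " ");
      }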

    It's also possible to analyze Flash files. Text which appears in Flash output usually exists as clear text in the Flash file. Again, the most correct approach is to build a pseudo-renderer, one that goes through the motions of processing the file and executing the ActionScript, but just passes the text off for further processing, rather than rendering it.

    Ghostscript [ghostscript.com] had to deal with this problem years ago, because PostScript is actually a programming language, not a page description language. It has variables, subroutines, and an execution engine. You have to run PostScript programs to find out what text comes out.

    OCR is also an option. Because of the lack of serious font support in HTML, most business names are in images. I've been trying OCR on those, and it usually works if the background is uncluttered.

    Sooner or later, everybody who does serious site-scraping is going to have to bite the bullet and implement the heavy machinery to do this. Try some other search engines. Somebody must have done this by now.

    Again, I'm surprised that Google hasn't done this. They went to the trouble to build parsers for PDF and Microsoft Word files; you'd think they'd do "Web 2.0" documents.

    • by dargaud ( 518470 )

      OCR is also an option. Because of the lack of serious font support in HTML, most business names are in images. I've been trying OCR on those, and it usually works if the background is uncluttered.

      Yes, and it should work like that too. If the background is so cluttered as to make the OCR difficult, then chances are the human will have trouble reading it too. I suggested that during a job interview with a *cough* serious search engine: use a secondary crawler reporting as a normal IE/firefox, load a page usin

      • If the background is so cluttered as to make the OCR difficult, then chances are the human will have trouble reading it too.

        Web site images with logos against faint but busy backgrounds are moderately common. I'm talking about stuff like this. [ddfurnitur...dators.com] Commercial OCR programs interpret that as "a picture". Because we're working to automatically extract business identities from uncooperative websites, we sometimes need heavier technology than the search engines.

    • Again, I'm surprised that Google hasn't done this. They went to the trouble to build parsers for PDF and Microsoft Word files; you'd think they'd do "Web 2.0" documents.

      Does Google run macros in Word documents? No? Then why are you even comparing this? I can parse a PDF document or a Word document without having to have a script interpreter running.

      I imagine that the Googlebot crawler is a rather simplistic program that only knows how to:
      1. Read robots.txt
      2. Read meta tags (robot tags in particular)
      3. Fi

    • by imroy ( 755 )

      Ghostscript had to deal with this problem years ago, because PostScript is actually a programming language, not a page description language.

      Ghostscript had to deal with what problem? Yes, PostScript is a programming language with built-in graphics primitives. What does that have to do with search engines? It doesn't have to recognise certain outlines as being text (i.e. text drawn without using the PostScript primitive for drawing text); it just draws it. Ghostscript is just another implementation of a lang

      • by shish ( 588640 )

        Ghostscript had to deal with this problem years ago, because PostScript is actually a programming language, not a page description language.

        Ghostscript had to deal with what problem? Yes, PostScript is a programming language with built-in graphics primitives. What does that have to do with search engines?

        Postscript is a programming language, not a page description language; you need to write a language interpreter, not just a data parser, to get the most from it. HTML + Javascript also requires an interp

  • by BrynM ( 217883 ) * on Monday March 12, 2007 @04:28AM (#18313837) Homepage Journal

    If you want to see through a search engine's eyes, open the page in Lynx [browser.org]. The funniest part about showing that method to another developer is when they think Lynx is broken because the page is empty. "It didn't load. How do I refresh the page? This browser sucks." Heh. Endless fun.

    (method does not account for image crawlers)

  • This is a pretty straightforward example of how Google holds back the web. This is not Google's fault, per se, but it definitely is true. We routinely resort to older, inefficient technologies for our websites simply to please Google. It works well for us from an advertising standpoint, but is often incredibly stupid technologically.
    • As always, I'm underwhelmed by idiot slashdot moderators who mark my comment as 'flamebait.' You may not agree with it, but 'flamebait?' It's not even CLOSE to flamebait. / Most of the comments I make on slashdot that get moderated get BOTH {troll/flamebait} AND {insightful/interesting}. If that doesn't suggest that slashdot's moderation system is heavily broken, then I don't know what does. I'd understand {overrated}+{interesting} or something like that, but my comments show rather clearly that mod
  • AJAX is for writing applications, not documents. Why and how should an application be indexed?
    • by Raenex ( 947668 )
      The line between application vs document gets blurry, fast. Consider a site like Try Ruby! [hobix.com]. There's definitely content hidden inside the tutorial, yet a search engine will never see it.
  • Hey, I didn't think that, after skimming past the first paragraphs, the article would actually just state the obvious - that JavaScript-generated content is not indexed... I would have expected an article to appear only if he had found out it WAS INDEXED...

    In other news: Most plants are green!
  • Take a look at the Web Accessibility roadmap from the W3C, and in particular the section on intent-based markup [w3.org].
  • Now, it's too early to say conclusively that Google will never index the JavaScript-generated content..
    ..but still, we can hope Google doesn't completely cave in to useless trendy bullshit.
