Googlebot and Document.Write
With JavaScript/AJAX being used to place dynamic content in pages, I was wondering how Google indexed web page content that was placed in a page using the JavaScript "document.write" method. I created a page with six unique words in it. Two were in the plain HTML; two were in a script within the page document; and two were in a script that was externally sourced from a different server. The page appeared in the Google index late last night and I just wrote up the results.
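The setup described can be sketched as a minimal page along these lines (the placeholder words and external URL are invented for illustration; the author's actual nonsense words are deliberately not repeated here):

```html
<html>
<body>
  <!-- two words in the plain HTML -->
  <p>plainword-one plainword-two</p>

  <!-- two words written by a script inside the page document -->
  <script type="text/javascript">
    document.write("inlineword-one inlineword-two");
  </script>

  <!-- two words written by an externally sourced script on another server;
       words.js would contain: document.write("extword-one extword-two"); -->
  <script type="text/javascript" src="http://other-server.example.com/words.js"></script>
</body>
</html>
```

Whether the inline and external pairs show up in the index tells you which scripts, if any, the crawler executes.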
Nonsense words? (Score:5, Funny)
zonkdogfology is a real word. Serious question now: is the author of the article worried that the ensuing Slashdot discussion will mention all his other nonsense words? I've no doubt Slashdotters will find and mention the other words here, polluting Google's index....
Re:Nonsense words? (Score:4, Funny)
It's a perfectly cromulent word, and its use embiggens all of us.
Re: (Score:2)
Seriously, he shouldn't have posted these words until he was done with the test.
Absolutely, and a major problem I have with taking it seriously now is that Google indexes the words in links *to* a particular site. This means that there is now a very high risk of false positives if *anyone* has linked with the words that are only dynamically written on the original page.
If he'd been serious that "Over the next two weeks, I'll be watching to see two things", he should have kept his mouth shut for those two weeks.
The Results: (Score:5, Informative)
Re:The Results: (Score:5, Informative)
How does document.write mess up your DOM tree? (Score:2)
Re:How does document.write mess up your DOM tree? (Score:5, Informative)
If you're using document.write, you're writing directly into the document stream, which only works in text/html, not an XHTML MIME type, because there's no way to guarantee the document will continue to be valid.
In this day and age, document.write should never be used; favor the more verbose but more future-proof document.createElement and document.createTextNode approach instead.
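As a rough illustration of the node-building approach (the element id is invented):

```html
<!-- Instead of document.write('<p>Hello</p>'), build and attach the nodes
     explicitly; this also works when the page is served with an XHTML
     MIME type, where document.write fails. -->
<div id="target"></div>
<script type="text/javascript">
  var p = document.createElement("p");
  p.appendChild(document.createTextNode("Hello"));
  document.getElementById("target").appendChild(p);
</script>
```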
Re:How does document.write mess up your DOM tree? (Score:5, Insightful)
Re:How does document.write mess up your DOM tree? (Score:4, Funny)
One of the most clever uses of document.write I've seen was something like: document.write("<!--") YOU NEED JAVASCRIPT FOR THIS PAGE document.write("-->")
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
There are some instances where it is unavoidable.
Can you give an example?
Re: (Score:2)
Except that the programmer might know what they're doing. But I guess we're getting past the point of trusting people more than machines
Not that it's wrong to have failsafes in place, and not that XHTML isn't fine without document.write, but this "validity guarantee" argument is a little worrying.
Re: (Score:2)
Except that the programmer might know what they're doing. But I guess we're getting past the point of trusting people more than machines ;)
Not that it's wrong to have failsafes in place, and not that XHTML isn't fine without document.write, but this "validity guarantee" argument is a little worrying.
That's right up there with saying it's a little worrisome that an invalid cast in a strictly typed language generates a compiler error. After all, can't we trust humans to know what they're doing?
Re: (Score:3, Insightful)
Based on all the segfaults, blue screens of death, X-Window crashes, Firefox crashes, code insertion bugs et cetera I've seen, I'd say that no, in general programmers don't know what they're doing, and certainly shouldn't be trusted to not fuck it up. The less raw access to any resource - be it memory or document stream - they are given, the better.
Re: (Score:2)
I'm not saying that Javascript should ENCOURAGE low-level access to the document, but to flatly deny those things is to falsely limit a language. Languages, after all, are supposed to allow you to express ANYTHING.
Re: (Score:2)
No. If there's a way around these limitations, then most programmers will simply turn them off because they are experts and know what they're doing. And then the user has to suffer the consequences of the expert's ego and laziness.
No, bounds checking needs to be mandatory, not voluntary; otherwise it goes unused and the problems continue.
Re: (Score:2)
Am I wrong?
Re: (Score:2)
Code to the standard and let your client decide what obscure browser to use.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
If you code to the standard, at least you can blame browsers for their broken implementation.
Re: (Score:2)
There are also standards which neither browser supports. The bottom line is you have to code to what works, and preferably to the standards if the browsers support them. I recommend watching the lecture: An Inconvenient API: The Theory of the DOM (three parts, downloadable here [yahoo.com]).
I wish it were just as simple as "follow the standards".
Re: (Score:2)
If you're using document.write, you're writing directly into the document stream, which only works in text/html, not an XHTML MIME type, because there's no way to guarantee the document will continue to be valid.
In this day and age, document.write should never be used, in favor of the more verbose but more future-proof document.createElement and document.createTextNode notation.
element.innerHTML works even on documents served with an XHTML MIME type, however (Firefox, Opera, etc.), and there's no significant hurdle to support doc
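A small sketch of the innerHTML approach (the element id is invented); unlike document.write, it targets an existing node rather than the open document stream:

```html
<div id="out"></div>
<script type="text/javascript">
  // Replaces the div's content after the document has been parsed;
  // Firefox and Opera accept this even under application/xhtml+xml.
  document.getElementById("out").innerHTML = "<em>dynamic</em> text";
</script>
```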
True (Score:2)
google.com/?q=slashdotting+in+google+dollars (Score:5, Insightful)
I think the actual experiment here is:
I look forward to the follow-up piece which details the financial results.
Re:google.com/?q=slashdotting+in+google+dollars (Score:5, Insightful)
Re: (Score:2)
Shall we all migrate to Technocrat? It has decent stories.
Re: (Score:3, Insightful)
It used to be that the web as a whole avoided this crap. Now, it's so easy to make stupid amounts of money from stupid content that a huge percentage of what gets submitted only even exists for the money -- it's like socially-acceptable spam. Digg is by far the worst confluence of this kind of crap, but the problem is web-wide, and damn near impossible to avoid.
Re: (Score:2)
Re: (Score:2)
Two words:
Roland Piquepaille.
Re: (Score:2)
Re: (Score:2)
I'm not sure if 2 of the 3 ads are for brain injury and brain tumor treatment because the name of his blog is "Brain handles" or because you'd need to have a brain injury to do the "research" he's doing.
Google Pigeon technology (Score:3, Funny)
If they weren't, then they're trying (Score:4, Interesting)
Google needs to consider script if they want high-quality results. Besides the obvious fact that they'll miss content supplied by dynamic page elements, they could also sacrifice page quality. PageRank and the like will get them very far, but an easy way to spam the search engines would be to have pages on a whole host of topics that immediately get rewritten as ads for Viagra as soon as they're downloaded by a JavaScript-aware browser. It would be interesting to know the extent to which they correct for this.
Of course, there are much more subtle ways of changing content once it's been put out there. One might imagine a script that waits 10 seconds and then removes all relevant content and displays Viagra instead. Who knew web search would be restricted by the halting problem? I wonder how far Google goes...
Re: (Score:2)
Google tends to nuke those sites from orbit once it discovers them.
Re:If they weren't, then they're trying (Score:5, Insightful)
And if pages are designed using AJAX and dynamic rendering just for the sake of using AJAX and dynamic rendering.. well, they deserve what they get
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
*apologies, watched the Borat trailer too many times
How did this make the front page? (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2)
Yeah, I was kinda shocked, really. I always wondered how people with bad blogs were able to break into the mainstream and gather regular readers. I guess they just try like hell to get picked up on Slashdot/Digg/etc with some worthless blog post.
well that too, but in general it's even easier. just aim low and hope for the best. it's not very hard to appeal to the mainstream. shit, it's the largest audience out there.
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2)
From memory, setTimeout forms a time-delayed but synchronous entry into the execution stream; you will not get two threads in the same JavaScript code pile running simultaneously, and the timeout will not fire until the execution stream is idle.
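That behaviour is easy to check with a plain-JS sketch (runnable in a browser console or Node):

```javascript
// A zero-delay setTimeout callback is queued, not run concurrently:
// it cannot fire until the current synchronous execution stream is idle.
var order = [];
setTimeout(function () { order.push("timeout"); }, 0);
for (var i = 0; i < 100000; i++) { } // busy synchronous work
order.push("sync done");
// At this point "timeout" is not in the array; the callback is
// appended only after the call stack has emptied.
```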
Uh-uh, it has its own execution context. You can absolutely run timed out functions concurrently. Try this:
Re: (Score:2)
Without any output it doesn't prove concurrent execution. It could just be a garden variety infinite loop, albeit one with an indirect execution path.
You want output, how's this for output?
Re: (Score:3, Informative)
Did you know that 99% of all statistics are made up?
I can source some JavaScript statistics: W3Schools reports [w3schools.com] that, as of January 2007, 94% of their audience has JavaScript turned on, a significantly lower figure than you are reporting. Not only that, but it is actually the highest percentage since they started recording them biannually in late 2002.
It's a moot point.
Re: (Score:2)
No, indeed, because doing things based on empirical evidence is foolish behaviour. On the other hand you should take a political position (that data, presentation and behaviour should be kept separate) and behave as though that was in some way more true than a statistical value.
I'm not saying that the separation of data, presentation and behaviour is wrong, just that you have to realise that it's a human-engineered best practice, not a law of the universe.
Re: (Score:2)
Re: (Score:3, Interesting)
And for what? So t
Google request external JavaScript file? (Score:4, Insightful)
Re: (Score:3, Informative)
Doesn't work; Good (kind of) (Score:5, Insightful)
The model for websites is supposed to work something like this:
In other words, your web page should work for any browser that supports HTML. It should work regardless of whether CSS and/or Javascript is enabled.
So why would Google's crawler look at the Javascript? Javascript is supposed to enhance content, not add it.
Now, that's not saying many people don't (incorrectly) use Javascript to add content to their pages. But maybe when they find out search engines aren't indexing them, they'll change their practices.
The only problem I can see is with scam sites, where they might put content in the HTML, then remove/add to it with Javascript so the crawler sees something different than the end-user does. I think they already do this with CSS, either by hiding sections or by making the text the same color as the background. Does anyone know how Google deals with CSS that does this?
Re: (Score:3, Informative)
Regardless, I'm pretty sure you'd get banned from the search engines for using such tactics.
Re: (Score:2)
Re: (Score:3, Insightful)
Define "work". A web page without formatting is going to be useless to anyone who isn't a part-time web developer. To them, it's just going to be one big, messy looking freak out... akin to a television show whose cable descrambler broke. Sure all the "information" is there, somewhere, but in such a horrible format that a human being can't use it.
Web pages ar
Re: (Score:3, Insightful)
Re: (Score:2)
Re: (Score:3, Insightful)
How's this? Disable CSS on Slashdot. First you get the top menu, then some options to skip to the menu, the content, etc. Then you get the menu, then the content. It's very easy to use it that way.
Re: (Score:3, Insightful)
The model for websites is supposed to work something like this:
If only. Turn off JavaScript and try these sites:
Re: (Score:2)
A search engine that indexes web applications is more useful to me than one that cannot.
Google realizes that, and you don't.
Re: (Score:2)
DOM holds the content (whether HTML/XHTML/XML or plain text; static/dynamic or mixed)
CSS styles that content
Javascript enhances that content (e.g. provides auto-fill for a textbox)
Google should be indexing the DOM and its contents, not the code in the file. Indexing the code is like indexing the English dictionary and saying you've indexed the English language.
Websites are going to be more and more dynamic. Content is going to be added directly to the page from an amalgamation of sources with t
Re: (Score:2)
Google has a bot that understands CSS and JavaScript, based roughly on the Mozilla source code (wondered why they hire so many Firefox developers?)
Re: (Score:2)
Re:Doesn't work; Good (kind of) (Score:4, Informative)
Looking for good/current Lynx for Windows/XP (Score:2)
Can anyone here recommend a good place to download a current port of Lynx?
Re: (Score:3, Insightful)
Re: (Score:2)
AJAX should degrade gracefully; if you don't have JavaScript things should still work, which means that search spiders should work too.
I would make normal links, then use JS on top (Score:4, Insightful)
It's a nice improvement. Less bandwidth used, and a quicker interface.
Unfortunately, it's not often done right. The way I would do it is to first make the menu work like it normally would. Make each menu item a link to a new page. Then you apply Javascript to the menu item. Something like this: (FYI, this is how I do pop-up windows, too.)
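A sketch of that pattern (the id and filename are invented): the link works as a plain page load everywhere, and script upgrades it to a pop-up only when available.

```html
<a id="help" href="help.html">Help</a>
<script type="text/javascript">
  // Upgrade the plain link: open it in a pop-up window and cancel the
  // default navigation. Without JavaScript, the href still works.
  document.getElementById("help").onclick = function () {
    window.open(this.href, "helpwin", "width=400,height=300");
    return false;
  };
</script>
```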
Putting it behind a login screen doesn't solve all the problems. You're right that it won't be searchable anyway, but people with older browsers or screen readers won't be able to access it.
I think Gmail actually offers two versions: one for older browsers that uses no (or little?) JavaScript, and the other which almost everyone else (including me) uses and loves. But I'm not sure how easy it would be to maintain two versions of the same code like that. I also don't think it's nice for the end user to have to choose "I want the simple version", though it may encourage them to upgrade to a newer browser, I guess.
(Of course this is all "ideally speaking", I realize there are deadlines to meet and I violate some of my own guidelines sometimes. I still think they're good practices, though.)
Re: (Score:2)
It's a matter of project requirements,
Re: (Score:2)
Accessibility? (Score:2, Informative)
DDA / Section 508 / WCAG: the no-JavaScript clause makes for a lot of extra work, but it is one that can't be avoided on the (commercial) web application I architect. (Friggin' sharks with laser beams for eyes making lawsuits and all.)
Document.write() is not the way to go (Score:2)
From TFA: (Score:2)
Re: (Score:2)
(tagging beta) (Score:2)
Google doesn't, but it's possible (Score:3, Informative)
I'd thought Google would be doing that by now. I've been implementing something that has to read arbitrary web pages (see SiteTruth [sitetruth.com]) and extract data, and I've been considering how to deal with JavaScript effectively.
Conceptually, it's not that hard. You need a skeleton of a browser: one that loads pages and runs JavaScript like a browser and builds the document tree, but doesn't actually draw anything. You load the page, run the initial onload JavaScript, then look at the document tree as it exists at that point. Firefox could probably be coerced into doing this job.
It's also possible to analyze Flash files. Text which appears in Flash output usually exists as clear text in the Flash file. Again, the most correct approach is to build a pseudo-renderer, one that goes through the motions of processing the file and executing the ActionScript, but just passes the text off for further processing rather than rendering it.
Ghostscript [ghostscript.com] had to deal with this problem years ago, because PostScript is actually a programming language, not just a page description language. It has variables, subroutines, and an execution engine. You have to run PostScript programs to find out what text comes out.
OCR is also an option. Because of the lack of serious font support in HTML, most business names are in images. I've been trying OCR on those, and it usually works if the background is uncluttered.
Sooner or later, everybody who does serious site-scraping is going to have to bite the bullet and implement the heavy machinery to do this. Try some other search engines. Somebody must have done this by now.
Again, I'm surprised that Google hasn't done this. They went to the trouble to build parsers for PDF and Microsoft Word files; you'd think they'd do "Web 2.0" documents.
Re: (Score:2)
Yes, and it should work like that too. If the background is so cluttered as to make the OCR difficult, then chances are the human will have trouble reading it too. I suggested that during a job interview with a *cough* serious search engine: use a secondary crawler reporting as a normal IE/Firefox, load a page usin
OCR and web sites (Score:2)
If the background is so cluttered as to make the OCR difficult, then chances are the human will have trouble reading it too.
Web site images with logos against faint but busy backgrounds are moderately common. I'm talking about stuff like this. [ddfurnitur...dators.com] Commercial OCR programs interpret that as "a picture". Because we're working to automatically extract business identities from uncooperative websites, we sometimes need heavier technology than the search engines.
Re: (Score:2)
Does Google run macros in Word documents? No? Then why are you even comparing this? I can parse a PDF document or a Word document without having to have a script interpreter running.
I imagine that the Googlebot crawler is a rather simplistic program that only knows how to:
1. Read robots.txt
2. Read meta tags (robot tags in particular)
3. Fi
Re: (Score:2)
Ghostscript had to deal with what problem? Yes, PostScript is a programming language with built-in graphics primitives. What does that have to do with search engines? It doesn't have to recognise certain outlines as being text (i.e. text drawn without using the PostScript primitive for drawing text); it just draws it. Ghostscript is just another implementation of a language.
Re: (Score:2)
PostScript is a programming language, not just a page description language; you need to write a language interpreter, not just a data parser, to get the most from it. HTML + JavaScript also requires an interpreter.
If you want to see (Score:4, Funny)
If you want to see through a search engine's eyes, open the page in Lynx [browser.org]. The funniest part about showing that method to another developer is when they think Lynx is broken because the page is empty. "It didn't load. How do I refresh the page? This browser sucks." Heh. Endless fun.
(method does not account for image crawlers)
Google holds back the web! (Score:2, Insightful)
Re: (Score:2)
Re: (Score:2)
AJAX is for writing applications not Documents (Score:2, Interesting)
Re: (Score:2)
err news? (Score:2)
In other news: Most plants are green!
Intent-based markup -- a look ahead (Score:2)
What a dumb idea (Score:2)
Re: (Score:2)
Re: (Score:2)
Choose a non-default email address (i.e. not webmaster but web-master) and deal with the consequences.
In my eyes, a customer/client/new friend not being able to contact you is far more expensive than dealing with some *more* spam.
Re: (Score:2)
Meh. I have mine double-escaped, using two unescape() calls. The first hides the e-mail address, and the second hides the HTML for the mailto link. It even has a <noscript> condition to point out that the user does not have JavaScript enabled. I've been scrupulous about noscript ever since seeing one web site that just displayed a blank black page with JS disabled.
If I was really paranoid, I'd probably come up with some sort of while loop to decode the mail address, and a skip-over condition to change the i
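The double-unescape scheme described above can be sketched as follows; the address and link text are made-up placeholders, not the parent poster's actual code:

```javascript
// The address is stored with '@' and '.' percent-encoded by hand
// (escape() itself leaves those characters alone, but unescape()
// still decodes %40 and %2E), then the whole mailto link is escaped
// a second time to hide the markup from harvesters.
var hiddenAddr = "user%40example%2Ecom";
var hiddenHtml = escape('<a href="mailto:' + hiddenAddr + '">mail me</a>');
// On the real page this would be document.write(unescape(unescape(hiddenHtml)));
// here we just show that two unescape() calls recover the original markup.
var markup = unescape(unescape(hiddenHtml));
// markup is now '<a href="mailto:user@example.com">mail me</a>'
```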
Re: (Score:2)
OTOH, there's nothing wrong at all with having static content that is only displayed to
Re: (Score:2)
Erm. "of <NOSCRIPT> tags". Sorry.
Re: (Score:2)