
Interview With Google's Director of Research

Cialti writes "Salon has a very interesting interview with Monika Henzinger, Google's Director of Research, about their search technology and where they're going with it."
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • by Anonymous Coward
    None. It's a gigantic Perl script written entirely out of RegExen and one tr///.

    use strict is REMed out.
  • by Anonymous Coward
    I love when people don't read the article and post

    He did read the article. He said "I'm not sure that's not why they were working with BMW." Note the double negative, hence he is sure that is why they were working with BMW.
  • by Anonymous Coward

    nah!

    bestbet at shootybangbang.com [shootybangbang.com] bounces you straight to the *best bet*

    sometimes it is smart, sometimes it is stupid

  • by Anonymous Coward

    (car cuts driver off)
    "Fuck you, asshole!"
    (computer beeps)
    [25,945 results found.]
  • Quality! I've linked back to your site too. Now I wonder if anyone else was doing this...
  • In the good old days, ask.com let you see everything being asked of Jeeves, unfiltered. I watched it for a while, saving off the really weird questions, and made a page of it here [catalystinternet.com].

    Happy reading, and remember, you're looking at the end of the human race.

  • So these are the guys we can complain to whenever we hit one of those heavy Flash-laden sites like Ford's? Geez, I went to their site and the stupid background Flash process was eating up 85% of my CPU time. I love it when these companies add useless bullshit eye candy to a site.
  • Judging by the article, they build lists of words, and find their intersection. I can't imagine how big the lists for common words (e.g. articles) would be. Perhaps they had to cut them out due to hardware constraints?
  • by Tet ( 2721 )
    The most interesting part about the interview was the snippet that implies Google didn't have much of a say in the Deja archives being down after the buyout. So it wasn't the complete cock up that we all thought it was. They still handled the PR really badly, though. If they'd just told people what was happening, I'm sure they wouldn't have come across half as badly as they did.
  • google.com/mac [google.com]

    But the Google/Mac logo isn't as cool as the Google/Linux logo -- it contains all the fruity colors that Apple has largely abandoned.

  • The documents are assigned id's 1..n and, for each word, an ordered list of id's of documents containing the word is constructed. When a search asks for, say, "cheese fondue" the array for "cheese" and the array for "fondue" are retrieved and merged using a sorted list merge (fast, since the arrays are already ordered). The result is a list of document id's that were in both lists, i.e. documents containing both words.

    There are various ways to speed this up by compressing the arrays, hash joins, etc., but the basic idea is the same.
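
    For the curious, here is a minimal sketch of that sorted-list intersection in Python. The postings lists are made up for illustration; this is the textbook technique, not Google's actual code.

    def intersect(a, b):
        """Merge two ascending lists of doc ids, keeping ids present in both."""
        out, i, j = [], 0, 0
        while i < len(a) and j < len(b):
            if a[i] == b[j]:
                out.append(a[i])   # document contains both words
                i += 1
                j += 1
            elif a[i] < b[j]:
                i += 1
            else:
                j += 1
        return out

    # Hypothetical postings: doc ids per word, kept in sorted order.
    postings = {
        "cheese": [3, 8, 15, 42, 77],
        "fondue": [8, 21, 42, 90],
    }
    print(intersect(postings["cheese"], postings["fondue"]))  # [8, 42]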
  • by K-Man ( 4117 ) on Thursday June 21, 2001 @10:11AM (#135282)
    That's true if the data is changing. However most search engines do web crawls in large chunks, and index the data once in one large block. Under such conditions dynamic management of hit lists and other data structures is not necessary. Basically, the bytes are packed as tight as they can get them so that it all fits into memory.

    As far as I can tell from their paper [nec.com], Google manages its web crawls the same way. It partitions the data into "barrels" and indexes each separately. Once the indices are built, they aren't updated. They also extend the hit lists to include word position and some other attributes for each hit.
  • It says it's ignoring them, but the top few "hits" typically do include the exact page. I just tried, for instance, "All your base are belong to us". It claims to ignore "are" and "to" but the top few hits contain the exact phrase. (The same happens with your example "Hail to the chief", though it says it's ignoring "to the".)
  • Now bear in mind that Google couldn't even come up with the phrase, however much I +'d it to death, on its top ten list. If I only have that one phrase in memory on Google, I can't find it.

    The problem is that you +'ed it too much. If you search for +"+but +that +the +dread" [google.com] you'll notice that it gives you some warnings. Google's ignoring all of the +'s you added, because you're using some of them incorrectly. ("dread" is not a stop word, for example)

    Instead, try searching for "but +that +the dread" [google.com]. Then you'll get what you're looking for.
  • Oh, gahd.

    That's just great. Now the cell-phone dolts in the SUVs will be using Google *at the same time* to check on their facts, *while* they are driving...



    --
  • That site is absolutely hilarious! Thank you for the link.

  • by ergo98 ( 9391 ) on Thursday June 21, 2001 @07:30AM (#135287) Homepage Journal

    Google absolutely blows away the competition; however, it is humorous seeing entries in my log file related to people looking for masturbation tips (from the beginner-level "How To" style queries to full-blown searches for advanced techniques). The page [yafla.com] in question is entitled "Hey Jerk: Get Off My Computer!" (and relates to pop-up ad windows), and I'm, uh, proud to see that it ranks #2 for searches for "jerk off technique" (I've had dozens of related hits appearing). While it is humorous seeing searches go a little off-track, I am very curious how many consumers know that each link you follow passes on where you came from. For instance, I see log entries like

    200x-xx-xx xx:xx:xx xxx.xxx.xxx.xxx GET /rants/jerk/index.htm 200 5986 334 270 Mozilla/4.0+(compatible;+MSIE+5.0;+Windows+98;+DigExt) http://google.yahoo.com/bin/query?p=jerk+off&b=21&hc=0&hs=5
    -or-
    200x-xx-xx xx:xx:xx xxx.xxx.xxx.xxx GET /rants/jerk/index.htm 200 5986 437 1292 Mozilla/4.0+(compatible;+MSIE+5.0;+Windows+98;+DigExt;+sureseeker.com) http://www.google.com/search?q=guys+who+jerk+off

  • The OmniWeb browser on MacOS X has a very nice feature, enabled by default, which simply disables all pop-up windows. You can disable all pop-ups, or disable only pop-ups that are not the result of you manually clicking on a link.

    Unfortunately, OmniWeb's JavaScript support is lacking in other areas, but that feature is brilliant, and their text display is the cleanest I've ever seen in any program. Linux users should get MacOS X just to rest their bad font weary eyes :-).

    D

    ----
  • Perhaps the best news, though, is that

    http://www.google.com/windows/

    doesn't work. Great job!

    D
    ----
  • They are working with BMW to see if they can integrate the search engine into the car to do a search based on what you say.

    Even out of the scope of a car - this feature would be awesome if it were integrated with cable (or satellite) and the TV room

    Get me Gilligan's Island ... Click

  • by funkman ( 13736 ) on Thursday June 21, 2001 @08:38AM (#135291)
    I love when people don't read the article and post. From page 2 of the article:
    What other kinds of search are you developing?

    We have a voice-search project with BMW -- BMW wants to put voice search into their 7 Series cars. They want to put microphones in the cars -- you can just speak whatever your search is and then it gives you answers back on a display. Then you just say the result number and the search jumps to that result.

  • All search engines spider ahead of time and store; to do otherwise would take forever to get you any search results ("It's a terrible strain on the animators' wrists." :) My impression from the article was not that they generate whole searches ahead of time, but that they categorize by the individual search words, and then when you type in a query they generate the intersection of the pages on their many word lists. Then one miracle occurs, and ...

    Caution: contents may be quarrelsome and meticulous!

  • *sigh* you're right.

    But in the case where they would implement my ability to submit a RegEx, I could give them lots of flex on the time in return for the exact one page that I want. How hard could it possibly be?
    (dodging)
  • I'm just waiting for them to implement a RegEx interface. now THAT would be some love for the geeks out here.
  • I missed an answer to "How come for the last N months the Google front page has stated:

    Search 1,346,966,000 web pages

    and this number doesn't change?"
  • hehe, pretty funny :)

    It is too bad they took that away.
  • by King Babar ( 19862 ) on Thursday June 21, 2001 @08:59AM (#135297) Homepage
    For example, searching for: "Hail to the chief" would ignore to and the. In order to actually search for the phrase (which I indicated that I wanted to do by surrounding it in quotation marks), I would have to type "Hail +to +the chief". Hardly user-friendly.

    And, actually, that's not quite right, either. It's apparently always going to blow off your "the" (I just tried it). This is, alas, a seriously hard problem. What you were doing was looking for what actually amounts to a single chunk of information: the title of a fanfare played for the president. Unfortunately, the English version of the title is four words long although the title itself might in some cases act just like a single word (or noun phrase). So:

    That was one of the worst "Hail to the chief"s that I have ever heard.

    Yes, you might even pluralize it just like a noun. So that's one problem right there: search terms that really are tantamount to a single lexical item might be four or more words long, and might even be inflected.

    Ideally, you'd like to index separately these multi-word chunks, especially if you can prove they occur way more often than expected. So in your example, "hail" and "chief" co-occur on about 28,000 pages, while "hail" alone is on 510,000 and "chief" alone is on over 1,500,000. If Google indexes 1.5 billion pages (or so), and the terms were independent, you'd expect something like 500 co-occurrences, and 28,000 is so outrageously out of line you would know that something is up (a quick version of this calculation appears at the end of this comment).

    Now, I'm guessing that *local* co-occurrence information is likely eventually to prove even handier in this regard. So, for example, "hail to" comes up 157,000 times, which is about 1/3 of all "hail" pages. That's very unlikely unless there's something systematic (and very possibly exploitable) going on.

    The big problem is that you can't really do much with function words alone, since they're just too staggeringly frequent. In running English text, the frequency of "the" is just about 70,000 per million. In other words, 7% of all English text consists of the definite article, and most web pages contain many distinct copies. You've got to kill that. Unfortunately, by omitting "the", you lose a lot of potentially useful information about definiteness of the noun phrase. In the "hail to the chief" example, the song title itself is just one example of a (somewhat) productive expression "hail to [definite-NP]", which has a specific kind of meaning implied (interestingly, usually sarcastic or abusive). Picking up on this could be very useful.

    So suppose I typed into deja "bush mass-mooning Gothenburg". I'll get 9 hits. That's nice, but google might want to do more, and provide additional examples of president (or candidate) Bush being derided in public. Or maybe give me pages that refer to the same incident being described as the Swedish version of "hail to the chief".

    So there is no doubt that function words need love, but I'd argue for a love that seeks to understand them and their weird little contributions to meaning rather than just a way to make sure you can nail a song title exactly.
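
    For what it's worth, here is the back-of-the-envelope version of that independence argument, using the counts quoted above (treat them all as rough, dated figures):

    total_pages = 1_500_000_000   # approximate size of the index
    hail        =       510_000   # pages containing "hail"
    chief       =     1_500_000   # pages containing "chief"
    observed    =        28_000   # pages containing both

    # If the words were independent, P(both) = P(hail) * P(chief),
    # so the expected number of co-occurring pages is:
    expected = (hail / total_pages) * (chief / total_pages) * total_pages
    print(f"expected ~{expected:.0f}, observed {observed}, "
          f"ratio ~{observed / expected:.0f}x")
    # expected ~510, observed 28000, ratio ~55x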

  • There's an excellent presentation at TechNetCast by Jim Reese (Chief Operations Engineer @ Google) called "The Technology Behind Google", in MP3 format. It's much more technical than this interview, really a very good listen. Get it here [technetcast.com]

    --sean
  • by daytrip ( 25725 ) on Thursday June 21, 2001 @09:10AM (#135299) Homepage
    You'll probably get a reasonable idea from this page:

    http://www-db.stanford.edu/~backrub/google.html [stanford.edu].

    Also, try a lookup for a bloom filter [google.com], which Google uses, I think (a toy sketch follows below). Most search engines work by inverting the index and then merging the lists. Taking the intersection of all the keywords gives you the membership, then you apply ranking to the membership. Pretty simple concept. I don't know of any search engines that use a trie, or use any form of stemming.

    -js
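
    Since the parent brings up Bloom filters, here is a toy one in Python. This is the generic textbook construction - nothing is known here about how (or whether) Google actually uses them.

    import hashlib

    class BloomFilter:
        def __init__(self, size_bits=1024, num_hashes=3):
            self.size = size_bits
            self.k = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, item):
            # Derive k bit positions from salted hashes of the item.
            for i in range(self.k):
                h = hashlib.md5(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(h[:4], "big") % self.size

        def add(self, item):
            for p in self._positions(item):
                self.bits[p // 8] |= 1 << (p % 8)

        def might_contain(self, item):
            # False positives are possible; false negatives are not.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(item))

    bf = BloomFilter()
    bf.add("cheese")
    print(bf.might_contain("cheese"), bf.might_contain("fondue"))  # True False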
  • Lame geek reaction to a woman.

    See a female who ain't your mother,
    run in circles, sweat and stutter.

    From the article:
    "...people like my husband would get crazy. He just wants to find pages that have his words."

    Lesbian? Not. Competent? Hell, yes!
  • Last semester, I did a directed study about applying approximate machine reasoning to human information access, specifically to searching hypertexts of metadata. One of the ideas I looked at was an article about a search engine called FuzzyBase (pdf) [ensc.sfu.ca] which was developed by three people including my professor [ensc.sfu.ca], who works in the SFU Communication Networks Laboratory [ensc.sfu.ca]. FuzzyBase did just what you suggest - it used an interactive user session to disambiguate user queries. There are several interesting technologies which use this sort of thing to obtain unambiguous search keys, and most involve the usage of semantic ontologies. If you want to get started looking at this stuff, have a look at some of the articles on this page [www.sfu.ca], especially the online links at the end of the page. There are already search engines that do this to some degree.
  • German search engine fireball.de has a page [fireball.de] that lets you see what others have requested in the last 30 seconds. There are some sick people out there...
  • by harmonica ( 29841 ) on Thursday June 21, 2001 @10:03AM (#135303)
    You probably mean The Technology Behind Google [ddj.com]. It's a 73 min MP3, very interesting!
  • This is one of the reasons I found Gnutella fun when it first came out ... just looking at all those searches. It became even more fun when people began using the Gnutella-search-stream as a chat-feature ;)
  • That sounds like the same guy (and mostly the same topics). Good call.

    chris
  • by htmlboy ( 31265 ) on Thursday June 21, 2001 @08:37AM (#135306)
    Google gave a talk for ACM here last semester (got a t-shirt, woohoo!). The speaker described how their machines are used: they have thousands of Linux boxes, used to store websites (both to be searched and as cached copies) and to do searching on the pages they have (I think that's how it went). I got the impression that Linux is used because it's free (important when you'd otherwise need thousands of licenses), it's reliable, and they found it a good platform for the searching backend software.

    An interesting side note: they found that when one of the Linux boxes stops working, it's more cost-effective to replace it than to fix the problem (hardware, at least). Google throws out a lot of good hardware because of that. The lecture hall was begging for a student donation program of some sort when the Google guy mentioned that :)

    chris
  • by dead_penguin ( 31325 ) on Thursday June 21, 2001 @07:54AM (#135307)
    With the giant display of scrolling queries (filtered, though) they have in their lobby, I think it's time to start sending little messages to the Google staff using searches.

    "Help, I'm stuck in here!!" is an obvious classic to try. If enough of us do it, it might even get noticed...

    "Intelligence is the ability to avoid doing work, yet getting the work done".
  • Google already has this. If you do a search on 'slishdot [google.com]' it asks you if you meant slashdot.
  • A speculative answer since b-trees are my bread and butter (I am just now specing a 2TB data-mine): hundreds of thousands of entries (or hundreds of millions) should not really bother a b-tree. From the articles about Google, I am guessing they have implemented some sort of distributed b-tree app server, across all those COTS linux boxes.

    I am curious as to what kind of implementation they are using; Google's roots would suggest some hacked form of Berkeley DB with lots of performance improvements.

    Oh, well, just some guesswork... if I am close, I am expecting a job offer by the way :-)...
  • Before I read your post, I had the same idea. I just sent one that said "Sorry, am I DOSing the Google lobby scroller?" Then, after reading this post [slashdot.org], I did a search for "jerk off technique."

    Hope those scroller babies don't log IPs. It would look like I was so bored (at work right now) that I decided to SPAM their scroller, which had somehow gotten me into some kind of masturbatory mood.

    < tofuhead >
    --

  • type "monitor" in gnut and you will see all the gnutella search requests going through your node. Often you see people searching for an exact filename and you know that their transfer stopped halfway through and they're looking for the rest of it, so you can get some idea of what is available out there in a completely passive manner.
  • How about a simple exact: query type? Too damn slow I suppose.
  • It's just a rounding error.
  • I switched from dogpile to google. It was the day that I read on /. that you could search for "more evil than satan" on google and the first hit was www.microsoft.com [microsoft.com]. That was a great day.

  • Interesting work. Thanks for the helpful links.
  • Google already has this. If you do a search on 'slishdot' it asks you if you meant slashdot.

    Thanks for this suggestion. Although it is a good example of interaction between the engine and the user, it seems to be based on a simple spelling check. Rather, I was thinking more in terms of what Monika Henzinger referred to as a topic-based query. For example, typing 'bicycle' and receiving a choice of 'bicycle repair', 'bicycle racing', 'bicycle sales', 'bicycle parts', 'bicycle touring', etc...
  • Thanks for the info on Excite's zoom feature. I am impressed. I wonder how they go about creating their topic associations. Do they compile them manually, or do they have an automated tool that searches previous user inputs to come up with the most common keyword associations? An automated tool would, of course, be much more efficient and cheaper to operate.
  • by Louis Savain ( 65843 ) on Thursday June 21, 2001 @07:32AM (#135319) Homepage
    Monika Henziger: You can try to return documents that are specifically on this topic. We're developing more sophisticated techniques to return documents that might not mention the query words, but are [still relevant to] the topic. We're getting away from just pure word matches and getting more into topics.

    This is interesting. I wonder if there might be a way for the engine to have a two way back-and-forth "conversation" with the user. IOW, if the engine interprets the query to have several possible meanings, a few multiple choice questions might clarify the meaning and narrow the search parameters. I think this could be more helpful than doing a blind guess of the user's intention.
  • Sarcastic or not...that's damn funny!

    -Ben
  • Yeah, now it just brings up a bunch of pages talking about when doing the search brought up Microsoft. :)

    --
  • Dunno if anyone's noticed the new 'phone book' function - type "your name" {your city/state/zip code} if you live in North America and see what comes back as the first Google find: your home address & phone number, at least if you're in the phone book.

    I first noticed this function when searching for information on the professional work of someone I was going to be working with - and the #1 thing Google spat up was his home address and phone number. I know I could have found this almost immediately if I had gone actively looking for it, but it was a bit creepy anyway. I guess the reason I'm disturbed is that it wouldn't have occurred to me to go looking for that information, but once it was thrust in my face like that, I could immediately think of reasons it might be handy to have. In the event, I didn't copy it down anywhere, but, well, I can think of people who wouldn't hesitate to call me at 3am if they had my home number..

    Fortunately Google seems willing to at least let you opt out - http://www.google.com/help/pbremoval.html - which is fine for people who know about Google and its more esoteric functions, but it ain't going to help Jane Schmoe when she starts wondering why so many more people seem to know where she lives and what her home number is - people who wouldn't necessarily have gone looking for the information (that would be rude..) but who don't mind having it when it's 'handed' to them.
  • I would imagine google uses a highly compressed inverted index stored probably in a flat file format. If you would like to read some academic literature on the subject you can find a great list of resources [poly.edu] compiled by Prof. Torsten Suel.
  • by LocalYokel ( 85558 ) on Thursday June 21, 2001 @07:22AM (#135324) Homepage Journal
    Search terms have all kinds of problems.

    I had the same problem yesterday when I was searching for "quotes about Shakespeare". "to be or not to be" (with quotes) pulls up the proper category, but the first result it comes up with is the GNU homepage, because GNU's Not Unix! The second link is to Am I Hot or Not, BTW...

    Strangely enough, it warns about "or", and if I want to use it in a search, it must be in CAPS, but then how do I search for something in ORegon? For some reason, it says nothing about "not", so I don't know what's up with their search terms anymore.

    --

  • The most common query to hit my site is "fuck the skull of jesus". <shrug>
  • Is Google's technology really so groundbreaking? Didn't Yahoo take it in a bigger leap? I got this feeling when I read this article [syncore.org] back in April. Are Google's business strategy and practices really all that newsworthy? You decide. :)
  • no doubt, they are a good engine, but aren't we just repackaging existing services? Or more succinctly, making slight improvements on existing technologies?
  • In running English text, the frequency of "the" is just about 70,000 per million. In other words, 7% of all English text consists of the definite article, and most web pages contain many distinct copies. You've got to kill that.

    Well, actually you don't. AltaVista indexes every word, including "the". This helps it do exact phrase queries. For instance, try searching for "The Who".

  • by zpengo ( 99887 ) on Thursday June 21, 2001 @07:01AM (#135329) Homepage
    A recent development in Google technology left me very dismayed -- They started ignoring "common words."

    This makes sense on a general level, but when you try searching for a phrase embedded in quotation marks, it's frustrating to have Google decide which parts of a literal string to search for and which to ignore. If I had wanted it to ignore parts of it, I wouldn't have indicated that it was a literal phrase, dangnabbit!

    It is possible to force the inclusion of words you typed in the search phrase, but you have to add an AltaVista-style '+' before each one.

    For example, searching for: "Hail to the chief" would ignore to and the. In order to actually search for the phrase (which I indicated that I wanted to do by surrounding it in quotation marks), I would have to type "Hail +to +the chief". Hardly user-friendly.

    Oh, well.

  • I would seriously doubt they have a SQL interface for their DB. I also would bet your mom's poop that they don't use a commercial database for the website indexing.

    I figure it's something derived from a B-Tree (like a binary tree - but better for databases), distributed on a cluster of boxen (Linux, right?).

    I'm sure there's a hell of a lot more to it than that. A hell of a lot more. Hell, let's ask him.

    begin question
    Hey Google guy, how is the webpage index data stored and retrieved? What data structures and what algorithms are used? How many boxes do you have for indexing?
    end question

    Maybe he'll answer.

    -Jon
  • Who else has BSD-only searches? And not only that, a cool BSD Google logo!

    http://www.google.com/bsd [google.com]

    -jon
  • Google's data structures are optimized so that a large document collection can be crawled, indexed, and searched with little cost. Although CPUs and bulk input/output rates have improved dramatically over the years, a disk seek still requires about 10 ms to complete. Google is designed to avoid disk seeks whenever possible, and this has had a considerable influence on the design of the data structures.

    BigFiles
    BigFiles are virtual files spanning multiple file systems and are addressable by 64 bit integers. The allocation among multiple file systems is handled automatically. The BigFiles package also handles allocation and deallocation of file descriptors, since the operating systems do not provide enough for our needs. BigFiles also support rudimentary compression options.
    Repository

    [Figure 2: Repository Data Structure]
    The repository contains the full HTML of every web page. Each page is compressed using zlib (see RFC1950). The choice of compression technique is a tradeoff between speed and compression ratio. We chose zlib's speed over a significant improvement in compression offered by bzip. The compression rate of bzip was approximately 4 to 1 on the repository as compared to zlib's 3 to 1 compression. In the repository, the documents are stored one after the other and are prefixed by docID, length, and URL as can be seen in Figure 2. The repository requires no other data structures to be used in order to access it. This helps with data consistency and makes development much easier; we can rebuild all the other data structures from only the repository and a file which lists crawler errors.

    Document Index
    The document index keeps information about each document. It is a fixed width ISAM (Index sequential access mode) index, ordered by docID. The information stored in each entry includes the current document status, a pointer into the repository, a document checksum, and various statistics. If the document has been crawled, it also contains a pointer into a variable width file called docinfo which contains its URL and title. Otherwise the pointer points into the URLlist which contains just the URL. This design decision was driven by the desire to have a reasonably compact data structure, and the ability to fetch a record in one disk seek during a search.

    Additionally, there is a file which is used to convert URLs into docIDs. It is a list of URL checksums with their corresponding docIDs and is sorted by checksum. In order to find the docID of a particular URL, the URL's checksum is computed and a binary search is performed on the checksums file to find its docID. URLs may be converted into docIDs in batch by doing a merge with this file. This is the technique the URLresolver uses to turn URLs into docIDs. This batch mode of update is crucial because otherwise we must perform one seek for every link, which, assuming one disk, would take more than a month for our 322 million link dataset.

    Lexicon
    The lexicon has several different forms. One important change from earlier systems is that the lexicon can fit in memory for a reasonable price. In the current implementation we can keep the lexicon in memory on a machine with 256 MB of main memory. The current lexicon contains 14 million words (though some rare words were not added to the lexicon). It is implemented in two parts -- a list of the words (concatenated together but separated by nulls) and a hash table of pointers. For various functions, the list of words has some auxiliary information which is beyond the scope of this paper to explain fully.

    Hit Lists
    A hit list corresponds to a list of occurrences of a particular word in a particular document including position, font, and capitalization information. Hit lists account for most of the space used in both the forward and the inverted indices. Because of this, it is important to represent them as efficiently as possible. We considered several alternatives for encoding position, font, and capitalization -- simple encoding (a triple of integers), a compact encoding (a hand optimized allocation of bits), and Huffman coding. In the end we chose a hand optimized compact encoding since it required far less space than the simple encoding and far less bit manipulation than Huffman coding. The details of the hits are shown in Figure 3.
    Our compact encoding uses two bytes for every hit. There are two types of hits: fancy hits and plain hits. Fancy hits include hits occurring in a URL, title, anchor text, or meta tag. Plain hits include everything else. A plain hit consists of a capitalization bit, font size, and 12 bits of word position in a document (all positions higher than 4095 are labeled 4096). Font size is represented relative to the rest of the document using three bits (only 7 values are actually used because 111 is the flag that signals a fancy hit). A fancy hit consists of a capitalization bit, the font size set to 7 to indicate it is a fancy hit, 4 bits to encode the type of fancy hit, and 8 bits of position. For anchor hits, the 8 bits of position are split into 4 bits for position in anchor and 4 bits for a hash of the docID the anchor occurs in. This gives us some limited phrase searching as long as there are not that many anchors for a particular word. We expect to update the way that anchor hits are stored to allow for greater resolution in the position and docIDhash fields. We use font size relative to the rest of the document because when searching, you do not want to rank otherwise identical documents differently just because one of the documents is in a larger font.

    The length of a hit list is stored before the hits themselves. To save space, the length of the hit list is combined with the wordID in the forward index and the docID in the inverted index. This limits it to 8 and 5 bits respectively (there are some tricks which allow 8 bits to be borrowed from the wordID). If the length is longer than would fit in that many bits, an escape code is used in those bits, and the next two bytes contain the actual length.

    Forward Index
    The forward index is actually already partially sorted. It is stored in a number of barrels (we used 64). Each barrel holds a range of wordID's. If a document contains words that fall into a particular barrel, the docID is recorded into the barrel, followed by a list of wordID's with hitlists which correspond to those words. This scheme requires slightly more storage because of duplicated docIDs but the difference is very small for a reasonable number of buckets and saves considerable time and coding complexity in the final indexing phase done by the sorter. Furthermore, instead of storing actual wordID's, we store each wordID as a relative difference from the minimum wordID that falls into the barrel the wordID is in. This way, we can use just 24 bits for the wordID's in the unsorted barrels, leaving 8 bits for the hit list length.

    Inverted Index
    The inverted index consists of the same barrels as the forward index, except that they have been processed by the sorter. For every valid wordID, the lexicon contains a pointer into the barrel that wordID falls into. It points to a doclist of docID's together with their corresponding hit lists. This doclist represents all the occurrences of that word in all documents.
    An important issue is in what order the docID's should appear in the doclist. One simple solution is to store them sorted by docID. This allows for quick merging of different doclists for multiple word queries. Another option is to store them sorted by a ranking of the occurrence of the word in each document. This makes answering one word queries trivial and makes it likely that the answers to multiple word queries are near the start. However, merging is much more difficult. Also, this makes development much more difficult in that a change to the ranking function requires a rebuild of the index. We chose a compromise between these options, keeping two sets of inverted barrels -- one set for hit lists which include title or anchor hits and another set for all hit lists. This way, we check the first set of barrels first and if there are not enough matches within those barrels we check the larger ones.
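
    To make the "Hit Lists" section above concrete, here is one way the two-byte plain-hit encoding could look in Python: one capitalization bit, three bits of relative font size (with 7 reserved as the fancy-hit flag), and twelve bits of word position. The exact bit order is my guess - the paper gives the field widths but not their layout.

    def encode_plain_hit(cap: bool, font: int, pos: int) -> int:
        assert 0 <= font <= 6     # font size 7 flags a fancy hit instead
        pos = min(pos, 4095)      # paper: positions past 4095 get one capped label
        return (int(cap) << 15) | (font << 12) | pos

    def decode_plain_hit(hit: int):
        return bool(hit >> 15), (hit >> 12) & 0b111, hit & 0xFFF

    h = encode_plain_hit(cap=True, font=2, pos=130)
    print(f"{h:#06x}", decode_plain_hit(h))  # 0xa082 (True, 2, 130)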

  • Northernlight [northernlight.com] categorises its returns into "Custom Search Folders", subject by subject.
  • I thought that they switched to "billions and billions served" about 10 years ago...
  • I wonder if there might be a way for the engine to have a two way back-and-forth "conversation" with the user. IOW, if the engine interprets the query to have several possible meanings, a few multiple choice questions might clarify the meaning and narrow the search parameters.

    I believe it was Altavista that had (and may still have, though I don't see any sign of it) something along these lines - after a query, it would also present an option to narrow the query by selecting some other key words that appeared in some of the pages. If I recall correctly this was not on the main query results pages, but there was a link to it.

    For the example someone posted earlier where he gets a lot of hits from people looking for masturbation tips, using that option would present you with several groupings of words - one group might include "masturbate" and other terms likely to be found on that sort of page; another group might include "network," "security," and "adware." Each group and each word within a group had a checkbox that could be used to select additional words to use in limiting the search.

    I suspect that this was dropped for load reasons, though I could be wrong - it may be that people just didn't use it and they decided it wasn't worth the hassle.

    -- fencepost

  • by jwater ( 112092 ) on Thursday June 21, 2001 @08:48AM (#135336)
    Here at Slashdot it seems like people can only complain about a service. Most of the posts are rants without any understanding of the dynamics beneath them.

    I think we could all use more understanding of the topic. Here's a link to the paper that started it all [nec.com].

    1. When was the last time that "to" or any other preposition helped the average query? Your grandmother does not know that this word is meaningless 99.9% of the time, so Google tries to improve their relevancy.

    2. Google has not sold out. Their ads are the simplest in the industry. They give access to users like you and me at a reasonable rate. Who wants to wait for 345x123-pixel banner ads anyway?

    3. Have you noticed the spelling feature? Google will correct your spelling. This is a function of the tons of bigrams that they have stored. (A toy sketch of the general idea follows this list.)

    4. More papers [Warning: Technical] can be found here [nec.com].
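
    Here is the toy sketch promised in point 3. Google hasn't published how its spelling feature works, so this only illustrates the general idea of suggesting the highest-frequency near-miss from logged queries; the counts and the edit-distance-1 candidate generation are illustrative assumptions.

    from itertools import chain

    ALPHABET = "abcdefghijklmnopqrstuvwxyz"
    query_log_counts = {"slashdot": 90_000, "slash": 4_000}  # made-up counts

    def edits1(word):
        """All strings one edit away from word."""
        splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
        deletes = (l + r[1:] for l, r in splits if r)
        replaces = (l + c + r[1:] for l, r in splits if r for c in ALPHABET)
        inserts = (l + c + r for l, r in splits for c in ALPHABET)
        swaps = (l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1)
        return set(chain(deletes, replaces, inserts, swaps))

    def suggest(word):
        candidates = [w for w in edits1(word) if w in query_log_counts]
        return max(candidates, key=query_log_counts.get, default=None)

    print(suggest("slishdot"))  # slashdot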

  • > (translated from English to Korean
    > and then back to English again)

    And that's the catch. Most documents are readable after they've been put through the blender once. But two passes through the blender results in garbage.

    The Fish is quite good for the one-way trips it was designed for. A round-trip ticket through the Fish is usually deadly.
    --

  • It's been said here on Slashdot before, but everyone should check out that if you change your language preferences on Google, one of the languages is Bork Bork Bork! - the Swedish Chef's language.

    Also, what happened to searching for 666? The first entry it used to spit up was Microsoft.

    zero
  • You might want to take a look at this, too: http://www.guidebeam.com/ - they work on top of Google. I got this URL just a few days ago and didn't find the time to check the information on their site, but it might be what you are looking for. HTH, PeterB
  • They filter what gets projected.. maybe you should have read the next sentence before posting.

    "That's a filtered version, except that the filter doesn't work well in other languages. So we had people here from BMW, and they told me that there were some German queries that got through that shouldn't have.

    [Note to self: Curse on Google only in foreign tongues.]"
  • Have you ever had experience with filtering software? Any filtering software worth 2 cents looks for that kind of shit: purposeful misspellings, replacements like 0s for Os and 1s for ls. I think Google is smart enough to make a filter like this. So no, "britney spears suk1ng c0ck" isn't going to get through. Beyotch.
  • AND...

    Mac only searches.. and a cool Mac logo!
    http://www.google.com/mac [google.com]

    AND...

    US Government searches... and a "cool" US logo?
    http://www.google.com/unclesam [google.com]
  • by mr_gerbik ( 122036 ) on Thursday June 21, 2001 @08:13AM (#135344)
    Who else has Linux-only searches? And not only that, a cool Linux Google logo!

    http://www.google.com/linux [google.com]

    -gerbik

  • Go under http://www.google.com/linux [google.com].

    Try searching for "news".

    Guess what comes up #2?

    Ryan Finley
  • Kudos on creating the most relevant search engine.

    My question is, what are you doing to improve the timeliness of searches? Often there is a conservative bias, as older sites have more links to them. As I watch the results from my site get integrated, it seems that your processing cycle is about a month - making Google not the SE of choice for researching recent news events. I may also add that this seems like a bigger imperative given the recent acquisition of Deja/Usenet.

    Keep up the good work (and don't ever sell out baby, no matter what riches the VC put in front of your nose).

  • *sigh* That's why he misspelled the words.

    --

  • You probably should check out this site: Disturbing Search Requests [weblogs.com]

    --
  • I can't remember where I found that -- it may have even been here on /.

    It kinda makes you want to start checking those referrer logs, eh? I once found someone who was looking for 'priceless pissing'. No clue how they ended up on my site!

    $ grep google /usr/apache/logs/referer_log

    --

  • This company claims they are writing the new search engine for Google. Click on clients and then #6.

    In fact they seem to be claiming that they built most of Google. It's a pity their own web-site looks so bad, though. Here is an excerpt: "We also built a sophisticated server system to run the show and organized the site's starting database."

  • I tried your search for "to be or not to be" using the +'s in front and I got this back:

    Google always searches for pages containing all the words in your query, so you do not need to use + in front of words. [details] The word "or" was ignored in your query -- for search results including one term or another, use capitalized "OR" between words.[details] The following words are very common and were not included in your search: to be to be. [details]

    That seems so pointy-haired-bossish.
  • Wait until that last posted company "helps" them with their web interface! I avoid Flash sites at all costs. I seem to remember that messing with the simple interface was the beginning of the end for Deja.

    The criticisms being made here about how Google omits certain words apply equally to their newsgroup searches. Very annoying. The advanced groups search lets you search for an "exact phrase". Or so it says. It doesn't let you search that way at all. They have done a pretty good job so far with deja's data, however. I missed it all being out there. I look forward to their improvements over time.
  • It's good to know that at least one dot-com is still going strong. Good work Google-people!

    I love google.. it's fast, gives lots of results and the page isn't cluttered with dozens of banner ads like some other *cough* search-engine-portal-wannabes *cough*.

    Maybe someday I'll get to use my networking skills on that server farm they've got going there... ahhhh, a guy can dream, eh?

  • When a search asks for, say, "cheese fondue" the array for "cheese" and the array for "fondue" are retrieved and merged using a sorted list merge (fast, since the arrays are already ordered). The result is a list of document id's that were in both lists, i.e. documents containing both words.

    That would work OK, except that the process of updating the lists would be very expensive. Indexing every word on the internet would be trivial, but keeping the addresses for those words in sorted order would be extremely non-trivial.

    Imagine the word 'test', for example. You gotta believe that 'test' is on about a hundred million web pages, with more being added each day. That's one hundred million sorted addresses - at ~30 bytes per address, more than 700,000 disk blocks (100,000,000 addresses x ~30 bytes / 4096 bytes per block). Every time you add a new page with the word 'test' (or take one away), you have to update the list. That's a lot of disk block rearranging. Now multiply this by all the words on the web and you can see what a huge amount of rewriting has to be done. I don't think linear address lists would cut it.

    Now they could have some kind of funky indexing scheme for all the addresses. But it's still freakin' expensive to update them all. The article mentioned they update every 28 days. Does this mean they stop everything every 28 days to update, or does it mean that it takes 28 days to do an update? Regardless, this could mean that Google is always 28 days out of date. Another search engine that beat this number could potentially compete by saying they are more up to date.

    You have to imagine that as the internet grows larger, this is going to get even more time-consuming.
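
    This is essentially why engines batch their updates: instead of rewriting a postings list in place for every new page, you accumulate new doc ids separately and fold them in with one sequential merge pass per rebuild. A minimal Python illustration of that merge step (the lists here are hypothetical):

    import heapq

    def merge_postings(old, new_batch):
        """One sequential pass over two sorted lists; no random seeks."""
        return list(heapq.merge(old, new_batch))

    old = [3, 8, 15, 42, 77]     # existing sorted postings for "test"
    new_batch = [10, 50]         # newly crawled pages, already sorted
    print(merge_postings(old, new_batch))  # [3, 8, 10, 15, 42, 50, 77]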

    What Yahoo did was license Google, instead of what they were doing before, which was licensing Inktomi. Google rocks.

    http://news.cnet.com/news/0-1005-200-5561996.html

  • Yahoo is repackaging existing services - they're repackaging Google. And Yahoo has more name recognition, so more people use it. And they bring in more revenue in ads, so more money goes to Google for development.

    Google, OTOH, is developing new technology. Most of that development is incremental - things get better and better. Until we actually find an alien monolith to give us all our science, this is how most advancements happen.
  • www.google.com/redhat [google.com] - Doesn't do anything special, but the URL is there
    www.google.com/palm [google.com] - Looks to be made for monochrome PDA browsers
    www.google.com/ie [google.com] - For Pocket IE maybe?
    wishus
    ---
  • A friend of mine (a web developer) says that he's created a way to increase the hit count among all the sites he creates. He uses a server-side Perl script to determine if the Google bot is hitting a page, and if so includes links to *all* of the homepages of the sites they are hosting. So if he includes this script on every page of every site he hosts, then every page links to every site.

    Does this work? I mean, they include (in plain English) something like "Here are some of the other sites we, [our web design firm], created and host" along with a short blurb. It sounds like it would work, right?
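
    If it helps picture it, the trick being described boils down to a user-agent check like the following (a Python rendering of the described Perl; the site list and function name are made up). Whether it actually boosts rankings is exactly the open question here.

    SITE_LINKS = [
        "http://example-client-1.com",   # hypothetical hosted sites
        "http://example-client-2.com",
    ]

    def footer_html(user_agent: str) -> str:
        # Google's crawler identifies itself ("Googlebot") in its UA string.
        if "Googlebot" in user_agent:
            links = " ".join(f'<a href="{u}">{u}</a>' for u in SITE_LINKS)
            return f"<p>Other sites we host: {links}</p>"
        return ""

    print(footer_html("Googlebot/2.1 (+http://www.googlebot.com/bot.html)"))
    print(repr(footer_html("Mozilla/4.0 (compatible; MSIE 5.0)")))  # ''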

  • This company [zephyr-lv.com] claims they are writing the new search engine for Google. Click on clients and then #6.

    It really says 'To fullfill their needs, we built a brand new searcg engine for Google.....'

    [flash alert]
  • There is nothing technological that Google is doing that isn't done by other engines (Excite, Hotbot).

    Really. Google uses a patented ranking algorithm, described by Page and Brin (the Stanford graduate students who founded Google) in a paper titled The PageRank Citation Ranking: Bringing Order to the Web (1998) [nec.com]. The algorithm does very well at recognizing relevant documents. Last I looked, other search engines used mostly sets of hand-tuned hacks which did not do as well. Has this changed? I'd appreciate some references, refereed if possible.

    ~

  • by sdo1 ( 213835 ) on Thursday June 21, 2001 @08:07AM (#135368) Journal
    These translation services (such as BabelFish on AltaVista) still have quite a way to go before they're completely reliable. Especially when you translate from one language to another, you might end up with something similar to this (translated from English to Korean and then back to English again):

    Will be complete and on the front of the L it will be reliable to translation service (as the BabelFish is same) a yet positively is thin method to Altavista. It was special and when you from one language also translate in different one thing, you in child one silence comfort ended to this, (and the that time English back mac tayn Great Britain from again under translate again in a Korean):

    -S
  • You can pre-build lists of matches by word, but regex is too general a concept. You can't pre-build an index that will help speed up a query based on some yet-to-be-specified regex. There's just no way to do it fast.
  • by wrinkledshirt ( 228541 ) on Thursday June 21, 2001 @07:08AM (#135382) Homepage

    Okay, this is so off-topic it's not even funny.

    Anybody have an inkling of a clue of the data structure that Google uses (or probably uses) to store all its words? I was just thinking that maybe it was some sort of balanced binary tree with each node containing a word, two pointers to the next two words further down the tree, and the root of a linked list of all the pages that word is contained in? I know binary search trees are supposed to be fast, but I was wondering if that'd be good enough for something with probably hundreds of thousands of words?

    I'm assuming they're not using some sort of sql LIKE "%searchword%", I can't imagine any kind of cluster that could speed that process up, although I don't really know all that much about the process or what the main benefits of clustering are.

    Anyway, hugely sorry for the offtopic post, it's just something that's been on the brain lately...

  • by blamanj ( 253811 ) on Thursday June 21, 2001 @08:10AM (#135384)
    Probably they use a trie [harvard.edu] or the related Patricia tree. These are very space efficient and relatively fast.
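
    A bare-bones version of the suggestion, for the curious: each character path leads to a node, and a terminal node can carry the postings list for that word. Purely illustrative.

    class TrieNode:
        def __init__(self):
            self.children = {}    # char -> TrieNode
            self.postings = None  # doc ids for a word ending here, if any

    class Trie:
        def __init__(self):
            self.root = TrieNode()

        def insert(self, word, doc_ids):
            node = self.root
            for ch in word:
                node = node.children.setdefault(ch, TrieNode())
            node.postings = doc_ids

        def lookup(self, word):
            node = self.root
            for ch in word:
                node = node.children.get(ch)
                if node is None:
                    return None
            return node.postings

    t = Trie()
    t.insert("cheese", [3, 8, 42])
    print(t.lookup("cheese"), t.lookup("chief"))  # [3, 8, 42] None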
    • Do you know how long it has been since they changed that number on their homepage?

    I emailed Google about it and they gave me some crap about it being too difficult...

    What a mess...

    • They filter what gets projected.. maybe you should have read the next sentence before posting.

    Uh huh, and maybe you should have read the trailing ;) before replying.

    ;)

  • Doesn't seem to work for me up here in Canada, although my name does come up with some interesting stuff that I've never seen online before :)

    As for not having your phone number/address on the internet... that's why the phone companies are required by law to allow you to de-list. Without the internet, it takes me all of 5 minutes to drive to my local library, where they have phone books from around the world for the taking. Oh yes, and the white pages here only list first initial anyway :)

  • They've been claiming '99 billion served' for several years now. Either they have a Y2Kish problem with their signs, or they're about to unleash the biggest wave of advertising the world has ever seen.

    "One Hundred Billion Served!" could become as common as that evil Castaway DVD commercial that's repeated at least 50 times a night on TV.

  • We're all aware of the fact that Google r0x0rs, but one thing I've always been curious about with Google and their "Linux boxen" is: do they only use Linux for their servers, or do they have other practical uses, e.g. Quake servers and plain workstations? Or is Linux used only for price reasons? Anyone know?

"Confound these ancestors.... They've stolen our best ideas!" - Ben Jonson

Working...