News

How does Google do it? 261

Posted by CmdrTaco
from the stuff-to-read dept.
Doc Tagle writes "With Google reportedly on the verge of going public, more and more people want to know what makes Google tick. The Observer serves up the answers to our questions."
This discussion has been archived. No new comments can be posted.

How does Google do it?

Comments Filter:
  • by Paul Townend (185536) on Sunday April 25, 2004 @08:22AM (#8964445) Homepage
    If truth is the first casualty of war, openness is the first casualty of going public

    OK - I can (perhaps) see this as being the case prior to an IPO, but that statement can't be true after it has happened...

    I mean....surely once they've gone public, they'll be obliged to detail and list the sort of information that the article postulates about? The shareholders would be entitled to know how many servers google has, what their specifications are, and what their current commercial strategy is.....surely?!
    • by Anonymous Coward on Sunday April 25, 2004 @08:36AM (#8964498)
      They will not have to disclose the number of machines, the OS, or anything else related to the machines. Wall Street isn't buying their technology; they are buying their cash flow.

      If you do not believe me, buy a share of GE. Pick up the phone, call Investor Relations and ask them how many Unix computers they have and what OS and patch level they run.
      • by BigGerman (541312) on Sunday April 25, 2004 @09:06AM (#8964638)
        Unfortunately, the technology spending IS part of the cash flow. "We went dumpster-diving and picked up a dozen new machines for the indexing farm" and "we entered an agreement with Dell to secure a reliable source of cheap Intel servers" would both show up on the shareholder statements, but the impact would not be the same.
        Going public WILL expose a significant portion of Google's technology, more so when it has to do with hardware.
      • by Smidge204 (605297) on Sunday April 25, 2004 @09:07AM (#8964647) Journal
        The problem with that analogy is that what software they run has absolutely nothing to do with what they do to make money.

        With Google, their entire "business" - their means of generating cash flow - relies on sheer quantity of computing muscle and high-performance software for their search databases. With GE, their business is making lightbulbs, dishwashers, hair dryers, electric motors, and thousands of other products used in residential, commercial and industrial settings. How many Unix computers they have in all their offices around the world is a casualty of doing business, not their means of doing business.

        I'm sure if you asked the GE Investor Relations department something relevant about how their business operates, you might get somewhere.
        =Smidge=
        • The original analogy is a little off. However, if you look at eBay, do they disclose how many systems they are running? How about Amazon? Do I care?

          The real fact of the matter is, they have custom software that they run. The number of systems, speed, memory and OSs are simply a byproduct of what they really offer: a service.

          Google is no different. They offer a service. As long as they are profitable, as an investor, I couldn't care less if the systems were running on Dells, white boxes, Macs, or Commo
        • 'With Google, their entire "business" - their means of generating cash flow - relies on sheer quantity of computing muscle and high performance software for their search databases."

          Actually, their means of generating cash flow relies on how beneficial advertisers feel it is to advertise on Google.

    • I mean....surely once they've gone public, they'll be obliged to detail and list the sort of information that the article postulates about? The shareholders would be entitled to know how many servers google has, what their specifications are, and what their current commercial strategy is.....surely?!

      Why would a shareholder care about server specifications? Investing is all about money. Read any quarterly report from a public company. Income statement, balance sheet, and cash flow are the primary interests on the numbers side as well as a general roadmap of where the company's heading. Warren Buffett doesn't care if each server has two 80 GB drives, or whether they have four 250 GB drives per server. The only thing that matters is that there are competent people to handle these kinds of "dirty details" that an investor doesn't give a rats ass about.

      Take a look at the kinds of information [yahoo.com] you could expect from Google's quarterly reports.
        I disagree. An investor deserves to know at least general information about the goings-on of a business. If I were a stockbroker I would want to know that, say, FruitCompanyA uses insecticide whereas FruitCompanyB doesn't. I personally would choose FruitCompanyA, as a rise in the insect population would ruin FruitCompanyB.

        With google: before I give them my money, I would like to know how many servers they have, how close to capacity they are, what softwares they use (compatibility issues).

        Honest reportin
        • With google: before I give them my money, I would like to know how many servers they have, how close to capacity they are, what softwares they use (compatibility issues).

          I agree it would be nice to know. But if those are your conditions for investing in Google, I think Google would probably tell you to keep your money. I imagine Google's quarterly reports would probably say something like:

          "Our operation depends on having the ability to increase our server and bandwidth resources as we grow our services. Business may be adversely impacted should capacity be unavailable. Our servers are also at risk for viruses, worms, and DDoS attacks which could put the operation of those servers at risk and adversely affect business." etc...

          That would give you, as an investor, the information you need to determine whether those risks are worth your money. In all likelihood you'll just have to rely on the fact that they have an army of PhDs who are smarter than you and I put together and know their shit when it comes to security, databases, clustering, etc.

          Now I could be wrong. Perhaps Google is waiting for the IPO and will then detail their server infrastructure, wow Wall Street (and geeks worldwide) with their amazing capacity, and their stock will skyrocket on the first day of trading. I'd wager that Google's stock is going to have amazing gains anyway given that it's a bit of an industry darling. Other tech companies which have been thinking of going public would be wise to time their IPO very shortly after Google's and ride the wave.
          • Say I just invested twenty years at Google figuring out search engines.

            Now figure I am selling my options.

            Now add that more people will buy them at a higher price if they are impressed with the number of computers.

            I think there is a big temptation for Google to expose whatever it has to expose if it means getting the option value up.

            After they cash out their options - google can compete, not compete or whatever - it will be the publics problem.

            AIK
          • Having a PhD does NOT make one smarter, it just means that said person found (or has) the financial means to become more educated; or is it edumicated....I forget.
        • Do you know how many servers IBM have? Akamai? Microsoft?

          Be reasonable.

          Financial information is important, their business plan is important, it is probably important to know that they are running Linux so that SCO-type problems can be factored in. The sort of fine technical details the Observer goes into are totally irrelevant, just an incidental business expense. We know that it all works and that Google are on top of what they do. That is what matters.
          • by Anonymous Coward
            Akamai?

            "When I visited the company in January, the screen said that Akamai was serving 591,763 hits per second, with 14,372 CPUs online, 14,563 gigahertz of total processing power, and 650 terabytes of total storage. On April 14 [2004], the number had jumped to a peak rate of 900,000 hits per second and 43.71 billion requests delivered in a 24-hour period."

            From this article [technologyreview.com].
        • With google: before I give them my money, I would like to know how many servers they have, how close to capacity they are, what softwares they use (compatibility issues).
          Not to mention source code for custom applications, maintenance schedules, software upgrade schedules, standard permissions settings, root passwords, type and model of CPU cooling fans used, average uptimes and other relevant information which all prudent investors need.
      Why would a shareholder care about server specifications? Investing is all about money.

        I, for one, would. Now, unfortunately I don't have enough money to start investing on Wall Street, but hopefully that will change soon. So, why would I want to know technical details for a company? Obviously, because I'm a geek. But someone has to track this kind of stuff to produce a stock report. You can't have a company saying "We bought an IBM X Server and it now balances our accounts and brokers international deals

    • first casualty ?? (Score:5, Informative)

      by Sad Loser (625938) * on Sunday April 25, 2004 @09:13AM (#8964667)

      Recycling without attribution [technologyreview.com] is the first casualty of bad journalism.

      I thought I had read this article before, and then I realised, I had read it before...
      (although I now realise that you are not supposed to read the linked articles before posting comments - sorry)
  • Google is faltering (Score:3, Interesting)

    by Anonymous Coward on Sunday April 25, 2004 @08:22AM (#8964447)
    Google has been at 4.285 billion pages for more than three months straight. The count hasn't increased in a long time... The index is maxed.

    Google has recently removed tens of thousands of "duplicate content" sites from its index - where "duplicate content" is as simple as being an affiliate site (e.g. Amazon) and having the same textual item descriptions as many other sites.

    Google is now in the process of dropping millions of link records from its index, presumably to make room for more pages.

    Google is wavering.

    Gmail is a distraction, a venture into some other space to keep people from noticing that their search product is degrading.

    May she last as long as possible...
    • Interesting (Score:4, Interesting)

      by Motherfucking Shit (636021) on Sunday April 25, 2004 @08:32AM (#8964483) Journal
      I lost a couple of sites from Google this month, presumably due to duplicate content; they were nearly verbatim clones of some of my other sites. The original sites are still there, the "clones" vanished from Google. As in, even if I search for those domains directly, I get nothing, where I used to get a cached copy of the sites. They've quite literally vanished from Google's database.

      Can you back up your assertions that Google's index is full? It's a rather interesting theory, and perhaps an explanation for all the tweaking they've done lately.
      • Re:Interesting (Score:5, Informative)

        by ShaunC (203807) on Sunday April 25, 2004 @08:48AM (#8964551)
        Google is definitely cracking down [webworkshop.net] on duplicate content [seochat.com]. In fact, they've recently patented [searchguild.com] the concept.

        Insert software patent debate (where Google is the default hero due to its geek factor) here...
      • Re:Interesting (Score:3, Insightful)

        by galaxy300 (111408)
        It's possible that their index is full. A more likely theory is that they don't really see the benefit of having content duplicated throughout the database.

        How many times have you run a search and seen a link at the bottom that says something like "Google removed information from this search that is redundant to information already displayed on the page" (Can't remember exactly what it says right now). Usually, there's nothing valuable in the hidden links - why index them at all?
    • by jabbadabbadoo (599681) on Sunday April 25, 2004 @08:43AM (#8964526)
      "Google has been at 4.285 billion pages for more than three months straight. The count hasn't increased in a long time... The index is maxed."
      Hmm... are they using a 32-bit integer to keep the page count?
      2^32 = 4.294 billion, pretty close to 4.285 billion pages.
      Newbies...
      • by Decameron81 (628548) on Sunday April 25, 2004 @09:16AM (#8964683)
        I bet you wouldn't know you need more than an unsigned 32 bit integer before you hit it.

        On a side note I would really like to know which one is page number 1.

        Diego Rey
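        A quick sanity check of the 32-bit theory, using only the two numbers quoted in this thread (plain arithmetic, no claim about Google's actual internals):

```python
# Numbers from the thread: the front-page count vs. the unsigned 32-bit limit
reported_pages = 4_285_199_774
max_u32 = 2 ** 32

print(max_u32)                      # 4294967296, i.e. ~4.294 billion
print(reported_pages / max_u32)     # ~0.9977 of the 32-bit space used
```

        So the reported count sits within about 0.23% of the 2^32 ceiling, which is what makes the overflow theory tempting.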
    • See Google's chastity belt too tight [com.com] (PartsExpress.com listing removed via SafeSearch because "sex" in domain name) and Google In Controversy Over Top-Ranking For Anti-Jewish Site [searchenginewatch.com] (Google picking out Googlebombed results) for recent examples.
    • Wouldn't it be fair to say that the index probably isn't maxed out, because the number of actual websites indexed wouldn't be the limiting factor?

      Wouldn't it be the keyword index?

      Imagine the Slashdot front page: how many 'keywords' or 'whatevers' would you have to categorize and organize to maintain a searchable structure? 100? 1000?

      So if you have 100 rows in a DB relating to 1 page then the DB would have been maxed out a factor of 10^2 ago instead of now...

      Right?
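      To make the back-of-the-envelope above concrete (a sketch only: the 100-rows-per-page figure is the parent's hypothetical, not anything Google has published):

```python
# Hypothetical keyword-index sizing: rows per page multiplies the row count
pages = 4_285_199_774        # pages Google reports indexing
rows_per_page = 100          # made-up figure from the parent comment
keyword_rows = pages * rows_per_page

print(keyword_rows)          # ~4.3e11 rows, a factor of 10^2 beyond the page count
```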
    • Google has recently removed tens of thousands of "duplicate content" sites from its index - where "duplicate content" is as simple as being an affiliate site (e.g. Amazon) and having the same textual item descriptions as many other sites.

      Google is now in the process of dropping millions of link records from its index, presumably to make room for more pages.

      It's possible that the index is full, but I would imagine that they would have seen this coming long ago, as it "filled up", and taken measures. What

    • Why 4.285 billion? (Score:5, Interesting)

      by NotQuiteReal (608241) on Sunday April 25, 2004 @02:35PM (#8966681) Journal
      Just because the front page says "©2004 Google - Searching 4,285,199,774 web pages " doesn't mean you have to believe them. Maybe it is understated. For example, I just did a search on "the" and got:

      Results 1 - 10 of about 5,750,000,000 for the [definition]. (0.11 seconds)

      Doesn't that imply more than 4.285 billion?

  • by Talez (468021) on Sunday April 25, 2004 @08:23AM (#8964450)
    PigeonRank [google.com]! Duhhhhhh
  • Here (Score:5, Insightful)

    by mfh (56) on Sunday April 25, 2004 @08:24AM (#8964457) Homepage Journal
    > If truth is the first casualty of war, openness is the first casualty of going public.

    Maybe this is the reason after all, but I think it's more about Google being simple, smart and clean. They play fair (no browser interstitials, no sneaky crap, no registration necessary... etc); I would equate Google's victory thus far to a kind of no-nonsense attitude to business, always, no exceptions.
    • Re:Here (Score:5, Insightful)

      by evilviper (135110) on Sunday April 25, 2004 @09:02AM (#8964615) Journal
      They play fair (no browser interstitials, no sneaky crap, no registration necessary...etc)

      And the fact that there are so many articles, from people that just can't understand why google is successful, just goes to show you how screwed we all are...

      Practically everyone in business is determined to be as evil as possible towards their customers (and employees), and assumes that anybody doing anything else must be doing something wrong, no matter what all other indicators may say.

      For a great example, read The Wal-Mart Myth [guerrillanews.com].
      • Excellent article on Wal-Mart!

        I went to tompaine.com, which had originally published it, and found more articles by the same author. Back to Basics [tompaine.com] was a very thoughtful look at the outsourcing debate.
      • BTW, I was quite surprised to hear the part about Costco's success with its fairer treatment of employees. I much prefer shopping at Costco to Sam's Club. (At the very least, the free samples of various foods they hand out make shopping less monotonous.) Coincidence? I think not.

        Do yourself a favor, slashdot, and get a membership. It's worth it.
  • by Anonymous Coward
    Sure would be nice to see some of that amazing tech coming back into the community...
    • As far as I can tell there is no better way for that hardware to have come "back into the community."

      The service is free, and they're really good at what they do. I would say I'd be lost without google on the internet, but really this compliment goes for lots of search engines - I'm really very grateful this sort of service still exists for free (well, with ads.)

      Unless you want to talk about cures for diseases through protein folding simulations, I can't think of a better way for this hardware to be u
    • Maybe they do not give back in terms of technology, but they do give back to the community! I can think of at least two ways.
      1) They provide an alternative to Microsoft. Not only search; it looks like they will deal a blow to Hotmail as well. They prevent MSN from becoming the portal. I think this is very important: people see things can be done better than the Microsoft way, and it can be done with Linux ;-)
      2) They make the communication within Open Source and Free Software community much easier. I keep a
    • Anyone else notice it's usually an Anonymous Coward who selfishly demands access to that which they do not deserve access to?
  • by krs-one (470715) <vic@ope3.1415926nglforums.com minus pi> on Sunday April 25, 2004 @08:28AM (#8964466) Homepage Journal
    I read the article and it didn't say much at all about how Google operated. Instead, it just said we don't know how they operate because they keep it secret. But maybe that was the point to begin with.

    -Vic
  • by WhitePanther5000 (766529) on Sunday April 25, 2004 @08:28AM (#8964469)
    The only thing it's missing now (IMO) is spellcheck and an online translator, which I'm sure they're already planning. I'm also looking forward to Gmail being open to the public. After they conquer these 3 things, what's next... Google ISP? Google National Army?
  • As a consultant (Score:5, Informative)

    by elinenbe (25195) on Sunday April 25, 2004 @08:29AM (#8964472)
    having been a consultant at their data center a year or so back, I can attest that they had well over 50,000 machines. I am not sure about the 80GB drive per machine because, from what I understood, they bought whatever drive had the best MB/$ at the time and would replace any dead ones with larger ones. Also, at any given time machines just die, and many of them are not replaced or repaired for months. Their cluster accounts for all this...
  • Two Thingies (Score:5, Interesting)

    by BoldAC (735721) on Sunday April 25, 2004 @08:33AM (#8964487)
    One -- Slashdot seems to be into content-directed ads now... as google was my ad for this story.

    Two -- If you want your pages indexed faster and more frequently, sign up and place a Google AdSense ad on your page. Many webmasters believe that Google is having to index so many AdSense pages that it is difficult for Google to add many more non-ad-driven pages.

    Just sign up for adsense and run it a couple of weeks while you build your site. After google has spidered your site well, then just drop adsense.

    Good luck. I would love to hear any of your google-related tricks.

    AC
    • Ads? OHHH, like banners. I haven't seen those for months, since I got the Adblock extension for Firefox. It will even block iframes, so the page just looks like there never was an ad there.

      Try it out [texturizer.net]
  • 'nuff said.
    (You may wish to take issue with the above..)
  • Perhaps this is another form of secrecy - the number of pages indexed never seems to go up, except in huge jumps. According to archive.org, it's been stuck on 4,285,199,774 pages for about a year now :/
    • Perhaps this is another form of secrecy - the number of pages indexed never seems to go up, except in huge jumps. According to archive.org, it's been stuck on 4,285,199,774 pages for about a year now :/

      Anyone notice 4,285,199,774 just so happens to be ~99.8% of 2^32? Is this a 32bit counter about to overflow?
    • by Anonymous Coward
      Searching for 'the' [google.com] gives about 5,740,000,000 pages while they index 'only' 4,285,199,774 web pages... Anyone know why?
  • by gevmage (213603) on Sunday April 25, 2004 @08:50AM (#8964562) Homepage
    It's quite possible the reason that they keep their mouth shut about their capabilities is to keep the NSA (or someone like them) from coming calling. After all, they basically have a distributed database of the entire net, which they index efficiently on a continuous basis. Who wants to bet that their system is better at gathering intelligence than any government agency in the world?

    On the other hand, here's the conspiracy theory version: what if Google IS the NSA? The IPO is a smokescreen to try to avert attention. The reason they can't show their true capability is that when the company goes public, only 20% of their hardware will actually go into the public company "Google", the rest of the hardware will still be hidden and a part of the NSA's system. :-)

    [For the humor impaired, I'm just joking, but it does make you wonder...]

    • One word. (Score:4, Informative)

      by Viceice (462967) on Sunday April 25, 2004 @10:34AM (#8965059)
      robots.txt

      The Google bot respects it, so if you're up to no good, it's easy to get Google not to index your page.

      Anyway, I'd like to see a version of Google that didn't respect robots.txt. You used to be able to dig up a lot of information on people on Google before they started using robots.txt on a lot of sites.
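      For reference, the robots.txt rules a well-behaved crawler honors can be checked with Python's standard urllib.robotparser (the rules here are a made-up example, not Google's):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.modified()  # mark the rules as loaded, so can_fetch() trusts the parsed data
rp.parse([
    "User-agent: Googlebot",
    "Disallow: /private/",
])

print(rp.can_fetch("Googlebot", "/private/secret.html"))  # False: disallowed path
print(rp.can_fetch("Googlebot", "/index.html"))           # True: no rule matches
```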
  • by gtoomey (528943) on Sunday April 25, 2004 @08:55AM (#8964582)
    The software/hardware architecture seems impressive.

    Putting on my computer scientist hat I would guess:
    - instead of backup, hold data in multiple places at once
    - use a "cascaded rsync" to trickle software changes to thousands of nodes
    - then load software via NFS at node bootup
    - use nodes just to store data; keep software in RAM for speed

    Just a few thoughts.
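    The "cascaded rsync" guess above amounts to a fan-out tree: each already-updated node pushes the change to a few peers, so an update reaches N nodes in O(log N) rounds. A rough Python model of just that arithmetic (no real rsync involved, and the node/fanout figures are illustrative):

```python
def rounds_to_reach(nodes: int, fanout: int) -> int:
    """Rounds needed if every up-to-date node pushes to `fanout` peers per round."""
    reached, rounds = 1, 0          # start with one freshly updated master
    while reached < nodes:
        reached += reached * fanout  # each updated node updates `fanout` more
        rounds += 1
    return rounds

print(rounds_to_reach(50_000, 3))   # 8 rounds to cover 50k nodes at fanout 3
```

    That logarithmic growth is why a trickle scheme like this is plausible at cluster scale: doubling the cluster adds only a round or so.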

    • instead of backup, hold data in multiple places at once
      Even better, instead of backup just crawl the pages again in the event of a lost disk. Of course some data needs to be in multiple places for performance reasons, but not all data are accessed frequently. How often do you think they will need the page with the lowest rank? (OK, I know there will probably be a lot with exactly the same rank, but you get the idea).

      load software via NFS at node bootup
      There are better protocols for this than NFS. But
    • Still a heckuva distributed database. After all, I very much doubt each node holds a complete copy of the database: with 2x80GB hard disks and 4 billion pages that would give only 40 bytes per record, with no space for operating system or software.

      That'll be a no, then.
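      The parent's arithmetic, spelled out (decimal gigabytes assumed, and the round figures are the ones quoted in the thread):

```python
# Two 80 GB drives per node vs. ~4 billion indexed pages
disk_bytes = 2 * 80 * 10**9
pages = 4 * 10**9

print(disk_bytes / pages)   # 40.0 bytes per page: nowhere near a full copy per node
```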
  • by zogger (617870) on Sunday April 25, 2004 @08:59AM (#8964598) Homepage Journal
    GIMMEE would be nice. Well, nice for a while, and if they didn't get weird with it. Don't know if that could happen though; nature of man and all that philosophical stuff. Goes along with the current VoIP articles. They would dominate the net if they implemented that. I know I would pay them cash to have a universal, works-great, any-OS VoIP and a no-spam, no-commercial email service.

    So far we know they have just a cubic load of servers, most likely the most on the planet for one private company. The government probably has more, but it's a mishmash of them, not nearly as sleek or coordinated, AFAIK. What COULD be next for them? Practical, cheap 50-dollar thin clients that you could do a TON on, using distributed computing, from games to communication to running any business? With the tech savvy they've got, their already-established heavy hardware base, and a heavy commitment to R&D, they could just 'splode with an extra 25 billion in cash all of a sudden from an IPO. OR, the money could get to them and they become just another weird company that forgets its roots as "brains come first" and switches to "marketing crap comes first" like certain other unnamed megacorps do now.

    Interesting times
  • How Google do that? (Score:4, Informative)

    by elpecek (712453) on Sunday April 25, 2004 @09:00AM (#8964599)
    For those who haven't read - there is an article written by Brin and Page - maybe a little outdated, but still interesting: The Anatomy of a Large-Scale Hypertextual Web Search Engine [stanford.edu]
  • Supplmental Result (Score:4, Interesting)

    by Richard5mith (209559) on Sunday April 25, 2004 @09:03AM (#8964622) Homepage
    There is plenty of evidence to suggest that Google has run out of docids, hitting the 32-bit integer limit.

    The best evidence is doing a search which returns results which say "Supplemental Result" next to them. That'll be coming from a second document store I'd guess.
    • by Webz (210489) on Sunday April 25, 2004 @09:30AM (#8964745)
      That doesn't make any sense. A well-designed system is a transparent one, so Google would have no reason to let you know that they're running out of IDs.

      By the way, for supplemental result... By doing a quick keyword search on Google using my domain name, I'm led to believe that pages marked "Supplemental Result" are pages that look like search results. That is, they aren't filled with any real content, other than search results from other engines. Results that could "supplement" your "result" from Google.
    • Supplemental results do come from a second store, yes:

      Hey, pages get added to the supplemental index using automatic algorithms. You can imagine a lot of useful criteria, including that we saw a URL during the main crawl but didn't have a chance to crawl it when we first saw it.

      Think of this as icing on the cake. If there's an obscure search, we're willing to do extra work with this new experimental feature to turn up more results. The net outcome is more search results for people doing power sear

  • much more frequent in Linux than in proprietary systems from Microsoft or Sun

    Huh? Does it!? Since when? I like these throw-away lines the media people dish out. What is their basis for this statement? Even when they see Linux obviously succeeding, they dish out a statement like this.

    I certainly don't have to patch my Linux boxes as frequently as my Windows boxes. Actually... no... wait, they're right! I only need to patch Windows once. Ctrl-Alt-Del -> Boot Debian CD.

    • Not sure if it needs more patching, but at least OSS patches come out in a timely manner after discovery, whereas MS patches sometimes take ages to materialize. Thus, more patches don't necessarily mean more security holes - just better housekeeping.

      Baumi
  • by MarkWatson (189759) on Sunday April 25, 2004 @09:39AM (#8964784) Homepage
    Here is a PDF file of the paper [rochester.edu].


    If that link gets slashdotted, here is another link, a PDF of a PowerPoint presentation [brandeis.edu].


    Good read! This paper (with the discussion of the goodness/fastness of file appends) made me more interested in Prevalence [advogato.org] - so much so that I am using it for my new project.

    -Mark

  • The meat of the article is just the observation that the numbers Google puts out (for # of servers, # of hits, etc.) are inconsistent. The only conclusion it comes to is that Google has more 'horsepower' than it's letting on.
  • I think this is the wrong question investors need to be asking about Google before they IPO. Sure, it makes for some great geek gab; the fetishistic wonderment of just how many servers Google is running, how many hits they get and how exactly they manage to, well, manage that many servers. In the end though, answering those questions doesn't tell us anything about what Google is actually selling.

    The more I look at it, the more I fear Google is just nothing more than a very well calculated
    • by laura20 (21566) on Sunday April 25, 2004 @12:29PM (#8965785) Homepage
      The problem is, I've never paid these people a single penny for ANY of this. How the hell are they going to make money?

      Um, you do realize that Google already makes a profit [businessweek.com], don't you? I daresay the IPO will puff the value of the company up beyond the rational amount, but that's not 'Enron' -- if you are going to use buzzwords, use the right ones. Enron was a case of internal actors in the company using financial games to siphon off profits and inflate the value of the company on the books. You accusing Google of financial fraud? If you are going to use a buzzword, use 'Yahoo' or something -- a solid company that got its stock price puffed up excessively due to investor mania.

      How the hell did this get moderated up, except as 'Funny'?
    • by _Sprocket_ (42527) on Sunday April 25, 2004 @04:25PM (#8967486)


      The problem is, I've never paid these people a single penny for ANY of this. How the hell are they going to make money?


      1) Google has an effective advertisement system

      2) My last two employers bought Google boxes for their intranet
    • by Lord_Dweomer (648696) on Sunday April 25, 2004 @08:57PM (#8969053) Homepage
      "Sure, we can say that Google has integrated advertising within the search results, but the advertising model has always proven to be of dubious effectiveness at best."

      Correction, the ad model has proven to be of dubious effectiveness with companies that have no credibility.

      Google is perhaps the most trusted company on the net today, and with the traffic they get, I'm not surprised at all that they can support all their financial needs with ad revenue, especially with some of the big bucks that large companies dump into advertising with Google. I challenge you to show evidence that their advertising business model cannot support their costs, because so far you've done nothing but toss up tin-foil-hat ideas without any proof to back them up, and as someone else so kindly pointed out to you, Google is ALREADY in the black.

  • Underpants gnomes.
  • by lunar_legacy (715938) on Sunday April 25, 2004 @09:46AM (#8964818)
    Another wonderful speculation about Google's infrastructure, which you can find here [topix.net].
  • by penginkun (585807)
    I mean, how else could they do it?
  • The Observer, serves up the answers to our questions.

    The article never answered any of our questions. Heck, I even looked for a "Page 2" link after reading the entire thing; sadly, the article ended without even attempting to answer its own questions.

  • by tmalsburg (556195) on Sunday April 25, 2004 @10:51AM (#8965167)
    For example, how do you implement security patches and operating-system upgrades (much more frequent in Linux than in proprietary systems from Microsoft or Sun)

    Come on, the nodes in their clusters are not desktop computers with office software on it.

    The systems running these machines are rather stripped down: they only need very few applications and a very simple kernel (not many device drivers, maybe no graphics card driver, ...).

    Furthermore, there are no local users on the machines, so many security flaws won't affect their integrity. And remote holes in the kernel don't occur very often.

    And above all, these cluster nodes are certainly shielded by some sort of firewall, so they don't have to handle network security themselves.

    All in all: I believe that you need to update such machines rather infrequently. At least not for security reasons.

    Titus

    • Also, it's fairly simple and quick to get one update done and working on a test machine, and then do every other one with an image from the master server. Quick and simple: the testing is done in advance and the actual update takes very little time. Google's speed would be slowed very little if two or three servers were out of the loop for a few minutes at a time. Kind of like how most places test a program before releasing it into their production system.
  • by mabu (178417) on Sunday April 25, 2004 @12:22PM (#8965726)
    I can understand how in some cases an IPO can help generate revenue necessary to operate and break into new markets, but does this apply to Google? I really don't think so. They have market share; they have resources. Any infusion of funds to the company is more likely to give them the ability to further diversify and enter different markets, which history has shown is more often than not, a bad business idea.

    So one has to assume the IPO is the first phase of the principals "cashing out". The press will probably signal this as a sign of the next dot com boom, and a bunch of nerds within the company will suddenly become millionaires, and subsequently quit their job and open up a Bed & Breakfast in some obscure town or join the World Poker Tour. There goes the talent.
  • by alien_tracking_devic (579685) on Sunday April 25, 2004 @03:26PM (#8967042)
    from the article:

    "Google manages to achieve this with sophisticated techniques for rippling changes through the cluster, yet achieves 100 per cent uptime. This is serious stuff, and there are a lot of IT managers out there who would give their eye-teeth to be able to do it half as well."

    Sigh...as an IT manager I can only dream of 50% uptime. Damn you, Google!
