
Google Books As "Train Wreck" For Scholars 160

Posted by kdawson
from the mishmash-wrapped-in-a-muddle dept.
Following up on our earlier discussion, here's more detail on Geoffrey Nunberg's argument that Google Books could prove detrimental to academics and other scholars. Recently Nunberg gave a talk at a conference claiming that the metadata in Google Books is riddled with errors and is classified in a scheme unfit for scholarly use. His blog post was fleshed out somewhat a few days later in the Chronicle of Higher Education. Quoting from the latter: "Start with publication dates. To take Google's word for it, 1899 was a literary annus mirabilis, which saw the publication of Raymond Chandler's Killer in the Rain, The Portable Dorothy Parker, [and] Stephen King's Christine... A search on 'internet' in books written before 1950 turns up 527 hits. ... [Google blames some errors on the originating libraries.] ...the libraries can't be responsible for books mislabeled as Health and Fitness and Antiques and Collectibles, for the simple reason that those categories are drawn from the Book Industry Standards and Communications codes, which are used by the publishers to tell booksellers where to put books on the shelves. ... In short, Google has taken a group of the world's great research collections and returned them in the form of a suburban-mall bookstore." The head of metadata for Google Books, Jon Orwant, has responded in detail to Nunberg's complaints in a comment on the original blog post, and says his team has already fixed the errors that Nunberg so helpfully pointed out.
This discussion has been archived. No new comments can be posted.

Comments Filter:
  • by Nefarious Wheel (628136) on Monday September 07, 2009 @07:54PM (#29345161) Journal
    Who needs metadata any more ...when you have Search? Pick your own keywords.
    • How? If you don't like it just ignore it.
    • by timeOday (582209) on Monday September 07, 2009 @08:03PM (#29345215)
      From reading the article, it is mostly a problem for people who are essentially studying trends in the metadata itself, such as the emergence of some particular word over time. As for the "oddball" categorizations, I agree: why would anybody browse the "technology" section of a collection with millions of titles?

      The odd thing about complaining about this is, what are they comparing to? A hypothetical perfect online database that doesn't exist anyways? The article says google got it wrong in some cases where, e.g. the Harvard Library got it right. OK, that's an issue for all of us deciding whether to search on our nearest computer, or at the Harvard library.

      To me, google's project was a long time coming - somebody had to scan the world's back catalog. Maybe it would be better if governments had done it, but (and this is the point) they didn't. Google is.

      • by Artraze (600366) on Monday September 07, 2009 @08:25PM (#29345367)

        > The odd thing about complaining about this is, what are they comparing
        > to? A hypothetical perfect online database that doesn't exist anyways?

        That's exactly why this article is little more than some long-winded trolling. So the metadata is wrong... As long as the books themselves are perfectly fine (which they seem to be), you can always check the metadata yourself. I should think that as far as Google is concerned (and 99+% of its users), the metadata isn't nearly as important as the data itself. Once the data is collected you can always fix the rest.

        Expect a new "tagging game" in the next year or two to manually correct these errors.

        • I wouldn't say the whole article is a troll (the "omg Google book monopoly" stuff sure). It did bring to light some errors and even got them fixed, that's worth something.
        • As long as the books themselves are perfectly fine (which they seem to be),

          Well, some are really good and well scanned, but others are a mess. From some organizations that do the scanning, you get missing pages and mangled pages. You get pages where the person doing the scanning sometimes put their hand between the page and the glass, so you can read the rings on their fingers but not the text on the page. (Books scanned at NY Public Library for example.) If ever there is a fold-out, you get at max half o

      • "somebody had to scan the world's back catalog."

        Interestingly, Vannevar Bush proposed doing this in 1945 [wikipedia.org]. Shame it's taken so long to come to fruition.

      • by martin-boundary (547041) on Monday September 07, 2009 @09:31PM (#29345839)

        The odd thing about complaining about this is, what are they comparing to?

        How about good old fashioned legwork? It *is* possible to make sure that the metadata is consistent with the facts, but that involves doing actual research and verification such as academics have been doing for hundreds of years.

        To me, google's project was a long time coming - somebody had to scan the world's back catalog.

        Then you have very low standards indeed. There's absolutely no reason why a single entity had to / has to scan all the world's back catalog on their own as fast as they can. It's pure commercial greed, and leads to the garbage we have on the net today.

        What is needed is an open standard for scanned works, with minimum resolution, minimum quality, and minimum verified metadata such as subject, author, publisher, year etc. All those are trivially listed on the title page of every book. All one has to do is open the damn book and flip a few pages, but that appears to be too hard for some people.
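
        As a sketch only (the field names and the 300 dpi threshold below are my own illustrative choices, not any real standard), the minimum-metadata idea proposed above might look like a small validation schema:

```python
from dataclasses import dataclass

# Illustrative minimums; an actual standard would choose these deliberately.
MIN_DPI = 300
REQUIRED_FIELDS = ("title", "author", "publisher", "year", "subject")

@dataclass
class ScanRecord:
    title: str
    author: str
    publisher: str
    year: int
    subject: str
    resolution_dpi: int

    def problems(self):
        """List the ways this record falls short of the minimums."""
        issues = []
        for field in REQUIRED_FIELDS:
            if not getattr(self, field):
                issues.append(f"missing {field}")
        if self.resolution_dpi < MIN_DPI:
            issues.append(f"resolution {self.resolution_dpi} dpi < {MIN_DPI} dpi")
        return issues

# Real title/author/year; the publisher field and scan resolution are made up.
record = ScanRecord("De Revolutionibus", "Copernicus", "Example Press",
                    1543, "Astronomy", 200)
print(record.problems())  # -> ['resolution 200 dpi < 300 dpi']
```

        A crowdsourced effort could then accept a scan only when `problems()` comes back empty.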

        This is a long term project for humanity. There's absolutely no point in having crappy scans with garbage metadata available quickly today, when it could be available correctly with good quality in say five years. It's also a perfect case for crowdsourcing, with some real standards to ensure quality.

        The current dreck that's online only causes duplication and waste. Take a look someday at archive.org (for example), and see how many copies of the same book are available, if it's a popular book. You'll typically find 5-10 scanned versions, by Google, Microsoft, and various local library projects, in black and white or colour, none of which is truly good quality: broken characters, pages with dark margins, missing pages, typos or incorrect titles, wrong authors, etc.

        Why did they bother?

        • You ask "Why did they bother?" as though their archive is of no use whatsoever in its current state and we should all just wait for the completion of the "long term project" (that nobody, to my knowledge and as you define it, is working on). On the contrary, Google's project is extremely useful if you are interested in something other than the borked metadata like, I don't know, the words actually written in the books.

          • No, I ask why did they bother to not do it right?(*) I think that question is appropriate whenever the work will have to be substantially reprocessed or even scanned again by someone else in the future. That criticism applies to a whole lot of books published before the 1950s.

            The fact that for any one book, you can read and quote those parts that are scanned well, and you can search those words that don't happen to be OCR'd incorrectly, and you can research for yourself what the title and author and year

            • Well, it sounds like you're just saying that it's a shame that they weren't able to do a better job. And surely that's true. So, if that is indeed what you're saying, we have no disagreement.

              It's just that it sounded to me like you were saying "Why do something if you can't do it perfectly?" and that seemed to me like an obvious mistake. I'm glad to hear, then, that I misunderstood.

            • Are you aware that OCR is not perfect? You ask why Google did not do it right as if they had chosen a cheaper / faster option. The software that they used has an error rate of one in a million characters. This is the best that is currently available. The problem is that with the sheer amount of text Google has scanned, they expect about a million errors.
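
              Worked out, the arithmetic behind that claim (one error per million characters, and an assumed corpus on the order of a trillion scanned characters; the corpus size is my assumption, not a figure from the comment):

```python
chars_scanned = 1_000_000_000_000   # assumption: ~a trillion characters scanned
error_rate = 1 / 1_000_000          # the comment's figure: one error per million chars
expected_errors = round(chars_scanned * error_rate)
print(expected_errors)  # 1000000 -- "about a million errors"
```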

              You seem quite insistent that they've messed up somewhere. How would you have done it better?

        • Worse is better. I would rather have a barely-legible scan of a book right now than a perfect copy in five years when my research is already old. There's a time value to the availability of data. I would like to think that the standards you speak of could be achieved, but all the evidence we have shows us it's the opposite. How many web sites comply to standards? How many well-ripped MP3s have you downloaded? Heck, how many well-written books (complying with all the language and grammar standards) are
        • Re: (Score:2, Interesting)

          by Anonymous Coward

          Why did they bother?

          1. I call absolute BS on the poor scanning quality. I have looked at 50+ books on Google Books, and not once noticed a problem with the scanning. Certainly a hell of a lot better than *I* would have done.

          2. The cost and time and legal battles required to do the scanning pretty much make it impossible unless a private corporation is leading the charge. What good does it do to try to rely on random-ass people to scan every book in existence, and every book as it comes into existence as fas

          • Actually I'd say that one of two things should happen... Google is allowed to do this, but they have to hand over all the end-result data to the US government for its free use by any other individual/organization in the US after a 2-3 year exclusive embargo; or the US government should fund doing this and again allow anybody in the US to use the results.

        • Re: (Score:3, Informative)

          I worked for the Harvard Law School Library and saw such a work in progress for the documents used in the Nazi war crimes tribunal at Nuremberg. The process of putting this together was extraordinarily expensive, and even with the HLSL donating the server, traffic, labor to maintain the back end code (which it still does), etc., the project ran out of funding 13,904 scans in and is currently seeking funding.

          Although the metadata surrounding the scans of these books would not have to be nearly as detailed, it's

        • by introspekt.i (1233118) on Tuesday September 08, 2009 @02:37AM (#29347729)
          You act like the technology and processes used to generate this catalog are going to remain deficient indefinitely. You ignore the fact that consumer demand for better (metadata|accuracy|whathaveyou) will drive improvements in the technology. In the meantime, we get access to the early iterations of the technology and the benefits it can provide today.

          What is needed is an open standard for scanned works, with minimum resolution, minimum quality, and minimum verified metadata such as subject, author, publisher, year etc.

          Necessity is the mother of invention. Wait for one to pop up, or go make one up. Nobody's stopping you.

          All those are trivially listed on the title page of every book. All one has to do is open the damn book and flip a few pages, but that appears to be too hard for some people.

          Opening the covers of every possible resource you use is quite easy when you have a discrete, present set of resources to thumb through. What if your resources aren't present, are high in number, or (lo!) are undefined...because you don't even know what exactly it is you're looking for?

          This is a long term project for humanity. There's absolutely no point in having crappy scans with garbage metadata available quickly today, when it could be available correctly with good quality in say five years.

          I think you're absolutely wrong. It's naive to assume we can just have an instant rubber-meets-the-road system available in x years without rigorous testing and input on the part of users. No point? Hah! This is absolutely the best way to go about things! Let the system work itself out with angry users pushing technicians to improve archives to have the best working system in the end. The Google system is hardly "done" and it's only going to get better with time.

          The current dreck that's online only causes duplication and waste. Take a look someday at archive.org (for example), and see how many copies of the same book are available, if it's a popular book.

          God forbid we have multiple copies of popular books in different archives.

          black and white or colour none of which is truly good quality: broken characters, pages with dark margins, missing pages, typos or incorrect titles, wrong authors etc.

          Quality is relative. Why prohibit use because we lack perfection?

          Why did they bother?

          Why did you bother? Why did I bother? Why does anybody bother? Probably because we all feel like it.

        • by dbcad7 (771464)
          Well, it would seem that if it is a matter of scan quality, then it should be somewhat easy from here to throw some computing power into OCR and clean things up.. It would take up a lot less space as well, I imagine. Of course pictures, and illustrations are going to be difficult and probably never to anyones satisfaction... Perhaps, just perhaps, having done the first step of capturing the scanned data makes the 5 year job of converting them to the way you want, possible.
        • There is no reason for you to post this comment here when you could have put together a properly formed and documented essay in a couple of months. There was no reason for Newton to come up with his theory of gravity when in a few centuries Einstein would come up with a more complete theory.

          This is a long term project for humanity. We damn well better start now rather than waiting to do it right. Bad data can be cross-compared and corrected. Data which has not been digitized at all is completely useless.

        • by hxnwix (652290)

          All those are trivially listed on the title page of every book. All one has to do is open the damn book and flip a few pages, but that appears to be too hard for some people.

          Exactly! Just flip a few pages in the scanned book, and...

          Or is that too hard for you?

          Why did they bother?

          Why not? Because the results are not perfect? Jesus, man... Point out errors as you find them to Google or Microsoft if you truly want them fixed.

        • There's absolutely no reason why a single entity had to / has to scan all the world's back catalog on their own as fast as they can.

          First, you make a good point. The danger of Google doing this is that once they have done it (no matter how poorly), if it is comprehensive, it significantly reduces the incentive for another organization to do it. This is compounded by the agreement that Google reached with the Authors' Guild, which makes it legally problematic for another organization to do it.
          It doesn't mean that Google should not have done it, but it does mean that it is important for people to point out the shortcomings of Google's effort. By loudly complaining about the shortcomings of what Google has done here, the author(s) push Google to fix the problem and/or make it easier for someone to gain the funding to create an online collection that addresses their concerns.

        • Re: (Score:3, Interesting)

          by natehoy (1608657)

          Given a project of this magnitude, there are inevitably going to be bad scans, and bad data, and other issues.

          And, just as inevitably, the problem areas are going to be updated and replaced with good ones when they become available.

          "There's no point in having crappy scans with garbage metadata today" would be indisputably true if every book out there was a crappy scan with garbage metadata. Instead, what we have a starting point with some good scans and some bad ones, but there's no point holding back the

          • by lennier (44736)

            "This isn't a NASA mission. If a book ends up being a crappy scan, it won't explode on re-entry killing its reader."

            That would probably make reading cool again though.

        • by AndersOSU (873247)

          Wow, contradictions.

          There's absolutely no reason why a single entity had to / has to scan all the world's back catalog on their own as fast as they can

          The reason one entity is doing this is the orphaned copyright problem. Google was sued and settled, and now seems to have the right to distribute these copyrighted works. Anyone could do the same, but they'd have to be prepared for the legal battle. Perhaps this is a question best settled through the legislative process, but that's not the way things stand t

        • by ajs (35943)

          How about good old fashioned legwork? It *is* possible to make sure that the metadata is consistent with the facts, but that involves doing actual research and verification such as academics have been doing for hundreds of years.

          Read the text from the last link in the Slashdot blurb. That's Google's response (and the original complaint's author's responses inline). In it, Google clearly lays out each of the errors cited (some as batches) and what sorts of errors they stem from. However, the really telling part is the numbers. They have over a trillion metadata records for hundreds of millions of books. In those trillions of records, they claim to have millions of errors. Think about that for a second....

          For a database that hasn't e

    • by Potor (658520) <{moc.liamg} {ta} {1rekraf}> on Monday September 07, 2009 @08:49PM (#29345547) Journal

      Exactly. And the whole argument totally ignores the fact that these books are now easily available.

      Shock horror: I am a liberal arts scholar. And Google Books has helped me incredibly in a project I am doing on an 18th-century scholar. I have original texts in various editions at my fingertips, wonderful reference books (including a dozen 18th and 19th century Latin grammars), and serious secondary literature. Not all of these are fully posted on Google Books, but now I know what books to check out of the library, or even buy.

      As an arts scholar, I love Google books.

    • Who needs metadata any more ...when you have Search? Pick your own keywords.

      This is missing the point. The metadata is being used by search engines for indexing, so when the metadata is incorrect, you'll get incorrect results filling up your keyword search.

      In a typical search (on Google or any other search engine), you input a few keywords. Those keywords tend to match a very large number of documents, so there needs to be a method of ranking them so that you see the most likely ones first. The rankin
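
      As a toy illustration of that point (the records and query function are hypothetical and far simpler than a real index; the bogus 1899 dates come from the article), a date-restricted query can only trust the catalogued date, so it silently inherits the metadata's errors:

```python
# Hypothetical records: (title, catalogued_year, true_year).
# The catalogued years for the two misdated novels are the ones the article cites.
records = [
    ("Killer in the Rain",    1899, 1964),
    ("The War of the Worlds", 1898, 1898),
    ("Christine",             1899, 1983),
]

def published_before(records, year):
    """Filter on the catalogued date, the only date the index knows."""
    return [title for title, catalogued, _ in records if catalogued < year]

print(published_before(records, 1950))
# All three titles come back, even though only one truly predates 1950.
```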

    • > ...when you have Search? Pick your own keywords.

      Unfortunately there are some major problems with searching, such as ``A OR B'' returning fewer results than when searching separately for A, B.

      http://www.gale.cengage.com/reference/peter/googlebooks.htm [cengage.com]
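
      For intuition: under strict boolean set semantics that anomaly is impossible, since the results for ``A OR B'' are the union of the two result sets, which is at least as large as either set alone. A toy sketch (the document ids are made up):

```python
docs_a = {1, 2, 3, 5}   # made-up ids of documents matching A
docs_b = {2, 4, 6}      # made-up ids of documents matching B

union = docs_a | docs_b  # what "A OR B" should return
assert len(union) >= max(len(docs_a), len(docs_b))
print(len(docs_a), len(docs_b), len(union))  # 4 3 6
```

      So when an engine reports fewer hits for ``A OR B'' than for A alone, the counts are estimates rather than true set sizes, or the operator isn't behaving as a boolean OR.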

  • Error free system? (Score:3, Informative)

    by Bacon Bits (926911) on Monday September 07, 2009 @08:01PM (#29345201)

    So, the argument is that the new system is bad because it may have errors or bad data?

    Were card catalogs immune to this? It's a database. It's only as good as what you put into it. A bad database is not useful. It just means someone needs to do it better. Honestly, if anything this seems like an argument that the database shouldn't be proprietary. It should be open to everyone so that someone can always make a better version of the metadata with the same base data.

    "It's a piece of shit" shouldn't be the same argument as "nobody should even try it". The Wright brothers didn't exactly start out with a 747 or an F-35.

    • Card catalogs (Score:5, Interesting)

      by dpbsmith (263124) on Monday September 07, 2009 @08:49PM (#29345533) Homepage

      Tangential, but "card catalogs." Ha! I once had a compelling need to look up an article in the Occasional Papers of the Bingham Oceanographic Collection. So I went to the card catalog.

      It wasn't under O. It wasn't under P. It wasn't under B. It wasn't under C.

      It was under N.

      Why? Because, naturally, as of course everybody knows, the Bingham Oceanographic Collection is part of the Peabody Museum. Which is part of Yale. Which (drum roll...)... ...is in New Haven.

      The great thing here is that you can't even say there was an error in the card catalog, unless filing something under a heading that is perfectly correct, but under which nobody would dream of looking for it, is considered an error.

      • Re:Card catalogs (Score:5, Informative)

        by Peter H.S. (38077) on Monday September 07, 2009 @11:03PM (#29346473) Homepage

        Well, organizing books by the city in which they were printed is among the oldest ways of cataloging printed books. The practice goes back to Gutenberg and the so-called "incunabula" period, when book dealers/printers/publishers (often the same persons) would make book catalogs out of a certain city. So if you needed a certain edition of a title, you would have to track it through such book catalogs, since the Leipzig edition would be different from the Mainz edition.

        It is of course sad that such once-common knowledge among scholars now seems forgotten. It's probably not a hindrance when working with modern sources, but it is still necessary to know when working with old stuff, just like knowing that words/names starting with J were filed under I, etc.
        Many academics still put the printing city in their sources, though many seem to have forgotten why they do so.

        You just happened to stumble into a book/journal catalog organized by a centuries-old and previously very well known method. The error wasn't in the card catalog or the way it was organized, but in the fact that no one ever told you about these ancient methods in your library course.

        --
        Regards

        • by dkf (304284)

          You just happened to stumble into a book /journal catalog organized by a centuries old and previously very well known method. The error wasn't in the card catalog or the way it was organized, but in that no one ever told you about these ancient methods in your library course.

          The real issue to note here is that one thing computers are much better at than what went before is maintaining indices of cataloged data and performing searching of it. Sure GIGO rules still, but it's now practical to be able to search for a work on any facet of its metadata or even on automatically extracted information from the work itself. That's massively beyond what libraries used to offer. (I've had occasion to use card catalogs, and the biggest problem with them is the restricted number of axes on w
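
          The "search on any facet" point can be sketched with a tiny inverted index (toy records; real systems are vastly more elaborate): one structure answers queries by author, printing city, or year, with no single filing order privileged in advance, which is exactly what a physical card catalog couldn't do.

```python
from collections import defaultdict

# Toy catalog records (famous titles, but real records hold far more fields).
catalog = [
    {"title": "Opera Omnia", "author": "Erasmus", "city": "Leiden", "year": 1703},
    {"title": "Biblia",      "author": None,      "city": "Mainz",  "year": 1455},
    {"title": "Principia",   "author": "Newton",  "city": "London", "year": 1687},
]

# Index every (facet, value) pair at once; any facet becomes searchable.
index = defaultdict(list)
for record in catalog:
    for facet, value in record.items():
        if facet != "title" and value is not None:
            index[(facet, value)].append(record["title"])

print(index[("city", "Mainz")])     # ['Biblia']
print(index[("author", "Newton")])  # ['Principia']
```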

          • by Peter H.S. (38077)

            I don't think the issue is that computers are much better than card catalogs; that fact is just a given. The issue here is that once-common knowledge is forgotten, so that when the OP used an old catalog system that baffled him, he thought that organizing journals by printing city was an error. But that system is centuries old and was so common that even today scholarly sources often include the book's printing city even though it doesn't make sense nowadays. Within a generation this more than 500-year-old system

          • Oh, there, I think, I disagree. I once read a book entitled "Indexing, The Art Of," about how book indexes are created, and it was an eye-opener.

            Conversely, there's nothing more useless than a completely computer-generated book index. You're looking for a topic that's discussed in three substantial sections and mentioned in passing fifty times, and the index lists fifty-three page numbers because the computer doesn't know which are the important ones.

            The same principle probably applies to card catalogs and

            • by emj (15659)
              Indexes are insanely expensive to create and maintain. I've spent 40 hours working with an index, trying to simplify it, and still wasn't done with it after that, but I had to stop somewhere. Three books were to be put together into a big 380-page collection, and the index entries got too big.
    • I think the argument that TFA is making is not merely "it's a piece of shit", but "it's a piece of shit, and a regression (in terms of metadata) because their method is designed to meet very different objectives". They may or may not be correct (I certainly suspect that scholarly use was not Google's #1 priority; when they hope to get to that and how much they are willing to spend to achieve it, I don't know), but it is a much more serious charge.

      The Wright brothers didn't start out with a 747 or an F-35; bu
      • Then I guess he is free to start his own collection aimed at scholarly use. Then he can be on the receiving end of criticisms that he didn't design his system for normal humans instead of academics.

      • Re: (Score:3, Interesting)

        by Bacon Bits (926911)

        The Wrights didn't start out building toy birds, true. They first tried to use the data from some Russian or European who had modeled wings after birds. They found that the lift his data predicted was so far off from what they observed in their gliders that they could no longer assume that the data hadn't just been made up. Then they went and built a small scale wind tunnel and designed small model wings which could be reformed and shaped and angled easily and a scale which could be used to measure lift

  • We are trying to correctly amalgamate information about all the books in the world. (Which numbered precisely 168,178,719 when we counted them last Friday.)
          - Jon Orwant (Google)

    why does that number seem incredibly low to me?

  • by Aurisor (932566)

    As someone who majored in English Literature in college, I can tell you that academics love getting their panties in a bunch over what is Scholarly Publication and what is not. Some teachers will actually have special assignments that have to be written entirely using Scholarly sources, or in response to a Scholarly article.

    Before the advent of the internet, I can see how it might have been useful to have an in-group comprised of people who had some sort of qualifications to write about something, but it s

    • by ahoehn (301327) <andrew@@@hoe...hn> on Monday September 07, 2009 @09:08PM (#29345677) Homepage

      Sorry if I sound bitter, but I spent a lot of time reading this crap, and very little of it was as insightful or interesting as even my classmates' comments.

      That sounds like more of a you problem than an academia problem. If you don't enjoy using a work's minutiae to accuse perfectly innocent authors of misogyny, innuendo, (to add a couple you forgot) blatant colonialism or latent homosexuality, what the fuck were you doing in an English Lit program? The rest of us live for that shit.

      As someone who should not have majored in English Literature in college

      There. I fixed it for you.

      • by moosesocks (264553) on Monday September 07, 2009 @10:12PM (#29346061) Homepage

        Actually, the GP's got a good point. Back in college, I took a number of humanities courses whenever I could squeeze them into my schedule.

        I can say from firsthand experience that there are a lot of "scholarly" articles that are complete and total crap. When writing papers, I'd frequently peruse JStor [jstor.org] for pertinent articles about my topic, keeping an eye out for particularly good articles, as well as the heinously bad ones. Picking apart and systematically disproving a bad paper published in a "good" journal was an easy ticket to an 'A' on the paper.

        These papers, of course, were certainly the exception. Most scholarly papers I encounter are humbling in their brilliance. However, I've seen more than a few bad journal articles, as well as quite a few blog entries that would be worthy of scholarly publication. It's hard to make any generalizations about the validity of certain sources of information.

        Unfortunately, Physics wasn't quite as easy to bullshit (Random aside: The physical sciences certainly have their fair share of bad journal articles, especially in light of the fact that printed media is a terrible means by which to communicate scientific results. It's a cruel irony that the www was invented to enable collaboration and information exchange between scientists, but is rarely (if ever) used for that purpose. Also, any use of the word 'trivial,' or its synonyms needs to be punishable by death.)

        PS. Don't judge our writing abilities based upon our slashdot comments. I'm sure the GP had his own reasons for majoring in English, even though literary discourse is often trite and contrived.

      • by Aurisor (932566)

        If you don't enjoy using a work's minutiae to accuse perfectly innocent authors of misogyny, innuendo, (to add a couple you forgot) blatant colonialism or latent homosexuality, what the fuck were you doing in an English Lit program?

        Umm...racking up easy A's for Law School?

        • by ahoehn (301327)

          Umm...racking up easy A's for Law School?

          Touché, good sir. I would have also accepted, "picking the major with the greatest percentage of sexually curious coeds" and "picking a major where facts are far less important than the way in which they are presented."

    • by Petrushka (815171)

      I can tell you that academics love getting their panties in a bunch over what is Scholarly Publication and what is not. Some teachers will actually have special assignments that have to be written entirely using Scholarly sources, or in response to a Scholarly article.

      There are two separate things going on there, and you've mixed them up slightly.

      First: in the kind of scenario you raise, "scholarly publication" acts as a mechanism for filtering information. There's a lot of information in the world; stuff that appears in "scholarly publications" should, if that criterion is well-designed, have a better average quality. As filtering mechanisms go it's imperfect: sometimes stuff that has passed peer review is still fishy, and sometimes good stuff gets excluded, as you your

    • by Kirijini (214824)

      ...academics love getting their panties in a bunch over...

      It doesn't matter how you end that statement. It's true. But, that's their job - academics overthink and overanalyze everything they can.

      ...over what is Scholarly Publication and what is not.

      There's a very good reason for that. Scholarship involves putting your reputation on the line. "Scholarly" works are those in which the author says: "This is a contribution to human knowledge and understanding of the world around us." In contrast, popular literature is produced for a very different reason - to make money, or because the author is passionate about the

      • by arethuza (737069)
        "This is a contribution to human knowledge and understanding of the world around us."

        More like "I need to publish stuff to get promoted to get more status & money" - and yes, I have worked in academia and played that game until I was thoroughly sick of it and left to found a tech company.

  • Anonymous Coward (Score:5, Interesting)

    by Anonymous Coward on Monday September 07, 2009 @08:28PM (#29345381)

    Google has scanned many volumes of the Laws of Indiana, which go back to 1816. These are the session laws of the Indiana General Assembly and have never been copyrighted. However, Google has arbitrarily decided not to make most post-1922 volumes it has digitized, and even some pre-1922 volumes (e.g. 1877, 1893, 1895, 1909, 1917 and 1918), available, using the claim of copyright.

    Google has done all the decision-making here. Anyone who might object to the classification of one of these volumes as copyrighted and thus available in "snippet-view only" presumably would have the burden of proving the contrary. (And where would you even start? Who would you contact? I have seen nothing on this.)

    Once (or if) the settlement is approved early this fall, Google's "rights" attach to these volumes. If I understand correctly, at that point any individual who wishes to access one of these volumes of Indiana's session laws not already in "full view" will have to pay for it, and for the money will obtain only individual rights, NOT the right to make it freely available to others.

    Broader implications: Finally, this analysis has been limited to volumes of Indiana session laws, but surely similar situations exist more broadly.

    For more on this, see this Aug. 2, 2009 Indiana Law Blog entry: http://indianalawblog.com/archives/2009/08/courts_my_probl.html

    • Google is dealing with a ton of books as fast as they can. There's no doubt that not everything is perfect, but the books are scanned and available. With time things will improve, but as of now, they are simply in the "scan things and get them out there" mode, not the "make everything perfect" mode.
      • Google is dealing with a ton of books as fast as they can.

        And that may be precisely the problem. "There's never time to do it right, but there's always time to do it over."

  • by Anonymous Coward on Monday September 07, 2009 @08:30PM (#29345403)

    And this is no exception. Before Google Books you had access to books from various libraries, books you owned, books you could borrow from friends (*shock* *gasp* copyright infringement), books you could buy and books from non-Google online sources. Now you have access to all of those and additionally Google Books. Even if Google Books is 99% "piece of shit" (which in my experience is simply not true, but nevertheless) you still have the 1% potentially useful material available that wasn't available before, so you win.

    • What about signal-to-noise? If I have a nicely organised library and you donate a truck full of books, many of which are filled with drawings by your toddler, it may not be worth my time to sift through them to find the gems. It would be a very bad idea to add them to my library without going through them, because I would be increasing my odds of getting a bum book, even though the number of good books has gone up.
    • Re: (Score:3, Insightful)

      by julesh (229690)

      The problem is that the existence of Google Books makes it harder for others working on similar systems (and there are others, this isn't just a pipe dream) to become established. A Google Books court-approved class-action copyright settlement would make it harder for somebody else to reach a similar agreement (because the public interest argument will be harder to make). Essentially, this is a field where the first person to do it is likely to end up with a monopoly, and Google have done it badly, thus pr…

  • by mschuyler (197441) on Monday September 07, 2009 @08:30PM (#29345409) Homepage Journal

    Libraries do occasionally make errors, like shelving 'Life of an Iceberg' under biographies, but by and large they strive to be and are correct. If they mess up, some other library will fix the error. Libraries' cataloging data is usually centralized by OCLC so that the data is uniform throughout the country as other libraries pull from this central source for their own catalogs. Libraries also use a recognized and standardized subject scheme with a controlled vocabulary, not just a bunch of meta tags. Cataloging librarians are a rare and little-recognized breed of people who spend their entire professional lives trying to make it easier to gain access to material. The result is an organized body of knowledge--not just a heap of books on the floor in no particular order, like the Internet--and Google. For Google to blame libraries for their troubles is like blaming the Machinist Mates on the Titanic for crashing the ship into an iceberg. There, full circle. How did that happen?

  • by LifesABeach (234436) on Monday September 07, 2009 @08:31PM (#29345413)
    With all the class act talent that Google hires right out of college, why can't Google create its own Public Library on the Internet? Chrome could be the entry way to any book that is in the Public Domain, or by the Author's written permission. Turning the page of a book could be as simple as the [Back] or [Next] button. The "Card Catalog" would be a No-Brainer. No Library goes through this many hoops. There's even translation to other languages, Braille, and Audio; from my viewpoint, this SHOULD be the challenge, not what word category is or isn't. If it's a case of "buy the book", then buy 10 copies of "Gone with the Wind", and ONLY allow up to 10 readers to read "Gone with the Wind" at a time. Google could even have a "Google Online Library Card"; this is where the company hums "Ka-Ching".
    • Re: (Score:3, Funny)

      by QuantumG (50515) *

      So you haven't read any of the stories that have appeared on Slashdot in regards to Google's plans for their Books service eh?

    • by riffzifnab (449869) on Monday September 07, 2009 @09:21PM (#29345769) Journal

      With all the class act talent that Google hires right out of college, why can't Google create its own Public Library on the Internet? Chrome could be the entry way to any book that is in the Public Domain, or by the Author's written permission. Turning the page of a book could be as simple as the [Back] or [Next] button. The "Card Catalog" would be a No-Brainer. No Library goes through this many hoops. There's even translation to other languages, Braille, and Audio; from my viewpoint, this SHOULD be the challenge, not what word category is or isn't. If it's a case of "buy the book", then buy 10 copies of "Gone with the Wind", and ONLY allow up to 10 readers to read "Gone with the Wind" at a time. Google could even have a "Google Online Library Card"; this is where the company hums "Ka-Ching".

      I think that's the idea, perhaps you should go check it out: http://books.google.com [google.com]

    • It's called the IPL. www.ipl.org. It has public domain works in the categories you'd expect to find them. (i.e. Gutenberg content)

      www.refdesk.org is similar but for reference.

  • Obnoxious (Score:3, Insightful)

    by burgundysizzle (1192593) on Monday September 07, 2009 @08:33PM (#29345427)

    The inline replies are written with a smug sense of self-entitlement as though he and other "scholars" are the only legitimate users of Google Books. It's NOT about you - you are not going to create enough adsense hits to make this whole thing worthwhile (or turn a profit).

    • Re:Obnoxious (Score:5, Insightful)

      by Volante3192 (953645) on Monday September 07, 2009 @08:45PM (#29345513)

      Definitely. It's like, "Oh, look, I found an error. If I had done this, that error wouldn't be there!!" And to that I respond: then do it yourself. YOU go tack metadata onto the 100 million books they have, you smug egocentric bastard.

      And, of course, he completely ignores the 999,999 proper entries compared to the 1 error. Google seems to know there are lots of problems here, and they're not going to get it right the first pass. But having a first pass at all is better than nothing.

    • Re: (Score:3, Insightful)

      If you were a scholar, writing for an audience of other scholars, why wouldn't you write about the concerns of scholars and from their perspective? I'm sure he knows exactly why Google is doing what it's doing; but that doesn't mean that he can't point out the downsides.

      It's like saying that Slashdot is obnoxious because it is "written with a smug sense of self-entitlement as though he and other 'geeks' are the only legitimate users of the Internet". This is true; but that is because it is a geek website.
    • by jefu (53450)

      Indeed. He seems to think that his sole goal as a scholar is to grab information from wherever and make publications (in fairness, that is the job of most university professors and they have often forgotten the real point of scholarship), instead of trying to improve the state of knowledge of the world (in which case he should be finding the best metadata for his sources and helping Google - or other sources - to incorporate that). He also seems to believe that Google is there primarily to support hi…

  • by Looce (1062620) * on Monday September 07, 2009 @08:36PM (#29345459) Journal

    ... is that academics can't rely on Google Books to make their bibliographies, because the publication date and authorship information, which are used in all citation styles (MLA, Harvard, etc.), are incorrect on Google Books for an apparently large number of books. Categories aren't used in citations; they're used by searchers.

    Jon Orwant of Google said that 1899 was a placeholder year for unknown publication dates, as provided by some of their metadata providers... which leads me to ask whether they sanitise their data or do any research into publication dates themselves!
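    For what it's worth, that kind of sanity check is cheap to sketch. Everything below is a made-up illustration, not anything Google actually runs: 1899 is the placeholder sentinel Orwant described, but the anachronism list and its cutoff year are my own guesses.

```python
# Hypothetical sanity check for book metadata: flag records whose year is a
# known placeholder sentinel, or whose text mentions a term that did not
# exist yet at the claimed publication date.
PLACEHOLDER_YEARS = {1899}           # sentinel year Orwant described
ANACHRONISMS = [("internet", 1969)]  # term -> earliest plausible year (my guess)

def suspect_date(record):
    """Return a reason string if the record's date looks unreliable, else None."""
    year = record.get("year")
    if year is None:
        return "missing year"
    if year in PLACEHOLDER_YEARS:
        return "placeholder year %d" % year
    for term, earliest in ANACHRONISMS:
        if year < earliest and term in record.get("text", "").lower():
            return "'%s' appears in a book dated %d" % (term, year)
    return None
```

    Nothing fancy, but even a filter this crude would have flagged both the 1899 pile-up and the 527 pre-1950 "internet" hits from the summary.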

    • Re: (Score:2, Informative)

      by Anonymous Coward

      WorldCat.org

      Find it on Google Books, look it up there; Google Scholar if it is an article. I am a historian, and when I check citations (for journals or my own work), that is how I get it done.

    • by timeOday (582209)

      academics can't rely on Google Books to make their bibliographies, because the publication date and authorship information, which are used in all citation styles (MLA, Harvard, etc.), are incorrect on Google Books for an apparently large number of books.

      What you mean is, they have to bother to pull up the book's title page for that information, rather than simply pulling off Google's metadata. Boo hoo.

  • by dpbsmith (263124) on Monday September 07, 2009 @08:44PM (#29345505) Homepage

    This is much like Google itself.

    Google's brilliance, and woe, is its sloppy imprecision.

    You type in a query. It returns a bunch of stuff. Quite a lot of it is irrelevant and is perceived as not meeting the requirements of the search, but you don't mind, because all you care about is that it finds what you want, not what else it finds. Unfortunately, Google is so good that it tricks you into believing that it always finds everything that matches your query. But, of course, there's no way to find out what it _missed_.

    I've personally noticed and been puzzled by the publication dates. I'd noticed it particularly with periodicals. What seems to be the case here is that Google is very prone to give the date that a journal began publication as the publication date of every article that has ever appeared in that journal.

    Wikipedia editors are well aware of the dangers of using Google hit counts as data. It's amusing to see that there are 1,930,000 hits on "Ghandi" compared to 22,900,000 for "Gandhi" and conclude that Gandhi's name is misspelled 10% of the time... or to notice, as I have, that that percentage is increasing and project the year in which "Ghandi" must inevitably become the accepted spelling... but it is, as they say, "for amusement purposes only."
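    The back-of-the-envelope arithmetic behind that "about 10%" is just the misspelled share of the combined hits -- a toy calculation on the quoted counts, and since hit counts are themselves rough estimates, strictly for amusement purposes:

```python
# Share of "Ghandi" among all hits for either spelling, using the hit
# counts quoted above.
ghandi = 1_930_000
gandhi = 22_900_000
misspell_share = ghandi / (ghandi + gandhi)
print(f"{misspell_share:.1%}")  # 7.8% -- "about 10%" with generous rounding
```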

  • by presidenteloco (659168) on Monday September 07, 2009 @08:47PM (#29345517)

    Yes, having all of the world's literature available for instant full text search sounds disastrous for scholars.

    • Yes, having all of the world's literature available for instant full text search sounds disastrous for scholars.

      It certainly is, if the text is sometimes right, sometimes wrong...

      • by swillden (191260)

        Yes, having all of the world's literature available for instant full text search sounds disastrous for scholars.

        It certainly is, if the text is sometimes right, sometimes wrong...

        I see no hint, in any of the linked discussion, that any of the text is wrong. Some metadata is wrong, but that can be checked against the scanned frontmatter quite easily.

        And the metadata will get fixed. This is a massive undertaking and it will take time to get it right.

  • by moon3 (1530265) on Monday September 07, 2009 @08:49PM (#29345545)
    They pushed the copyright law to over a hundred years (just to make sure they will make money off writers even after they are dead), and now comes our big brother Google into the ring to resurrect all the OUT OF COPYRIGHT books -- meaning those dead books that publishers no longer exclusively distribute. What an offense against the poor publishers. Google is creating a real e-Library of enormous proportions of virtually free books; what a threat. I bet I am not alone in wanting to see Newton's books on physics e-published again and searchable.
  • The impression I get from these stories is that once Google scans them, no one else can. Is that somehow the case?

    • The impression I get from these stories is that once Google scans them, no one else can. Is that somehow the case?

      Yes, once Google scans them they gather up all the copies and burn them. Just kidding; anyone is free to scan them and put them online too. Microsoft used to scan books [wikipedia.org], and the Internet Archive has its own scanning project [wikipedia.org] that is still ongoing (but they might be restricting themselves to out of copyright works, I don't know).

  • Please give it a rest; anyone can scan all the books they want and post them online. The only problem is that the law hasn't established an efficient way to get the right to post books online. If Google had tried to do this under the current laws, they would have had to figure out who owned the rights to every book. Imagine how much the internet would suck if search engines had to do the same thing.

    Also, to get back to the topic at hand, it looks like they are trying to fix this as best they can, and librar…
  • I wish these people would just quit their whining about Google and its book scanning. If you don't like what Google is doing, go scan them yourselves. Google is creating something that never existed before - a large repository of the history of books in digital, searchable, available form - and all I hear is complaining. I don't believe that Google has an exclusive on this. I don't believe that their agreements to scan books preclude anyone else from undertaking the same project. And with technology improvin…
  • Spurious Argument (Score:2, Interesting)

    by mikethicke (191964)

    As an aspiring academic halfway through a philosophy Ph.D., I find Nunberg's argument pretty absurd. Google Books is a godsend for academics, and would be much more so if there were full access to their entire catalog rather than "limited previews" for most books. I have used Google Books countless times to quickly check whether a book is relevant to my research, or to get the gist of an author's argument without having to trudge down to the library. I know many others who do this as well. In all this…

  • In inline comments to the Google head guy's reply to the original blog entry, I find:

    Google: Geoff asks why we decided to infer BISAC subjects in the first place. There is only one reason: we thought our end users would find it useful.

    Scholar: The question is, why did you think end-users would find this useful? Which end-users did you talk to about this? I don't think you'd find a whole lot of scholars who would embrace the idea of using the BISAC classifications in place of other library classification s…

    • Re: (Score:3, Insightful)

      by bigbigbison (104532)
      I don't read him as saying "any book that can be found in the holdings of a major research library is only of interest to scholars" at all. Rather, I read him as saying that the systems libraries use to organize books, be they Dewey Decimal, Library of Congress, or some other system, were created to help organize books for users. The BISAC classifications were developed to help companies sell books. Why use that rather than what the libraries -- the source of these books -- use?
    • by grcumb (781340) on Tuesday September 08, 2009 @12:26AM (#29347027) Homepage Journal

      And I think he's entirely off-base. Nose-in-the-air "Scholars" like this gentleman fail to recognize that Google's efforts are about making material available to "the rest of us" who don't have access to those major research libraries.

      And categorical indexing of material makes complete and total sense if you expect to have non-PhD sorts searching for it.

      You're fighting the wrong battle here. It's easy to find any number of legitimately nasty things about 'Scholars' and 'Academics' and elitism in general. But arguing for proper classification in Google Books is not one of them.

      For several years I was an avid amateur of Information Retrieval. Classification (and other useful organisational models) of information into related collections is essential when you don't know what keywords you're looking for. This is especially important with historical works, where the use of 21st Century names, terms and other common keywords is next to useless.

      Google search is useful when you know what you're searching for. But knowing what to look for in Google Books is an entirely different matter. Categorisation matters here.

      By using a classification system that is designed for book sellers, Google's chosen a very poor set of criteria. Not only will most of the titles be poorly characterised (and thus harder to find), the effort required to find them increases with their rarity or uniqueness. These aren't always a measure of importance or interest, but often enough, they are.

      Asking Google to consider a proven, effective and well-understood categorisation system is not being snooty; it's an effort to suggest - as we geeks often do - that there might actually be a correct way to perform this task.

      Sometimes what looks like 'arrogance' is actually the state of being right [imagicity.com] about something when no one else will listen.

  • This may be a trite point, but yes, Google does err. Google also does a better job than most companies at going back and fixing their errors. This, being an online database, is pretty easy to correct. If by some principle the scholarship potential of this otherwise unavailable information was irredeemably corrupted, then yes, I'd worry. Instead, it sounds like a pretty amazing project which happens to be in beta.
  • by syousef (465911) on Monday September 07, 2009 @10:12PM (#29346063) Journal

    This could be the stupidest and most disingenuous argument I've encountered all year. I guess I'll never know, since the metadata is not at my fingertips. This might be a good argument for getting the metadata right. It isn't a good argument for tossing the virtual books out with the bathwater.

    So no I won't get off your lawn. We're better off without scholars who'd rather hoard information. Begone!

  • Sounds like Google are doing their best to fix the problems. What I couldn't quite figure out is why bad data is overriding usually good data like Harvard. Maybe they need to give reliability rankings or something. We are 84% sure this date is right (because it came from Harvard), but there is a 10% chance this one is right (because some other place said that), and a 6% chance of this one (because some guys in Korea said it). Have the option to search only best guesses or all guesses.
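    A toy version of that idea: tag each candidate date with a per-source trust weight and surface either the single best guess or the whole ranked list. The source names and weights below are invented for illustration (echoing the parent comment's numbers), not anything Google exposes:

```python
# Hypothetical confidence ranking over metadata sources: each source gets a
# trust weight, and a record's candidate dates are ranked by that weight.
SOURCE_TRUST = {"harvard": 0.84, "other_library": 0.10, "korea_reseller": 0.06}

def ranked_guesses(candidates):
    """candidates: list of (source, year). Return [(year, trust)] best-first."""
    return sorted(
        ((year, SOURCE_TRUST.get(source, 0.0)) for source, year in candidates),
        key=lambda pair: pair[1],
        reverse=True,
    )

def best_guess(candidates):
    """Return the (year, trust) pair from the most trusted source."""
    return ranked_guesses(candidates)[0]
```

    Searching "best guesses only" versus "all guesses" is then just a threshold on the trust value.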

  • While it's unlikely that Google's scanning technology is as dramatic as the one in Vinge's novel, there appear to be striking similarities. I wonder if Larry Page or Sergey Brin have read it.

  • Having read the original blog post, this is clearly the vituperative rant of an academic who imagines himself wronged, a type with which I am all too familiar.

    Google is doing the hard work of scanning and attaching some metadata. Once that is done, (a) more metadata can be added and (b) errors fixed. Additional metadata will be needed, as there are TWO academic classifications for English, and many more for non-English languages.

    This is just stupid carping by those who would rather retain control of their bailiwick.

    He does not seem…
  • Perhaps someone should point out to Mr. Nunberg (if one can get past his ceaseless caterwauling) that the books digitized come from LIBRARIES, and if scholars find their digitization, cataloging, or other minutiae somehow insufficient, they can always go back to said LIBRARIES and do their research the old fashioned way?

    Some complaints just ring with irrelevance and immaturity. Complaining when someone has gone to great effort and expense to GIVE you something where you had nothing before, simply because th…

  • If Google's service isn't sufficient for your research needs, THEN DON'T FUCKING USE IT. Dear god....
  • I heard an author talk on The Discovery of Air [amazon.com] at the local bookstore. The book is about the correspondence between Priestley and Thomas Jefferson about Priestley's scientific ideas. This author talk was the first time I heard an author say that Google Books was an important reference source for him. This is a sweet spot for Google Books: 19th and early 20th century books out of copyright, but captured by Google's university library digitization effort.
  • by ajs (35943) <ajs&ajs,com> on Tuesday September 08, 2009 @02:02PM (#29354269) Homepage Journal

    I hate to be so cynical, but there was a huge uptick in negative articles on Slashdot about Google as soon as Microsoft started their anti-Google PR effort in DC. Now I see at least one anti-Google article on Slashdot every day. Is Slashdot falling for an extensive trolling effort from MS?

    More info available from previous Slashdot article... [slashdot.org]
