
Lucene in Action

Simon P. Chappell writes "I don't know about you, but I hardly bother with browser bookmarks any more. I used to have so many bookmarks, back in the early days of Netscape's 4 series, that I would have to regularly trim and edit my bookmark file to prevent my browser from crashing on startup -- that's a lot of bookmarks, folks! Now, I go to my favourite web search engine, enter a couple of appropriate search terms and voila, there's my page! Search engines are so ubiquitous that we rarely give much thought to the technology that powers them. Lucene in Action by Otis Gospodnetic and Erik Hatcher, both committers on the Lucene project, goes behind the HTML and takes you on a guided tour of Lucene, one of a generation of powerful Free and Open-Source search engines now available." Read on for the rest of Chappell's review.
Title: Lucene in Action
Author: Otis Gospodnetic and Erik Hatcher
Pages: 421 (7 pages of index)
Publisher: Manning
Rating: 9
Reviewer: Simon P. Chappell
ISBN: 1932394281
Summary: Solid introduction to Lucene

Who's it for?

Lucene is a library and framework rather than a complete application. It truly is an engine, around which you are expected to build and extend your own application. Like Lucene, the book is targeted at those looking for a tool with which to build their own search facility, rather than just "download and go." The book does include a number of case studies of Lucene usage (including at least one download-and-go search engine), but those are included to show how to use and adapt Lucene to fit differing environments, rather than as ends in themselves.

The Structure

The book is sensibly divided into two parts. The first part looks at "Core Lucene" functionality, while the second part addresses "Applied Lucene".

Part one has six chapters, covering the central components and inner workings of Lucene. It's here that the book starts with a tutorial introduction, familiarising the reader with the concept of Lucene as a search engine around which you wrap your own code. The other five chapters move steadily through good search engine fare, with indexing getting the whole of chapter two to itself. The discussion of how to retrieve text from the documents being indexed is mentioned here but postponed until chapter seven, where it is dealt with exhaustively. Chapter three covers searching, and especially how Lucene ranks documents.
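
To give a flavour of what those chapters cover, here is a minimal indexing sketch. It assumes the Lucene 1.4-era API that was current when the book was published; the index directory and field names are my own illustrative choices, not listings from the book.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class IndexingSketch {
    public static void main(String[] args) throws Exception {
        // Create (the "true" flag) an index in ./bookmarks-index
        IndexWriter writer = new IndexWriter("bookmarks-index", new StandardAnalyzer(), true);

        Document doc = new Document();
        // Keyword fields are stored and indexed as a single term (no analysis);
        // Text fields are stored and run through the analyzer.
        doc.add(Field.Keyword("url", "http://lucene.apache.org/"));
        doc.add(Field.Text("title", "Lucene in Action"));
        doc.add(Field.Text("contents", "Lucene is a Java library for indexing and searching text."));

        writer.addDocument(doc);  // analysis of the Text fields happens here
        writer.optimize();        // merge index segments for faster searching
        writer.close();
    }
}
```

The analyzer handed to the IndexWriter here is exactly what chapter four's discussion of analysis is about.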

Chapter four examines analysis. In its chapter introduction, the book explains that "Analysis, in Lucene, is the process of converting field text into its most fundamental indexed representation, terms." This process is performed by an analyser, which tokenises text according to its own built-in rules; each analyser has a different emphasis: some want only dictionary words, others might explicitly include acronyms, and sometimes you'll want an analyser that blocks stop words (those words that are part of a language's structure but add nothing to the information being conveyed by the text; classic examples of stop words in English include "a", "and" and "the").
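
To make the analysis step concrete, here is a small sketch (again assuming the Lucene 1.4-era API) that prints the terms two stock analysers produce for the same sentence; the differing output illustrates the "different emphasis" described above.

```java
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.WhitespaceAnalyzer;

public class AnalysisSketch {
    public static void main(String[] args) throws Exception {
        String text = "The Quick Brown Fox and the XML parser";
        show(new WhitespaceAnalyzer(), text);  // splits on whitespace only, keeps case and stop words
        show(new StopAnalyzer(), text);        // lowercases and drops "the", "and", ...
    }

    static void show(Analyzer analyzer, String text) throws Exception {
        System.out.print(analyzer.getClass().getName() + ": ");
        TokenStream stream = analyzer.tokenStream("contents", new StringReader(text));
        for (Token token = stream.next(); token != null; token = stream.next()) {
            System.out.print("[" + token.termText() + "] ");
        }
        System.out.println();
    }
}
```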

Chapter five looks at advanced search techniques: everything from sorting search results and searching on multiple fields to filtering searches. Many free or open source software tools are extensible, and Lucene is no exception. Chapter six addresses creating and using custom components within Lucene, everything from custom sort methods to custom filters.
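
As a taste of the chapter five material, the sketch below (Lucene 1.4-era API, hypothetical index and field names) runs one query string against two fields at once and sorts the hits by a stored keyword field instead of by relevance score.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.MultiFieldQueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.Sort;

public class AdvancedSearchSketch {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("bookmarks-index");

        // Search the "title" and "contents" fields with a single query string
        Query query = MultiFieldQueryParser.parse("lucene AND indexing",
                new String[] { "title", "contents" }, new StandardAnalyzer());

        // Sort by the untokenized "url" keyword field rather than by score
        Hits hits = searcher.search(query, new Sort("url"));
        for (int i = 0; i < hits.length(); i++) {
            System.out.println(hits.doc(i).get("url"));
        }
        searcher.close();
    }
}
```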

Part two, the final four chapters, covers Applied Lucene. It is dedicated to practical uses of Lucene and answers the question "So, what can I do with a search engine?" Chapter seven covers ways and means to parse common, non-plain-text document formats. The primary formats covered are RTF, XML, PDF, HTML and Microsoft Word. The ability to parse and index these file formats will cover the search engine needs of the majority of Lucene users. Chapter eight looks at a number of Lucene tools and extensions that are available, many of them free and open source software. Chapter nine covers ports of Lucene. While for many users Lucene being a Java library is not a problem, some want its functionality in environments that do not have Java. The chapter looks at ports written in C++, C#, Perl and Python. Lastly, chapter ten takes a thorough look at seven Lucene case studies. Perhaps the "star" case study is the one about Nutch, a download-and-go search engine written by Doug Cutting, the original author of Lucene.
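
The pattern chapter seven teaches boils down to "extract plain text with a parser library, then index that text." The sketch below shows the shape of such code under the Lucene 1.4-era API; extractPdfText() is a hypothetical stand-in for whichever third-party parsing library you choose, not a Lucene API.

```java
import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class DocumentHandlerSketch {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter("docs-index", new StandardAnalyzer(), true);
        File pdf = new File(args[0]);

        Document doc = new Document();
        doc.add(Field.Keyword("path", pdf.getAbsolutePath()));
        // Lucene only ever sees plain text; the format-specific work happens
        // in a parser library before the field is created.
        doc.add(Field.UnStored("contents", extractPdfText(pdf)));

        writer.addDocument(doc);
        writer.close();
    }

    // Hypothetical helper: in practice this would call a PDF parsing library
    // (the book discusses concrete options) and return the extracted plain text.
    static String extractPdfText(File pdf) {
        return "extracted text would go here";
    }
}
```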

There are three appendices. The first offers installation advice for Lucene, a useful addition that those newer to working with Java libraries will surely appreciate. The second appendix has a well-explained description of the Lucene index format. This is the kind of information that can be hard to find, so it is welcome in a book of this sort. The last appendix contains a number of categorised resource references. The number and breadth of the resources provided could give the reader quite an education in information retrieval theory, were they inclined to read them all.

What's to Like?

There are several things to like about this book. Let's start with the fact that the authors are part of the core development team of Lucene. This gives them both credibility and an excellent understanding of the internal workings of Lucene. Co-author Erik Hatcher is a fantastic writer, having previously co-authored the only Ant book worth bothering with, Manning's Java Development with Ant. (Full disclosure: I do know Erik personally.)

The structure of the book is well thought out, and each chapter moves your understanding forward, building on what you learned from the preceding ones. The division into core and applied Lucene is also helpful. While you'd hope that this would always be the case, it often isn't; hence I note it as a positive.

I especially appreciate that this book does not fill up page after page with API documentation. The authors appear to have grasped that if you have Internet access to download the software, you might just be able to access the documentation online; instead, they concentrate on how to use the software. What a concept!

As a part of Manning's "in Action" series, the book has excellent layout and has obviously been thoroughly edited by both technical evaluators and copyeditors. This might seem to be a small thing to some, but a well-edited book stands out clearly from the crowd.

What's to Consider?

If you are looking for a book on using and configuring a download-and-go style of search engine, this book would be less suitable. While the case study on Nutch is of good length, it would be too short to be useful as a configuration guide.

Conclusion

I enjoyed reading this book. If you have any text-searching needs, this book will more than equip you to see them through to successful completion. Even if you are just looking to download a pre-written search engine, this book provides a good background on the nature of information retrieval in general, and on text indexing and searching specifically.


You can purchase Lucene in Action from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.

Comments

  • My solution (Score:3, Interesting)

    by Neil Blender ( 555885 ) <neilblender@gmail.com> on Wednesday August 24, 2005 @02:55PM (#13392035)
    My home page is a nicely sorted webpage with all my frequently visited links in a password-protected section of my web site. If something gets used enough in my bookmarks, it gets put on that page and deleted from my bookmarks. Then, no matter where I am or what computer I am on, I can access my links.
    • Yes, I've managed to do this too... right now I'm using MediaWiki. I found it to be a faster alternative to HTML, and I can have friends, colleagues, etc. help maintain it. I just started using wikis recently... but now I'm addicted... I use it for more than just bookmarks too... such as my list of restaurant reviews.
    • I use Booby [nauta.be] for online bookmarks. It requires PHP and one MySQL database. I have access to my bookmarks regardless of what computer I'm on.
      • Why is everybody going so high-tech about this? I just use the syncmark extension for FF and back the bookmarks file up to a directory on my webspace. It's not pretty, but it works great when I'm on somebody else's PC.
        • Why is everybody going so high-tech about this? I just use the syncmark extension for FF and back the bookmarks file up to a directory on my webspace. It's not pretty, but it works great when I'm on somebody else's PC.

          My solution is not really high tech, but it has one major advantage over bookmarks - it's way, way faster. Scrolling through bookmarks is slow and tedious because scrolling in general is slow and tedious. If you have them on your home page, the first thing you are presented with is links. I
    • Re:My solution (Score:1, Interesting)

      by Anonymous Coward
      Well, exactly.

      The "home" button on a browser is supposed to take you to YOUR OWN web space, maintained by you - i.e. your home. Some bits might be your front garden, visible to others, others private.

      People who use the "home" button as just another bookmark to a search engine are missing the point of the web.

      It isn't helped by the fact that current browsers aren't actually good as editors (unlike the original web browser vision) - your web site should be a WYSIWYG-editable personal/private pseudowiki. Many
    • My home page is a nicely sorted webpage with all my frequently visited links in a password protected section of my web site.

      My solution is similar, but I just put a PHP page on my home desktop that serves my FF bookmarks.html. It took about 2 minutes to create and now I can get to my bookmarks from anywhere.

      The only downside is that I would have to spend a little more time if I also ever wanted to add new bookmarks from anywhere..

  • Thanks! I was looking for a good book on open source search engines. While I had never heard of "Lucene", I will definitely be looking into it now. It's probably a good opportunity to learn all about search engine heuristics, methods, etc...

    Also, I agree with the author that bookmark functionality has gone the way of the dinosaurs... with the exception of the "open all tabs" feature found in many browsers today... that is about the only one that I use often.

    I'm just wondering how the "search" functio
    • by Anonymous Coward

      I was looking for a good book on Open Source search engines.

      Well, you could have used Google to fi-- oh, I see.

    • by juancn ( 596002 ) on Wednesday August 24, 2005 @03:35PM (#13392352) Homepage
      Lucene is not like Google (it's not a full application); it's a library focused on searching text-based documents (you could use it to build a mini-Google).

      The basic idea is that you want to build an index, and then search it, to find some document.

      A document has several fields (e.g. text, title, lastModificationDate, author, categories, summary, url, etc.) which may be indexed, stored, or both.

      You usually build your Lucene documents based on some real documents (e.g. web pages, PDFs, records in a database, etc.), and then add them to the index.

      Once you have an index, you build a query to search one or more fields (lucene provides a QueryParser class, which handles the most common cases), and you get a Hits collection containing the documents matching your query in some order (this can be customized).

      Before a document is added to the index, it is passed through an Analyzer which converts the text in the fields to terms, which are the basic internal concept that is indexed.

      Another interesting feature of Lucene indexes is that they can be searched while they are being built, without noticeable loss of search performance, and that they are process-safe (many processes can access them for reading, only one for writing). This has the drawback that the indexes are append-only (actually a separate segment is created when you modify an index), but periodic optimization of the indexes removes unnecessary entries and inefficiencies.

      Hope this helps!

      juancn
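
A minimal sketch of the search half of what the parent comment describes, assuming the Lucene 1.4-era API (the index path and field names are illustrative):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class SearchSketch {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("bookmarks-index");

        // QueryParser handles the common query syntax ("title:lucene AND java", etc.)
        Query query = QueryParser.parse(args[0], "contents", new StandardAnalyzer());

        Hits hits = searcher.search(query);   // ranked results
        for (int i = 0; i < hits.length(); i++) {
            System.out.println(hits.score(i) + "\t" + hits.doc(i).get("url"));
        }
        searcher.close();
    }
}
```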
    • Before the web, a "search engine" was a piece of software that provided a way to search a collection of documents efficiently. The usual method is to create an inverted index, a data structure analogous to the index in the back of a book, in which you can look up a word and get back a list of all the documents containing that word. There is also a set of standard techniques for ranking the results, for example based on statistics about the distribution of words across documents.

      Lucene is a search engine
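
For readers new to the idea, here is a toy inverted index in plain Java, independent of Lucene, showing the core data structure the comment describes: a map from each term to the set of documents containing it.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class ToyInvertedIndex {
    // term -> ids of the documents that contain it (the "postings" list)
    private final Map<String, Set<Integer>> postings = new HashMap<String, Set<Integer>>();
    private final List<String> docs = new ArrayList<String>();

    public int add(String text) {
        int id = docs.size();
        docs.add(text);
        for (String term : text.toLowerCase().split("\\W+")) {
            if (term.length() == 0) continue;
            Set<Integer> ids = postings.get(term);
            if (ids == null) {
                ids = new TreeSet<Integer>();
                postings.put(term, ids);
            }
            ids.add(id);
        }
        return id;
    }

    public Set<Integer> search(String term) {
        Set<Integer> ids = postings.get(term.toLowerCase());
        return ids != null ? ids : new TreeSet<Integer>();
    }

    public static void main(String[] args) {
        ToyInvertedIndex index = new ToyInvertedIndex();
        index.add("Lucene is a search engine library");
        index.add("An inverted index maps words to documents");
        System.out.println(index.search("index"));   // [1]
        System.out.println(index.search("lucene"));  // [0]
    }
}
```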

    • It sounds like you may be interested in Nutch [apache.org], a sub-project of Lucene. Nutch is a full search engine package (fetcher, indexer, searcher, etc.), made to work in a cluster, etc. Of course, at the core of indexing and searching functionality is Lucene.
  • Bookmarks are better (Score:4, Interesting)

    by saskboy ( 600063 ) on Wednesday August 24, 2005 @02:58PM (#13392061) Homepage Journal
    Bookmarks are more secure than a search engine, since a search engine could provide a poisoned link, and if you're typing in the URL by hand and make a spelling mistake, you could find yourself at a pharming site, or someplace you didn't want to go.

    I tend to use bookmarks in Firefox and the autocomplete about equally, and make use of the Quick Links toolbar for my most popular sites.

    The Firefox bookmark all tabs feature is a breakthrough, since you can close your browser, and reopen it to the same set of tabs as before, which is great when installing extensions and you're forced to restart. The only drawback is that scrolling through bookmarks is too slow, but if you use your scroll wheel it speeds up considerably. That's a trick I didn't figure out until just last month.
    • I've always used the SessionSaver [mozilla.org] extension to get this functionality. I don't have to go to the trouble of actually setting anything this way. I can close my browser any time and it will be in the same state the next time I open it. Or, I can save a specific session to load later.
    • by zhiwenchong ( 155773 ) on Wednesday August 24, 2005 @03:26PM (#13392275)
      Quite... I just want to say something:

      I think that abandoning bookmarks altogether is a bad idea.

      Search, while useful, only works if you can find the exact keywords necessary to bring up a certain page. Search merely complements, rather than replaces, bookmarks.

      Looking through my bookmark lists, I see many websites which I would never have known how to search for (they're mostly websites I stumbled upon from other websites). Some of these sites are hard to find because:

      1) they don't have enough Statistically Improbable Words. e.g. try searching for software that describes biology of a python.

      2) the page doesn't contain words associated with its significance to me (yes, it can happen). e.g. let's say you come across a page that has a nice layout that you want to revisit later -- if you ever forget the keywords on that page, you may never find it again. Whereas if I were to file it under "Nice websites" in my bookmark folder, I'd be able to find it again.

      3) I can't remember any of the keywords associated with the page.

      4) I forget that I've ever visited those webpages. Some search engines (e.g. a9.com) have histories that you can revisit, but they're no use unless you can classify them. And if you classify them, they're basically bookmarks.

      I think the reason people dislike bookmarks is that they're a hassle to organize. We need some sort of tool to auto-organize bookmarks.

      There are two basic requirements:
      1) Multiple hierarchy - a bookmark must be able to belong to more than one category. An example of this is Gmail's labels [g04.com] -- each email can belong to more than one label.

      2) Automatic classification - the proper term for this is automatic taxonomy. This can be accomplished using a Bayesian algorithm (like the one POPmail is using). In fact, DEVONthink already does this [devon-technologies.com].

      When a user makes a bookmark, the program should come up with a list of category folders (sorted from likeliest to least likely) to file that bookmark under, and the user must be allowed to select more than one folder.
      • Yep, I also think the poster's abandonment of bookmarks is truly bizarre. I have top-level folders of bookmarks, each of which becomes an instantly available pull-down menu. Most have subfolders.

        I like your automatic classification ideas.

        A complaint about Firefox: when I choose "bookmark this page" it comes up with a little dialog. This dialog has a one-line selector for where I want to create the bookmark (default being the folder named "bookmarks") and a little button to expand this one line into a scree
    • Scroll wheel? Thanks, that is a major helper.

      What I've begun doing is using the "Bookmarks Toolbar Folder" for all of my bookmarks. I've got "Essentials" with links to Gmail, Adsense, my website, Distributed.net stats and so forth, basically all of the sites that I try to visit daily. Then I've got "Favorite sites" that holds Slashdot (even though now it's "home"), Woot, Craigslist, Free6.com (hehe), Assambassador.com, Myspleen, demonoid, you get the point.

      Then I've got the essential one: "Functions" - that
    • I don't think "bookmarks are better than a search engine" makes any sense at all. Are you trying to say that instead of automatically indexing a broad range of documents for intranet usage using Lucene or htdig, YOU will bookmark every page manually using your favourite browser?
    • by xant ( 99438 )
      You can combine the best of both worlds, bookmarks and search.

      I find bookmarks slow to navigate, and it's hard for me to remember my own hierarchy when I've got enough bookmarks to organize. The problems with search have been expanded on by others in this thread.

      So here's the solution: http://del.icio.us/ [icios.us].

      You can create, edit, tag, describe, and search your own personal bookmarks. When you've done that, the world can see your links too. Subscribing to an RSS feed of some tags you're interested in ("pytho
    • The Firefox bookmark all tabs feature is a breakthrough, since you can close your browser, and reopen it to the same set of tabs as before, which is great when installing extensions and you're forced to restart.

      You will need to do that only one more time. That is when you install the Firefox extension called SessionSaver. From the website [mozilla.org]:

      SessionSaver restores your browser -exactly- as you left it, every startup, every time. Not even a crash will phase it. Windows, tabs, even things you were typing -

  • That sounds interesting. At the moment, I'm dreaming of a "textual exchange service center" for pupils at my school, or even schools throughout Hamburg, Germany (in other words: a good, dialer-free, non-advertising, trusted homework exchanger with feedback).
    I've heard of Lucene through my favourite computer magazine (http://www.heise.de/ct [heise.de]), but I was more interested in the indexing algorithms at that time.
    So how much weight does the book give to algorithms? Is there anyone out there who's as mathematically/scientifi
    • The book (I have a copy) goes into detail on extending Lucene with handlers for different formats, filters, and things like stemming rules (language-specific rules for stemming words).

      It also talks about the indexing format: how the indexes are stored and searched. If that ain't enough, well, the source is up on apache.org
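
As an illustration of the kind of extension the comment above mentions, here is a hedged sketch of a custom analyser (Lucene 1.4-era API) that lowercases, drops English stop words, and Porter-stems the remaining terms. Pass the same analyser to both IndexWriter and QueryParser so indexing and querying treat text identically.

```java
import java.io.Reader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseTokenizer;
import org.apache.lucene.analysis.PorterStemFilter;
import org.apache.lucene.analysis.StopAnalyzer;
import org.apache.lucene.analysis.StopFilter;
import org.apache.lucene.analysis.TokenStream;

// Chain: tokenize + lowercase -> remove stop words -> stem ("searching" becomes "search")
public class StemmingAnalyzer extends Analyzer {
    public TokenStream tokenStream(String fieldName, Reader reader) {
        TokenStream stream = new LowerCaseTokenizer(reader);
        stream = new StopFilter(stream, StopAnalyzer.ENGLISH_STOP_WORDS);
        return new PorterStemFilter(stream);
    }
}
```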
  • Are the benefits of having such a customizable search engine enough to justify the work required to code for it? It seems like even many of the "customizable" features that you could code have already been included in the major search engines of today, and it would be difficult/impossible to beat their algorithms when they are already developing super-efficient algorithms to stay competitive. It seems like only highly specific or unusual search engine applications could make use of something like this.
    • I think the common use for this is having an in-site search. Sure, you can link to Google and limit the search to your domain, but you won't get results as good as a search engine that completely indexes just your site.
    • You bring up a good point. In my group, we use Lucene to index XML files because there is a good deal of metadata that (for legitimate reasons I won't go into here) doesn't make it into the HTML presentation that Google and human readers see. In order to make that metadata usable for effective research through the search interface, a Lucene index was most helpful.

      That said, for most projects you are better off to just use a google search, but there are times when knowing the structural properties of your da
    • Re:Benefits? (Score:4, Informative)

      by coflow ( 519578 ) on Wednesday August 24, 2005 @03:14PM (#13392201)
      Typical search engines have licensing fees associated with them if you're embedding them in your application. This is basically an open source alternative. And you can customize the hell out of it. I've used it on several web-based applications and on SOA platforms, and it is fast, reliable, and easy to use. Did I mention it's open source? Take a look at the Apache site [apache.org].

      Some examples of customizable features are that you can index database entries and achieve quantum leaps in performance over the indexing offered by Oracle, MySQL, Postgres, Firebird, etc. You can index formats that are not supported by the major search engines.

      It may not offer quite the performance of Google, Alta Vista, etc., but it's a FREE product, well supported by the folks at Apache, and many open source J2EE frameworks support it as well.
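
A hedged sketch of the "index your database entries" idea mentioned above, using plain JDBC plus the Lucene 1.4-era API; the driver, connection string, table and columns are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class DbIndexerSketch {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");  // hypothetical choice of database/driver
        IndexWriter writer = new IndexWriter("products-index", new StandardAnalyzer(), true);
        Connection conn = DriverManager.getConnection("jdbc:mysql://localhost/shop", "user", "pass");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT id, name, description FROM products");

        while (rs.next()) {
            Document doc = new Document();
            doc.add(Field.Keyword("id", rs.getString("id")));                    // stored, not tokenized
            doc.add(Field.Text("name", rs.getString("name")));                   // stored and analyzed
            doc.add(Field.UnStored("description", rs.getString("description"))); // analyzed only
            writer.addDocument(doc);
        }

        rs.close();
        stmt.close();
        conn.close();
        writer.optimize();
        writer.close();
    }
}
```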
    • One word: yes. More words: in about 10 projects I had in the recent past, I had to apply Lucene in about 3 of them due to the requirements of the project....
  • Better Memory Than I (Score:3, Interesting)

    by Flamesplash ( 469287 ) on Wednesday August 24, 2005 @03:03PM (#13392107) Homepage Journal
    Now, I go to my favourite web search engine, enter a couple of appropriate search terms and voila, there's my page!

    You have a better memory than I, my friend. Many times I only barely remember something I want to find again. Maybe I remember it was humourous, or maybe I remember it was an online game with pigs in it. Unless it's popular, I doubt 'pig game' is gonna get me far. So bookmarks aren't so useless to those of us who don't keep everything in RAM.

    Bookmarks, and a good hierarchy, also leverage the Associative aspect of our minds. Skim through your high level bookmark folders and you'll probably find what you were thinking of pretty quick. Additionally it reminds you of things you may have bookmarked yet forgotten.
    • I doubt 'pig game' is gonna get me far.

      Maybe not, but I'll bet it would make an interesting Google Image Search with Safe-search turned off.

      *dares not try at work*

    • Comment removed based on user account deletion
    • Mmm... that's why I suggested moving Bookmarks and the like into a DBFS [blogspot.com]. In doing so, the user gains the power to organize and search [blogspot.com] his data in ways that were previously impossible. Just imagine if your bookmarks automatically attached meta-data about themselves (based on the website). You could then search for "humor" and find a list of everything you thought was funny enough to bookmark!

      That's my idea, anywho. :-)
    • by garcia ( 6573 ) *
      I haven't used Bookmarks since 1998 or 1999. Too much of a hassle finding stuff when the links are dead anyway.

      His solution, using a search engine, is a much better method as you might even come across something new and even MORE useful than what you had originally bookmarked.

      I check a handful of websites daily. Mostly Google News, Slashdot, MNspeak, geocaching.com, mngca.org, and usually some others. While having them set up in a hierarchy might leverage the association aspect, typing them in every time e
      • Whether I bookmark something or not usually depends on how difficult it was to locate in the first place. A few months ago, I was looking for instructions on how to replace the CMOS battery in an old Winbook. I had to wade through half-a-dozen pages on Google with links to companies that wanted to sell me a new laptop battery before I found something relevant. That one got bookmarked.
      • Let's see...
        1. Visit google.
        2. Type in "Qt 3.4 documentation"
        3. Hit submit
        4. Find and click on the link

        OR

        1. Click on the bookmark.

        Yeah NOT using bookmarks is so efficient.
      • Dead links - that's a good point regardless of the bookmarking technology. One solution is to automatically cache everything you consider worth bookmarking, permanently. (Tie a bliki to a proxy like squid, maybe?) Another is to design the bookmarking system to go to archive.org whenever the link is dead.
        typing them in every time exercises my memory

        Exactly!

        My cellphone has only work contacts programmed into it, because the only time I'm going to need these numbers is when I'm on the clock, and when I'm carrying the cellphone.

        But personal contacts? I've learned over the years to deliberately NOT program these in - forced repetition of typing in the numbers means I commit them to memory. Extremely handy for when I don't have my cellphone on me, or its battery dies.

        Personally, I found bookmarks were almost harm
      • I stopped using bookmarks a long time ago too. It was too much of a hassle to keep track of them. One bad side effect is that sometimes I don't know where to go, and over time I start to forget about pages I previously frequented.
    • There's a difference though: those are sites you visit frequently. What about sites you don't visit frequently? Those are the ones that are going to be hard to google for, as you may not remember enough specific keywords. For instance, I like this site of laid-back little games called Orisinal.com [orisinal.com], but I go there maybe every couple of months. It has an odd name and not a distinct subject matter - not something easily refindable through Google.
  • Like em, keepin em (Score:3, Insightful)

    by Cylix ( 55374 ) on Wednesday August 24, 2005 @03:05PM (#13392122) Homepage Journal
    I might reference a search engine to tell someone how to find a site via word of mouth, but it has not replaced my bookmarks. If I am away from my system, all I can remember is some common words, maybe. Not so long ago I used to sync bookmarks with a Firefox plugin. (It kept breaking after version updates and I never went to reinstall it... though I think I will now)

    Anyhow, I simply build to critical mass before I sort them into their respective folders. Some things are automatically tossed into temporary bookmark folders that are going to get washed away after they are no longer useful. (Think auction links)

    Now I'll tell you why using a search engine as a replacement for bookmarks is a bad idea. Page ranking changes. That particular combination of words you can remember... might just not produce the same results next time. Wonder why? The Internet changes! It is not AOL keyword search...

    That said, I did something as foolish as relying on Google to get back to some website regarding video sync signals. It was an excellent page, and then I went back to search for it again and I could not find it. (Eventually I did, though)

    Bookmarks good, search engine good... not mutually exclusive.
  • Chapelle: What do you expect from my review....It's a search Engine Biotch!!!

    Oh wait.. wrong chappelle
  • RSS (Score:2, Interesting)

    by ezweave ( 584517 )

    While search engines are great, bookmarks are not obsolete. I use RSS feeds to keep up on anything that is serialized that I might care about. FF is great for that.

    I still use a few regular bookmarks (like the URL that logs me into /.). Or for development servers with obscene URLs. That is the kind of thing that a search engine won't find. Especially if you have to deploy to a few web servers (this is the WebLogic machine, this is the OAS machine, etc). I have even bookmarked LDAP strings for testing.

    • As an aside, can Lucene be used for local searches?

      Yes, but...

      The distribution contains some demo applications that you can point to a filesystem. One app will index the text, another will index HTML (or maybe one does both, I can't remember). Then you execute another app to query the index.

      The hard part is to get Lucene to index non-text files such as Office files. The version of Lucene I've used is the Java version. Third-party libraries exist for Word and Excel docs (on a Windows filesystem), but none
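
A hedged sketch of the "point it at a filesystem" case described in this comment (Lucene 1.4-era API). It indexes only plain .txt files, which is the easy part; Office formats would need a third-party parser in front of the Field creation.

```java
import java.io.File;
import java.io.FileReader;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;

public class FileIndexerSketch {
    public static void main(String[] args) throws Exception {
        IndexWriter writer = new IndexWriter("files-index", new StandardAnalyzer(), true);
        indexDirectory(writer, new File(args[0]));
        writer.optimize();
        writer.close();
    }

    static void indexDirectory(IndexWriter writer, File dir) throws Exception {
        File[] entries = dir.listFiles();
        if (entries == null) return;
        for (int i = 0; i < entries.length; i++) {
            File f = entries[i];
            if (f.isDirectory()) {
                indexDirectory(writer, f);            // recurse into subdirectories
            } else if (f.getName().endsWith(".txt")) {
                Document doc = new Document();
                doc.add(Field.Keyword("path", f.getAbsolutePath()));
                doc.add(Field.Text("contents", new FileReader(f))); // stream the file body
                writer.addDocument(doc);
            }
        }
    }
}
```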

    • Lucene is an Apache licensed java project; there is a .NET version that may work on Mono too.

      The nice thing about Lucene is that it adds indexing and searching to anything you want - some search plugin for Outlook (blech) is built on Lucene.Net; imagine an equivalent for the Unix mail systems (Thunderbird, Evolution or Emacs, for example).

  • by bad_outlook ( 868902 ) on Wednesday August 24, 2005 @03:11PM (#13392171) Homepage
    The Lucene (http://jakarta.apache.org/lucene [apache.org]) indexer will be implemented within Hula, the web mail and calendar application (http://hula-project.org/Hula_Server [hula-project.org]) made from open-sourced Novell NetMail code. Samples of the search engine have been committed and should start functioning within weeks, just in time for the new calendar UI, which you can now view a demo of here: http://nat.org/2005/august/hula.html [nat.org] That's looking to be an amazing app...
  • When I was in HS, the preferred way on a Mac to find the telnet application to go run pine was using Find. It was almost always quicker than finding out what folder it was in on the machine, as they were surprisingly nonstandard installs.
  • Google anyone ? (Score:3, Interesting)

    by Potatomasher ( 798018 ) on Wednesday August 24, 2005 @03:14PM (#13392198)
    Does anyone find it a little funny that on the main lucene.apache.com webpage, there is a "Search this site with Google" textbox? Kind of makes you NOT want to use their search engine if they don't even trust it enough to work on their own site....
    • Seriously, why would I want to use a library that the authors don't consider good enough to use themselves?
      • Don't write off Lucene just because they're not using Lucene for the site search. Lucene is good. I've only used it on my laptop to index a couple of hundred thousand news articles, but even on a laptop Lucene performs well.

        Take a look at http://www.theserverside.com/ [theserverside.com] - the enterprise java community. Their search is powered by Lucene. It's pretty fast and a very capable site search.

        You also have open source projects, such as Beagle (the desktop search for Gnome), that uses the .Net version of Lucene. Lo
    • Re:Google anyone ? (Score:4, Informative)

      by Anonymous Coward on Wednesday August 24, 2005 @03:48PM (#13392443)
      I'm going to assume your post wasn't a joke and explain a few valid reasons.

      The best reason is that it's very, very easy to set up a Google search... all you have to do is add site:your_site to the search query, and bam! Instant search.

      Lucene takes some work to set up, and is best used where normal Web crawling doesn't work. For example, I work on an eCommerce Web app where all our products are stored in the database, and you reach them by setting a CGI parameter in the URL. Not all products have links to them on our site. We use Lucene because we can pull all the products out of the database and index them, and get hits that crawling would have missed. We can also customize things like redirecting a search for "help" to the help page, set up synonym lists, etc.

      So long story short, their search needs are not complex enough to justify the effort of setting up a Lucene based application.
  • I used to use bookmarks, until I got in the habit of exiting my shells with Control-D. I spend 90% of my computer time either in a terminal window or mozilla, and I don't use click to focus. Therefore, many times when I hit control-D to exit a shell, I have accidentally left focus on the mozilla window and I add an unwanted bookmark. My bookmarks quickly become cluttered beyond use in this way. Surely there is some way to remap this function to another key, but I've yet to find it.
    • So, I didn't know about Control-D exiting shells. And I thought to myself, that's kinda neat. So I clicked over to SecureCRT, hit Ctrl+D. Nothing. Hit it again. Nothing. Railed it about ten more times. Nothing.

      I was focused on another SCRT.

      I just wanted to thank you for closing screen, a tail with a lengthy grep, mysql, bind (I was running in the foreground for debugging), and god knows what else.

      That is all.
  • .. here at UMIACS for searching huge e-mail corpora. Lucene rocks!
  • Good article (Score:3, Informative)

    by Linux_ho ( 205887 ) on Wednesday August 24, 2005 @03:22PM (#13392250) Homepage
    Check out this article [perl.com] for a good intro to Plucene, the Perl port of Lucene.

    This is also a good link for all of you slashdotters who have no idea what Lucene is for and are posting rants wondering why people don't just use Google instead.
  • my problem solver (Score:2, Informative)

    by shareme ( 897587 )
    Check out regain: http://regain.sourceforge.net/index.php [sourceforge.net] It works as a desktop indexer using Lucene, and seems to do better on search than the MS stuff :) On a typical 2.97 GHz system with a 100 GB hard drive 70% full, the first index takes about 6 hours. It runs fine in the background, with no noticeable slowing of other apps while indexing. It also comes in a server version for website searches, to build search engines like Google and Yahoo :)
  • My solution is, every 6 months or so, to save the bookmarks.html somewhere with a date in the name and start over with a blank bookmarks.html. Then, if I need to find something old I just open up the old bookmarks_6_2003.html or whatever - the interesting thing is, it's like going back in time to review what I was interested in at the moment. Like if I was researching a lot on electric park flyer airplanes in 2003, it would have a lot of links. That way it's kinda like a scrapbook of your life - if your sorry
  • by Ransak ( 548582 ) on Wednesday August 24, 2005 @03:25PM (#13392268) Homepage Journal
    After realizing I had over 800 bookmarks spread across four different workstations in different geographic areas, I consolidated them into a Sitebar [sitebar.org] install. I'd recommend it to anyone; you can tinker with the PHP or MySQL side, or simply leave it alone beyond the default installation. It's really designed for bookmark sharing for teams, but has options for single user installations.

    Usual disclaimer: I have nothing to do with Sitebar or its development, just a majorly satisfied user.

  • Try delicious? (Score:2, Informative)

    by delete ( 514365 )
    Why not try delicious [del.icio.us]? It allows you to keep your bookmarks online so that they're accessible from multiple locations, while also allowing you to search [del.icio.us] your bookmarks and those belonging to other people.

    If you use Firefox, there are extensions that allow you to view your bookmarks in a sidebar [mozdev.org] and sync your online bookmarks [ganx4.com] with your browser bookmarks.
  • It is true, for many purposes there is no need for a bookmark. Google will take you just where you need to go, and will keep you informed if a better source for what you need becomes available and makes its way up the results page.

    But other times, the search engine screws you over. Case in Point - fishy

    The other day at work I wanted to play Fishy, so I typed it into Google, went to the top link, and started playing. What??? They changed FISHY?? No, they didn't; the top link was some bizarro version o

  • I'd like a blog which is also part of a wiki (aka a "bliki"). And I'd like to be able to set each post public or private - so I can record my own thoughts about ideas which I do not want to share, vs. plain links and comments which I do want to share. Probably I will try to use MediaWiki for this but I'm not sure about the privacy aspect of it. I have used it at work for an internal blog; the "my talk" page that each user gets is very much like a blog. You can mod the code a little bit to automatically
  • If you like Python, then there's LuPy from divmod, which is a python port of Lucene.

    And if you've ever wanted to create a personal proxy server that gives you a searchable database of your history and bookmarks, then you can do that too, just like I did: http://www.suttree.com/code/pps/ [suttree.com]
    • Re:PyLucene (Score:2, Informative)

      by khanyisa ( 595216 )
      Much better than LuPy is PyLucene [osafoundation.org] which uses the actual Lucene libraries compiled with gcj and wrapped with SWIG, thus giving you Python beauty with Lucene performance...
      • There are also python wrappers for clucene. I was not able to get gcj to work on my system (FreeBSD - even "hello world" did not run and I don't know why), while clucene works. Well sort of, it is clearly alpha, but good enough.

  • So here you have a free alternative to proprietary search engine indexing software that allows you to run an intranet with MS Office (as well as PDF, HTML, text, etc.) files on a non-MS web platform (it's "free" except insofar as Java itself is not "free"). In truth, the document parsers are external to Lucene, but they do work together, plus Lucene itself is a solid piece of work. Also, Lucene itself is just an indexing engine - the other plumbing and connections of a full "search engine" have to be cons
  • by MarkWatson ( 189759 ) on Wednesday August 24, 2005 @05:04PM (#13392918) Homepage
    Also, I wrote a DevX article on Lucene:

    http://www.devx.com/Java/Article/27728/0 [devx.com]

    Lucene is so well documented and simple to use that I am surprised that this subject would fill an entire book :-) Just kidding.

    Lucene can be used as is, or you can extend it with your own document type handlers, etc.

    As a programmer, I way prefer dynamic languages like Common Lisp, Ruby, Python, Smalltalk, etc. However, one of the things that keeps me firmly in the "Java camp" is the great free infrastructure software tools (like Lucene, Tomcat, JBoss, etc.) As a programming language, Java is kind-of weak.
    • As a programming language, Java is kind-of weak.

      Java is anything but weak.
    • Ah yes, the dynamic languages. Those are fun to debug. This variable is what again? What methods can I call on it? The IDEs for those must be excellent. Having only seen Ruby on some slides and Python in Gentoo scripts - and, well, that's it of the 4 you mention - how are the IDEs? I've been submerged in JavaScript land lately, and the dynamic nature of the language (coupled with the buggy nature of the sandbox) is just a joy to work with.
  • Lucene rocks! (Score:2, Informative)

    by swf ( 129638 )
    Lucene is a pretty amazing piece of software. Lucene is to text indexing what Postgres is to relational databases. The API is simple, and though many people have reservations about Java, it is very, very fast. I've written Lucene apps that could perform queries in 40-60ms that would take a relational database up to 20 minutes to perform on the same hardware. I've found it to be even a few orders of magnitude faster than Oracle text indexing.

    And you can index pretty much anything you want, so long as you can
  • I've personally used DotLucene [dotlucene.net], which is the .NET version of Lucene.

    I used it to index a fairly complicated ASP.NET portal site on which there was little or no static content and all content was secured using a custom implementation of ACLs. The ASP.NET application allowed you to run mini-applications within it, called Portlets.

    These portlets had very complex security rules. For instance, you could say that certain users could click this button, while others could not. Certain users can view this portlet p
  • A year ago I was looking at search engine software and search query parsers. I didn't want to mess with setting up Lucene on Java. I found another tool called Xapian [xapian.org] which compiles on Linux from C and has bindings for PHP, Perl, and other languages. I've found it to be fast and stable. The documentation is sort of spotty but the guys on the mailing list are great.
  • ...which the author admits up-front that he has 500 bookmarks. The solution to our bookmarked memories, is not another search engine. Silly to think that any search engine could come close to having our every bookmark in memory.

    del.icio.us, the abstraction, is half the answer. Apple's "iDrive" is the technological half. What we all need is a *follow-me* resource available anywhere, anytime that is totally abstracted above the hardware layer.

    iMarks, personal bookmarks, that load on launch. An open stand
  • Hello from Otis, one of the co-authors of Lucene in Action. It is interesting the book review starts with a problem with bookmarks in the browser, because I run Simpy [simpy.com], a fairly popular social bookmarking service. The reason I started the service a few years back was because with a few keywords + search I could locate my bookmark far more easily and much faster than traversing my bookmark folder hierarchies.

    Anyhow, I just wanted to connect these 3 islands - Lucene in Action + bookmark problem + Simpy. I'l
  • by g_lightyear ( 695241 ) on Thursday August 25, 2005 @04:40AM (#13396048) Homepage
    We moved most of our searching out of SQL and into Lucene for a variety of reasons; and this book has been useful, not just in figuring out good ways of writing those queries that maintain microsecond response times (which Lucene is absolutely brilliant at), but...

      - Good ways of doing batch indexing operations
      - The purpose of the compound document format
  - How to generate explainers for searches (see the sketch after this comment)
      - Field-specific handling, and how to do it well
      - Ideas like metaphone replacement (soundex) and use of WordNet to integrate a synonym database into search queries
      - When to use CachedFilters to remember complex filters
      - Ideas for how to build "Things Like This" lists
      - Ideas on autocategorisation and geographic searches
      - Named Entities and LingPipe - making the search system recognize "proper names" for things
      - NGRAM recording to gauge word frequency and search terms to detect misspellings and offer alternate searches ("Did you mean XXX"?)

    etc.

    If you're building a search engine, this isn't just a useful resource on implementation - you probably don't need a book for that. What it is brilliant at is providing a lot of ideas that can take you to the next stage - how to build something really cool with your information, and not just a dumb text field search.

    For that alone, the book was worth the purchase price, for me. It's now well annotated, and the back pages are full of references to ideas that can be used in our own implementation, and the page numbers to use to get there.

    Highly recommended for anyone who needs something more than what a Google search of your site provides.
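
To make the "explainers for searches" item in the comment above concrete: Lucene can describe how each hit's score was computed. A minimal sketch, assuming the Lucene 1.4-era API and a hypothetical existing index:

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Hits;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;

public class ExplainSketch {
    public static void main(String[] args) throws Exception {
        IndexSearcher searcher = new IndexSearcher("site-index");  // hypothetical index path
        Query query = QueryParser.parse("lucene indexing", "contents", new StandardAnalyzer());

        Hits hits = searcher.search(query);
        for (int i = 0; i < hits.length() && i < 3; i++) {
            // Explanation breaks each score down into its term-frequency/idf/boost components
            System.out.println(searcher.explain(query, hits.id(i)));
        }
        searcher.close();
    }
}
```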
  • I recently worked at a company where one of our web applications required a fast search capability of thousands of documents, and the alpha programmer chose Lucene instead of trying to make it work with a relational database. I was very impressed with Lucene's speed and efficiency.

"Hello again, Peabody here..." -- Mister Peabody

Working...