Programming Collective Intelligence

Joe Kauzlarich writes "In 2006, the online movie rental store Netflix offered a $1 million prize to whoever could write a movie recommendation algorithm that offered a ten percent improvement over their own. As of this writing, the intriguingly named Gravity and Dinosaurs team holds first place by a slim margin of .07 percent over BellKor, their algorithm an 8.82 percent improvement on the Netflix benchmark. So, the question remains, how do they write these so-called recommendation algorithms? A new O'Reilly book gives us a thorough introduction to the basics of this and similar lucrative sciences." Keep reading for the rest of Joe's review.
Programming Collective Intelligence
author: Toby Segaran
pages: 334
publisher: O'Reilly Media, Inc.
rating: 9/10
reviewer: Joe Kauzlarich
ISBN: 9780596529321
summary: Introduction to data mining algorithms and techniques
Among the chief ideological mandates of the Church of Web 2.0 is that users need not click around to locate information when that information can be brought to the users. This is achieved by leveraging 'collective intelligence,' that is, in terms of recommendation systems, by computationally analyzing statistical patterns of past users to make as-accurate-as-possible guesses about the desires of present users. Amazon, Google and certainly many other organizations, in addition to Netflix, have successfully edged out more traditional competitors on this basis, the latter failing to pay attention to the shopping patterns of users and forcing customers to locate products by trial and error, as they would in, say, a Costco. As a further illustration, if I go to the movie shelf at Best Buy and look under 'R' for Rambo, no one is going to come up to me and say that the Die Hard Trilogy now has a special-edition release on DVD and is on sale. I'd have to accidentally pass the 'D' section, looking in that direction, in order to notice it. Amazon would tell me immediately, without bothering to mention that Gone With The Wind has a new special edition.
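
(To make the 'computationally analyzing statistical patterns' part concrete, here is a minimal sketch, in the book's own language, Python, of the user-based collaborative filtering idea Segaran opens with: score an unseen movie by the ratings of users whose tastes correlate with yours. The toy data and names are invented purely for illustration; the book's own treatment is more thorough.)

from math import sqrt

# Toy dataset: user -> {movie: rating}. Invented for illustration.
ratings = {
    'alice': {'Rambo': 4, 'Die Hard': 5},
    'bob':   {'Rambo': 4, 'Die Hard': 5, 'Casablanca': 2},
    'carol': {'Rambo': 1, 'Die Hard': 2, 'Casablanca': 5},
}

def pearson(prefs, a, b):
    """Pearson correlation between two users over commonly rated movies."""
    shared = [m for m in prefs[a] if m in prefs[b]]
    n = len(shared)
    if n == 0:
        return 0.0
    sum_a = sum(prefs[a][m] for m in shared)
    sum_b = sum(prefs[b][m] for m in shared)
    sum_a2 = sum(prefs[a][m] ** 2 for m in shared)
    sum_b2 = sum(prefs[b][m] ** 2 for m in shared)
    sum_ab = sum(prefs[a][m] * prefs[b][m] for m in shared)
    num = sum_ab - sum_a * sum_b / n
    den = sqrt((sum_a2 - sum_a ** 2 / n) * (sum_b2 - sum_b ** 2 / n))
    return num / den if den else 0.0

def recommend(prefs, user):
    """Rank unseen movies by similar users' ratings, weighted by similarity."""
    totals, sim_sums = {}, {}
    for other in prefs:
        if other == user:
            continue
        sim = pearson(prefs, user, other)
        if sim <= 0:
            continue  # ignore dissimilar users
        for movie, r in prefs[other].items():
            if movie not in prefs[user]:
                totals[movie] = totals.get(movie, 0) + r * sim
                sim_sums[movie] = sim_sums.get(movie, 0) + sim
    return sorted(((totals[m] / sim_sums[m], m) for m in totals), reverse=True)

print(recommend(ratings, 'alice'))  # e.g. [(3.5, 'Casablanca')]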

Programming Collective Intelligence is far more than a guide to building recommendation systems. Author Toby Segaran is not a commercial product vendor, but a director of software development for a computational biology firm, doing data-mining and algorithm design (so apparently there is more to these 'algorithms' than just their usefulness in recommending movies?). Segaran takes us on a friendly and detailed tour through the field's toolchest, covering the following topics in some depth:
Recommendation Systems
Discovering Groups
Searching and Ranking
Document Filtering
Decision Trees
Price Models
Genetic Programming
... and a lot more

As you can see, the subject matter stretches into the higher levels of mathematics and academia, but Segaran keeps the book intelligible to most software developers, and the examples are written in easy-to-follow Python. Later chapters cover more advanced topics, like optimization techniques, and many of the more complex algorithms are deferred to the appendix.

The third chapter of the book, 'Discovering Groups,' deserves some explanation and may illustrate how the book can be of use in day-to-day software design. Suppose you have two sets of data related by a 'JOIN'; for example, certain customers may spend more time browsing certain subsets of movies. 'Discovering Groups' refers to the computational process of recognizing these patterns and partitioning the data into groups. In terms of music or movies, these groups would represent genres. The marketing team may thus become aware that jazz enthusiasts buy more music at sale prices than do listeners of contemporary rock, or that listeners of late-60's jazz also listen to 70's prog, or similar such trends.
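
(For the curious, here is a minimal sketch of one technique the chapter covers, k-means clustering, again in Python. The customer data is invented purely to show the mechanics:)

import random

# Each row is one customer's browsing counts in two genres,
# e.g. [jazz, prog] -- invented numbers for illustration.
customers = [[9, 8], [8, 9], [7, 9], [1, 2], [2, 1], [0, 2]]

def kmeans(rows, k=2, iters=100):
    """Assign rows to the nearest centroid, re-center, repeat until stable."""
    centroids = random.sample(rows, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in rows:
            nearest = min(range(k), key=lambda i: sum(
                (row[d] - centroids[i][d]) ** 2 for d in range(len(row))))
            clusters[nearest].append(row)
        new_centroids = [
            [sum(r[d] for r in c) / len(c) for d in range(len(rows[0]))]
            if c else centroids[i]
            for i, c in enumerate(clusters)]
        if new_centroids == centroids:
            break
        centroids = new_centroids
    return clusters

# Two groups fall out: heavy browsers of both genres, and light browsers.
print(kmeans(customers))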

The applications of the tools that Programming Collective Intelligence provides are certainly broader than my imagination can handle. Insurance companies, airlines and banks are all part of massive industries that rely on precise knowledge of consumer trends, and they can certainly make use of the data-mining knowledge introduced in this book.

I have no major complaints about the book, particularly because it fills a gap in popular knowledge with no precursor of which I'm aware. Presentation-wise, even though Python is easy to read, pseudo-code is more timeless and easier still. You can't cut and paste from a paper book into a Python interpreter anyway, so it may have been more appropriate to use pseudo-code in print and keep the example code on the website (I'm sure it's there anyway).

If you ever find yourself browsing or referencing your algorithms text from college, or even seriously studying algorithms for fun or profit, then whether I'd recommend this book depends on your background in mathematics and computer science. If you have a strong background in the academic study of related research, you might look elsewhere; but this book, certainly suitable as an undergraduate text, is probably the best one for relative beginners that will be available for a long time.

You can purchase Programming Collective Intelligence from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
  • by 4D6963 ( 933028 ) on Wednesday April 16, 2008 @11:46AM (#23092466)

    So, the question remains, how do they write these so-called recommendation algorithms?

    For now I'm more interested to know how they quantify these improvements.

    • by Otter ( 3800 ) on Wednesday April 16, 2008 @12:07PM (#23092752) Journal
      Let's say I have a dataset where 1000 people have each reviewed 20 movies. If I give you a set with five reviews blanked out for each person, how accurately can you predict them from the other 15?
    • by robizzle ( 975423 ) on Wednesday April 16, 2008 @12:13PM (#23092852)

      Which improvements? The Netflix competition?

      They basically have a large dataset consisting of (user, movie, rating) triples. They split this into two sets: from the smaller subset they removed the ratings and didn't release those to the public, and they didn't modify the larger subset at all. They had Cinematch make predictions on the smaller subset (without having been told the real ratings) and used this as the baseline. People who compete then make predictions on the missing data, and the percent improvement can be calculated as 100 * (1 - [Submission's Error] / [Cinematch's Error]).

      There are a number of ways to calculate the error, but for the Netflix competition they use RMSE (Root Mean Squared Error): take the sum of the squared differences between the predicted and the real ratings, divide it by the number of ratings, and take the square root.
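
      In code it's just a few lines. A sketch (my reading of the rules, with invented numbers, not official Netflix code):

      from math import sqrt

      def rmse(predicted, actual):
          """Root mean squared error over paired rating lists."""
          return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

      actual     = [4, 3, 5, 2, 4]            # held-out true ratings
      cinematch  = [3.8, 3.4, 4.1, 2.9, 3.6]  # baseline predictions
      submission = [4.1, 3.2, 4.6, 2.4, 3.9]  # a competitor's predictions

      improvement = 100 * (1 - rmse(submission, actual) / rmse(cinematch, actual))
      print(round(improvement, 2))  # percent better than Cinematch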

      Detailed information can be found on the Netflix Prize rules page [netflixprize.com] and there are a number of good posts on the forums as well.

    • by Gorobei ( 127755 ) on Wednesday April 16, 2008 @08:45PM (#23098930)
      For now I'm more interested to know how they quantify these improvements.

      Quantification is a fun field in itself, and by no means trivial. As other posters have noted, there are many leave-n-out approaches: basically, divide the dataset into a training set and a test set, and rank by how accurately the code predicts the test set given the training set.

      These types of tests are good in that they are easy for the judges and participants to understand. The problem, of course, is that over repeated trials, information about the test set leaks out through the scoring, and the participants slowly overfit their algorithms to the test set based on scoring feedback (in the extreme case, there is no training data, only test data, and the winning algorithms are just maps from matched test inputs to correct outputs).

      Even if you manage to ameliorate this problem (e.g. by requiring submission of a function that will be applied to an unknown data set to produce a set of predictions), there is still the risk that the high-scoring functions are not very useful (e.g. predicting someone's rating of "The Matrix" is easy and has a low RMS error, but do you even care about the error on most people's ratings of "Mr. Lucky," which most have never heard of?).

      So, to be really useful, you want your rating (objective) function to be weighted by usefulness from the point of view of your business (e.g. yes, everyone likes the current blockbuster, but will John Q. Random be happy getting "Bringing Up Baby" instead?). Here, "happy" is defined as maximizing profits for the firm :)

      So, you often offer a prize with a simple (but wrong) objective function. Then offer the winners a chance at real money if they work on the actual hard problems the firm is facing (this is what we do on Wall St, anyway ;)

  • by Jynx77 ( 974092 ) on Wednesday April 16, 2008 @11:50AM (#23092534)
    I was initially intrigued by recommendation algorithms. Sadly, it's easy to get them up to a certain point and then almost impossible to make them any better. At least for movies. Netflix rates almost everything between 2.5 and 4 stars. Movies it rates 1 or 2 stars, I wouldn't have considered watching anyway. It never rates anything 5 stars. And for things between 3 and 4 stars, I seem as likely to really like a 3-star item as I am to not really like a 4-star item. So why is Netflix paying a million bucks to change that 3 to a 3.1 or 2.9?
    • Because it does make a big difference when you scale the system up to millions of users.
      • by Jynx77 ( 974092 )
        Really? How so? We're not talking about money here or something tangible that really adds up.
        • Re: (Score:3, Insightful)

          Think of it more like marketing, because that's exactly what it is. They are basically showing you billboards of other movies you may have an interest in. This algorithm decides which billboards are shown to you. Now, if the algorithm is 0.1 percent better at deciding which billboards to show you, does that really matter to you as an individual? Not at all. Does it matter to Netflix across a userbase of millions of people? Absolutely. Hence this contest.
          • by Jynx77 ( 974092 )
            I could see your point if I were paying on a per-movie basis. I guess if everyone feels a tiny bit better about Netflix because its predictions are 0.1 or 0.2 stars better, it's a win for Netflix. I'm guessing there were better ways to spend the million and achieve that. Although, as someone else pointed out, they may never pay the $1M, and they are getting lots of free publicity.
    • Re: (Score:2, Insightful)

      I think there should be seven stars. This is an endless debate I know--which data entry metric to choose--but seven stars seem to provide meaningful choices, whereas five limits the field too much, and 10 choices make some of them functionally meaningless.

      Of course people who still decide to rate The Wedding Singer seven stars can throw the whole thing off, like on iTunes where *no* album scores under a four or a five. But that's the problem, isn't it: humans are entering these things. Not only do differenc

      • After they enter their star rating, you could give the person a list of similarly rated movies (by that person) that are currently inferred to be related, and ask "which movies did you love/hate for similar reasons?" If people took the question seriously, you would quickly build up a set of very good predictors, even on a person-to-person basis (look for shared actors/directors/screenwriters among their picks).
      • Of course people who still decide to rate The Wedding Singer seven stars can throw the whole thing off, like on iTunes where *no* album scores under a four or a five.

        The fallacy is allowing extreme votes to be worth more than moderate votes, because moderate votes are more likely to be accurate.

        It's better to use a simple up/down vote system. Everyone's vote is worth as much that way.
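
        One standard way to rank items under an up/down scheme (not from this thread, just a common approach) is the lower bound of the Wilson score interval, which keeps an item with 1 up / 0 down from outranking one with 95 up / 5 down:

        from math import sqrt

        def wilson_lower_bound(ups, downs, z=1.96):
            """Lower bound of the 95% Wilson interval for the 'up' fraction."""
            n = ups + downs
            if n == 0:
                return 0.0
            phat = ups / n
            return ((phat + z * z / (2 * n)
                     - z * sqrt((phat * (1 - phat) + z * z / (4 * n)) / n))
                    / (1 + z * z / n))

        print(wilson_lower_bound(1, 0))   # ~0.21: one thumbs-up proves little
        print(wilson_lower_bound(95, 5))  # ~0.89: consistent approval wins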

    • There was a fairly good Wired article on what they are trying to accomplish, and it has less to do with ranking and more to do with recommending movies for individuals.

      http://www.wired.com/techbiz/media/magazine/16-03/mf_netflix?currentPage=all [wired.com]

      The algorithm is meant to analyze your habits and then recommend the best movie for you. The interest to Netflix: if you can get more people interested in movies they haven't seen, they will rent more movies.

    • Re: (Score:3, Interesting)

      I was initially intrigued by recommendation algorithms.

      Me too. Last time this topic rolled around I took a brief look at the Netflix competition and was disappointed. The star rating system was limited but more importantly there was a remarkable lack of data. Many of the teams that edged out some improvement did so by importing lots of data from other sources - with lots of holes in that process - and trying to discern patterns from that.

      On the whole the exercise seems to be a variation of a couple

      • With fancy algorithms and math constructs being all the rage these days (dare I say a bit of a fad?) it behooves us to remember that they are far from the whole story. It helps to have some useful data with which to make connections. No matter how fancy the algorithm you aren't going to harvest rice in a desert.

        Sure, but that's why people import "lots of data from other sources," so why do you call that a bad thing? Yes, collecting more and better data is often more important than additional algorithm de

      • Any interesting non-trivial problem suffers from a lack of data. I agree, though, that a lot of the teams are copying the top teams and hoping to get lucky by adding incremental improvements.
        The winner of Netflix will probably be someone who takes the problem from a completely new angle. Eventually the increments will reach there, as new algorithms are tweaked and edited to reach the milestone, but I'm guessing that someone will come along and take the prize another way before t
    • So why is Netflix paying a million bucks to change that 3 to a 3.1 or 2.9?
      That's the clever part: they're not paying a million bucks, they're offering a million bucks to anyone who gets to 10%, which may never happen. And in the meantime they've gotten some better algorithms for free, as well as good publicity.
      • Re: (Score:2, Informative)

        by Jynx77 ( 974092 )
        I think they are paying 50K a year out to the top team. Not sure if that's got a time limit on it. I guess the pub is good.
    • I'm not sure how you equate a 10% accuracy improvement of "predicted like to actual like" to a 0.1-star delta on a 5-star system. On a 5-star system, surely each star is equivalent to 20% predicted like, so a 10% accuracy improvement would be a 0.5-star reduction in prediction-vs-actual mismatch.

      I think it's reasonable to expect that a 0.5 star accuracy improvement on a 5 star system would be noticeable by enough people (although not all) to make a difference - presumably resulting in better confidence in the
      • by Jynx77 ( 974092 )
        First, you can't rate something 0 stars, so there's only a true range of 4 stars. 4 * 0.1 = 0.4 which would be +/- 0.2. However, as I mentioned in my post, the vast majority of predictions (for me) are in the 2-4 star range. Hence, 2 * 0.1 = 0.2 is +/- 0.1 star. Based on what my perceived margin of error is, +/- 0.1 would probably not even be noticeable.
    • by in10se ( 472253 )
      The question isn't what the current rating is. That's just the average of everyone else's ratings. The recommendation system attempts to figure out if *YOU* would like it based on various factors. If their system is accurate, then they can suggest more movies to you that you will actually like. If they suggest more movies to you that you like, you will continue using their service, or perhaps upgrade your subscription so you can have more of those great movies at once.
      • by Jynx77 ( 974092 )
        "recommendation system attempts to figure out if *YOU* would like it based on various factors."

        Thank you, Captain Obvious!

        • by in10se ( 472253 )
          You are the one who posted the question. The question implies that you either:

          a.) do not understand the difference between a recommendation system and a ratings system
          -or-
          b.) do not know English well enough to coherently phrase a meaningful question

          My answer to you assumed "a". I'm sorry it turns out it was "b".

          The fact is, Netflix isn't trying to change a rating from 3.0 to 3.1 or 2.9. They just want to know if you will like that movie regardless of its average rating.
          • by Jynx77 ( 974092 )
            I suggest you re-read the posts in threaded mode. The first sentence of my post is "I was initially intrigued by recommendation algorithms." It's obvious that all subsequent use of the word "rates" applied to the recommendation engine. Only you seem to have trouble understanding that.

            What question did I post that you thought you were answering? The last sentence of your last post shows a complete misunderstanding on your part. Is English your first language? Based on your sig, I wouldn't have thou

  • Numbers? (Score:2, Informative)

    by drquoz ( 1199407 )
    The numbers in the summary don't match up with the numbers on Netflix's leaderboard [netflixprize.com]:

    BellKor: 9.08%
    Gravity/Dinosaurs: 8.82%
    BigChaos: 8.80%
  • How are they defining this 10% improvement? How do they judge it? And how can they measure it down to things like .07%? There have to be user test groups involved, and I can't believe they're that objective. A 10% increase in rentals, in click-throughs, in user agreement that the recommendations are helpful? What?
  • by Animats ( 122034 ) on Wednesday April 16, 2008 @11:57AM (#23092618) Homepage

    There are now 35535 entries in the Netflix competition. If they all used roughly the same algorithm, with some randomness in the tuning variables, we'd expect to see results about like what we've seen. I think we're looking at noise here.

    The same phenomenon shows up with mutual funds. Some outperform the market, some don't, but prior year results are not good predictors of future results.

    • Re: (Score:3, Insightful)

      by CastrTroy ( 595695 )
      But the teams that are good continue to refine their algorithms and do better and better. The top teams continue to be at the top over the life of the competition. Also, you can't compare this to the stock market. If company A is doing well now, there is no guarantee that they will still be doing well in 2 or 3 years. However, if you liked a movie, you will probably always like the movie. Sure, tastes change, but a lot less than the stock market.
      • From Netflix's perspective, it doesn't matter whether I liked it or not - it matters that I rented it.
        • Yes, but if they keep recommending movies you don't like, you may stop renting movies altogether.
        • From Netflix's perspective, it doesn't matter whether I liked it or not - it matters that I rented it.

          Netflix is a subscription, all-you-can-eat service. So they would most prefer it if you got a large plan and never used it. Since the only thing that keeps you renewing your subscription is your enjoyment of the movies, and since it costs them money every time you rent a movie, they have a vested interest in trying to maximize enjoyment per movie.

          Actually, they'd probably rather you really enjoy 1/3 of the

          • Re: (Score:3, Insightful)

            by SQLGuru ( 980662 )
            Actually, I don't think they care whether you like the movie or not... I think the point is to maximize the movies out to subscribers and minimize the movies stored in a warehouse. If I have 1,000 movies in inventory and only 100 are "active", I have 900 movies taking up space. I also have customers who are waiting on one of the 100 movies to become available so they can watch it. If I recommend to you one of the 900, you get to watch a movie while waiting for one of the 100 popular titles, which means y
    • Re: (Score:3, Informative)

      by glyph42 ( 315631 )
      You should read the competition rules. The test set is so enormous that you would need 2^something_huge entries to see the results we've seen based on randomness. I did a back-of-the-envelope calculation at the beginning of the competition to see if a random search would be feasible to win the prize, and it's not. Not in a million years. Literally.
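
      (You can convince yourself with a toy simulation: RMSE over a large test set concentrates so tightly that random predictors are statistically indistinguishable from each other, let alone 8-9% better than a real baseline. Invented data, not the actual Netflix set:)

      import random

      random.seed(0)
      N = 100_000  # the real qualifying set was of this order of magnitude
      actual = [random.randint(1, 5) for _ in range(N)]

      def rmse(pred):
          return (sum((p - a) ** 2 for p, a in zip(pred, actual)) / N) ** 0.5

      # Fifty purely random predictors: min and max barely differ, so even
      # tens of thousands of random entries could not fluke a large gain.
      scores = [rmse([random.uniform(1, 5) for _ in range(N)]) for _ in range(50)]
      print(min(scores), max(scores))
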
  • I bought this book (Score:5, Informative)

    by iluvcapra ( 782887 ) on Wednesday April 16, 2008 @12:03PM (#23092698)

    I was at the Borders and was looking for something to pass the weekend, and I'd been doing some sound effects library work, so I took a look at this.

    It has a lot of statistics; it's essentially a statistics-in-use book, with code examples in Python for all of the algorithms. That said, it makes all of the topics very accessible, proposes many different ways of solving different wisdom-of-crowds type problems, and gives you enough knowledge that you'd be able to hear someone pitch you their dataset and say "Oh, you wanna do full-text relevance ranking," or "You need a decision tree for that," or "You just want the correlation." The book very much has a statistics-as-swiss-army-knife approach.

    Also, I'm not Pythonic, but I was able to translate all of the algorithms into Ruby as I went, even turning the list comprehensions into the Rubyish block/yield equivalents, so his style is not too idiomatic.

    • Re: (Score:3, Informative)

      by StarfishOne ( 756076 )
      Very nice summary! I own the book and I must say that it's very nice and accessible.

      The examples are practical and described quite well, even if one's math skills are not that great.

      And the examples in Python almost read like pseudo-code; even if one has little to no Python skill, the language is not a huge barrier.

      5 stars out of 5!

      The reviews at Amazon are also quite good:

      http://www.amazon.com/review/product/0596529325/ref=pd_bbs_sr_1_cm_cr_acr_txt?_encoding=UTF8&showViewpoints=1 [amazon.com]

      23 ratings
      • Yeah, good reviews there.

        A point the reviewers make that I didn't make very clearly is that the book does have a bunch of statistics, but it also has neural networks and a bunch of other stuff more along the lines of "machine learning." One of the reviewers said it was the "best book on machine learning ever written," which may be true, but if and only if you're not a theorist or academic computer scientist.

        • Yes, there are statistics, neural networks, genetic algorithms, clustering/distance measures, etc.

          I might call it "the best PRACTICAL/APPLIED book on machine learning ever written". :)

          For a more theoretical approach, this book is quite nice: Machine Learning, Tom Mitchell, McGraw Hill, 1997.
          ( http://www.cs.cmu.edu/~tom/mlbook.html [cmu.edu] )

          (Btw: great signature. :))
  • The problem is where you apply your algorithm. If you wait until they are paying for their items (as at Amazon), you can add to the shopping cart "the people who bought this book also bought this one," or "we have a sale: two books, one of which you have, plus this one, for less"...

    This can only be done with a shopping-cart style, whereas Netflix has to wait for them to select their movie before it can recommend anything. Seriously, they should partner up with Amazon:
    the people who rented this movie from Netf
  • "As of this writing" (Score:3, Interesting)

    by Anonymous Coward on Wednesday April 16, 2008 @12:37PM (#23093194)
    When was this written? According to the leaderboard, http://www.netflixprize.com//leaderboard BellKor is leading by 0.26 and has been leading for several months.
  • Among the chief ideological mandates of the Church of Web 2.0...
    Shut. The. Fuck. Up.

    Seriously. It's a trend to create websites with more dynamic and shared content. That's it. No church, no ideology, no 2.0.
  • I've read this book, and let me say I found it to be a superb introduction to the topic. It teaches you different methods applicable to a lot of different situations. In fact, after reading it, I decided to build my own social news site [ffloat.it] based on user recommendation. However, I had to research the field a lot more before coming up with a good and fast algorithm. That's the only flaw I found in the book: all the algorithms are poorly implemented (although this may be for the sake of clarity).
  • I came across this book browsing through Safari Books Online's titles, and was almost halfway through the book before I was able to get hold of an actual copy. While the main focus of the book is on data mining (definitely not only recommendation algorithms; it also shows how Google's PageRank algorithm works, how to mine user data from Facebook, how to write matching algorithms, etc.), it provides a good introduction to pattern recognition in general. It shows you how to write a simple neural network in Python,
  • by wintermute42 ( 710554 ) on Wednesday April 16, 2008 @02:46PM (#23094758) Homepage

    The Netflix competition, in principle, is an example of an interesting class of prediction algorithms. There is a lot of good work in academia in this area and on the face of it one might be surprised that no one has beat Netflix yet.

    Unfortunately Netflix restricts the data that can be applied to prediction. You have to use their data which includes only movie title and genre. A much better job could be done if something like the Internet Movie Database were fused with the title selection information. This would allow the algorithm to predict based on actors, directors and detailed genre. For example, I see all movies directed by John Woo. Given that I've seen all of his movies, it's not hard to predict that I'm going to see his next movie.
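
    (As a sketch of what that fusion might look like, here's a hypothetical director-aware predictor in Python: score by the user's average rating for the movie's director, falling back to their overall average. The data is invented, and note that a reply below says the contest actually prohibits using IMDB's downloadable data:)

    history = [  # (title, director, rating) -- invented viewing history
        ('Hard Boiled', 'John Woo', 5),
        ('Face/Off', 'John Woo', 5),
        ('The Killer', 'John Woo', 4),
        ('Some Comedy', 'Someone Else', 2),
    ]

    def predict(director):
        """Average of this user's ratings for the director, else global average."""
        by_director = [r for _, d, r in history if d == director]
        pool = by_director or [r for _, _, r in history]
        return sum(pool) / len(pool)

    print(predict('John Woo'))     # ~4.67: strong history with this director
    print(predict('A. N. Other'))  # 4.0: falls back to the overall average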

    • see: http://developers.slashdot.org/article.pl?sid=08/04/01/189230 [slashdot.org]

      "A teacher is offering empirical evidence that when you're mining data, augmenting data is better than a better algorithm. He explains that he had teams in his class enter the Netflix challenge, and two teams went two different ways. One team used a better algorithm while the other harvested augmenting data on movies from the Internet Movie Database. And this team, which used a simpler algorithm, did much better -- nearly as well as the best
    • People easily beat Netflix (or did you mean "no one has beat the Netflix challenge yet"?). They just haven't beat the $1,000,000 mark yet.

      Does Netflix restrict what you can use in your algorithm now? I haven't checked the rules recently, but I know at first a lot of people were using IMDB and other sites for extra predictors.
      • As we have a team in the contest (first page of the leaderboard), I know that using IMDB's downloadable info is prohibited, due to a clause that states it cannot be used for commercial purposes. This is IMDB's rule; however, Netflix has raised no objection.

        Moreover, the problem with the Netflix dataset is that they have intentionally inserted misinformation into it, for whatever reason.

        Our answer was to have someone (read: me) comb over each of the 17,000 entries and screen for basic accuracy. For ins
    • by Jainith ( 153344 )
      I would agree that adding Actor, Director, Art director...grip...whatever is likely to be the "next big thing" in making movie picks more accurate.
  • Could you not just add an extra box in the rating section that asks for the customer's mood? Say a box that says "rate this film 1-5 stars," and below that a drop-down with the most common moods: happy, sad, angry, annoyed. It seems to me a big factor in how you rate a film is your current mood. If you're in a good mood you're more likely to be forgiving of a film; in a bad mood you're going to be critical. This extra information might help you determine the accuracy of a given rating. I'm sure a study could help det
  • All I know about these recommendation algorithms is that they're a bit crazy. I have had The L Word recommended because I liked Alias, 24, and Roswell.

    Of course maybe The L Word is about lesbian alien spies with super powers. Huh. I'm gonna go check it out.
  • I have also read Collective Intelligence. I think I enjoyed it significantly more than the Slashdot reviewer. Here is my review:

    ~~~~

    Have you ever wondered how:

    * Google comes up with its search results
    * Amazon recommends you books/movies/music
    * spam filters decide good from bad

    Well, Toby Segaran not only explains these topics and more in Collective Intelligence, but he does so in a way accessible to software developers t
