Men vs. Machines 252
FFriedel writes "In October classical chess world champion Vladimir Kramnik is scheduled to play Deep Fritz in Bahrain. Now Garry Kasparov, who lost his title to Kramnik in 2000, but is still ranked as the strongest player in the world, has announced that he will play the computer chess world champion Deep Junior in Jerusalem at almost exactly the same time. Both programs are distributed by ChessBase. In 1997 Kasparov lost his famous match against Deep Blue."
He Should Just Take up Go. (Score:2)
BlackGriffen
Re:He Should Just Take up Go. (Score:2)
BlackGriffen
For the chess nuts (Score:2)
It's been noted for years that one benchmark of a machine's ability to think intelligently is its ability to beat a grandmaster at chess. That goal has proved significantly harder to achieve than beating the Turing test. Now wait for a Go-playing computer, a harder benchmark still.
Re:For the chess nuts (Score:4, Insightful)
Re:For the chess nuts (Score:3)
Re:For the chess nuts (Score:5, Funny)
Some of them are pretty good at chess though.
Re:For the chess nuts (Score:3, Interesting)
Eh, kinda... (Score:5, Funny)
ALICE would probably make a good CEO, rather than a conversation tool.
CEOBot: What would you like to know?
Interviewer: What were your profits this year?
CEOBot: What would you like to know about our profits this year?
Interviewer: How much were they?
CEOBot: How much do you think they were?
Interviewer: Well, you claimed 22billion.
CEOBot: I'm afraid I really don't know anything about that. Would you like me to sing you a song?
-Jayde
How hard is turing test (Score:2)
In the modern version of the Turing test, i.e. IRC bots, most people are very easy to fool when they are not expecting it. However, fooling a discerning judge who is trying to tell human thought from canned waffle is still impossibly hard, in the 'we still have very little idea of how to do it' category.
Re:For the chess nuts (Score:2)
On the other hand, we barely have the remotest clue (and that's being generous) of how to create an artificial intelligence algorithm to simulate human conversation. (I would personally argue that the term "artificial intelligence" is more or less meaningless, which compounds the problem somewhat.) But, we can at least be sure that human-brain-compatible hardware can run it in real time.
Basically, the hardware and software problems for each problem are quite different, but both are still pretty hard.
Re:For the chess nuts (Score:2)
Which I think shows that we don't have the least idea how humans play chess or how they think. It's as if John Henry could still show up at a railroad cut in 2002 and have a fighting chance to beat a D-12 Cat!
If I were an AI or chess-playing-computer researcher, I would be ashamed to show my face in public!
sPh
Re:For the chess nuts (Score:2)
Calculation, yes. Storage, no. A human brain stores exabytes or more of information (in a lossy format, yes), much of it in very efficient indexes. John Henry could probably place a pin quicker than most machines (the accuracy of robot assemblers and the like relies on controlled conditions), though that probably wouldn't make him feel any better...
Re:For the chess nuts (Score:2)
Re:For the chess nuts (Score:1)
I disagree with your statement that "one benchmark of a machine's ability to think intelligently is to beat a grandmaster in chess". Simply put, chess can be reduced to a series of patterns and combinations that have very little to do with intelligence per se.
If a simple database search through millions or billions of records which returns matches can be termed "intelligent", then the current set of chess-playing computers are indeed intelligent. But humans play chess differently: there is a lot more intuition, and far less brute force, in how they evaluate a particular move.
As others have noted, the Deep Blue vs. Kasparov match was tailor-made by the programmers to defeat ONLY Kasparov. His past games and his playing style were analysed in depth and preprogrammed into the machine. If you like, call it the difference between rote learning and knowledge. What humans do with chess now is knowledge management; computers are STILL stuck in rote learning, and from what I know, it's unlikely that computers will make the step forward into true knowledge management in the near future.
Don't get me wrong, I share your excitement at another chance to see a computer vs. human chess match. But I can't understand why people use this as an indication of the advances made toward truly intelligent computers, when in reality it's nothing of the sort; just a combination of database technology, rule evaluation and faster processors.
Re:For the chess nuts (Score:3, Interesting)
This is a great point for debate, but I am of the opinion that the human brain is just a large collection of facts (a database), a really fast processor, and really efficient search algorithms. Original thought, I feel, works in a similar way to computers: generate all the possibilities, evaluate the outcomes, and choose the best one. We can do it tremendously better than machines, and that is why it appears to be original thought, but it is merely extrapolation from current rules.
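The "generate all the possibilities, evaluate, choose the best" model can be sketched in a few lines. This is a toy illustration only, not a claim about how brains actually work; the move list and the scoring rule are made up:

```python
def choose_best(state, generate_moves, evaluate):
    """Generate every legal option, score each one, and pick the best.

    A toy model of the 'generate and evaluate' view of decision-making:
    nothing here is original thought, just enumeration plus a scoring rule.
    """
    best_move, best_score = None, float("-inf")
    for move in generate_moves(state):
        score = evaluate(move)
        if score > best_score:
            best_move, best_score = move, score
    return best_move

# Toy example: 'moves' are plain numbers, and the made-up evaluation
# rule prefers values near 10.
moves = [3, 7, 12, 9]
best = choose_best(None, lambda s: moves, lambda m: -abs(m - 10))
```

The interesting part is that everything "intelligent" about the result lives in the evaluation rule, which is exactly the point under debate.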
Re:For the chess nuts (Score:3, Insightful)
Re:For the chess nuts (Score:2)
Random generation of hypotheses and discarding of faulty ones is easily mistaken for 'original insightful thought', especially since the discarded flawed ideas will likely never be talked about, making it seem as if the conclusion was arrived at out of the blue.
Further, there isn't anything inherently unrandomizable about paradoxes. Paradoxes are merely an indication of a failure to discard faulty random data, or a failure to generate an explanation inclusive of the data.
Re:For the chess nuts (Score:2)
If your original claim about the way the brain works is correct, then it would all be too deterministic to be able to doodle at random! Where is the randomness coming from? That is, shifting the problem of inspiration from the final work down to the doodling does not answer anything; it just pushes the genesis of the concept to a lower level.
And if you accept a random seed somewhere in the pipeline, then you can also view the brain not just as a database, processor and search algorithm, but also as a converter that channels the random seed into a useful result -- doodling in this case, chess intuition in the other. Which would bring the problem back to having to teach a computer to do something we currently have no mathematical concept of (the channeling).
Re:For the chess nuts (Score:2)
The basics of the channeling we can emulate in the form of neural networks. The human mind is a bit more complicated than a database, processor and search algorithm, so it would depend on how widely you define those concepts. Neural networks can fall under a definition of search algorithms, and teaching them isn't that complicated.
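As a minimal illustration of how uncomplicated "teaching" a network can be, here is a single perceptron learning the logical AND function by nudging its weights after each mistake. This is a toy far removed from anything like chess intuition:

```python
# A single perceptron learning the logical AND function: a minimal
# example of 'teaching' a network by nudging weights toward fewer errors.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # a few passes over the data suffice
    for (x1, x2), target in samples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out               # -1, 0, or +1
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

# After training, the perceptron reproduces AND on all four inputs.
predictions = [1 if w[0] * a + w[1] * b + bias > 0 else 0
               for (a, b), _ in samples]
```

A single perceptron can only learn linearly separable rules, which is part of why real networks (and anything resembling intuition) need many layers and far more data.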
Re:For the chess nuts (Score:2)
Of course, Escher's drawings themselves have been passed through a further sieve. While similar drawings have probably been generated since the invention of the crayon by countless children failing to portray a cube or a stair accurately, Escher's drawings passed through the sieve of popularization. The children and other artists producing similar things haven't been popularized, which again gives the appearance of something original and insightful. Still, despite appearances, it remains brute-force randomization passed through filters for meaning, rather than sudden insight.
Anyway, on the topic of chess I pretty much agree with you. I don't really find it very interesting either.
Re:For the chess nuts (Score:2)
How you think is affected by how you processed patterns in the past. If you are lazy or not curious, each incremental match is less important. If your pattern matcher matches lots of non-matches, you are also doomed: once you reach a critical ratio of false positives to true positives, the pattern matcher gets corrupted beyond repair.
I'm truly damaged beyond repair
Re:For the chess nuts (Score:2)
When Deep Blue can play chess and do those other tasks as well, we can talk about making an appointment for a Turing Test! Chess is about the easiest AI problem imaginable.
sPh
Deep, man. (Score:4, Funny)
"Kasparov would move Qe4 here, man."
"Whoa, deep blue, man."
"Hey guys, we need a name...for...hey!"
And thus it's perpetuated.
Re:Deep, man. (Score:1)
Re:Deep, man. (Score:1)
I imagine they did that in honor of Deep Blue, which ran on multiple processors. Deep Blue was called Deep Thought for a while, and then IBM "IBM-ized" it as Deep Blue.
Re:Deep, man. (Score:5, Informative)
Re:Deep, man. (Score:2)
Kasparov was the first of the international masters to beat Deep Thought. He was also the first world champion to be defeated by any computer... which at the time happened to be IBM's Deep Blue.
Re:Deep, man. (Score:2)
Re:Deep, man. (Score:3, Interesting)
One should expect this from slashdot I guess.
Re:Deep, man. (Score:1)
Re:Deep, man. (Score:2)
Once IBM got on the bandwagon, they named their machine Deep Blue (Big Blue, get it?) as a homage/spoof of the earlier effort.
And now the newer programs are Deep X, where X is whatever name you think is particularly witty. It's sort of a spoof of a spoof at this point, and largely beyond immediate appreciation by the average person. Sorta like how Japanese ships have "maru" in their names.
For more info on the history and nomenclature, look here [geocities.com].
Re:Deep, man. (Score:2, Funny)
What's the point? (Score:1)
The problem is that these machines are being programmed to play against 1 opponent, and are being fed data about that player's past games, habits, techniques...
Deep Blue won because it was programmed to defeat Kasparov, and only Kasparov.
Until a computer is programmed to accept a blind challenge without background information about their opponent, I will continue to be unimpressed.
Re:What's the point? (Score:3, Funny)
If you know the computer will know how you will play, you should play in a different way. But the computer will obviously know you will know it knows how you play, and thus expect this. As a result, you should alter your strategy back to your original. The computer will also realize you will do this though, so you should again try to alter your playing manner.
Re:What's the point? (Score:1)
Since they changed the program nightly to adjust for his play, he was not playing against the computer but against six hidden people.
Re:What's the point? (Score:3, Interesting)
There was a chess program for the VIC-20 that could whip my dad's ass every time. Machines have been whipping general players' asses for a very long time. My dad is really good, but for all that he is still an amateur and could never hope to make a showing in a real competition. It's only the great grandmasters that give the machines trouble; these grandmasters are several orders of magnitude better than amateur players like my father, and far better than most pros. It says a *LOT* that a machine is able to beat someone like Kasparov, even knowing his moves ahead of time.
It's true that the machine was made just to beat Kasparov, but that was probably due to a lack of programmer time. It could be programmed the same, and a Bobby Fischer module added, and a Karpov module and a Kramnik module and so on.
Actually you are wrong... (Score:1, Insightful)
Re:What's the point? (Score:1)
Re:What's the point? (Score:2)
>programmed to play against 1 opponent, and are
>being fed data about that player's past games,
>habits, techniques...
This gets posted at least 10 times whenever a story like this hits.
Programs like Fritz, Junior and even Deep Blue are tuned to perform as well as possible against a wide range of test opponents or known test positions.
You can't just feed the computer some games from its future opponent and have it magically adapt itself to them. About the only thing that is done in reality is to have the computer play into an opening that it is known to play well. For a human vs. computer match, this just means trying to get an open position where tactics become predominant over long-term strategy.
And that is independent of whether you're playing Kramnik, Kasparov or Anand. It's the same for any human opponent.
--
GCP
That seemed further than Third. (Score:1)
But to stay on topic, I think the most amazing thing about this is that, given the point-value systems which do exist in chess, or the simple idea of "how to win", the more advanced the technology gets, the better these programs are going to become. It won't be long before computers can efficiently run code that enumerates every possible ending of a game of chess from the opening moves, and then calculates which move is best based on how many endings turn out in favor of the computer. Where did that Fischer boy go, anyway?
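The "count how many endings favor you" idea can be demonstrated on a game small enough to enumerate fully. The toy game here (players alternately take 1 or 2 items from a pile; whoever takes the last item wins) is invented purely for illustration and is obviously nothing like chess in scale:

```python
def count_wins(pile, to_move, me):
    """Count terminal outcomes favorable to `me` under every line of play.

    Toy game: players alternately take 1 or 2 from a pile; whoever takes
    the last item wins.  Counting favorable endings, as the comment
    suggests, is cruder than minimax, but it shows the enumeration idea.
    """
    if pile == 0:
        # The previous mover took the last item and won.
        return 1 if to_move != me else 0
    return sum(count_wins(pile - take, 1 - to_move, me)
               for take in (1, 2) if take <= pile)

# From a pile of 4 with player 0 to move, score each first move by the
# number of complete games that end in player 0's favor.
scores = {take: count_wins(4 - take, 1, 0) for take in (1, 2)}
```

Here taking 1 (leaving a pile of 3) has the most favorable endings, which happens to coincide with the game-theoretically correct move; in general, counting endings and true minimax can disagree, which is one reason real engines search rather than count.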
Re:That seemed further than Third. (Score:1)
In re: your comments on the evolution of chess, you are probably right. Computers will become stupid fast and will be able to defeat any human (not just a specific target) through brute force. But chess will still be fun for humans, and will be an especially good tool for preparing young minds. Plus, there will be increased popularity in chess variants, such as Fischer random (http://www.everything2.com/index.pl?&node_id=110
Re:That seemed further than Third. (Score:1)
Kasparov vs. Deep Blue... (Score:1)
- A.P.
Just a matter of time... (Score:1)
Deep Blue = Unfair (Score:2)
But I do remember quite a few people criticizing the Deep Blue stunt because IBM trained Deep Blue by examining every Kasparov match on record. Kasparov had no idea what to expect, since Deep Blue had never played anyone else. Did Deep Blue ever play any other grandmasters?
Re:Deep Blue = Unfair (Score:2, Insightful)
At any rate, they could have provided Kasparov with a history of Deep Blue's games against other computers; they could also have provided Kasparov with Deep Blue's analyses of other match games. Either would have been easy to produce, and given Kasparov ample material to study.
I suspect Kasparov's arrogance led him not even to ask for such materials. He certainly didn't seem to take the match itself seriously (a mistake Kramnik is not repeating), and I don't recall hearing that Kasparov was explicitly denied them.
Re:Deep Blue = Unfair (Score:2)
Re:Deep Blue = Unfair (Score:1)
At the time of this match there was a split in the international chess community. Kasparov and a few of the other top-rated players had taken off on their own and recognized Kasparov as World Champion. FIDE said that he had abdicated and declared someone else champion. My theory is that IBM used this rift to dictate the terms of the match. Basically: if Kasparov didn't agree to play the six-game match without ever having seen a game by Deep Blue, they would put their millions of dollars of advertising behind another chess player and promote him as the chess champion in their commercials.
Re:Deep Blue = Unfair (Score:2)
So far, no interviewer I've seen has had the balls to ask the (IMHO) logical next question: is it really Kasparov who is such a great chess player, or is it specifically the combo of Kasparov plus the database laptop? Suppose Kasparov wins; can we really say that the human beat the machine? From the interviews, I get the impression that he wouldn't be half as strong if he didn't have his machines to fall back on, cyborg style.
who am i rooting for? (Score:2)
(Simpsons reference)
Re:who am i rooting for? (Score:1)
Well, I'd like to suggest, "GO STINKY!", or "Go Washing Machine!", but hey, that's just me and my waste of brain donated to the Simpsons.
Man-Canine World Championship (Score:1, Funny)
Although I am unranked, I'm not overly nervous as my dog Poo licks her butt. Unfortunately, our last match ended in a draw when Poo decided that my queen looked mighty tasty. Luckily, I was able to recover said queen from Poo's poo in the neighbor's yard. I hosed it down pretty well and we should be able to begin anew tomorrow.
Vaporware (Score:1)
Game Theory (Score:2)
Shall we play a game? (Score:1, Interesting)
I find it interesting... (Score:1)
Re:I find it interesting... (Score:2, Insightful)
If a human grandmaster were about to play Garry Kasparov, for instance, don't you think he'd study as much as he could about the way Kasparov plays? It's just that in this case the computer is able to forget the rest and focus _only_ on Kasparov's style. In any competition you would be foolish not to gain as much knowledge as possible about your competition.
As for having multiple teams to win the Super Bowl, ever heard of offense, defense and special teams? It's just an example of using the team that is most likely to win the play, or the game, or whatever; it just so happens that these teams are all part of a larger team.
It's called specialisation, and it's pretty much what enabled us (humankind) to give up nomadic existence and focus on doing wonderful things like making chess-playing computers and reading Slashdot instead of working.
Re:I find it interesting... (Score:2, Interesting)
Take for example the reports of "Fischer" on the Internet beating Nigel Short after giving away what amounts to a 10-move advantage. There is no way a human can give such a highly ranked player such an advantage in a blitz game and win so convincingly; it was obviously a bot running on the Crafty engine (or something similar), beating the crap out of Short.
So yes, there are general engines out there which are very highly rated and can beat 85% to 90% of grandmasters.
Apples and oranges.... (Score:3, Interesting)
Re:Apples and oranges.... (Score:3, Insightful)
Archon is what you are talking about ... (Score:2)
Here's a site with a review [mameworld.net] and screen shots.
Someone posts a chess computer story... (Score:5, Insightful)
Well, here's a heads-up: that is exactly how human players prepare for matches against each other. They sit down and play through their opponent's previous matches, and try to find weaknesses and holes to use against them.
The point of all this is equally questioned. People seem to think that creating large expert systems is a done deal, and no more research needs to be done into how to construct programs that use a set of variables to give advice, in this case which chess piece to move. Again, here's a clue:
This kind of stuff is fundamental, basic research. Absolutely vital and incredibly useful as we continue to learn about how to better realise and utilise computer technology.
Insert old saw about dogs walking here.
Chess-playing research seems to be a dead end (Score:3, Interesting)
That hasn't turned out to be the case. The search algorithms that chess-playing programs use don't appear to be of any great use for anything except playing chess (or closely related games like Go or checkers).
Personally, I want to see a computer kick Kasparov's and Kramnik's asses (though I'm unconvinced it's going to happen this time around, it certainly will eventually) so that chess players shut up about defending the honor of humanity or some such rubbish. Knowing a little about how chess-playing programs work, I feel about as threatened by the prospect that the world chess champion can be trounced by a computer as by the fact that in one second the PC I'm typing this at can do more arithmetic operations than I'll do in a lifetime.
Re:Chess-playing research seems to be a dead end (Score:2)
Give it the name of the player it's about to play. The software connects to a number of databases, finds the matches in some machine-readable code (yeah... well... we need that part first, I guess) and then "studies" them. Use pattern recognition and a neural net to learn how the opponent thinks, etc.
I think that would lend more credibility to the argument for computers as valid players as well. After all, that way they'd be doing their own research.
Of course, that could be the way it is now and I could be completely clueless.
Re:Someone posts a chess computer story... (Score:5, Insightful)
...and that is precisely the opportunity that was denied Kasparov. Deeper Blue and its handlers - especially Joel Benjamin - had years to dissect Kasparov's games, but Kasparov had no access to DB's oeuvre. That's not a level playing field.
Another aspect you've overlooked is that human preparation to play a particular opponent is usually on the order of weeks or months, and does not significantly sacrifice the preparer's ability to play other opponents. Even in the middle of preparing to play Kramnik or Anand, Kasparov could go to a tournament and beat just about anyone else. By contrast, DB was in preparation for years, and the result was so finely tuned toward playing Kasparov that DB would have fared very poorly in any top-level tournament involving anyone other than Kasparov. That kind of inflexibility is not a hallmark of intelligence, artificial or otherwise. What it indicates is that the basic methods were so old and so well understood that people were able to spend years just tuning the implementation.
Making a computer beat the world champion is a respectable feat. However, it's not even the highest goal in computer chess. Making a computer that could beat a series of opponents, without fundamental changes equivalent to a brain transplant between matches, would be more impressive. Making a computer that could win a 16-player round robin tournament against a whole field of top grandmasters - something Kasparov still does regularly, to this day - would be more impressive still. Making a computer that could play speed chess better than Anand or Hawkeye would be another worthwhile challenge in a different direction. Then there's Go, and then a bunch of other challenges, and then there's the real world. Spending years to create a program that can beat one player in one chess match under less-than-fair conditions is really a pretty low goal.
Re:Someone posts a chess computer story... (Score:2)
If the computer's preparation were comparable in duration and resources to the human's preparation, that would be fine. My point, though, is that in the particular case of Deeper Blue vs. Kasparov that was not the case. The computer had much more time to prepare for Kasparov than vice versa (he was kinda busy winning tournaments and such). Similarly, DB had many more of Kasparov's games to study than vice versa. It's not a problem that DB was allowed to prepare; it's a problem that Kasparov effectively was not.
Re:Someone posts a chess computer story... (Score:2)
False: players also need to talk, walk, remember other things, and do a bunch of other non-specialized activities. They need to be human. A computer, on the other hand, is "anything but a human". Why? Because there is NO limit to how much memory or processing power it can use. Perfect memory, perfect calculation.
So if you could build an infinite-memory, infinite-processing-power computer, you could just precalculate all possible outcomes of matches, say to a ply of 60 or more.
Is that computer smart?
Vision 1: To call a computer chess program "intelligent", you need to draw a line and state: "this much memory and this CPU should give you enough resources to beat any human". Anything else is just plain unfair, and it's no longer intelligence.
Vision 2: A chess program should be an entity, not a tuned bundle of knowledge, rules and algorithms. It should be able to learn from experience without human intervention (i.e. no specialized learning program; that should be tuned by the computer itself), it should be able to plan its own strategy, and it should train itself. That is: you teach it the chess rules, let it communicate (gather more data, a database if requested, etc.), and then the computer must do everything without interference.
Vision 1 is acceptable, but we'd like to see a computer beat a human under the much fairer Vision 2.
Re:Someone posts a chess computer story... (Score:2)
Maybe if you were, say, a programmer or a computer professional, or maybe even had read an IT industry magazine at some point you would understand a little about the fundamentals of this discussion.
This has nothing to do with "computers counting faster" and everything to do with expert systems, that is, programming computers to make "clever" decisions based on states. If this were just brute force, then why do you think it costs so much money to put together one of these systems? Because they have teams of programmers and serious hardware. More hardware than is needed for a brute-force approach, actually, so what's all the extra hardware doing? If you think this is all as easy as that one class you took in high school where you typed:
10 PRINT "Hello!"
20 GOTO 10
and then laughed in that odd, shrieking way you have, then you really should get hit with the clue stick.
Re:Someone posts a chess computer story... (Score:4, Insightful)
Deep(er) Blue used some special-purpose hardware, but Deep Fritz and Deep Junior don't. Multiprocessors are a commodity nowadays.
Deep(er) Blue's custom ASICs were basically there to make the brute-force approach go faster. They didn't implement some sort of expert system or neural net, they had little to do with sophisticated position evaluation, they were mostly just there to speed up the nuts-and-bolts operations of walking extremely large decision trees.
The scorn you heap upon this post's grandparent seems just a trifle misplaced, since you yourself seem to know little about the programs being discussed. They're a combination of chess-specific knowledge and fast implementations of fairly ancient algorithms, so they're pretty formidable opponents, but in terms of AI research they've progressed little beyond an early-to-mid-80s level. Nobody that I know who actually works in AI would say any different, either.
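For reference, the "fairly ancient algorithms" at the heart of these engines are essentially minimax search with alpha-beta pruning. A bare-bones sketch over an abstract toy tree (no chess specifics; the tree, depth and evaluation here are invented for illustration):

```python
def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    """Minimax search with alpha-beta pruning over an abstract game tree.

    `children(node)` yields successor positions and `evaluate(node)`
    scores leaves.  Pruning skips branches that cannot affect the final
    choice; special-purpose hardware just makes this walk go faster.
    """
    kids = children(node) if depth > 0 else []
    if not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                    # opponent will avoid this line
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break                        # we will avoid this line
    return value

# Toy two-ply tree: the root chooses a branch, the opponent then picks
# the worst leaf for us.  Leaves are plain scores.
tree = ((3, 5), (6, 9))
value = alphabeta(tree, 2, float("-inf"), float("inf"), True,
                  lambda n: list(n) if isinstance(n, tuple) else [],
                  lambda n: n)
```

The algorithm dates to the 1950s-60s; what changed between then and Deep Blue was mostly move ordering, evaluation tuning and raw speed, which is exactly the point being made above about the vintage of the AI involved.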
Re:evaluation function in Deep Blue hardware (Score:2)
Then I stand corrected. Nonetheless, I think even that still falls into the category of "making the brute-force approach go faster". The type of evaluation function involved, no matter how "sophisticated" it is in a certain context, is not truly sophisticated in the same manner as something that actually performs planning or learning functions. It's basically a calculator for a complex mathematical function, and is still driven more by the intelligence of the person who assigned weights to all of the positional factors than by any actual sophistication, as an AI researcher would use the term.
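A static evaluation function of the kind described, a hand-weighted sum of positional factors, might look like the sketch below. The feature names and weights are invented for illustration; they are not Deep Blue's:

```python
# A toy static evaluation: a weighted sum of hand-chosen positional
# features.  The 'intelligence' lives in the human-assigned weights.
WEIGHTS = {
    "material": 1.0,     # material balance in pawn-equivalents
    "mobility": 0.1,     # difference in legal-move counts
    "king_safety": 0.5,  # heuristic shelter-score difference
}

def evaluate(features):
    """Score a position from White's point of view; positive favors White."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

# Example: up a knight (+3 pawns), slightly less mobile, equal king safety.
score = evaluate({"material": 3.0, "mobility": -4.0, "king_safety": 0.0})
```

However elaborate the feature list gets, the function remains a fixed formula evaluated at the leaves of the search tree, which is why it counts as "making brute force go faster" rather than planning or learning.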
A great hollow victory (Score:2, Interesting)
And as for a computer beating a human? It's just not that interesting a problem anymore, especially since Ken Thompson (of UNIX fame) showed 20 years ago that brute-force search was the way to create a winning system against a human. Not very sporting. A great book on this was "Chess Skill in Man and Machine", edited by Peter Frey.
It's fun to watch humans race each other. It's boring to watch a human race a car. I think the same holds for humans, computers and chess competition.
Re:A great hollow victory (Score:1)
That really depends on where the human is standing at the time. I think with the person in the right position this could be far more interesting than any other competition I've ever seen.
Could Deep Junior be easily distributed? (Score:2)
Even if Garry Loses (Score:2)
Block Quote
"The six games will be played at the classical time control and the prize fund is that roundest of big round numbers, one million dollars. (Kasparov gets half a million up front and the other half is split 60/40 winner/loser. Ka-ching! Garry is definitely paying for dinner next time.)"
End Quote.
So Garry is getting at least $700,000 just for showing up. Man, I wish I were that guy.
Go Humans! Whoo-whee! (Score:1, Troll)
.
Could be great IBM PR (Score:2)
Kasparov's Secret Weapon (Score:3, Funny)
If that fails, he plans to challenge his opponent to a "Double or Nothing" drinking contest at a local bar.
Re:Kasparov's Secret Weapon (Score:2)
Aaaaawww yeah - it's chess Russian style!
Another article on Kasparov vs. Junior (Score:4, Informative)
NOT impressive (Score:2, Interesting)
IBM even trained Deep Blue specifically for Kasparov, but Kasparov never got a chance to play Deep Blue, so he could have no idea of the weaknesses in its game (e.g. positions not in its database, where it would have to waste time searching the move tree). That forced him to play very nonstandard games and use styles he is not used to.
To me, the fact that Deep Blue took Kasparov does not mean anything except that Kasparov is a truly amazing player. Who else could compete against a supercomputer programmed by computer scientists at a top corporation solely to beat him?
Even more amazing is that Kasparov only lost the series on a game where he was completely off.
Re:NOT impressive (Score:2)
Define real AI, then.
A chess player also explores a tree of variations in his head.
The difference is that humans have much more selective heuristics (which moves not to look at) and usually a better estimation of who is better (evaluation).
This is also exactly the area where computers have been making progress: more selectivity (so they can look deeper and at more crucial variations) and a better assessment of the board position.
--
GCP
Go (Score:4, Interesting)
I'm more interested in seeing someone write a strong Go [well.com] opponent. It's pretty obvious that chess is rather simple for a powerful computer to brute-force, but even the most sophisticated Go hardware and software can be beaten by an amateur player. The strongest Go programs rate at around the 8-kyu level (Go ratings start at 30-kyu for complete beginners and improve up to 1-kyu, then continue from 1-dan upward; professional players have their own 1-dan to 9-dan scale).
There have been cash awards (on the order of a million dollars in at least one instance) put out on the table for developers who could write a Go program capable of beating a certain level player. So far, nobody's succeeded. MindZine has a nice (albeit a bit dated) article [msoworld.com] explaining why this is.
When a computer can play a really strong game of Go, I'll be impressed.
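The usual back-of-the-envelope argument for why brute force runs out of steam on Go compares rough game-tree sizes. The branching factors and game lengths below are commonly quoted approximations, not exact counts:

```python
import math

# Rough game-tree size: (branching factor) ** (typical game length in plies).
chess_tree = 35 ** 80     # chess: ~35 legal moves per position, ~80-ply games
go_tree = 250 ** 150      # Go: ~250 legal moves per position, ~150-ply games

# How many orders of magnitude bigger is the Go tree?
ratio_digits = round(math.log10(go_tree) - math.log10(chess_tree))
```

By this crude estimate the Go tree is larger by a couple of hundred orders of magnitude, so the speedups that made Deep Blue possible don't even dent the problem; pruning and evaluation would have to carry nearly all the weight.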
Re:Go (Score:2, Informative)
Also, the rated level of Go programs should be taken with a grain of salt, because if you play a lot of games against them, even weaker players will discover the weaknesses of the program and exploit them (often playing unorthodox but still not bad moves does the trick), which always works because the programs don't learn from their mistakes.
For a program to beat a pro player, faster hardware won't be of any use. What's needed is a major breakthrough in AI software technology, which may not happen anytime soon. Also, the advantage of brute-force lookahead isn't that great for computers, since professionals routinely read 100 moves ahead (which makes some pro games very hard to understand for a lowly amateur like myself (~12 kyu)).
Re:Go (Score:2)
It now takes a tremendous effort and a good deal of luck for me to beat my computer at chess.
I think that it is only a matter of time and attention before Go programs become very strong. Chess programs have received far more programming effort than Go programs. Give it time and I believe that you WILL be impressed.
ChessBase link to NY Times article on Go (Score:2, Offtopic)
(For those who say "fuck that registration shit")
***************
Early in the film "A Beautiful Mind," the mathematician John Nash is seen sitting in a Princeton courtyard, hunched over a playing board covered with small black and white pieces that look like pebbles. He was playing Go, an ancient Asian game. Frustration at losing that game inspired the real Mr. Nash to pursue the mathematics of game theory, research for which he eventually won a Nobel Prize.
In recent years, computer experts, particularly those specializing in artificial intelligence, have felt the same fascination -- and frustration.
Programming other board games has been a relative snap. Even chess has succumbed to the power of the processor. Five years ago, a chess-playing computer called Deep Blue not only beat but thoroughly humbled Garry Kasparov, the world champion at the time. That is because chess, while highly complex, can be reduced to a matter of brute force computation.
Go is different. Deceptively easy to learn, either for a computer or a human, it is a game of such depth and complexity that it can take years for a person to become a strong player. To date, no computer has been able to achieve a skill level beyond that of the casual player.
The game is played on a board divided into a grid of 19 horizontal and 19 vertical lines. Black and white pieces called stones are placed one at a time on the grid's intersections. The object is to acquire and defend territory by surrounding it with stones.
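The rules in the paragraph above are simple enough to sketch in code. This toy snippet (my own illustration, not from the article; the board size and coordinates are arbitrary) shows the mechanic everything else flows from: a group of stones is surrounded when it runs out of adjacent empty points, its "liberties":

```python
# Toy flood-fill over a Go board: count the empty points ("liberties")
# adjacent to the group of same-colored stones containing (row, col).
# A group with zero liberties is captured.

def liberties(board, row, col):
    """Count empty points adjacent to the group containing (row, col)."""
    color = board[row][col]
    size = len(board)
    seen, libs, stack = set(), set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        if (r, c) in seen:
            continue
        seen.add((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                if board[nr][nc] == '.':
                    libs.add((nr, nc))       # empty neighbor: a liberty
                elif board[nr][nc] == color:
                    stack.append((nr, nc))   # same color: part of the group
    return len(libs)

# A white stone hemmed in on three sides by black has one liberty left.
board = [list(".....") for _ in range(5)]
board[2][2] = 'W'
for r, c in ((1, 2), (3, 2), (2, 1)):
    board[r][c] = 'B'
print(liberties(board, 2, 2))  # 1
```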
Programmers working on Go see it as more accurate than chess in reflecting the ineffable ways in which the human mind works. The challenge of programming a computer to mimic that process goes to the core of artificial intelligence, which involves the study of learning and decision-making, strategic thinking, knowledge representation, pattern recognition and, perhaps most intriguingly, intuition.
"A good Go player could make a move and other players say, `Yes, that's a good move,' but they can't explain to you why it's a good move, or how they even know it's a good move," said Dr. John McCarthy, a professor emeritus at Stanford University and a pioneer in artificial intelligence.
Dr. Danny Hillis, a computer designer and chairman of the technology company Applied Minds, said that the depth of Go made it ripe for the kind of scientific progress that comes from studying one example in great detail. "We want the equivalent of a fruit fly to study," Dr. Hillis said. "Chess was the fruit fly for studying logic. Go may be the fruit fly for studying intuition."
Along with intuition, pattern recognition is a large part of the game. While computers are good at crunching numbers, people are naturally good at matching patterns. Humans can recognize an acquaintance at a glance, even from the back. "Every Go book is filled with advice on patterns of different kinds," Dr. McCarthy said.
Dr. Daniel Bump, a mathematics professor at Stanford, works on a program called GNU Go in his spare time. "You can very quickly look at a chess game and see if there's some major issue," he said. But to make a decision in Go, he said, players must learn to combine their pattern-matching abilities with the logic and knowledge they have accrued in years of playing.
"If you watch really strong players," Dr. Bump said, "some seem to make fairly mundane moves, but at the end of the game they're ahead. Others do spectacular things."
One measure of the challenge the game poses is the performance of Go computer programs. The last five years have yielded incremental improvements but no breakthroughs, said David Fotland, a programmer and chip designer in San Jose, Calif., who created and sells The Many Faces of Go, one of the few commercial Go programs.
Mr. Fotland's program was the winner of a tournament last weekend in Edmonton, Alberta, that pitted 14 Go-playing programs -- including several from Japan -- against one another. But even The Many Faces of Go is weak enough that most strong players could beat it handily.
Part of the challenge has to do with processing speed. The typical chess program can evaluate about 300,000 positions per second, and Deep Blue was able to evaluate some 200 million positions per second. By midgame, most Go programs can evaluate only a couple of dozen positions each second, said Anders Kierulf, who wrote a program called SmartGo.
In the course of a chess game, a player has an average of 25 to 35 moves available. In Go, on the other hand, a player can choose from an average of 240 moves. A Go-playing computer would take about 30,000 years to look as far ahead as Deep Blue can with chess in three seconds, said Michael Reiss, a computer scientist in London.
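The figures in the two paragraphs above can be sanity-checked with a few lines of arithmetic. All the inputs (35 moves in chess, 240 in Go, 200 million positions per second for Deep Blue, a couple of dozen for Go programs) are the article's rough averages, and the resulting year count is very sensitive to the assumed depth, so treat this as an order-of-magnitude illustration rather than a derivation of the 30,000-year figure:

```python
# Back-of-the-envelope check of the branching-factor comparison.
chess_branching = 35
go_branching = 240
chess_positions_per_sec = 200_000_000   # Deep Blue, per the article
deep_blue_seconds = 3

# Positions Deep Blue can examine in three seconds:
budget = chess_positions_per_sec * deep_blue_seconds

# How many plies deep that budget reaches in chess:
depth, total = 0, 1
while total * chess_branching <= budget:
    total *= chess_branching
    depth += 1
print(depth)  # 5 plies fit in the budget

# Positions a Go program would need to reach the same depth,
# at roughly two dozen evaluations per second:
go_positions = go_branching ** depth
years = go_positions / 24 / (3600 * 24 * 365)
print(f"about {years:,.0f} years")  # on the order of a thousand years
```

Even with these conservative assumptions the gap is measured in centuries; deeper searches push it into the tens of thousands of years the article cites.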
If processing power were all there was to it, the solution would be simply a matter of time, since computers are growing ever faster. But the obstacles go much deeper. Not only do Go programs have trouble evaluating positions quickly, they have trouble evaluating them correctly.
Nonetheless, the allure of computer Go increases as the difficulties it poses encourage programmers to advance basic work in artificial intelligence. Graduate students produce dissertations on the topic, and a handful of researchers around the world devote much or all of their attention to it.
The game attracts people from all fields. For example, Chen Zhixing, a retired chemistry professor in Guangzhou, China, wrote a program called Handtalk, which dominated the computer Go field for several years. Dr. Bump, 50, whose field is number theory, has been playing Go for 35 years and taught himself the C programming language four years ago so he could write Go software. Mr. Fotland, 44, the creator of The Many Faces of Go, has been working on computer Go for 20 years and is chief technology officer at Ubicom, a small semiconductor company in Silicon Valley.
All are very strong Go players, and it takes a strong Go player to write even a weak Go program. Mr. Fotland, for instance, said he had written programs for checkers, Othello and chess. The algorithms are all very similar, and it is not difficult to write a reasonably strong program, he said. Each of the games took him a year or two to finish. "But when I started on Go," he said, "there was no end to it."
Mr. Fotland said that his Go programming was especially weak when he was a beginning player. "A lot of the stuff I wrote was just plain wrong because I didn't understand the game well enough," he said.
Even when skill develops, however, translating it into a program is not an obvious task. "There's a certain stream of consciousness when you're looking at positions," Dr. Bump said. "You might look at 10 variations, but you don't really know what's going on in the back of your mind. Even a strong player doesn't know how his mind works when he looks at a position."
"We think we have the basics of what we do as humans down pat," Dr. Bump said. "We get up in the morning and make breakfast, but if you tried to program a computer to do that, you'd quickly find that what's simple to you is incredibly difficult for a computer."
The same is true for Go. "When you're deciding what variations to consider, your subconscious mind is pruning," he said. "It's hard to say how much is going on in your mind to accomplish this pruning, but in a position on the board where I'd look at 10 variations, the computer has to look at thousands, maybe a million positions to come to the same conclusions, or to wrong conclusions."
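The mechanical counterpart of the pruning Dr. Bump describes is alpha-beta search, the standard trick in chess programs for discarding branches that provably cannot change the outcome. A minimal sketch over an arbitrary toy tree (my own example, not any real chess or Go position):

```python
# Minimal alpha-beta search over a toy game tree. Leaves are static
# evaluation scores; internal nodes are lists of children. Whole
# subtrees are skipped once they provably cannot affect the result,
# which is the machine analogue of the human pruning described above.

def alphabeta(node, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if isinstance(node, (int, float)):   # leaf: a static evaluation
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:            # remaining siblings can't matter
                break
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# A tiny two-ply tree: the maximizer ends up with the branch worth 6,
# and the last branch is cut off after its first leaf (1) is seen.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree))  # 6
```

The punchline for Go is that with ~240 moves per position, even aggressive cutoffs of this kind leave far too many branches standing.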
Dr. Reiss, who is the author of Go4++, a previous champion that placed second in last weekend's playoff, agrees with Dr. Bump. Dr. Reiss, who is an expert in neural networks, compares a human being's ability to recognize a strong or weak position in Go with the ability to distinguish between an image of a chair and one of a bicycle. Both tasks, he said, are hugely difficult for a computer.
For that reason, Mr. Fotland said, "writing a strong Go program will teach us more about making computers think like people than writing a strong chess program."
Dr. Reiss, who works on Go full time, said he would not think of devoting his time to any other problem. "It's a fundamentally interesting problem, but also it's just the right level of difficulty," he said. "If it was too easy it would have been solved already. If it was fantastically difficult, people might give up in frustration."
"I think in the long run the only way to write a strong Go program is to have it learn from its own mistakes, which is classic A.I., and no one knows how to do that yet," Mr. Fotland said. A few programs have some learning capabilities built into them.
Mr. Fotland's program, for instance, refers to a database of games played by strong players in deciding its moves, and Dr. Reiss's program employs a learning scheme for deciding which moves are interesting to look at.
Dr. Reiss said he had come up with an idea for a new Go program that would learn by analyzing professional games. But to pursue his idea would require too much work, he said, depriving him of time to continue making updates to his current program.
It seems unlikely that a computer will be programmed to drub a strong human player any time soon, Dr. Reiss said. "But it's possible to make an interesting amount of progress, and the problem stays interesting," he said. "I imagine it will be a juicy problem that people talk about for many decades to come."
Here's a real "what's the point" question: (Score:3, Insightful)
Chessbase has several chess programs for sale on their website. While quite inexpensive (~$45-$80 USD) they are advertised as being damn near impossible to beat. In fact, Chessbase's front page highlights one of the programs for sale kicking the ass of the entire Swiss Chess Team!
So why would you want to actually buy one of these programs? They aren't teaching programs. They aren't for a friendly game against the computer. They aren't open sourced (that I could see) so you can't study the algorithms. They are meant to destroy every human they come in contact with.
Does anyone outside of chess grand masters use these things? (How many grand masters are there, anyway?) I'm a very mediocre chess player myself, and if I want my ass handed to me in chess I'll go down to the local high school club and call them all smelly virgins before starting a game. At least I'll have some face-to-face interaction.
So what's the point?
Re:Here's a real "what's the point" question: (Score:3, Informative)
Well, Fritz (and other programs) have "analysis" modes, where you can load up a game you played against another person, and it can analyze the game in depth and point out any mistakes or missed opportunities for you. This feature alone makes it worth the $50 they charge you for it.
True, very few people can beat Fritz head-to-head, but it is a good way to strengthen yourself tactically - you make even a small tactical error, and Fritz will exploit it.
Does anyone outside of chess grand masters use these things?
Yes, I do (and I'm a 1200-level player, only been really playing for a few months now). Almost everybody else at my club who uses any sort of computer program (which is the majority of people there) uses Fritz too.
(How many grand masters are there, anyway?)
Several hundred worldwide AFAIK.
I'm a very mediocre chess player myself, and if I want my ass handed to me in chess I'll go down to the local high school club and call them all smelly virgins before starting a game. At least I'll have some face-to-face interaction.
Yeah, well. Computer chess is no substitute for the real thing. Of course, lack of smelly virgins (with the possible exception of yourself) is definitely a benefit.
Rest easy, humanity. (Score:2)
ai != chess champion (Score:2, Insightful)
In any event, I'm reminded of the checkers champion computer players... they always win. The real question is: how do they win? The answer: by storing a set number of moves in lookup tables. In other words, once a game gets to a certain point, the computer opponent looks up, in a database, a winning set of moves from that position. How is this AI? How is this 'machine bettering man' on a level playing field? It isn't.
Programming a computer to play until it reaches a position with a finite number of remaining moves, from which its database guarantees a win, is not artificial intelligence: it's stacking the deck.
If you really want to impress people, build a machine that has no idea what the rules are, but rather is taught the rules as it plays the game. If that machine can beat the best players in the world, then we have an argument for a machine intelligence that is both strategic and insightful.
Until that point, we have nothing but technical deception; technical deception in the same sense that ELIZA was programmed as an 'AI'. What it appears to be on the surface is not, in fact, what it actually is.
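For what it's worth, the lookup-table endgame scheme the parent describes is easy to demonstrate on a trivially small game. This sketch uses Nim (take 1-3 stones; taking the last stone wins) purely because its state space is tiny; real checkers and chess tablebases apply the same precomputation to astronomically larger spaces:

```python
# A toy "database of winning moves": precompute, for each Nim position,
# whether the player to move can force a win. During play the program
# would never search: it would just consult this table.

from functools import lru_cache

@lru_cache(maxsize=None)
def is_win(stones):
    """True if the player to move can force a win from this position."""
    # A move wins if it leaves the opponent in a losing position.
    return any(not is_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

# The "database" for the first dozen positions: every multiple of 4
# is a loss for the player to move, everything else is a win.
table = {n: is_win(n) for n in range(1, 13)}
print([n for n in range(1, 13) if not table[n]])  # [4, 8, 12]
```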
Re:ai != chess champion (Score:2)
Have you ever looked at chess books? Many of them have a little section where if you have two knights and a king, and he has one knight and a king, then here's how to win the game. That's the same thing.
Can there ever be a fair match? (Score:2)
When Kramnik offered to play Fritz, he said "Fine, give me a copy of the program and let me play with it before hand." The creators of Fritz freaked out and everybody said "But then you'll be able to find the weaknesses and just exploit those!" Well, that's not Kramnik's fault -- if he found a human player that always made the same mistake, he'd certainly take advantage of it every time, right?
The list of fairness questions goes on and on...since a computer can memorize openings, can't a human player be allowed to have his books with him? Since a computer doesn't need rest breaks, can't they be as short as possible? Are the programmers allowed to tweak the computer between every match, every move? Why?
So what I'm wondering is, what has to happen in these matches in order for both sides to consider them fair fights?
Re:Can there ever be a fair match? (Score:2)
I spoke to the author of Fritz at the recent World Computer Chess Championships. I'm pretty certain he did not freak out; anyone who has any experience at all of computer chess knew that any strong player would ask for this in the wake of the Deep Blue II match against Kasparov. He was absolutely delighted to have the opportunity to play.
I think that all that needs to happen is that both sides agree with the conditions. I don't think it's meaningful to try to make the conditions for both sides identical in this sense. When people ask "are chess programs as strong as the strongest grandmasters?", they mean chess programs as they are normally run, not some subset based on how humans play chess.
Re:Can there ever be a fair match? (Score:2)
Fair enough, my word choice was poor. I just remembered there being some sort of controversy and a bunch of people saying that getting a copy of the program was a horrible thing to ask for, because weaknesses could be discovered and exploited.
He was absolutely delighted to have the opportunity to play.
Was he ever given a copy to play with? How was the question resolved?
Re:Can there ever be a fair match? (Score:2)
Yeah, there was some debate about this on the various computer chess discussion boards. The programming team will be able to change the opening book during the match. I'm not sure about changing parameters to alter its style of play, but I assume that is allowed too.
I think the agreement was that it had to be made available one month (or maybe two or three months, I can't remember) before the match. So it hadn't yet been shipped, but that was going to happen. I was a bit surprised to hear that Kramnik will be able to play with it on some sort of 8-processor box (similar to the one Fritz will use) before the match.
Re:Can there ever be a fair match? (Score:2)
When Kasparov asked for records of Deep Blue's games to study, he was told no
Not exactly true. The agreement between Kasparov and IBM was that IBM would have the records of all the public games Kasparov had played (which he provided), and Kasparov would have the records of all the public games Deep Blue had played. Unfortunately, Kasparov overlooked the fact that Deep Blue hadn't played any public games, so there were no records! He wasn't turned down; he just wasn't thinking when he agreed to the terms of the game.
When Kramnik offered to play Fritz, he said "Fine, give me a copy of the program and let me play with it before hand." The creators of Fritz freaked out and everybody said "But then you'll be able to find the weaknesses and just exploit those!"
I highly doubt this. Not the least since Fritz is a retail product which you can buy. Want a copy? You can buy it here [chessoutpost.com]. And a steal at only $47.50.
Re:Can there ever be a fair match? (Score:2)
Moreover, he didn't want just a copy of the program, he also wanted a copy of the HARDWARE it was going to be running on.
There's a hell of a difference between Fritz on a 1ghz pc and Deep Fritz on an 8-way xeon box with 4 gigs of ram and the complete tablebases for all endgames with 5 or fewer pieces on the board.
Re: (Score:2)
Deep (Score:2)
Re:Any word yet... (Score:2)
Re:Machine vs Machine (Score:2)
In the latest World Computer Chess Championship (July 2002) Junior won in a tiebreak over Shredder.
Fritz did not even make the top 3. (They participated under the name Quest to limit the damage.)
--
GCP