Reading Guide To AI Design & Neural Networks? 266

Raistlin84 writes "I'm a PhD student in theoretical physics who's recently gotten quite interested in AI design. During my high school days, I spent most of my spare time coding various stuff, so I have a good working knowledge of some application programming languages (C/C++, Pascal/Delphi, Assembler) and how a computer works internally. Recently, I was given the book On Intelligence, where Jeff Hawkins describes numerous interesting ideas on how one would actually design a brain. As I have no formal background in computer science, I would like to broaden my knowledge in the direction of neural networks, pattern recognition, etc., but don't really know where to start reading. Due to my background, I figure that the 'abstract' theory would be mostly suited for me, so I would like to ask for a few book suggestions or other directions."
  • Russell & Norvig (Score:5, Interesting)

    by Gazzonyx ( 982402 ) <scott,lovenberg&gmail,com> on Tuesday December 02, 2008 @07:05AM (#25957517)
    In my AI class, last semester, we used Stuart Russell and Peter Norvig's Artificial Intelligence: A Modern Approach, 2nd Ed. It's fairly dry, but good for theory nonetheless. If you're a physics geek, it should be right up your alley; they approach everything from a mathematical angle and then have a bit of commentary on the theory, but never seem to get to the practical uses for the theory.

    If you're in the US, send me an email and I'll send you my copy. They charge an arm and a leg for these books and then buy them back for 1/10 the price. I usually don't even bother selling them back.
  • Re:AIMA (Score:3, Interesting)

    by xtracto ( 837672 ) on Tuesday December 02, 2008 @07:25AM (#25957619) Journal

    I must second that; the Russell and Norvig book is one of the most important books in the field.

    I would also recommend:

    Artificial Intelligence: A New Synthesis [google.com] by Nils J. Nilsson [wikipedia.org], who is considered one of the founders of AI.

  • by Gearoid_Murphy ( 976819 ) on Tuesday December 02, 2008 @07:41AM (#25957697)
    Be careful before committing to a large-scale neural network project. Aside from the intuition that the brain is a massively interconnected network, no one is really sure which aspects of neural network functionality are necessary for intelligence. My advice is to spend time coming to terms with the abstract nature of intelligence rather than coding up elaborate projects. This link [uh.edu] is a philosophical discussion of directed behaviour which I found quite interesting (if a bit vague, which is the mark of philosophy). Also, as you become familiar with the literature, you will see many examples of algorithms which claim to model certain aspects of intelligence. These algorithms work because they have a reliable and unambiguous artificial environment from which they draw their sensory information. The problem with practical artificial intelligence is that the real world is extremely ambiguous and noisy (in the signal sense). Therefore the problem is not creating an algorithm which can emulate intelligent behaviour, but taking the empirical information of the sensory input and producing from that data a reliable abstract representation which is easily processed by the AI algorithms (whatever they may be: neural networks, genetic programming, decision trees, etc.). Good luck.
  • Re:PDP (Score:4, Interesting)

    by babbs ( 1403837 ) on Tuesday December 02, 2008 @07:52AM (#25957757)
    I prefer James Anderson's "An Introduction to Neural Networks". I think it is better suited for someone coming from the physical, mathematical, or neuro- sciences.
  • Cognitive Psychology (Score:3, Interesting)

    by tgv ( 254536 ) on Tuesday December 02, 2008 @07:58AM (#25957783) Journal

    I would strongly recommend starting with a textbook on Cognitive Psychology, or reading one in parallel. AI tends to overlook the fact that intelligence is a human trait, not the most efficient algorithm for solving a logic puzzle. Anderson's book can be recommended: http://bcs.worthpublishers.com/anderson6e/default.asp?s=&n=&i=&v=&o=&ns=0&uid=0&rau=0 [worthpublishers.com].

  • by Viol8 ( 599362 ) on Tuesday December 02, 2008 @08:10AM (#25957853) Homepage

    .. as applied to normal computers. In this case it's simply speeded-up serial computation, i.e. the algorithm could be run serially, so Programming Erlang is irrelevant. With the brain, parallel computation is *vital* to how it works: it couldn't work serially, because some things MUST happen at the same time, e.g. different inputs to the same neuron. So studying parallel computation in ordinary computers is a complete waste of time if you want to learn how biological brains work. It's comparing apples and oranges.

  • Re:PDP (Score:2, Interesting)

    by kahizonaki ( 1226692 ) on Tuesday December 02, 2008 @08:11AM (#25957859) Homepage
    The great thing about the PDP books is that they make almost NO assumptions about the reader's background. There's no code, a bunch of pictures, and something in there for everyone. Each chapter is written with a specific goal in mind, and by leaders in the field: there are chapters on the mathematics of the networks, their dynamical properties (i.e. how they can be thought of as Boltzmann machines), as well as lots of ideas for applications and specific studies of how real experiments worked. In addition, of course, there are the chapters which actually introduce the different types of networks, and there are equations (and appendices of equations, in case one likes them even more) which can be ignored if one wishes. Overall, in addition to being an interesting read in general, by offering the opportunity to pick and choose what one's interested in after reading the initial bit, these books are extremely dynamic and I recommend them strongly. Not to mention you can buy the full set in hardback used (off Amazon or wherever) for ten dollars (what a deal!).
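(A minimal illustration, not from the PDP books themselves: the simplest network that literature builds on is the single-layer perceptron. The Python sketch below trains one on logical AND with the classic perceptron rule; the data, learning rate, and epoch count are toy values chosen for the example.)

```python
# Classic perceptron learning rule on a toy, linearly separable
# problem (logical AND). XOR, famously, is NOT linearly separable --
# the limitation Minsky and Papert analysed in "Perceptrons".

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with targets in {0, 1}."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - y                      # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
# After training, all four AND cases are classified correctly.
```

Because AND is linearly separable, the perceptron convergence theorem guarantees the rule settles on a correct weight vector after finitely many updates.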
  • No it isn't (Score:3, Interesting)

    by Kupfernigk ( 1190345 ) on Tuesday December 02, 2008 @09:47AM (#25958495)
    You've just reinforced my point by not understanding how the brain works. Neuron inputs and outputs are known to be pulse-coded, and as you would imagine with chemical-based transmitters, the pulse frequency is low (it evolved, it didn't get designed from first principles!). So it is perfectly possible to represent a neuron by a time-slicing parallel system, because it is extremely low bandwidth, and its output varies very slowly, but is NEVER DC. As a result, the output of the neuron does not need to be continuously available and it never needs to be polled. Your statement that "some things must happen at the same time" is just incorrect, quite irrespective of a theoretical physicist telling you that it is impossible. It is exactly the same principle by which you can send multiple audio channels over a digital RF channel.

    However, to make this work you need a very efficient inter-process messaging protocol that allows multiple virtual neurons to send messages to another virtual neuron. Languages like Erlang are optimised for doing this.

    If I wanted to replicate the "brain" of a sea slug, which has (I believe) about 26 neurons, it would be much easier and cheaper to do this on a standard computer running 26 pseudo-parallel processes than on 26 computers each imitating a single neuron, with a huge number of potential interconnects.

    As to what those pseudo-parallel processes look like: they have to respond every time a message is received (equivalent to a pulse from another neuron) by doing a calculation based on state history and then deciding when next to send an output to the destination process. For small numbers of neurons this is a manageable programming task; for large numbers, like brains with billions of neurons, it is not.
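(A minimal sketch of the pseudo-parallel scheme described above, in Python rather than Erlang. The thresholds, decay rate, weights, and wiring are made-up toy values: each virtual neuron reacts to incoming pulse messages delivered from a shared event queue, updates its state from its history, and schedules outgoing pulses for later delivery.)

```python
import heapq
import itertools

_seq = itertools.count()  # tie-breaker so equal-time events never compare neurons

class Neuron:
    """A virtual neuron: a leaky integrator that fires when its
    potential crosses a threshold, then resets."""
    def __init__(self, threshold=1.0, decay=0.1):
        self.threshold, self.decay = threshold, decay
        self.potential, self.last_t = 0.0, 0.0
        self.targets = []  # fan-out: (neuron, weight, delay) triples

    def receive(self, t, weight):
        # Decay the potential since the last event, then integrate the pulse.
        self.potential = max(0.0, self.potential - self.decay * (t - self.last_t))
        self.potential += weight
        self.last_t = t
        if self.potential >= self.threshold:
            self.potential = 0.0   # fire and reset
            return True
        return False

def push(events, t, neuron, weight):
    heapq.heappush(events, (t, next(_seq), neuron, weight))

def run(events, until=10.0):
    """One serial loop time-slices all the 'parallel' neurons by
    delivering pulse messages in timestamp order."""
    fired = []
    while events and events[0][0] <= until:
        t, _, neuron, weight = heapq.heappop(events)
        if neuron.receive(t, weight):
            fired.append((t, neuron))
            for target, w, delay in neuron.targets:
                push(events, t + delay, target, w)
    return fired

# Two neurons, a -> b: two sub-threshold pulses into a make it fire,
# which drives b over threshold one time unit later.
a, b = Neuron(), Neuron()
a.targets.append((b, 1.5, 1))
events = []
push(events, 0, a, 0.6)
push(events, 1, a, 0.6)
spikes = run(events)
```

The single serial loop stands in for the "26 pseudo-parallel processes": no two neurons ever run at the same instant, yet the pulse-coded behaviour is reproduced, which is the point being argued above.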

  • A warning (Score:1, Interesting)

    by cardhead ( 101762 ) on Tuesday December 02, 2008 @09:50AM (#25958527) Homepage

    As has already been mentioned, Artificial Intelligence: A Modern Approach by Russell and Norvig (or AIMA) is essentially the only choice for serious study of AI. Your relative algorithmic naivety will make it a bit of a struggle, but there is a long history of smart physicists moving into AI.

    Unfortunately, there is also a long history of smart outsiders getting trapped in "junk AI". These are the branches of AI that exist more because the metaphor is compelling than because of the results or prospects. These include: Neural Networks, Genetic Algorithms, Ant Colony Optimization, etc. I won't claim there is no good work in these areas, but there is too much fascination with the techniques themselves over the results, such that research constantly "solves" problems that would be done better with other techniques, yet are somehow "interesting" because a neural net does it. The mainstream of AI is mystified why anyone would be interested in a technique that works 80% as well as the state of the art just because some guy in the 50s attached the word "neural" to it.

    If you want to simulate brains, you should study neuroscience. If you want to know what's going on in mainstream AI, you should bone up on probability, statistics, and linear algebra (if you're the right kind of physicist, you already have the math you need).

    Before you mod me as flamebait, please note that I do know what I'm talking about. My PhD is in AI and I'm a professor in a CS department in an undergraduate engineering school, where I teach AI and Robotics. I was once the maintainer of the comp.ai FAQ, and I have published several papers on neural networks and genetic algorithms.

  • by Anonymous Coward on Tuesday December 02, 2008 @10:15AM (#25958845)

    I have found the work of Hubert Dreyfus on AI very insightful, having studied computer science and philosophy at the undergraduate level. In any case, he does better than your anecdotal argument.

    He argues for the inability of Turing machines to process ever-expanding degrees of meaningful context, thus preventing general (human-like) AI. For human intelligence, meaningful experience comes before explicit knowledge.

    He has written a number of books on AI and computers, starting with "What Computers Can't Do".

  • by Anonymous Coward on Tuesday December 02, 2008 @10:18AM (#25958869)

    Don't enter the PDP club. This book has delayed AI research by ten years. Read instead the book that they accuse of having done so: Perceptrons by Marvin Minsky, and anything else by him. The old Principles of Neurodynamics by Rosenblatt is also interesting, but not good if you just want to learn the thing.

    That is for neural networks... There is also a famous NN book by Simon Haykin, but I am not a big fan.

    As for "building a brain", that is something else. You should look for Russell & Norvig, James Anderson, H. Simon, A. Newell, Zenon Pylyshyn, Douglas Hofstadter... Look for the so-called "cognitive architectures": ACT-R, SOAR... Some of them use neural networks and other "numeric" machine learning techniques inside their systems.

    Ah, and do study statistics; it's a must for contemporary AI research. Look for Markov Decision Processes (MDPs); there is a famous book about them by M. L. Puterman. Look also for Reinforcement Learning (the Sutton and Barto book) and Dynamic Programming (the Bellman stuff). In multi-agent systems research that is a big thing right now.

    Coming back to the connectionist, pattern-recognition domain, I do like Hinton, but only when he is not fighting against the "competition"...
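(A minimal sketch of value iteration on a toy MDP, in the spirit of the Puterman and Sutton & Barto references above. The states, actions, transition probabilities, and rewards below are invented for illustration, not taken from any of the cited books.)

```python
# Value iteration: repeatedly apply the Bellman optimality update
#   V(s) <- max_a  sum_s' P(s'|s,a) * (R(s,a,s') + gamma * V(s'))
# until the value function stops changing.

# Toy MDP: transitions[state][action] = [(prob, next_state, reward), ...]
transitions = {
    "s0": {"stay": [(1.0, "s0", 0.0)],
           "go":   [(0.8, "s1", 1.0), (0.2, "s0", 0.0)]},
    "s1": {"stay": [(1.0, "s1", 0.5)],
           "go":   [(1.0, "s0", 0.0)]},
}

def value_iteration(transitions, gamma=0.9, tol=1e-6):
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            v_new = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(v_new - V[s]))
            V[s] = v_new
        if delta < tol:
            return V

V = value_iteration(transitions)
# In s1 the best policy is to "stay" forever, so V(s1) approaches
# 0.5 / (1 - 0.9) = 5.0; s0 earns the one-off reward for reaching s1.
```

Since gamma < 1, the Bellman update is a contraction, so this loop converges to the unique optimal value function regardless of the starting V.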

  • Re:No it isn't (Score:3, Interesting)

    by Viol8 ( 599362 ) on Tuesday December 02, 2008 @10:20AM (#25958895) Homepage

    And you've missed my point. Parallel computing on a von Neumann computer raises issues of race conditions, deadlocking, etc. These are the sort of things you have to worry about with parallel silicon systems. None of these issues apply to brains (as far as we know), so what is the use in learning about them? You're talking about simulating a neural system, which is not the same thing: a simulation of anything can be done serially given enough time, never mind in parallel. But it will never be an exact representation of the real physical process, and in the case of brains it seems to have given little insight into how they actually work anyway, beyond the most basic I/O of neurons.

    Also, neurons are not just affected by signals from other neurons; they respond to chemicals in their environment. And don't forget that 90% of the brain consists of glial cells, whose full functionality is far from understood.

  • Hawkins is misguided (Score:3, Interesting)

    by joeyblades ( 785896 ) on Tuesday December 02, 2008 @10:30AM (#25959003)

    I read "On Intelligence", too. While Hawkins has some interesting thoughts, I was less than inspired, probably because I read John Searle's "The Rediscovery of the Mind" first. Actually, most of Searle's work, as well as the work of Roger Penrose, has led me to the conclusion that the Strong AI track is missing the boat. The Strong AI proponents, like Hawkins, believe that if we build a sufficiently complex artificial neural network we will necessarily get intelligence. Searle and Penrose have very convincing arguments to suggest that this is not the right path to artificial intelligence.

    Realistically, how could one build an artificial brain without first understanding how the real one works? And I don't mean how neural networks function; I mean how the configuration of neural networks in the brain (and whatever other relevant structures and processes might be necessary) accomplishes the feat of intelligence. We still do not have a scientific theory of what causes intelligence. Without that, anything we build will just be a bigger artificial neural network.

    Also, the thing that Strong AI'ers always seem to forget... An artificial neural net only exhibits intelligence by virtue of some human brain that interprets the inputs and outputs of the system to decide whether the results match expectation (i.e. it takes "real" intelligence to determine when artificial intelligence has occurred). Contrast this with the way your brain works and how you recognize intelligence from within, and you'll realize just how far from producing artificial brains we really are...

    I'm not saying that artificial intelligence is impossible, and neither is Searle (Penrose is still on the fence). I'm just saying, don't think you can slap a bunch of artificial neurons together and expect intelligence to happen.
