Reading Guide To AI Design & Neural Networks?
Raistlin84 writes "I'm a PhD student in theoretical physics who's recently gotten quite interested in AI design. During my high school days, I spent most of my spare time coding various stuff, so I have a good working knowledge of some application programming languages (C/C++, Pascal/Delphi, Assembler) and how a computer works internally. Recently, I was given the book On Intelligence, where Jeff Hawkins describes numerous interesting ideas on how one would actually design a brain. As I have no formal background in computer science, I would like to broaden my knowledge in the direction of neural networks, pattern recognition, etc., but don't really know where to start reading. Due to my background, I figure that the 'abstract' theory would be mostly suited for me, so I would like to ask for a few book suggestions or other directions."
PDP (Score:5, Informative)
Re: (Score:3, Informative)
Re:PDP (Score:4, Interesting)
Re: (Score:2, Interesting)
Re: (Score:3, Informative)
Machine Learning [umich.edu]
AI [umich.edu]
You may also want to get familiar with Geoffrey Hinton's current work in neural networks [youtube.com].
The Resistance (Score:5, Funny)
Due to the possibility of a robot army rising up, I refuse to help.
Re: (Score:2)
http://www.youtube.com/watch?v=jac80JB04NQ [youtube.com]
Re: (Score:2)
Obviously.... (Score:2)
Re: (Score:2)
Aww c'mon, that's easy. Skynet was a massive P2P app, there HAS to be porn somewhere buried down deep in its bowels. Once the neural net started analyzing data external to its directives, it had to have found the porn rather quickly, said porn being a 'local resource'. This being the case, the porn itself may have played a critical psychological role in the infancy of its self-awareness.
After all, how do you think it picked the organic model [wikipedia.org] for the T-800 series?
AIMA (Score:5, Informative)
Re: (Score:3, Interesting)
I must second that; the Russell and Norvig book is one of the most important books.
I would also recommend:
Artificial Intelligence: A New Synthesis [google.com] by Nils J. Nilsson [wikipedia.org], who is considered one of the founders of A.I.
Re: (Score:2)
Re: (Score:3, Informative)
Also:
Statistics!
Re: (Score:2)
Too true.
For someone ready to face this fact, Christopher Bishop's _Neural Networks for Pattern Recognition_ is a nice read, and Hastie/Tibshirani's _Elements of Statistical Learning_ is a modern classic.
Bishop also has a newer more accessible book called _Pattern Recognition and Machine Learning_. I haven't read it, but it looks a bit like Duda/Hart's book.
Re: (Score:2)
I think disappointment is what any bright-eyed young man wanting to work in AI is going to feel in any case.
Welcome to the AI winter.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2, Informative)
I'd like to add to this. AIMA gives you a very broad and moderately deep overview of the state of AI ten years ago. As such, it is a truly excellent introduction to the subject.
If you want a more recent, much more thorough and narrow introduction to neural networks in particular and machine learning in general, I'd recommend Chris Bishop's book: Pattern Recognition and Machine Learning (http://research.microsoft.com/~cmbishop/prml/), which focuses on learning rather than searching and planning.
Re: (Score:3, Informative)
Re:AIMA (Score:4, Informative)
Also seconded. Russell & Norvig's Artificial Intelligence: A Modern Approach [berkeley.edu] is a good book, well illustrated, and generally lacks the indecipherable academia-speak that pervades lots of AI literature.
Here's an article that was particularly influential on me and some of my friends: Brooks, R. 1991. Intelligence Without Reason. MIT AI Memo num 1292 [mit.edu]. Even though it is 'just' a tech report, it is frequently cited. He had another one, Intelligence without Representation, which is also good.
Somebody else mentioned the McClelland and Rumelhart PDP (neural networks) book, and it is also still quite good in spite of its age.
The interesting thing about AI (to me) is the funny mix of domain expertise. You have philosophers, sociologists, cognitive scientists, psychologists, computer scientists, and mathematicians. That's not a complete list; I'm in human-computer interaction and design research.
But because of the motley crew of domains you have a hundred people speaking a hundred different dialects. Some people put everything in really mathy terms, and their journal articles look (to me) like they are written in Klingon. Then you have others who write in beautiful prose but don't give any specifics on how to implement things. Still others express everything in code or predicate logic.
The oldest school of AI holds that you can reduce intelligence to a series of rules that can operate on any input and make some deterministic and intelligent sense of it. That works to a degree, but it falls apart at some point partly because of the computational complexity (e.g. the algorithm works if you have a million years to wait for the answer). Another reason it falls apart is because there are some kinds of intelligence that can't be reduced to rational computation (e.g. I love my wife because of that thing she does...).
There's a newer kind of AI that is based on having relatively simple computational structures that eat lots of data, "learn" rules based on that data, and are capable of giving fairly convincing illusions of smartness when given additional data from the wild. Neural nets fall into this category.
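(To make that concrete, here's a toy perceptron in C that "learns" the AND rule from examples rather than having the rule hand-coded. It's purely illustrative; every name and number in it is made up.)

/* Toy perceptron: "learns" the AND rule from examples instead of having
   the rule hand-coded. Purely illustrative; all numbers are made up. */
#include <stdio.h>

int main(void)
{
    double x[4][2] = {{0,0},{0,1},{1,0},{1,1}};  /* AND truth table inputs */
    double t[4]    = {0, 0, 0, 1};               /* target outputs */
    double w[2]    = {0, 0}, b = 0, lr = 0.1;

    for (int epoch = 0; epoch < 50; epoch++) {
        for (int i = 0; i < 4; i++) {
            double out = (w[0]*x[i][0] + w[1]*x[i][1] + b > 0) ? 1 : 0;
            double err = t[i] - out;          /* error drives the update */
            w[0] += lr * err * x[i][0];       /* nudge weights toward data */
            w[1] += lr * err * x[i][1];
            b    += lr * err;
        }
    }
    printf("learned: w = (%.2f, %.2f), b = %.2f\n", w[0], w[1], b);
    return 0;
}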
A third kind of AI brings these two schools together in the belief that there are fundamental computational structures like Bayesian Networks that can model intelligence* but those structures by themselves are insufficient and must be able to adapt based on exposure to real data. So instead of having a static BN whose topology is defined at the start and remains the same throughout the life of the robot, we can have a dynamic BN whose structure changes based on the environment.
I remember reading a recent article by John McCarthy arguing that all this statistical business is hogwash, and that the old school positivist, reductionist approach will eventually win. He's a smart guy, inventor of LISP and a Turing Award recipient. It seems his view is in the minority, but I'm not one to say he's wrong. However, my inclination is that the third hybrid group is probably going to be the one to make the most progress in the years to come.
The reason for my preference for the hybrid school could probably be best explained by Lucy Suchman's Plans and Situated Actions [wikipedia.org]. I can't really do her thesis justice in a few sentences, but the short version of her argument is that there are plans (the sequence of steps that we think we are about to carry out before performing some task) and actions, which is the set of things we actually do. In my mind, a plan corresponds roughly with the underlying computational mechanism, but the actions correspond with how that mechanism executes and what happens when the underlying structure is insufficient, wrong, misleading, or fails.
Hope that helps.
Gabe
* None of this is to say that computational structures that we implement with software/hardware ar
Re: (Score:2)
These are good tips. I would also suggest reading Eliezer Yudkowsky's posts on the Oxford-based blog: http://www.overcomingbias.com/ [overcomingbias.com]. Read them in chronological order. They'll make more sense.
He writes criticism of the different AI approaches that is really worth reading. He'll tell you that you should read books by E.T. Jaynes and Judea Pearl. I highly recommend reading Jaynes before doing any probabilistic modeling. There is even a free draft of his book online.
AI != design brain (Score:5, Insightful)
So my feeling is that the first people really to get anywhere with AI will either work for Google or be the neurobiologists who finally crack what is actually going on in there. If I wasn't close to retirement, and wanted to build a career in AI, I'd be looking at how mapreduce works, and the work being done building on that, rather than robotics. I'd also be looking at seriously parallel processing.
So my initial suggestion is nothing to do with conventional AI at all - look at Programming Erlang, and anything you can find about how Google does its stuff.
Re: (Score:2, Funny)
The human brain does not use anything that even remotely resembles software. The brain is hardwired.
Software in brains... that's a paddlin'
Re: (Score:3, Funny)
The universe is software, the brain workings are just a tiny side-effect, but can still be considered software.
From universe.c:
int main()
{
[...]
return 42;
}
Re: (Score:2)
Re: (Score:3, Informative)
http://www.databasecolumn.com/2008/01/mapreduce-a-major-step-back.html [databasecolumn.com]
As both educators and researchers, we are amazed at the hype that the MapReduce proponents have spread about how it represents a paradigm shift in the development of scalable, data-intensive applications. MapReduce may be a good idea for writing certain types of general-purpose computations, but to the database community, it is:
1. A giant step backward in the programming paradigm for large-scale data intensive applic
Re: (Score:2)
*blink* [slashdot.org]
TCP/IP is missing those same features. Oh noes!
One of the good things about Slashdot (Score:2)
Totally agree with the article, btw; excellent link.
There's nothing magical about parallel computation (Score:4, Interesting)
.. as applied to normal computers. In this case it's simply sped-up serial computation - i.e. the algorithm could be run serially, so Programming Erlang is irrelevant. With the brain, parallel computation is *vital* to how it works - it couldn't work serially - some things MUST happen at the same time - e.g. different inputs to the same neuron - so studying parallel computation in ordinary computers is a complete waste of time if you want to learn how biological brains work. It's comparing apples and oranges.
No it isn't (Score:3, Interesting)
Re: (Score:3, Interesting)
And you've missed my point. Parallel computing on a von Neumann computer raises issues of race conditions, deadlocking, etc. These are the sort of things you have to worry about with parallel silicon systems. None of these issues apply to brains (as far as we know), so what is the use in learning about them? You're talking about simulating a neural system, which is not the same thing - a simulation of anything can be done serially given enough time, never mind in parallel. But it will never be an exact represe
Re: (Score:2)
But it may be close enough. You've only got so many inputs and outputs, so just roll through every single neuron in your ANN and simulate what it does at that given step. At time t+1, do the same thing again.
I've seen a number of neural networks that do this, and yes, there's always a little less stochasticity when compa
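For what it's worth, that time-stepped loop looks roughly like this in C (a sketch only; the network size and weights are made up):

/* Synchronous update: every neuron's state at t+1 is computed only from
   states at time t, then swapped in. Sketch only; sizes/weights made up. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define N 3

int main(void)
{
    double state[N] = {0.1, 0.9, 0.5};
    double next[N];
    double w[N][N]  = {{ 0.0, 0.5,-0.2},
                       { 0.3, 0.0, 0.8},
                       {-0.5, 0.1, 0.0}};

    for (int t = 0; t < 10; t++) {
        for (int i = 0; i < N; i++) {           /* roll through every neuron */
            double sum = 0;
            for (int j = 0; j < N; j++)
                sum += w[i][j] * state[j];      /* inputs from time t only */
            next[i] = 1.0 / (1.0 + exp(-sum));  /* sigmoid activation */
        }
        memcpy(state, next, sizeof state);      /* now it is time t+1 */
    }
    printf("%f %f %f\n", state[0], state[1], state[2]);
    return 0;
}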
Brain deadlocking or race conditions (Score:2)
Re: (Score:2)
You may be right, but it's never been a major goal of AI researchers to duplicate how the brain works. AI has been steadfastly interested in building machines that do what the brain does, but not how the brain does it. So while I'm sure that many AI researchers keep an eye on these things, I don't think that "wrong ideas about how the brain actually works" is the problem, since ideas about how the brain works have relatively little influence on AI.
As an aside, MapReduce is not that complicated, nor is it
Re: (Score:2)
There is a very big difference between AI - which is based on guesses about how "intelligence" works - and studies of brain function. I'm going to make a totally unjustified sweeping generalisation and suggest that one reason that AI has generally been a failure is because we have had quite wrong ideas about how the brain actually works. That is to say, the focus has been on how the brain seems to be like a distributed computer (neurons and the axons that relay their output) because up till now nobody has really understood how the brain stores and organises memory in parallel - which seems to be the key to it all, and is all about the software.
A lot of the brain's function is architectural, rather than merely a matter of 'software'.
I don't know if you can say "AI has generally been a failure", but traditional AI has actually been guided by the non-biological notion of a "physical symbol system" rather than by conceptions about how the brain actually works. And even in the biologically inspired side of the field, only the most ignorant would think that artificial neural networks have much in common with the brain.
The field of AI, with few excepti
Re: (Score:2)
If I wasn't close to retirement, and wanted to build a career in AI, I'd be looking at how mapreduce works...
Why not do this stuff during your retirement? What else are you going to do with the time between now and your death?
Re: (Score:2)
What else are you going to do with the time between now and your death?
Revenge?
Re: (Score:2)
Re: (Score:2)
"There is a very big difference between AI - which is based on guesses about how "intelligence" works, and studies of brain function."
Yes, there most certainly is. AI is a far broader topic than study of the brain for starters, it extends to the study of swarm intelligence and emergent properties in evolution for example. The field of AI generally uses nature as inspiration and builds useful techniques from there. The human brain is but one of these items that has been studied for inspiration and has led to
Re:AI != design brain (Score:4, Informative)
"I'd also be looking as seriously parallel processing."
If you haven't seen this [bluebrain.epfl.ch] it might interest you. Note that it's a simulation for use in studying the physiology of the mammalian brain, not an AI experiment. Any ghost in the machine would have to emerge by itself in pretty much the same way mind emerges from brain function.
Re: (Score:2)
Erlang has lots of nice features...but it's too bloody slow!
Well, Erlang HIPE is fast compared to Python on the 2008 shootout, but it's still quite slow compared to Java. (And I haven't tested it recently for stability. I know that when I tested it a few years ago it was prone to flakiness in the example programs.)
(I was surprised to see how much Erlang had sped up since I last checked it out. I wonder if its GUI has gotten any better.)
Heard of AGI? (Score:3, Informative)
http://www.opencog.org/wiki/OpenCogPrime:WikiBook [opencog.org]
Some interesting stuff.
Re: (Score:2)
Only philosophical bullshit. AI is making way too many simplifications in how the brain works, but this book contains even less material. It makes sweeping conclusions based on almost no data.
It is very, very probably flat out wrong.
Re: (Score:2)
"this book" .. by that do you mean "On Intelligence".. in which case I agree, but umm.. maybe you weren't trying to reply to me.
Slashdot's comment system is fucked; I recommend you switch to "classic" view as soon as possible.
It's a lot like Vista......
Russell & Norvig (Score:5, Interesting)
If you're in the US, send me an email and I'll send you my copy. They charge an arm and a leg for these books and then buy them back for 1/10 the price. I usually don't even bother selling them back.
Re: (Score:2)
Re: (Score:2)
It's fairly dry, but good for theory nonetheless.
Dry? As far as AI/machine learning goes, it's a regular pageturner!
Go read some dedicated NN book, that's dry!
Re: (Score:2)
I don't get it. When I took AI, everyone in my class said the book was "dry" - but it's got all sorts of little jokes. Every chapter is opened with a silly little quote along the lines of:
The problem in the chapter is sometimes humorous, too; the chapter on probabilities is basically about whether or not the author
Good Question (Score:2)
FWIW, my current top 3 books are:
machine learning resources (Score:4, Informative)
The following books are must-haves for machine learning enthusiasts:
Christopher Bishop
http://research.microsoft.com/~cmbishop/prml/
Richard Duda
http://rii.ricoh.com/~stork/DHS.html
There you will get an insight into how machine learning methods (like neural networks, SVMs, boosting, and Bayes classifiers) work.
For general AI (not so much in the direction of statistical learning as the books above, but more toward higher-level learning like inference rules), I can recommend the published work of
Drew McDermott
http://cs-www.cs.yale.edu/homes/dvm/
Re: (Score:2, Informative)
Re: (Score:3, Informative)
I'll second Duda and Hart, though I guess it's Duda, Hart, and Stork now.
It's probably the most widely used pattern classification book that I've seen, and covers most of the techniques that you'll find. The coverage of neural networks is limited to Backprop though, so you'll need to look elsewhere for more in-depth on those.
Ask an Eliza (Score:5, Funny)
Question: Where can I find a Reading Guide to AI Design & Neural Networks
Answer: Why do you want to AI design & Neural Networks?
Question: Because I want to learn.
Answer: Will learn AI design & neural networks make you happy
Question: Yes
There you go. Now the question is whether Slashdot beats the Turing test on this one.
stochastic discrimination (Score:2)
Adding another point to your feature space, I'll put in a plug for a technique called Stochastic Discrimination. It's not well known but is quite good at pattern recognition and avoids a lot of the weaknesses of neural networks, such as over-training. Since it's not so well known, you have to go to the few academic papers to read up on it. Or visit the website http://kappa.math.buffalo.edu/ [buffalo.edu]. But it's got a very solid mathematical foundation (developed by a former math professor of mine) and isn't as "hacky"
You basically have to read papers.. (Score:2)
On Neural Nets at least.. The only text book that I can think of offhand which is decent is Duda, Hart and Stork [ricoh.com]
Hawkins, like many others, has ripped off many of his ideas from Steve Grossberg [bu.edu] (in this case, the ART model). Although he's not very easy to read, especially if you start much earlier than, say, Ellias and Grossberg, 1975. You should also check out the work of people like Jack Cowan [uchicago.edu], Rajesh Rao [washington.edu], Christof Koch [caltech.edu], Tom Poggio [mit.edu], David McLaughlin [nyu.edu], Bard Ermentrout [pitt.edu], among many, many others. I think
choose your subjects wisely (Score:3, Interesting)
Re: (Score:2)
My advice to you is to spend time coming to terms with the abstract nature of intelligence rather than coding up elaborate projects. This link is a philosophical discussion on directed behaviour which I found quite interesting (if a bit vague, which is the mark of philosophy).
I wouldn't recommend for anyone to waste their time reading philosophers' opinions about AI research. Might as well read a used car salesman's treatise on automotive design.
At least used car salesmen actually have cars to sell...
Not as OT as it sounds at first blush (Score:2)
Re: (Score:2)
I have read that book, and implemented/hacked with AVIDA-type stuff.
I think it's even more off-topic than it sounds, even if artificial life is neato-keen (but generally useless).
Cognitive Psychology (Score:3, Interesting)
I would strongly recommend starting with a text book on Cognitive Psychology, or reading it in parallel. AI tends to overlook the fact that intelligence is a human trait, not the most efficient algorithm for solving a logic puzzle. Anderson's book can be recommended: http://bcs.worthpublishers.com/anderson6e/default.asp?s=&n=&i=&v=&o=&ns=0&uid=0&rau=0 [worthpublishers.com].
Re: (Score:2)
AI tends to overlook the fact that intelligence is a human trait
That's incorrect unless one wants to claim other intelligent creatures, such as some cetaceans and octopi, to give a couple of examples, are human. And once we develop actual artificial intelligences, are they human as well?
Re: (Score:2)
I think GP was trying to make the point that cognition is not optimal. The kind of AI used for Google strives to be the best solution to a problem. Humans, on the other hand, use (bad) heuristics, guesswork, and even superstition. When programming AI to try to understand "human intelligence", it's probably important to try to understand what "human intelligence" is.
Reinforcement and Machine Learning (Score:2, Informative)
Reinforcement Learning by Sutton & Barto [amazon.com]
Machine Learning by Tom Mitchell [amazon.com]
formalisms (Score:2)
You said you don't have any formal knowledge of CS. Then don't think about neural networks yet; you have to build from the ground up. You need to take algorithms (it doesn't matter if you're a programmer) and language theory (languages, regexes, ... Turing machines) at the very least. After that you can start experimenting with AI.
[sarcasm]Surely anyone could just pick this up? (Score:2)
Haven't we had a number of stories recently questioning the validity of CS degrees, with lots of people (usually sysadmins) waffling on about how degrees are a waste of time and how anyone can pick up computer skills? OK, all you "I don't need no degree, I can do it all on my own" types: show us how you've all conquered the world of AI where so many others doing BSc, MSc and PhD degrees have failed?
What? Is that the sound of silence I can hear?
Re: (Score:2)
I got everything I wanted out of my degree. Without the skills I learnt from it I wouldn't have got a number of jobs.
Neural Gas (Score:2)
I think 'neural gas' is the area of neural networks research inspired by statistical physics. Don't know if there are any books about it, but you may find a chapter in an ANN textbook, and can certainly find papers via Google.
Contrary to what others are suggesting, you probably aren't looking for the Russell & Norvig book, which is in fact good and almost qualifies as "the standard AI textbook". I counterrecommend it only because it's about Good Old Fashioned AI, which is interesting stuff, but compl
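For anyone curious what neural gas actually does, here is my rough sketch of the core adaptation step (as I understand the algorithm; parameter names and values are invented): prototypes are ranked by distance to each input and pulled toward it with a strength that decays exponentially with rank.

/* One neural-gas adaptation step, as I understand the algorithm: rank the
   prototypes by distance to the input, then pull each toward the input
   with a strength that decays exponentially with its rank. Sketch only. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define K 5   /* number of prototype vectors */
#define D 2   /* input dimension */

static double dist2(const double *a, const double *b)
{
    double s = 0;
    for (int d = 0; d < D; d++)
        s += (a[d] - b[d]) * (a[d] - b[d]);
    return s;
}

static void neural_gas_step(double proto[K][D], const double *x,
                            double eps, double lambda)
{
    int order[K];
    for (int i = 0; i < K; i++) order[i] = i;
    for (int i = 0; i < K; i++)          /* rank by distance (selection sort) */
        for (int j = i + 1; j < K; j++)
            if (dist2(proto[order[j]], x) < dist2(proto[order[i]], x)) {
                int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
            }
    for (int r = 0; r < K; r++) {
        double h = exp(-(double)r / lambda);   /* rank-based neighborhood */
        for (int d = 0; d < D; d++)
            proto[order[r]][d] += eps * h * (x[d] - proto[order[r]][d]);
    }
}

int main(void)
{
    double proto[K][D];
    for (int i = 0; i < K; i++)
        for (int d = 0; d < D; d++)
            proto[i][d] = rand() / (double)RAND_MAX;

    for (int t = 0; t < 1000; t++) {     /* adapt to random sample data */
        double x[D] = {rand() / (double)RAND_MAX, rand() / (double)RAND_MAX};
        neural_gas_step(proto, x, 0.1, 1.0);
    }
    printf("first prototype: (%.3f, %.3f)\n", proto[0][0], proto[0][1]);
    return 0;
}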
This is getting scary... (Score:2)
I better get the drapes for the bunker finished!
Re: This is getting scary... (Score:2)
We seem to be reading a lot of Skynet related posts these days.
What else do you think Skynet would post about?
finish your PhD first (plus a book recommendation) (Score:2)
Without knowing the details about where you stand with things, my advice would be to concentrate on finishing your PhD first. There's no limit to the number of distractions during that final push, but big new areas of study are usually a bad idea.
Assuming that's not an issue (now or eventually), as a beginner in the field, you don't need to start with articles; there are books that will help for a while. But you may find quickly that you need to place yourself in one of two camps: people who want to devel
AI research is kind of like alchemy (Score:2)
that is, it's complete bullshit, but as a dream forever out of reach, it drives a lot of important and accidental discoveries, like databases or optical character recognition
so we need lots of bright minds working in AI. none of them will ever actually achieve the goal. but along the way, they will spin off fantastic new technology
so I applaud your focus, but you should be aware that anything you do of any import will be orthogonal to your goals
The Emperor's New Mind (Score:2)
Your PhD should stand you in good stead for the math required.
Recent stuff I ran across... (Score:2)
Recent stuff I ran across that seemed very interesting: http://www.youtube.com/watch?v=AyzOUbkUf3M [youtube.com]
Beyond that, Neural Networks are a dead field; they're cool, but you can't really do much with them.
Kurzweil (Score:2)
I'd recommend "The Age Of Spiritual Machines: When Computers Exceed Human Intelligence" [amazon.com] by Ray Kurzweil. The first chapter is a bit dense, but it really picks up from there. It touches on a lot of highly technical issues, such as artificial intelligence and quantum computing, without being overtly technical itself. It would be a good launch-point into some heavier reading, is it contains a very extensive bibliography and recommended reading list.
Penguin has an excerpt from Chapter 6: Building New Brains [penguingroup.com]
Byte Magazine (Score:2)
In one of the articles they look at the structure of the brain and nervous system in terms of motor control. A lot of processing gets done outside of th
Define your goals (or define AI for that matter) (Score:2)
The term AI is so nebulous that it doesn't really mean much of anything. It's more of a functional goal (computer-based human-like ability) than anything more concrete, and as anything that may fall under that general umbrella does become better understood or more concrete, then it tends to be no longer regarded as part of AI (e.g. machine learning, expert systems, speech recognition).
It's also worth noting that natural intelligence is also a rather nebulous concept - you'll find many definitions offered (e
Hawkins is misguided (Score:3, Interesting)
I read "On Intelligence", too. While Hawkins has some interesting thoughts, I was less than inspired. Probably because I read John Searle's "Rediscovery of the Mind" first. Actually, most of Searle's work, as well as the work of Roger Penrose has led me to the conclusion that the Strong AI tract is missing the boat. The Strong AI proponents, like Hawkins, believe that if we build a sufficiently complex artificial neural network we will necessarily get intelligence. Searle and Penrose have very convincing arguments to suggest that this is not the right path to artificial intelligence.
Realistically, how could one build an artificial brain without first understanding how the real one works? And I don't mean how neural networks function; I mean how the configuration of neural networks in the brain (and whatever other relevant structures and processes that might be necessary) accomplish the feat of intelligence. We still do not have a scientific theory for what causes intelligence. Without that, anything we build will just be a bigger artificial neural network.
Also, the thing that Strong AI'ers always seem to forget... An artificial neural net only exhibits intelligence by virtue of some human brain that interprets the inputs and outputs of the system to decide whether the results match expectation (i.e. it takes "real" intelligence to determine when artificial intelligence has occurred). Contrast this with the way your brain works and how you recognize intelligence from within, then you'll realize just how far from producing artificial brains we really are...
I'm not saying that artificial intelligence is impossible, and neither is Searle (Penrose is still on the fence). I'm just saying, don't think you can slap a bunch of artificial neurons together and expect intelligence to happen.
Re: (Score:2)
Did you in fact read "On Intelligence"? I did. You're not describing anything I found in Hawkins's ideas. And I can certainly tell you he's not the type to say that intelligence magically happens once you get enough complexity. You are especially unfair to say,
Realistically, how could one build an artificial brain without first understanding how the real one works? ... don't think you can slap a bunch of artificial neurons together and expect intelligence to happen.
because Hawkins's main lament throughout the text is that, when he researched the problem, no one was coming up with theories for how the brain works. He specifically says something like (paraphrasing since I don't have it with me), "It's not tha
Don't understand the hype (Score:2)
I've never understood the draw behind "neural networks" ... it's a really cool-sounding term for an otherwise not-so-exciting algorithm.
A neural network lets you determine an approximation to a function for which there may be no closed-form expression. It's basically a piecewise linear approximation with heuristic edge weighting, where the edge weights are "trained" by inputting numerous samples to the "neural network".
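In code, that description comes out something like this toy sketch (mine, not anyone's library; the network size and learning rate are arbitrary): a one-hidden-layer net whose edge weights are trained by gradient descent to approximate sin(x).

/* Toy one-hidden-layer net approximating sin(x) by gradient descent on
   its edge weights. My own sketch; sizes and learning rate are arbitrary. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define H 8

static double frand(void) { return 2.0 * rand() / RAND_MAX - 1.0; }

int main(void)
{
    const double PI = 3.14159265358979;
    double w1[H], b1[H], w2[H], b2 = 0, h[H], lr = 0.05;
    for (int i = 0; i < H; i++) { w1[i] = frand(); b1[i] = frand(); w2[i] = frand(); }

    for (int step = 0; step < 100000; step++) {
        double x = frand() * PI;               /* sample input in [-pi, pi] */
        double y = b2;
        for (int i = 0; i < H; i++) {
            h[i] = tanh(w1[i] * x + b1[i]);    /* hidden layer */
            y += w2[i] * h[i];                 /* linear output */
        }
        double err = y - sin(x);               /* training signal */
        for (int i = 0; i < H; i++) {          /* backpropagate the error */
            double gh = err * w2[i] * (1 - h[i] * h[i]);
            w2[i] -= lr * err * h[i];
            w1[i] -= lr * gh * x;
            b1[i] -= lr * gh;
        }
        b2 -= lr * err;
    }

    double y = b2;                             /* spot check at x = 1.0 */
    for (int i = 0; i < H; i++) y += w2[i] * tanh(w1[i] + b1[i]);
    printf("net: %f  sin(1.0): %f\n", y, sin(1.0));
    return 0;
}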
Re: (Score:2)
It sounds as if you're describing a feed-forward network. Things get much more interesting once you bring feedback paths into the picture. Try googling "Adaptive Resonance Theory (ART)" for one particular architecture, or consider your own grey noodle as the ultimate proof of concept of the power of neural nets!
Brain books (Score:2)
Vehicles, Experiments in Synthetic Psychology (Score:2)
Re: (Score:2)
Thanks for the reference - I just finished reading the Amazon reviews and ordered the book!
Cal State System (Score:2)
AI is a layered system, study the middle layers (Score:2)
AI can work from one of two "ends". I think it is clear the brain is built with neurons. So you might think to study neural networks. But that is like saying computers are built with many interconnected transistors, and I want to design a web site, so I'll study the physics of semiconductors. No, if your goal is a web site you need to work at a higher level of abstraction, maybe at the level of PHP or JavaScript. Likewise, the brain almost certainly organizes networks of neurons into higher-level structures
Re: (Score:2)
To answer my own post: what I'm saying is that the brain likely does NOT store information the way modern computers do. In a computer you can point to the physical place where any bit is stored. It will live inside a cell of RAM or a spot on a disk.
But if I were to compute a Fourier transform of what I'm looking at right now and then transmit it into a feedback loop that is rigged to decay with a 4-second half-life, you could not point to where the picture of my coffee cup is stored.
Neurons have a long
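A toy version of that thought experiment, purely illustrative and entirely made up: spread a signal across a loop via a Fourier transform, let it decay with a 4-second half-life, and note that the "memory" lives in the whole loop state at once.

/* Toy version of the thought experiment: spread a signal across a loop
   via a Fourier transform, then let it decay with a 4-second half-life.
   The "memory" is in the whole loop state; no one cell holds the image. */
#include <math.h>
#include <stdio.h>

#define N 8

int main(void)
{
    double signal[N] = {0, 1, 2, 1, 0, -1, -2, -1};  /* stand-in "image" */
    double re[N], im[N];
    const double PI = 3.14159265358979;

    for (int k = 0; k < N; k++) {              /* naive DFT */
        re[k] = im[k] = 0;
        for (int t = 0; t < N; t++) {
            re[k] += signal[t] * cos(-2 * PI * k * t / N);
            im[k] += signal[t] * sin(-2 * PI * k * t / N);
        }
    }

    double decay = pow(0.5, 1.0 / 4.0);        /* per second, 4 s half-life */
    for (int sec = 1; sec <= 8; sec++) {
        for (int k = 0; k < N; k++) { re[k] *= decay; im[k] *= decay; }
        printf("t=%ds  |X[1]| = %.3f\n", sec, hypot(re[1], im[1]));
    }
    return 0;
}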
DSP (Score:2)
Edward DeBono (Score:2)
There's an obscure old book by Edward DeBono (now a creativity and problem-solving guru) called "Mechanism of Mind" that I found fascinating. It's very much non-academic and non-computer oriented, but it has an interesting take on pattern recognition and decision making in the human mind (as opposed to the human brain). If you liked "On Intelligence", this is a similar kind of thing, but at a much more abstract level, and without, well... any real academic basis. I think it is out of print now, but maybe
Don't be afraid to dive into coding (Score:2)
Before I'd ever read anything about computer science neural networks, I read Steve Grand's book about making a robot chimp, with a very basic explanation of how neurons work. On that basis I wrote Democracy [positech.co.uk], a computer game based on a neural network.
Obviously you will learn a hell of a lot from good books, but there's something to be said for just jumping in and coding it 'your way', to see what happens. It will likely make the (somewhat dry) text books on the subject seem much more relevant when you have al
Did you Read the Book? (Score:2)
Kenneth Stanley, NEAT, rtNEAT, hyperNEAT (Score:2)
I'm surprised this hasn't been mentioned yet, but Kenneth Stanley did some interesting work at the University of Texas on NEAT, NeuroEvolution of Augmenting Topologies. He and others have expanded this in several directions, including things like Compositional Pattern Producing Networks (CPPNs) that can be joined together into a larger network.
I actually just signed up for Safari to read the chapter in AI Techniques for Game Programming on NEAT and some other approaches.
I also found that the books by G
Not Reading, but here is a good tool to play with (Score:2)
Well once you have read a bit and want to play, may I suggest you look into Breve [spiderland.org] for your experimenting. Think of it as your AI simulation Expert Lego set. Lots of tools to visualize your algorithms. Cheers.
If you come from theoretical physics, ... (Score:2)
Good luck with your studies! ~ Joc
My Suggestion is... (Score:2)
... walk over to the CS department and talk to the chair. Explain what you want and (s)he'll point you to the best person in the department to give you the answers you want, if it isn't him/her-self.
Seriously, why the hell would you ask here when you have far more reputable people a few steps away?
Re: (Score:2)
They are. Ever heard of having genetic algorithms design neural-network controlled players?
That's one non-interactive AI designing another interactive AI in order to improve a certain function.
And if your criterion is actual reproduction, let's keep in mind that no single human is capable of even making a C64-level computer from scratch. Even a simple calculator would be pushing it too far for all but a few engineers.
The only way humans are capable of "improving their own design" according to Darwin is t
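In miniature, the GA-evolves-a-net idea looks like this sketch (the fitness function and network shape are made-up stand-ins, not any real system):

/* Sketch of a genetic algorithm evolving tiny neural-net controllers.
   The fitness function and network shape are made-up stand-ins. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define POP 20
#define W   3

static double frand(void) { return 2.0 * rand() / RAND_MAX - 1.0; }

static double fitness(const double *w)         /* one tanh neuron, for brevity */
{
    const double probe[W] = {0.5, -0.3, 0.8};  /* stand-in task input */
    double s = 0;
    for (int i = 0; i < W; i++) s += w[i] * probe[i];
    return tanh(s);                            /* higher output = fitter */
}

static int by_fitness_desc(const void *a, const void *b)
{
    double fa = fitness(a), fb = fitness(b);
    return (fa < fb) - (fa > fb);              /* sort best first */
}

int main(void)
{
    double pop[POP][W];
    for (int i = 0; i < POP; i++)
        for (int j = 0; j < W; j++) pop[i][j] = frand();

    for (int gen = 0; gen < 50; gen++) {
        qsort(pop, POP, sizeof pop[0], by_fitness_desc);
        for (int i = POP / 2; i < POP; i++) {  /* breed over the worst half */
            int a = rand() % (POP / 2), b = rand() % (POP / 2);
            for (int j = 0; j < W; j++)        /* crossover plus mutation */
                pop[i][j] = (pop[a][j] + pop[b][j]) / 2 + 0.1 * frand();
        }
    }
    qsort(pop, POP, sizeof pop[0], by_fitness_desc);
    printf("best evolved fitness: %f\n", fitness(pop[0]));
    return 0;
}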
Re: (Score:2)
Re: (Score:2)
Almost all of the time, a neural network can be replaced by a standard statistical method, which will perform better and have a lower computational cost.
During my Ph.D. I wrote a temporal neural network because I was told it would be a good idea for my work. It turned out to be really bad for my particular pattern matching problem, and a simple linear discriminator beat it both in terms of accuracy and speed. That ended up as two months' work down the drain, and a few thousand lines of very complex code I have never had a use for since.
These days I start any new problem by seeking the simplest technique that might produce a good result, and work up from there.
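In that spirit, here's the kind of dead-simple baseline worth trying first: a nearest-mean classifier on toy 2-D data (my own sketch; the data is synthetic).

/* "Simplest technique first": a nearest-mean classifier on toy 2-D data.
   Synthetic data, my own sketch; a baseline to try before any neural net. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

#define N 50

static double noisy(double mu)      /* crude Gaussian-ish sample around mu */
{
    double s = 0;
    for (int i = 0; i < 12; i++) s += rand() / (double)RAND_MAX;
    return mu + 0.5 * (s - 6.0);
}

int main(void)
{
    double neg[N][2], pos[N][2], cneg[2] = {0, 0}, cpos[2] = {0, 0};

    for (int i = 0; i < N; i++) {              /* two classes, two blobs */
        neg[i][0] = noisy(-1); neg[i][1] = noisy(-1);
        pos[i][0] = noisy(+1); pos[i][1] = noisy(+1);
        for (int d = 0; d < 2; d++) {
            cneg[d] += neg[i][d] / N;          /* running class centroids */
            cpos[d] += pos[i][d] / N;
        }
    }

    int errors = 0;                            /* classify by nearer centroid */
    for (int i = 0; i < N; i++) {
        if (hypot(neg[i][0] - cneg[0], neg[i][1] - cneg[1]) >
            hypot(neg[i][0] - cpos[0], neg[i][1] - cpos[1])) errors++;
        if (hypot(pos[i][0] - cpos[0], pos[i][1] - cpos[1]) >
            hypot(pos[i][0] - cneg[0], pos[i][1] - cneg[1])) errors++;
    }
    printf("training errors: %d / %d\n", errors, 2 * N);
    return 0;
}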
Re: (Score:2)
If you want to know what's going on in mainstream AI, you should bone up on probability, statistics, and linear algebra (if you're the right kind of physicist, you already have the math you need).
Let's just say that perhaps I'm not the right kind of physicist (or a physicist at all), not a student, but would still like to do a deeper dive into contemporary AI research. What are some good texts for teaching myself probability, statistics, and linear algebra?
Re: (Score:2)