Artificial Intelligence for Computer Games

Craig Reynolds writes "In his recent book Artificial Intelligence for Computer Games: An Introduction, author John Funge takes us on a whirlwind tour of techniques from the literature of academic AI research and discusses their application to the nuts and bolts of game AI programming. While some of these topics are quite advanced, the text remains easily readable and grounded in what the techniques mean to real game programmers developing real game AI." Read on for Reynolds' review.
Artificial Intelligence for Computer Games: An Introduction
author: John David Funge
pages: 160
publisher: A K Peters
rating: 8
reviewer: Craig Reynolds
ISBN: 1568812086
summary: Written for game AI programmers, this book provides a practical introduction to advanced AI techniques and practices for constructing sophisticated non-player characters.

Funge's background includes both academic AI research and commercial development of game AI technology. This has allowed him to write a refreshingly practical book for the game AI programmer which will also expand the reader's knowledge of AI. He presents advanced AI research in a way that is meaningful to the working game AI programmer. Non-player characters (NPCs) are the focus of this book, although it touches upon techniques applicable to other kinds of AI. Funge begins with a simple NPC architecture, then goes on to consider how they act in their world, perceive and react to their surroundings, remember their past experiences, plan their actions, and learn from the past to improve their future behavior. In addition, Funge hopes his book will contribute to a "common framework and terminology" to promote better communication between practitioners interested in game AI, leading to better interoperability for their software. (Please note that John Funge is a friend and former coworker of mine. I was pleased to accept John's invitation to review his book.)

The field of Artificial Intelligence has been actively studied since the 1950s. In that half century many useful techniques have been developed and applied to a broad range of scholarly and commercial applications -- most quite serious and sometimes a bit dry. In contrast, today the most economically significant application of AI is in computer games. This commercial application motivates today's students to study AI and drives a good deal of academic AI research. Modern games have incredible graphics and their animation technology is becoming very sophisticated. As graphic animation increasingly becomes a solved problem, more and more attention is being paid to game AI. It seems likely that the next few years will see a tremendous investment in game AI technology leading to significant improvements in the state of the art.

As I read Funge's book I was struck by how oriented it was to the interests of AI programmers working on commercial games. Certainly the discussion focused on the practical rather than the theoretical. (There are many asides, footnotes and citations of the academic literature for those with an interest in pursuing the theory.) More concretely, the text is peppered with fragments of C++ code. A working programmer who visits the academic literature is often faced with the daunting task of converting prose, equations or breezy pseudo-code into something suitable for compilation. If a reader of this book does not follow a bit of the discussion, a glance at the nearby C++ code listing will usually set things straight. I have it on good authority that functioning source code for the examples in the book will appear on the www.ai4games.org website "soon."

The book is divided into seven chapters (Introduction, Acting, Perceiving, Reacting, Remembering, Searching, and Learning) plus a Preface, two appendices, an extensive Bibliography and an Index. The chapter on "Acting" introduces the simple game of tag used as an example throughout the book. It further sets the stage by describing the principal components of the game engine and the AI system. The third chapter, "Perceiving," introduces percepts -- the formal framework used to encapsulate and manipulate an NPC's awareness of its world. In many games a key concept is filtering out information which is available in the game state but should not be "known" by the NPC. Chapter 4 describes reactive controllers. Funge uses a very strict definition of reactive -- informally, it means a non-deliberative controller, but in this book the term is used to mean strictly stateless. This distinction has a practical consequence, since a stateless controller can be shared among multiple NPCs. (Yet I wondered how important this is in practice. The point is not explored in any depth, and a "slightly stateful" reactive controller can be very useful.) The chapter on "Remembering" introduces memory percepts, mental state, beliefs and communication between NPCs. The sixth chapter covers "Searching" -- through trees of possible future actions, often referred to as planning. The extensive treatment of search includes both examining the host of options available to an NPC at each juncture and reasoning about the interaction of one NPC's behavior with another's, known as adversarial search. The final chapter covers "Learning." It looks at both offline learning (which happens before the game is shipped) and online learning (which happens during gameplay). The former is merely an aid to game development; the latter promises NPCs that can adjust to the player's skill and style of play. Online learning presents many more technical challenges. In fact, my first impression on reading this section was that it was less practical than the rest of the book because of the difficulties of online learning. However, from the description of this GDC 2005 lecture, it appears that Funge and his colleagues have made significant progress in this area.
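The stateless-controller point lends itself to a tiny illustration. Below is a minimal sketch (my own, not from the book; all types and names are hypothetical) of a reactive controller whose decision is a pure function of its current percepts, which is why a single instance can safely drive any number of NPCs:

```cpp
#include <cmath>

// Hypothetical minimal types -- not taken from the book's code.
struct Vec2 { float x, y; };

// A strictly stateless reactive controller: decide() depends only on
// the percepts passed in, never on stored history, so one shared
// instance can serve every NPC in the game.
struct FleeController {
    // Returns a unit-length direction pointing away from the pursuer.
    Vec2 decide(const Vec2& self, const Vec2& pursuer) const {
        Vec2 d{self.x - pursuer.x, self.y - pursuer.y};
        float len = std::sqrt(d.x * d.x + d.y * d.y);
        if (len < 1e-6f) return Vec2{1.0f, 0.0f}; // arbitrary escape direction
        return Vec2{d.x / len, d.y / len};
    }
};
```

A "slightly stateful" variant would add member data (say, a committed escape direction), at the cost of needing one controller instance per NPC.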

I recommend Artificial Intelligence for Computer Games: An Introduction to commercial game AI programmers, as well as other game programmers and designers who wish to learn more about this area. Because of its sound academic underpinning, the book will also be of interest to students of artificial intelligence and to professionals in related areas such as agent-based simulation and training.


Reynolds is a Senior Research Scientist in the R&D group of Sony Computer Entertainment America. His interests center on modeling behavior of autonomous characters, particularly steering behaviors for agile life-like motion through their worlds. See his page on Game Research and Technology. You can purchase Artificial Intelligence for Computer Games: An Introduction from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
  • What I need... (Score:5, Interesting)

    by It doesn't come easy ( 695416 ) * on Wednesday January 26, 2005 @03:38PM (#11483733) Journal
    What I need is a game with AI that can evaluate my play and tell me how to play better against my opponents -- kind of like reviewing an opposing team's old games to learn their patterns and weaknesses, only giving me feedback in real time while I'm playing.
  • by hedley ( 8715 ) <hedley@pacbell.net> on Wednesday January 26, 2005 @03:45PM (#11483820) Homepage Journal
    I have yet to find suitable AI for that game. To me, that type of AI (dealing with imperfect information) is the holy grail of game AI. This covers a vast family of games (poker comes to mind) where information about the game state is disclosed over time. I once found a small Stratego game on the net that played an unbelievably good game. After saving and restoring a game a few times and losing (whereupon it shows you its pieces), I realized it had actually cheated in real time and would move its pieces to match yours. Thus the small executable! :)

    Hedley
  • Steve Jackson: Ogre (Score:5, Interesting)

    by Weaselmancer ( 533834 ) on Wednesday January 26, 2005 @03:48PM (#11483861)

    Buy an old C64 or Amiga copy still in the box if you can. Seriously, I mean it. It comes with 2 manuals.

    Book 1 has a short story and some player info, and Book 2 describes step-by-step exactly how they developed the AI for the game: going to cons and watching successful players play, getting them to share their strategies, and then translating those ideas into code. As a bonus, they describe the exact formulas the Ogre uses to determine its move, targeting sequence and deployment of arms.

    It's brilliant, informative, and well worth the price of the game alone. Highly recommended reading if you're into game AI.

  • Re:starcraft yay (Score:4, Interesting)

    by vadim_t ( 324782 ) on Wednesday January 26, 2005 @03:53PM (#11483913) Homepage
    Not very surprising, really.

    Starcraft is an RTS, and in an RTS omniscience and omnipresence are quite an advantage. Put simply: the AI is never distracted, and it knows the position of every unit, the progress of every operation, and every resource.

    Now, combine that with a decent AI implementation that can use elements of the game to its advantage, and it shouldn't be that hard to code something that would crush newbies without much trouble.

    I'd be more impressed if it was turn based, since that would make competition on equal terms perfectly possible. I remember the AI in X-Com used to give me quite a lot of trouble.
  • Intelligent FPS? (Score:4, Interesting)

    by DrXym ( 126579 ) on Wednesday January 26, 2005 @03:53PM (#11483914)
    Most NPCs or 'bots in FPS games I've played have had an AI that can be encapsulated by these few lines of pseudo-code:

    IF playerCanSeeMe() THEN
        IF coverNear() && rand() > 0.5 THEN
            takeCover();
        ELSE
            standUp();
            shoot();
        ENDIF
    ELSE
        advanceTowardsPlayer();
    ENDIF
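Rendered as compilable C++ for anyone who wants to poke at it (function and action names are made up, not from any real engine; the world-query results are passed in explicitly so the decision logic is a pure function):

```cpp
#include <string>

// A direct translation of the IF/THEN sketch above. standUp() is
// folded into the "shoot" action; "roll" stands in for rand().
std::string botAction(bool playerSeesMe, bool coverIsNear, double roll) {
    if (playerSeesMe) {
        if (coverIsNear && roll > 0.5)
            return "takeCover";
        return "shoot";       // stand up and fire
    }
    return "advance";         // player can't see me: close the distance
}
```

That the whole policy fits in one pure function is rather the poster's point.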

    I wish the likes of Doom 3, HL2 et al. would pay half as much attention to making the enemies smart and resourceful as they do to making the scenery pretty. Sometimes I wonder if zombies are such a staple of FPS games to explain why the game AI is so brain-dead.

    Even multi-player games could benefit (e.g. the Battlefield series) if the single player training mode bots had an ounce of sense or tactics.

    The only FPS I would consider to contain remotely convincing AI is Far Cry, and even the NPCs in that are fairly predictable and easy to fool -- just swim to an island and pick them off one by one as they swim to you or drown trying. But at least they seem to have a spoonful of brains in their heads -- crouching, taking cover, encircling, giving orders and other tactics that other games haven't even bothered to implement.
  • Re:starcraft yay (Score:5, Interesting)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Wednesday January 26, 2005 @03:57PM (#11483975) Homepage
    Well, in a certain sense, "good" AI for games doesn't necessarily mean "harder to beat". Take a fighting game like Tekken as an example. Technically, you could just program the AI opponent to block every time you press the "attack" button, and to attack whenever you can't block. Suddenly you have "unbeatable AI", but it's not really *good* AI.

    And I don't just mean "it's not technically impressive". I mean, considering the purpose of in-game AI, it's *bad* AI. Good AI generally simulates more complicated human interactions. Good AI can be tricked or distracted, and can learn so it's not so easily tricked the next time. I really like the idea of AI that will adjust to the player's skill level to always provide gameplay that is exciting and challenging, yet beatable.

    In other words, I believe "good" AI in a game is not defined by being hard to beat, but by being fun to play against.

  • by EXTomar ( 78739 ) on Wednesday January 26, 2005 @04:13PM (#11484165)
    The next "killer app" for MMOGs is advanced, learning AI. Right now games are trying to cover up the simplistic behavior of NPCs by creating complex scripts around them.

    Example:
    - Between 100%-75% health, Dragon will fight as normal.
    - At 75% health, the Dragon will breathe fire in an attempt to kill as many nearby players as possible, then fly over to the west part of the chamber.
    - Between 75%-50% health, Dragon will fight as normal and start using its tail.
    - At 50% health, the Dragon will fly to the east part of the chamber, breathing fire onto the players as they run from the west part of the room to the east.

    So on and so forth. The problem is that humans can easily see patterns like this. This "event driven" behavior only works while players are "surprised", and it becomes a serious liability once players discover the pattern. Once the pattern is "discovered", players will scatter around 75% to avoid the fire. At 50% they will run to the eastern part of the room before the dragon gets there, to avoid it breathing fire onto the western half.

    To avoid some of this predictability, some monsters appear to have "randomized behavior". A monster has 5 different "actions"; a programmer weights the choices and generates a random number. This makes the monster appear to have some tactics, trying different attacks -- but just as often as it throws the player off, it will randomly choose a poor action.
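The weighted-random scheme described above is only a few lines of code; here is a hedged C++ sketch (action names and weights invented for illustration):

```cpp
#include <random>
#include <string>
#include <utility>
#include <vector>

// Draw one action with probability proportional to its designer-assigned
// weight -- the "randomized behavior" scheme described above.
std::string chooseAction(
    const std::vector<std::pair<std::string, double>>& weighted,
    std::mt19937& rng) {
    std::vector<double> w;
    for (const auto& a : weighted) w.push_back(a.second);
    std::discrete_distribution<std::size_t> dist(w.begin(), w.end());
    return weighted[dist(rng)].first;
}
```

The weakness the poster describes is visible right in the code: the draw is memoryless, so nothing stops it from picking a tactically poor action at the worst moment.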

    I believe advanced AI techniques like neural nets will be the next "killer app" for MMOGs. Learning AI is not impractical for a single-player stand-alone game, but it is not as "exciting", nor does a single-player system have enough computing power and "experience" to really put a neural net through its paces.

    The Dragon in the example starts out like the players, in that neither side knows exactly how to win. Regardless of the outcome, both the server/Dragon and the players should learn something from the encounter. Have enough players run against The Dragon and it might start to learn things like "fire seems to be more effective against melee". When it sees a raid comprised mostly of melee and very few casters, it chooses its fire attack far more than its melee. This is a far better option than "randomizing attacks" or scripted behavior. The Dragon is now actually using tactics and reacting to the players in a pseudo-intelligent manner.
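A crude sketch of what such a learning monster could look like under the hood -- plain frequency counting rather than a neural net, with all names invented for illustration:

```cpp
#include <map>
#include <string>
#include <utility>

// The simplest possible online learner: tally damage dealt per use of
// each attack, then prefer the attack with the best observed average.
struct AttackStats {
    // attack name -> (total damage dealt, number of uses)
    std::map<std::string, std::pair<double, int>> totals;

    void record(const std::string& attack, double damage) {
        auto& t = totals[attack];
        t.first += damage;
        t.second += 1;
    }

    // Attack with the highest average damage so far (empty if no data).
    std::string best() const {
        std::string pick;
        double bestAvg = -1.0;
        for (const auto& kv : totals) {
            double avg = kv.second.first / kv.second.second;
            if (avg > bestAvg) { bestAvg = avg; pick = kv.first; }
        }
        return pick;
    }
};
```

A real implementation would still mix in some exploration (occasionally trying non-best attacks) so the monster keeps gathering data, and so it doesn't become as predictable as the scripts it replaces.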

    If we really want to go far-fetched, it would be great if each server's instance of The Dragon "learned" on its own and developed a personality and behavior unique unto itself. One server's Dragon might like to fly around, compared to another that likes to walk when moving around. Of course, one of the tricks is keeping the game engaging. No one wants to fight The Dragon if they know it will beat them 9 times out of 10.

    Some NPCs should be designed simplistically because that is their nature. Some NPCs are highly intelligent and should act accordingly. I await the day when you can mount true tactical attacks against the computer, instead of having to resort to a scripted monster or just filling the other side with human players.
  • Re:starcraft yay (Score:2, Interesting)

    by Wildclaw ( 15718 ) on Wednesday January 26, 2005 @04:21PM (#11484244)
    That is completely incorrect. The Starcraft AI could be beaten playing 1 vs 7 if you knew what you were doing, played on the correct maps and exploited the flaws in the AI.

    The best maps are determined by the distance from the AI's home base to yours: a farther distance buys you a few extra seconds, and that determines life or death.

    By building a good enough defense (especially at the front door) you can hold off the computer AIs with minimal expenditure until they are out of resources. That is your cue to slowly move in and destroy them one by one.

    Terran was probably the easiest to do this with. Tanks with bunkers in front of them can hold off most things that come through the front door, and later in the game the base should be quite heavily turreted. You can also use supply depots to completely block off the base entrance. I do consider that to be exploiting the pathfinding AI, and it is possible to win without them.

    Protoss makes good use of massive static defense early on. Place pylons in front of the cannons to make the AI target those first. A couple of reavers will also help out, and as the game goes on you need to gain air superiority (carriers) quickly. The main problem, if I remember correctly, is the first Terran tanks: they outrange the static defense and they appear quite early.

    Zerg is the most difficult. I haven't done it myself, actually; I have only seen it done by a friend. Guardians are the key. The problem is getting them in time. A few anti-ground buildings will probably also be needed to slow the early onslaught. Once you have air superiority with Zerg, it becomes easier.

    These tactics won't work with the AI script that Blizzard added in a later patch, because that script cheated (it created units out of thin air). Also, you will probably want to use lower speed settings. Even with the tactics I mentioned, a lot of micromanaging is still needed to actually win.
  • by ThePyro ( 645161 ) on Wednesday January 26, 2005 @04:32PM (#11484375)

    All of the Total Annihilation AIs (that I'm aware of) cheat by knowing where your units are without having to do any reconnaissance. The very first attack by the AI will always head straight for your base, even though the AI has sent no previous scouts. The AI does a similar thing when attacking your expansions.

    The StarCraft AI also cheats in this fashion.

    The trouble with most RTS AIs is that they're just not set up to deal with imperfect information. Exploration, one of the X's in classic 4X games, gets totally left out by the AI. Consequently, the human player loses the opportunity to try all sorts of "stealth" tactics.

    As another poster already mentioned, a big step forward will be game AIs that can deal well with imperfect information: an AI that must use scouts, and can sometimes be fooled by cleverly planted misinformation or diversions...
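As a sketch of what "dealing with imperfect information" might mean mechanically (all types and names hypothetical, not from any shipping RTS): the AI consults only a memory of last-seen sightings rather than the true game state, so its picture of the map can be stale -- or deliberately planted by the player:

```cpp
#include <map>
#include <string>

// One remembered observation of an enemy unit.
struct Sighting { int x, y, tick; };

// The AI's only window onto enemy positions: reports filed by its own
// scouts. No report, no knowledge -- and old reports can be wrong.
struct ScoutMemory {
    std::map<std::string, Sighting> lastSeen;

    void report(const std::string& unit, int x, int y, int tick) {
        lastSeen[unit] = Sighting{x, y, tick};
    }

    // True if we have any report on this unit, however stale.
    bool knownPosition(const std::string& unit, Sighting& out) const {
        auto it = lastSeen.find(unit);
        if (it == lastSeen.end()) return false;
        out = it->second;
        return true;
    }
};
```

An AI built on this structure has to spend units on reconnaissance, and a human can feed it a diversion by letting a decoy be spotted -- exactly the "stealth" tactics the poster says current RTS AIs rule out.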

  • Re:What I need... (Score:3, Interesting)

    by SharpFang ( 651121 ) on Wednesday January 26, 2005 @04:37PM (#11484424) Homepage Journal
    Actually, that's a neat idea that could be realized in a simple way: get an "adaptive AI" that learns your habits. Play for an hour or so, "teaching" it. Then switch to a mode where the AI doesn't try to EXPLOIT your habits but to EMULATE them. The result is that you fight "against yourself" and can learn your own strengths and weaknesses. Wash, rinse, repeat.
  • difficulty (Score:2, Interesting)

    by Khashishi ( 775369 ) on Wednesday January 26, 2005 @04:57PM (#11484658) Journal
    This is why you need configurable difficulty settings or some sort of adaptive difficulty servo.
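A minimal version of such a difficulty servo (a sketch of the idea, not taken from any particular game): treat each win or loss as feedback and nudge a difficulty scalar toward a target player win rate:

```cpp
// Proportional controller on the player's win/loss outcomes: a win
// pushes difficulty up, a loss pushes it down, steering the observed
// win rate toward the target. Gains and ranges are arbitrary.
struct DifficultyServo {
    double difficulty = 0.5;   // 0 = trivial, 1 = maximal
    double target = 0.5;       // desired player win rate
    double gain = 0.1;         // how fast the servo reacts

    void update(bool playerWon) {
        double error = (playerWon ? 1.0 : 0.0) - target;
        difficulty += gain * error;              // win -> harder, loss -> easier
        if (difficulty < 0.0) difficulty = 0.0;  // clamp to valid range
        if (difficulty > 1.0) difficulty = 1.0;
    }
};
```

In practice you would smooth over many outcomes rather than jumping on every game, so a lucky streak doesn't whipsaw the difficulty.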
  • by TiggertheMad ( 556308 ) on Wednesday January 26, 2005 @05:07PM (#11484763) Journal
    Funny story: a few years back, I was walking through the MS building where they were working on the Mechwarrior 4 game. It is always fun to walk by the game dev departments and chat with people about the latest projects.

    I was having a conversation with a guy who was working on AI algorithms, and I asked what sort of schemes he used: fuzzy logic, genetic learning, or weighted neural nets? He told me they didn't bother with academic AI techniques, because he could already write an AI that could beat the player every time without them.

    I was completely at a loss for words, so I just thanked him and ran away.
  • by Pausanias ( 681077 ) <pausaniasx@ g m a il.com> on Wednesday January 26, 2005 @05:12PM (#11484824)
    Some of you might be aware that the PC/Mac/Linux game Neverwinter Nights [bioware.com] includes a toolset with a C-like scripting language that allows users to code the behavior of characters in a game -- not just for combat, but generic interactions as well.

    BioWare, the developers of the game, are known for the imaginative story lines in their Star Wars: Knights of the Old Republic [bioware.com] and Baldur's Gate [bioware.com] series. However, by their own admission, they never have as much time as they want to work on creature AI. In Neverwinter Nights, this shortage of time resulted in a number of unfortunate situations during game play. For example, friendly characters would waste powerful spells on pitifully weak enemies; or they would continually attempt to cast spells in close hand-to-hand combat, not realizing that this gives the close-by enemy countless opportunities to tear them into pieces, and that pulling out that dagger in their backpack might be a better idea. Especially sad were near-death enemies who would try to heal themselves with woefully inadequate healing spells (in RPG talk, down 80 hit points and casting cure minor wounds).

    Luckily, the toolset allowed a number of us to code improvements to NPC behavior. I was one of them, starting the Henchman Inventory and Battle AI project [ign.com], now led by Tony K. The focus of our project was immediate improvement of game play. An even more impressive community is the Memetic AI [memeticai.org] group. These folks are putting together a full package of complex behaviors for an entire world, from peasant farmers to fearsome dragons. Impressive stuff.
  • Re:starcraft yay (Score:2, Interesting)

    by greyhoundpoe ( 802148 ) on Wednesday January 26, 2005 @05:51PM (#11485296)
    I don't want a game to let me win any more than I want it to cheat. Adjustable AI takes all of the meaning out of structured difficulty levels. I don't want an AI to take pity on me and make me think I'm better than I am--if I should lose, I should lose. The example that comes to mind is Warcraft II: Beyond the Dark Portal--the AI was intensely difficult, but if you destroyed your own towers at the beginning of many levels the AI would think you were weak and take a fall. Some levels were pitifully easy if you sacked parts of your own base early on.

    Now what would be useful is what another poster mentioned--if the AI would kick my ass anyway, but then figure out what I did wrong and give me a tip or two (build more troops earlier, you teched too fast; work on keyboard shortcuts, your actions-per-second rate is too slow).
  • Re:starcraft yay (Score:4, Interesting)

    by nine-times ( 778537 ) <nine.times@gmail.com> on Wednesday January 26, 2005 @06:19PM (#11485597) Homepage
    Adjustable AI takes all of the meaning out of structured difficulty levels.

    I'd say something more like, self-adjusting AI and structured difficulty levels are oppositional methods. Which is to say, if you had good self-adjusting AI, you wouldn't need "difficulty levels". It would adjust.

    Or another possibility: you could have different difficulty levels, the easy levels being self-adjusting, and harder levels that amount to telling the AI, "Eh, do your worst, even if I stink."

    if you destroyed your own towers at the beginning of many levels the AI would think you were weak and take a fall

    But that's more an issue of poor implementation than of the idea itself being bad. Any time there's some trick like that -- one that *always* fools the AI -- it's a problem of poor implementation. In other words: a bug.

    There are two reasons why I like the idea of AI that calibrates itself to the skill level you're playing on. First, there is a certain level of realism to it. In real life, if it seems like you stink at something, your competition will underestimate you. They won't try as hard, because they're not expecting a real challenge. If you constantly pull this, though, against the same opponents, they'll eventually catch on. Good AI should mimic this.

    Second, some games are like an interactive movie when I'm playing them. Sometimes, when I play a game, I'm not that interested in "rising to the challenge". I just want to take control of the main character while he does something cool, and then get on with the story. When playing games like this, "getting stuck" on some stupid boss just isn't fun. It's annoying.

    So this is often where people do cheat. They like the game, but they want to get past some stupidly-difficult part. Cheating, however, breaks the illusion. If the game were capable of "helping you out" a little, it would maintain the experience and let you past.

    I'm not saying the idea is easy to do well. However, ultimately, when I play a game, I'm not looking to prove myself by being "733t". I just don't care, as long as it's fun. Good AI (in relation to games) is AI that makes playing fun. Whatever that entails.

  • by DG ( 989 ) on Wednesday January 26, 2005 @06:42PM (#11485834) Homepage Journal
    I'm totally with you on the ability of people to recognise patterned behaviour, typically far faster than a game designer might suspect.

    But people can also discover "emergent" patterns that aren't necessarily explicitly programmed in.

    I remember playing Sargon III Chess on my C-64. I accidentally discovered that the AI couldn't see -- I'm no chess geek, so I'm sure there's an official term -- "indirect" attacks. Rather than move piece A to square Q to threaten enemy piece X, I'd move some other piece B onto the line of attack that I wanted to make A->X, blocking the attack. Then piece A would be moved into attack position on Q, and piece B moved out of the way, unblocking the A->X attack.

    The AI seemed to be able to predict that a straight move to Q by A would threaten X, and it would be very good at countering those moves. But attacks from a third piece by moving some other piece out of the attack line were invisible to it.

    Once discovered, this led to strategies that involved setting up elaborate attacks hinging on "reveals". It drove the AI nuts. Sadly, actual humans do not suffer from this blind spot, and ol' Sargon did not improve my RL chess-playing ability one bit.

    Here's another example of a different kind:

    One summer, a group of my friends played a TON of the original Battletech board game against each other. We'd start after supper and go to the wee hours of the morning, day after day after day.

    In so doing, we developed a particularly effective strategy. We'd have a 4-lance company. The first lance was composed of stripped-down lightweights equipped with maximum jump jet capacity and a single weapon - a flamer. The second lance was of superheavy, very low-mobility, weakly-armoured, long-range rocket artillery units. The third was ultra-heavy, low mobility, heavily armoured massive close-in-damage units, and the fourth was the reserve unit of heavy cannon equipped hovercraft.

    As is typical for wargames, the faster you move, the harder you are to hit. There was a further negative modifier if the 'mech was jumping. Our lightweights, if they jumped full distance every turn, accumulated so many negative to-hit modifiers that they were unhittable. They would fan out over the game board, spotting the enemy and setting fire to terrain -- which, under the game rules, happened 100% of the time with the flamer -- causing vision blocking due to smoke, plus a chance for the fire to spread to adjacent hexes.

    The lightweights could also spot for the indirect fire lance with minimal penalties. The indirect fire lance would never move; it would just fire salvo after salvo of long range missiles. The hit rate wasn't great and the distribution of LRM fire tends to spread damage easily, but enough would hit as to ablate off some enemy armour - and the psychological effect of taking damage from an unseen source without the ability to retaliate... it was maddening.

    Meanwhile, the heavy, close-in units would slowly advance up to intercept positions. Thanks to the madly-hopping lightweights and the smoke, we'd know where the enemy was but the enemy wouldn't know where we were.

    The enemy would thus blunder up against the close-in units, which did monster amounts of damage with a high hit probability (the enemy unit was often moving slowly, due to the smoke, and the close-in unit would be stationary). It was not unusual to destroy an enemy unit in a single turn.

    If things got sticky for whatever reason, the hovercraft would race in from the flank/rear and could disrupt the most cleverly planned counterattacks.

    With all the practice we got, these tactics became drills - they could very easily have been scripted.

    We put this to the test at a wargame convention, and we slaughtered everybody without losing a single 'mech in any battle. Towards the end, the organizers were matching us at upwards of 3 to 1; we just could not be beaten.

    Needless to say, we were not invited back. :)

    DG
  • by Anonymous Coward on Wednesday January 26, 2005 @06:47PM (#11485882)
    The "AI Wisdom" books are great, especially if you are looking for a fairly specific game algorithm or topic that a more general book might not pick up on.

    "AI Game Development" is a really good book for learning specifically about Neural Nets and Genetic algorithms, complete with code.

    Another book that came out recently is "AI Game Engine Programming," which is pretty cool because the book actually gives working code for each of the AI techniques it discusses. It also has a great section where it breaks down all the major game genres and talks about which kinds of AI might be better/worse to use in each.

    All in all, the books coming out for game AI programmers are getting better and better. I wish I'd had the above titles when I was first learning...
  • by TheMESMERIC ( 766636 ) on Wednesday January 26, 2005 @09:44PM (#11487508)
    I want to know if the theories of Artificial Intelligence are actually implemented in ANY modern game -- not just a nested set of IF-THEN-ELSE statements.

    * Do the characters in the game learn from their environment?
    * Do the characters adjust their tactics to deal with different players?
    * Do the characters have persistent memory?
    * Is the "brain" of the characters actually programmed using an AI language (Lisp, Prolog)?

    But most importantly - does any game pass the Turing Test? ;)
