© Copyright JASSS

Computational Intelligence in Games

Edited by Norio Baba and Lakhmi C. Jain
Heidelberg: Physica-Verlag
Cloth: 3-7908-1348-6

Reviewed by
Takeshi Takama
Transport Studies Unit, Department of Geography, University of Oxford, UK.

Computational intelligence in games is a hot topic today. Powerful computers are used not only in rocket science but also in developing commercial computer games. In fact, investing in computer games can be more attractive than investing in blockbuster movies. However, references to these facts are probably not needed, especially if you are planning to read this book, which is (perhaps surprisingly) part of a series called "Studies in Fuzziness and Soft Computing". It broadly covers three types of games:

  1. Traditional board games (Checkers, Hex and Go)
  2. Simulated sports games (autonomous robot football, as in the RoboCup)
  3. Economic games (a market selection game)

Each chapter contains its own specialised methodologies and provides interesting background on its game.

I shall briefly summarise and comment on each chapter and then give some overall comments about this book and provide a recommendation to its potential readers.

Summary and Comments on Each Chapter

Introduction to Computational Intelligence Paradigms
Z. Chen, A. M. Fanelli, G. Castellano and L. C. Jain

This chapter deals with several important paradigms in computational intelligence, including probability theory, fuzzy logic, artificial neural networks, genetic algorithms and rough sets. Some of these, such as artificial neural networks, are explained better than others, since it would be impossible to discuss them all in depth within a short introductory chapter. Despite the inevitably brief introduction which results, each paradigm is dealt with in a way which will be accessible to new researchers.

Evolving a Neural Network to Play Checkers without Human Expertise
K. Chellapilla and D. B. Fogel

This chapter first explains the neural networks and evolutionary algorithms used in game-playing programs. The authors go to some lengths to explain each paradigm. They mention the limitations of the back propagation method and discuss one particular approach that involves adding to the degrees of freedom available in the network. They also explain the procedure for the evolutionary algorithm that selects the network architecture. It appears that the methodology used for computer programs to play Checkers (Draughts) may not be sufficient for Chess. However, the history and comparison of various Checkers programs is still very interesting and enjoyable to read.

Retrograde Analysis of Patterns versus Metaprogramming
T. Cazenave

This chapter talks about game solving in terms of a deductive approach and deals mainly with the game of Hex but also considers Go. Initially, the author explains the histories and rules of the two games, as do other authors in the book. His main point is that knowledge of the game is more important than search ability for an optimal solution in terms of deductive searching. This is interesting because the chapter provides a different viewpoint from common pattern recognition approaches to game solving.

Cazenave continues by outlining two approaches to generating knowledge:

  1. Retrograde analysis (also known as backward induction in other disciplines)
  2. Metaprogramming

As one might expect, the first approach may not remain computationally tractable as the game progresses. Cazenave criticises this limitation and proposes metaprogramming as a potentially better solution, arguing that the two approaches are likely to be complementary in practice.
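The idea behind retrograde analysis is easy to illustrate on a toy game. The sketch below is my own illustration (not Cazenave's code): it solves single-pile Nim by working backwards from the terminal position, the same back-to-front style of reasoning, though on a far smaller state space than Hex or Go.

```python
# Retrograde analysis (backward induction) on single-pile Nim:
# players alternately take 1-3 stones; taking the last stone wins.
def solve_nim(max_stones: int) -> dict[int, bool]:
    """Return a table: wins[n] is True if the player to move wins with n stones."""
    wins = {0: False}  # no stones left: the player to move has already lost
    for n in range(1, max_stones + 1):
        # n is won if some move leads to a position that is lost for the opponent
        wins[n] = any(not wins[n - take] for take in (1, 2, 3) if take <= n)
    return wins

table = solve_nim(10)
# Multiples of 4 are losing positions for the player to move.
print([n for n in range(11) if not table[n]])  # [0, 4, 8]
```

The table grows with the number of reachable positions, which is exactly why, as Cazenave notes, the approach can become intractable in richer games.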

Learning to Evaluate Go Positions via Temporal Difference Methods
N. N. Schraudolph, P. Dayan, and T. J. Sejnowski

This chapter is about playing the game of Go and improving the Go programs currently available. In brief, Go is a traditional Asian board game. Two players alternately place their coloured stones (black or white), building up territories with chains of their stones.

Some of the interesting features of Go are described in this chapter. First, full game-tree search is infeasible in Go since the branching factor is too large in this particular game. Second, unlike Chess, Go players retain the majority of their stones on the board at the end of the game, so the final state of the board is potentially very informative.

Conventional Go programs use expert systems, but these systems have limited capability for game solving. As a result, these conventional Go programs are rarely better than beginner level. The authors therefore use Temporal Difference learning to train an artificial neural network. This is understandable, especially if you consider the second feature of Go mentioned above. The strength of pattern recognition by the neural network is maximised in this game, because the evaluation depends not only on the score but also on the patterns and positions of the stones at the end.
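The core of temporal difference learning fits in a few lines. The sketch below is my own minimal illustration of a TD(0) update for a position evaluator, not the authors' model: a single linear layer stands in for their network, and the "game" is just a sequence of invented feature vectors with a reward at the end. Each update nudges the evaluation of a position towards the evaluation of its successor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear position evaluator: value(s) = w . features(s).
# In the chapter a neural network plays this role; a linear model keeps it short.
n_features = 16
w = np.zeros(n_features)
alpha, gamma = 0.1, 1.0  # learning rate and discount factor

def value(features: np.ndarray) -> float:
    return float(w @ features)

def td0_update(features: np.ndarray, reward: float, next_features: np.ndarray) -> None:
    """TD(0): move value(s) towards reward + gamma * value(s')."""
    global w
    td_error = reward + gamma * value(next_features) - value(features)
    w += alpha * td_error * features

# Fake "game": random feature vectors, reward only at the final position.
positions = [rng.standard_normal(n_features) for _ in range(5)]
for s, s_next in zip(positions, positions[1:]):
    td0_update(s, 0.0, s_next)
td0_update(positions[-1], 1.0, np.zeros(n_features))  # terminal: outcome 1, a win
```

Repeated over many games, credit for the final outcome propagates backwards through the intermediate positions, which is what lets the network learn an evaluation function from whole games rather than hand-coded rules.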

The other aspects of the network modelling are not very novel if you have read this book from the beginning. However, the authors present a few interesting stories in this chapter, for example about the data used to train the network. It turns out that there is a huge bank of Go data available on the web; some of it is very good but most is of dubious quality. For this reason, the authors used the games of existing computer Go programs as data. This approach makes sense to me: why shouldn't you use the games of conventional Go programs as secondary input data if these programs use knowledge-based systems? This argument could be related to that in the previous chapter. The result of this approach seems to be successful, especially when set against one of the commercial Go programs. This shows how the learning process of the network can be advantageous provided it has quality data from skilled opponents to learn from.

Model-Based Reinforcement Learning for Evolving Soccer Strategies
M. A. Wiering, R. P. Saustowicz and J. Schmidhuber

In this chapter, the kind of game played (football, aka soccer) is substantially different from the board games described in the previous chapters. The motivation for this research comes from the RoboCup. This is not a computer game at all but an international contest in which robots play football autonomously. The authors simulate the RoboCup in the computer instead of building actual robots. There is a clear benefit in doing this, since the world is much simpler inside the computer and the researchers can concentrate on the learning algorithms of the autonomous players.

The authors look at two types of learning algorithm. The first is reinforcement learning with a CMAC function approximator, which optimises the strategy by trial-and-error procedures while filtering sensory inputs. The second is probabilistic incremental program evolution (PIPE). This is based on the Genetic Programming architecture and generates successive populations of functional programs as tree structures. One major difference between PIPE and more traditional Genetic Programming is that the former adapts probability distributions, so good strategies remain in the candidate population with greater probability. The comparison of these methods is discussed well before the authors start talking about their experiments. If you are just interested in comparisons between methods, the first half of this chapter stands up well to being read on its own. However, I have to say that I found the explanation of the methods and the corresponding jargon rather difficult to understand as a reader without previous experience in this area.
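The distinctive feature of PIPE, adapting a probability distribution rather than recombining individuals, can be sketched in miniature. The code below is not PIPE itself (the real method evolves program trees); it is my own simplified distribution-adaptation loop in the same spirit, where a bit-string stands in for a strategy and the fitness function is invented for the example.

```python
import random

random.seed(1)

# Simplified distribution-based search in the spirit of PIPE:
# maintain one probability per decision, sample a population of
# candidate strategies, and shift the distribution towards the best one.
n_bits, pop_size, lr = 20, 30, 0.2
probs = [0.5] * n_bits  # one probability per decision

def sample() -> list[int]:
    return [1 if random.random() < p else 0 for p in probs]

def fitness(candidate: list[int]) -> int:
    return sum(candidate)  # toy objective: count the ones

for generation in range(50):
    population = [sample() for _ in range(pop_size)]
    best = max(population, key=fitness)
    # Move each probability towards the best candidate's choice, so
    # good strategies appear in later populations with greater probability.
    probs = [(1 - lr) * p + lr * b for p, b in zip(probs, best)]

print(f"mean decision probability after adaptation: {sum(probs) / n_bits:.2f}")
```

The contrast with traditional Genetic Programming is that nothing is crossed over or mutated directly; the population is regenerated each time from the updated distribution.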

Fuzzy Rule-Based Strategy for a Market Selection Game
H. Ishibuchi and T. Nakashima

In the final chapter, the book introduces a third type of game. The problem posed is how, over time, agents can find the most profitable market in which to trade. Matters are complicated by the fact that profitability is based on the market price, which is high when fewer agents choose the market and vice versa. This is an application of the general class of Minority Games. Agents in this game have different strategies or personalities (since an agent's strategy does not change in this scenario). The strategies range from random choice to fuzzy learning. The explanation of the model setting is clear, so it is a pity that this chapter does not come earlier in the book. For example, the text explains the stochastic selection strategy (Q-learning) more thoroughly than any other chapter does.
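The payoff structure of such a game is easy to state in code. The sketch below is my own minimal version of one market selection round, not the authors' fuzzy model: two markets, a linear inverse-demand price, purely random agents, and invented numbers throughout.

```python
import random

random.seed(42)

# Minimal market selection round: each agent picks one of two markets,
# and the per-agent profit in a market falls as more agents choose it
# (a linear inverse-demand price, invented for this sketch).
n_agents, base_price = 100, 10.0

def play_round(choices: list[int]) -> list[float]:
    """Return each agent's profit; the less crowded market pays more."""
    counts = [choices.count(m) for m in (0, 1)]
    prices = [base_price * (1 - counts[m] / n_agents) for m in (0, 1)]
    return [prices[c] for c in choices]

choices = [random.randrange(2) for _ in range(n_agents)]
profits = play_round(choices)

# Agents who ended up in the minority market earn the higher price.
minority = 0 if choices.count(0) < choices.count(1) else 1
print(f"minority market: {minority}, its per-agent profit: {max(profits):.2f}")
```

The interesting dynamics come from replacing the random choice with learning strategies, which is exactly where the chapter's fuzzy rule-based agents enter.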

Another interesting feature of this chapter is the combination of fuzzy logic and agent-based modelling. Both fuzzy logic and agent models are very popular concepts in social simulation but they are not often combined in research. In fact, the presented results of the simulation are rather predictable but I think this chapter is still worth reading if you are thinking of implementing fuzzy logic in an agent model.

Overall Comments and Recommendation to Possible Readers

I found the final chapter of the book to be the most satisfying, probably because of my own personal interests. This book covers a wide range of games based on computer intelligence, so readers are likely to find one or more examples relevant. It is not necessary to read all the chapters and you can dip into a few if you know what kinds of games you are curious about. However, most of the chapters include interesting stories and relevant background on each game so you can read the rest of the book for entertainment and interest.

In addition, the introductory chapter (and the contribution by Ishibuchi and Nakashima which might have made a good chapter 2) are well written so young researchers can understand the concepts from several paradigms easily. These chapters thus provide a solid basis for understanding the rest of the book. Having said that, I cannot recommend this book wholeheartedly to new researchers even though the publicity on the book indicates that it is also meant for them. Overall, this should be seen as a relatively short and interesting book for more advanced readers.

© Copyright Journal of Artificial Societies and Social Simulation, 2004