Dipartimento di Scienze economiche e finanziarie, Università di Torino
The great novelty is that, while in the '60s, to become a good economist, you had to learn a lot of mathematics, now, besides math, you have to be confident with many sophisticated computing tools, coping also with general purpose computer languages.
An objective of the book is to introduce the reader to a number of high-level languages such as:
GAMS, whose home page is at http://www.gams.com/, and states: "The General Algebraic Modeling System GAMS is a high-level modeling system for mathematical programming and optimization. It consists of a language compiler and a stable of integrated high-performance solvers. GAMS is tailored for complex, large scale modelling applications, and allows you to build large maintainable models that can be adapted quickly to new situations".
Mathematica: the home page is at http://www.wolfram.com/ with the proud report that "Mathematica is the world's most powerful general computation system. First released in 1988, it has had a profound effect on the way computers are used in technical and other fields" and that "It is often said that the release of Mathematica marked the beginning of modern technical computing". Despite the emphasis, the assertion is not far from reality.
MATLAB: the home page is at http://www.mathworks.com/, where we find that "MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++, and Fortran".
Duali: the home page of this tool (programmed by Kendrick and Amman, and now by Kendrick in a new C version for Windows) can be found via the book web site, at http://www.eco.utexas.edu/compeco/dualidl.htm; in the user guide we read that "Duali (which is pronounced «dual I») provides a graphical interface for deterministic, passive and active learning stochastic models as well as solvers for deterministic models and for passive learning stochastic models. It does not yet contain a solver for adaptive control models. However it does provide a procedure to export a file, which can be used to solve adaptive control models with the Dual or Dualpc software packages".
The Solver in Excel and the Access database, i.e. the well known components of the Office suite.
That objective is very important and we can agree with it; the proposed learning sequence, however (i.e. becoming acquainted with several high-level languages or tools and only at a later stage progressing "to lower-level languages such as Visual Basic, Fortran, C, C++, or Java"), is debatable.
The study of low-level languages, in my opinion, must go together with the more practical, direct use of high-level tools, so as to have, from the start, a comprehensive vision of what can be obtained with computational tools.
I also suggest that the authors have a look at open source tools such as Octave, as an alternative to MATLAB, and OpenOffice, with OOo Basic (the equivalent of VBA, Visual Basic for Applications, in Office).
To build an object oriented application for economic simulation, like an agent-based simulation model, we also have to consider powerful, but easy to learn, languages, such as Python, http://www.python.org/, or Ruby, http://www.ruby-lang.org/en/. We can also take into account the possibility of implementing the structure and the protocol of high-level simulation tools, like Swarm, http://www.swarm.org, with that kind of simplified language.
The first steps of the construction of a didactical tool which implements the Swarm protocol may be found at http://web.econ.unito.it/terna/simulazione_didattica.
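As a purely illustrative sketch of the point above, the Swarm-style structure (a population of agents plus a schedule that drives them) can be expressed in very few lines of Python. All the names here (Agent, ModelSwarm, the wealth-exchange rule) are hypothetical and chosen only to show how little code a didactic model needs:

```python
import random

class Agent:
    """A minimal agent holding a numeric state (its wealth)."""
    def __init__(self, wealth=1.0):
        self.wealth = wealth

    def step(self, others):
        # Hand a small fixed amount to a randomly chosen partner,
        # but only if enough wealth is available.
        partner = random.choice(others)
        amount = 0.1
        if self.wealth >= amount:
            self.wealth -= amount
            partner.wealth += amount

class ModelSwarm:
    """A schedule-driven container of agents, echoing the Swarm protocol."""
    def __init__(self, n_agents=100, seed=42):
        random.seed(seed)
        self.agents = [Agent() for _ in range(n_agents)]

    def step(self):
        # One tick of the schedule: every agent acts once.
        for agent in self.agents:
            agent.step(self.agents)

    def run(self, n_steps=50):
        for _ in range(n_steps):
            self.step()
        return sum(a.wealth for a in self.agents)

model = ModelSwarm()
total = model.run()
print(round(total, 6))  # total wealth is conserved: prints 100.0
```

Even in this toy example the separation between the agents' rules and the schedule that activates them, which is the heart of the Swarm protocol, is clearly visible.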
* * *
About the contents of the book, I have summarized the various topics in Table 1. The table also provides my personal judgment on the difficulty of each subject. Note that the authors allow the reader to access the code of all the applications at the book web site (http://www.eco.utexas.edu/compeco/, well done!).
Table 1. Chapters, Contents, Languages and Difficulty level

Contents include: Databases (and their use in economics); Thrift (saving, for common people, my note); General Equilibrium Models; Genetic Algorithms and Evolutionary Games; Genetic Algorithms and Portfolio Models; Rational Expectations Macro.

The difficulty judgments range from easy-medium (this is a difficult subject, but the Duali code simplifies the implementation), to a little bit hard (because of the novelty of the subject for many readers, though not among the readers of JASSS), to difficult for the Rational Expectations Macro chapter (even in this case Duali simplifies the implementation).
The presence of a chapter devoted to agent-based simulation in this kind of book is very important, but uncommon, and the choice made by the authors deserves great appreciation. The introduction of the chapter reports that (p. 267):
"Agent-based computational economics (ACE) is one of the newer fields in economics. Agent based models simulate the behavior of multiple heterogeneous agents interacting in a variety of ways. While the modeling of economic agents has a long tradition, agent-based modeling departs from it in a number of ways. For example, when modeling a market economy, the standard neoclassical competitive general equilibrium approach usually assumes that agents have fixed preferences, perfect and complete information, and no reproductive behavior. Also this approach assumes that trade is organized by a central auctioneer, which given all agents' preferences and endowments, computes the set of equilibrium prices. Thus, agents are price-takers and do not engage in trade at prices other than those given by the central auctioneer. Moreover, space, which is geography, is usually an absent dimension in that approach. In contrast, agent based models allow agents to display a number of more realistic characteristics and behaviors, that is, changing preferences, bounded rationality and memory, imperfect and incomplete information, and local trade; agents may interact with neighbors in a geographically defined space and prices emerge from these decentralized interactions".
The model introduced is the well known Sugarscape model, by Epstein and Axtell. Symmetrically, Epstein and Axtell (1996, Chapter 1, Introduction, p. 1) assert:
"Herbert Simon is fond of arguing that the social sciences are, in fact, the hard sciences. For one, many crucially important social processes are complex. They are not neatly decomposable into separate subprocesses - economic, demographic, cultural, spatial - whose isolated analyses can be aggregated to give an adequate analysis of the social process as a whole. And yet, this is exactly how social science is organized, into more or less insular departments and journals of economics, demography, political science, and so forth. Of course, most social scientists would readily agree that these divisions are artificial. But, they would argue, there is no natural methodology for studying these processes together, as they coevolve. The social sciences are also hard because certain kinds of controlled experimentation are hard. In particular, it is difficult to test hypotheses concerning the relationship of individual behaviors to macroscopic regularities, hypotheses of the form: If individuals behave in thus and such a way - that is, follow certain specific rules - then society as a whole will exhibit some particular property. How does the heterogeneous micro-world of individual behaviors generate the global macroscopic regularities of the society? Another fundamental concern of most social scientists is that the rational actor - a perfectly informed individual with infinite computing capacity who maximizes a fixed (nonevolving) exogenous utility function - bears little relation to a human being. Yet, there has been no natural methodology for relaxing these assumptions about the individual. Relatedly, it is standard practice in the social sciences to suppress real-world agent heterogeneity in model-building. This is done either explicitly, as in representative agent models in macroeconomics, or implicitly, as when highly aggregate models are used to represent social processes. 
While such models can offer powerful insights, they 'filter out' all consequences of heterogeneity. Few social scientists would deny that these consequences can be crucially important, but there has been no natural methodology for systematically studying highly heterogeneous populations. Finally, it is fair to say that, by and large, social science, especially game theory and general equilibrium theory, has been preoccupied with static equilibria, and has essentially ignored time dynamics. Again, while granting the point, many social scientists would claim that there has been no natural methodology for studying nonequilibrium dynamics in social systems".
The book shows how to build the simplest version of an agent-based model, where the basic MATLAB features are sufficient (vectors and matrices, with the addition of the data types named structures and cell arrays, all explained). More sophisticated simulations require the use of object oriented programming, also available in MATLAB, as well as in low-level programming languages (like C++, C# and Java). My addition is that there is also an important collection of specialized agent-based simulation tools, such as Swarm and all its derived clones, or NetLogo and StarLogo, which the book could have quoted to offer a wider view and more information; scripting languages like Python and Ruby can also be used at an introductory and didactic level, as seen before.
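To give an idea of how compact such a model can be in a scripting language, here is a minimal, purely illustrative Sugarscape-flavoured sketch in Python (not the book's MATLAB code; the grid size, the regrowth rule, and the agents' behaviour are simplified assumptions of mine):

```python
import random

random.seed(1)
GRID = 10  # a small torus for illustration; the original Sugarscape uses 50x50

# Sugar grows on each site up to a fixed, randomly assigned capacity.
capacity = [[random.randint(0, 4) for _ in range(GRID)] for _ in range(GRID)]
sugar = [[c for c in row] for row in capacity]

class Ant:
    """A Sugarscape-style agent: look around, move to the richest site, eat."""
    def __init__(self, x, y):
        self.x, self.y = x, y
        self.wealth = 0

    def step(self):
        # Inspect the current site plus the four von Neumann neighbours
        # (the torus wraps around the edges).
        sites = [(self.x, self.y)] + [((self.x + dx) % GRID, (self.y + dy) % GRID)
                                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        self.x, self.y = max(sites, key=lambda p: sugar[p[0]][p[1]])
        self.wealth += sugar[self.x][self.y]
        sugar[self.x][self.y] = 0  # harvest the site

ants = [Ant(random.randrange(GRID), random.randrange(GRID)) for _ in range(20)]
for _ in range(10):  # ten simulated periods
    for ant in ants:
        ant.step()
    # Sugar grows back by one unit per period, up to each site's capacity.
    for i in range(GRID):
        for j in range(GRID):
            sugar[i][j] = min(capacity[i][j], sugar[i][j] + 1)

print(sum(a.wealth for a in ants))
```

The point is not the model itself, which is deliberately crude (agents may even share a site here), but that the whole loop of local perception, movement and resource regrowth fits in a page of readable code.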
And now? Let us accept the invitation to play with the proposed models, always remembering that complexity is around the corner, but... this is another story.
EPSTEIN JM and AXTELL RJ (1996) Growing Artificial Societies. Social Science from the Bottom Up. Cambridge, MA: The MIT Press.
© Copyright Journal of Artificial Societies and Social Simulation, 2007