
Human Cognition and Social Agent Technology

Edited by Kerstin Dautenhahn
Amsterdam: John Benjamins Publishing Company
Cloth: 1-556-19435-8


Reviewed by
Alexis Drogoul
LIP6, Université Paris 6, France.


Let me begin my review by saying that this book is one of the most interesting I have read in the last five years. I would also like to mention that, while it is usually underestimated relative to the authors' contributions, the work done by the editor here is genuinely impressive. The biggest difficulty, in this case, is that the authors come from very different disciplines (AI, sociology, art and robotics), and yet there is a strong unity throughout the book. The organisation is clever and allows the reader (whether their background is in sociology or computer science) to progress quite easily.

The rationale of this book is to present an innovative exploration of human social cognition that does not use the traditional tools of social scientists and psychologists. Instead, the project is carried out in a constructivist way, by designing artificial agents (software or robots) intended to enter into social interactions with their "users". Although most of the chapters apparently deal with design-only issues (What kinds of architecture are to be employed? How do we build artificial emotions?), they actually raise fundamental questions about the meaning of an "artificial social cognition". These questions in turn prompt re-examination of social cognition in human beings. (How can a machine be considered as "social"? What behaviours and representations are invoked in social relationships?)

This book therefore targets two categories of readers: social scientists interested in discovering what might well become a new kind of social science, and computer scientists aiming to enhance the user experience of their creations (from GUI design to interactive features). It will provide both with an accurate overview of recent research in these areas.

Someone like Daniel Dennett would enjoy reading this book. As a matter of fact, every chapter hinges on the intentional stance of the user and what might be called the intentional dye that researchers try to provide for their artificial agents. The core problem that the book deals with (different from the one found, for instance, in Distributed Artificial Intelligence) is not how to design "social" behaviours that could enhance the performance of artificial agents behaving on their own. Instead, it is how to provide these agents with behaviours that will be immediately perceived as "social" by their users. The consequence is the substitution of a notion which can be judged in an objective way (by what amount does the development increase the speed, accuracy or reliability of a multi-agent system?) with one that can only be subjectively and contextually evaluated. This is a welcome change in mentality, and this book will undoubtedly convince many computer scientists that developing socially situated agents requires strong interactions with social scientists. However, whether or not it will convince social and cognitive scientists that they might learn something new from such interdisciplinary research is another story. As a matter of fact, one can be sceptical about the real motivation that underlies some of this work. It is fairly easy, once an artificial character mimics what the user expects to see or hear in a social relationship, to change a user-oriented research perspective into a consumer-oriented marketing one. From there, it is a short step to relying on superficial notions for designing artefacts like interactive toys, virtual avatars and so on. Fortunately, these topics only seem to concern a few chapters in the book.

Of course, once the book has been read and closed, one cannot prevent oneself from facing a central question: what was this book really about? There is a French proverb, "Qui trop embrasse mal étreint". (This can be rendered literally as "He who embraces too much grasps badly" and semantically as "Grasp all, lose all".) This expression is close to the overall impression that the book made on me, in that it implies that you may lose some of the things you are trying to grasp at once, but not all of them. As a matter of fact, by dealing both with what human (social) cognition is and with whether and how advanced software programmes or robots can become social artificial agents, this book has a very ambitious goal. It is trying to define a notion of sociality per se which might be identified independently of its natural substrate (the human being). The book is perhaps too ambitious in that respect, and could lead to misinterpretation or (to be more precise) over-enthusiastic interpretation. Nevertheless, I would recommend it to anyone interested in the impact of innovative Artificial Intelligence research on social science.

I am still wondering, however, what it would mean to design asocial interactive artefacts...


© Copyright Journal of Artificial Societies and Social Simulation, 2003