Macaulay Land Use Research Institute, Aberdeen, UK.
This special issue of CMOT on "Social Intelligence" contains four high-quality articles and should have broad appeal among those interested in computational social science, multi-agent computing and related areas. The text totals less than 100 pages (about average for issues of CMOT), but a great deal of thought-provoking material is packed in. All the papers are readable and well structured, and the general quality of editing is high. Currently, PDF versions of CMOT papers back to 1997 are accessible from Kluwer.
In their introductory editorial, Bruce Edmonds and Kerstin Dautenhahn contend that the traditional divide between psychology and sociology is now mirrored by a divide between Artificial Intelligence and Cognitive Science on the one hand, and Artificial Life and Social Simulation on the other, but that a "new breed of interdisciplinary academics", who can be grouped under the rubric "social intelligence", are trying to bridge that gap, "probing the interface between the individual and society".
However, as Edmonds and Dautenhahn point out, the term "social intelligence" is ambiguous: as they phrase the ambiguity, it may refer either to "the intelligence an individual needs to effectively participate in a society", or to "the intelligence that a society as a whole can exhibit". I shall quibble with the editors' phrasing of the second alternative, but the papers in the special issue illustrate the distinction: Conte ("Social Intelligence among Autonomous Agents", pp. 203-228) concentrates on the interactions between social intelligence in the two senses, while the remaining three papers deal only with the second, and specifically with how to achieve the potential problem-solving advantages of agent collectives. So, does this special issue show how the gap between individually-oriented and socially-oriented approaches to multi-agent systems might be bridged using a computational approach, or does it, rather, illustrate how difficult the task will be?
The order in which the papers are discussed here is not that in which they are presented. I discuss first the two papers about which I have least to say (which does not imply lower quality); both of these discuss approaches to multi-agent learning in relation to specific application areas (circuit design and robot control systems). I then turn to the only paper here which concerns the cognitive capacities an individual needs to take part in a complex society (Conte) and finally to Heylighen ("Collective Intelligence and its Implementation on the Web: Algorithms to Develop a Collective Mental Map", pp. 253-280), which considers how humanity's collective intelligence could be improved by making it possible for the World Wide Web to respond adaptively to the way it is used.
In their editorial, Edmonds and Dautenhahn look beyond the work reported in the special issue, drawing a parallel between the physical and social environments of intelligent agents. Just as such agents are now recognised as depending on the computational resources implicit in their physical environment (substituting perceptual updating for prediction via an internal model, for example), so they may depend on those implicit in their social environment. I shall return to this point at the end of the review.
The papers by Takadama, Terano, Shimohara, Hori and Nakasuka ("Making Organizational Learning Operational: Implications from Learning Classifier Systems Agents", pp. 229-252), and by Bull ("On Evolving Social Systems: Communication, Speciation and Symbiogenesis", pp. 281-301) have a good deal in common in terms of topic and treatment. Both concern problem solving by a "society" or (as I would prefer) "team" of agents sharing a common, externally fixed goal, and each associated with a learning classifier system (Goldberg 1989, Holland 1992). Both papers investigate the performance, on a specific task, of a range of combinations of individual-level and social/organisational level learning mechanisms. In both cases, moreover, the structure of the task used as an example suggests a "natural" way of dividing responsibility between agents.
The task used by Takadama et al. is a printed circuit board design problem, in which 92 parts are to be arranged in a two-dimensional layout, with the aim of minimising wiring length, subject to constraints specifying minimum distances between the parts. An agent is assigned to each part, and the agents have a set of 10 primitive actions (translations and rotations) to take them from an initial position where wiring length is minimised but there are multiple overlaps between parts, to one with enough space between the parts. This task is performed repeatedly, adjusting the sequence of moves to reduce the final wiring length, until improvements between iterations become negligible. Four kinds of learning were tested. Two of these, reinforcement learning and rule generation, operate within the classifier systems assigned to individual agents; the other two, rule exchange between agents and organisational knowledge reuse, operate at an "organisational" level (i.e. at a level involving multiple agents).
It was found that all these types of learning contributed either to improving the quality of the solution reached, or to shortening the process of reaching it. The authors suggest that the four mechanisms perform different roles: reinforcement learning as a search function, rule generation as a generator of search methods, rule exchange as a way of changing the search range, and organisational knowledge reuse as a means of limiting large search ranges.
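The division of labour between these mechanisms can be illustrated with a toy sketch in Python. This is emphatically not Takadama et al.'s implementation: the class, the rule representation and all names are my own illustrative assumptions, intended only to show how reinforcement and rule generation sit inside an individual agent, while rule exchange and knowledge reuse operate across agents.

```python
import random

class Agent:
    """Toy classifier-system agent: rules map a situation to a move."""
    def __init__(self, rng):
        self.rng = rng
        self.rules = {}                      # situation -> [action, strength]

    def act(self, situation, actions):
        # Rule generation: invent a rule when no existing rule matches.
        if situation not in self.rules:
            self.rules[situation] = [self.rng.choice(actions), 0.0]
        return self.rules[situation][0]

    def reinforce(self, situation, reward, rate=0.5):
        # Reinforcement learning: move the rule's strength toward the reward.
        rule = self.rules[situation]
        rule[1] += rate * (reward - rule[1])

def exchange_rules(agents, k=1):
    """Organisational learning: other agents copy rules they lack from
    the agent whose rule-set has the greatest total strength."""
    best = max(agents, key=lambda a: sum(s for _, s in a.rules.values()))
    for a in agents:
        if a is not best:
            for situation, (action, strength) in list(best.rules.items())[:k]:
                a.rules.setdefault(situation, [action, strength])

def reuse_knowledge(agents):
    """Organisational knowledge reuse: snapshot all rule-sets so a later
    problem-solving run can start from them instead of from scratch."""
    return [dict(a.rules) for a in agents]
```

A run over the circuit-layout task would interleave these: each agent acts and is reinforced every step, with exchange and reuse applied between iterations.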
The task investigated by Bull is wall-climbing by a (simulated) quadrupedal robot. Bull tested three different agent structures: a single agent controlling all four legs, a set of four identical agents each controlling a single leg, and a heterogeneous set of four co-evolved agents, each controlling a specific leg. (In all of these conditions, each agent's knowledge of its task is encoded in the rules of a classifier system, evolved using a genetic algorithm.) The last approach provided the most successful structure, with the "non-social" (single agent) structure being least successful. In the heterogeneous agent condition, the sets of rules for the four agents were evolved in four distinct populations, but the rule actions could include message passing between the agents, allowing (p. 283):
"the most suitable organisation of the classifiers ... to evolve along with the classifiers themselves."
The idea is that the individual rule-sets constitute local models of the system, while the message-passing between them embodies a co-ordinating global model. Bull also investigated the consequences of adding mechanisms allowing populations of rules to split ("speciation") or fuse ("symbiogenesis"), thus allowing evolution to occur between the homogeneous and heterogeneous multi-agent structures, and various possible intermediates (in which two or three populations of rules, and corresponding types of leg-controlling agent, exist). In the context of the wall-climbing problem the addition of a speciation operator was found to enable the system to find the advantageous heterogeneous agent structure; but a symbiogenesis operator offered no useful function. (It always appeared disadvantageous to fuse any pair of evolving rule-sets, thus reducing the heterogeneity of the set of four agents controlling the individual legs.)
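The two structural operators can be sketched abstractly (my own minimal rendering, not Bull's code): ignore the genetic algorithm and the classifier rules themselves, and represent a control structure simply as a list of populations, each a pair of the legs it controls and its rule-set.

```python
def speciate(structure, i):
    """Split population i, which controls several legs, into two
    populations, each inheriting the parent rule-set for half the legs."""
    legs, rules = structure[i]
    if len(legs) < 2:
        return structure            # a single-leg population cannot split
    half = len(legs) // 2
    return (structure[:i]
            + [(legs[:half], list(rules)), (legs[half:], list(rules))]
            + structure[i + 1:])

def symbiogenesis(structure, i, j):
    """Fuse populations i and j into one, pooling their legs and rules."""
    (legs_i, rules_i), (legs_j, rules_j) = structure[i], structure[j]
    rest = [p for k, p in enumerate(structure) if k not in (i, j)]
    return [(legs_i + legs_j, rules_i + rules_j)] + rest
```

Repeated speciation takes the homogeneous single-population structure to the fully heterogeneous four-population one; in Bull's experiments, moves in that direction proved advantageous, while fusions did not.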
Both of these papers would be at home in journals dealing primarily with what might be called "mainstream" multi-agent system (MAS) research, such as Autonomous Agents and Multi-Agent Systems, where the focus is on software engineering questions, such as when it is advantageous to divide a task between autonomously operating agents, and how sets of such agents can best be designed or evolved. In my view, however, these papers are about "social intelligence" only in a rather specialised sense. First, the intelligence displayed by the agents described has no social content. The systems described are social only in the sense that they involve interaction and message passing between agents. The agents do not explicitly represent or reason about social relationships or processes: they keep their attention on the domain problem. Second, the interactions between agents are of a limited range of types, since all agents are part of a system with a common, externally predefined goal, and real conflicts of interest between the agents do not occur.
This brings me to the second definition of "social intelligence" suggested by the editors: "the intelligence that a society as a whole can exhibit". This raises two linked questions. First, are the "societies" of agents constructed by MAS researchers sufficiently like natural societies (of humans or other animals) for the application of the term to be useful? I consider that agent teams such as those described above, which co-operate to achieve a predetermined goal, and thus cannot find themselves in conflict over top-level goals, are so unlike natural societies that the use of the term is questionable. At the very least, since the term "social" is often applied to such teams, we should be aware how different they are from natural societies. Such a system lacks the tension between individual and group goals, and the consequent need for monitoring and enforcement, which run through the dynamics of natural societies. This tension has long been a major concern of social theorists and modellers (Hobbes 1914; Ostrom, Gardner and Walker 1994; Rouchier et al. 2001; Gotts, Polhill and Law in press). Second, is a society the sort of thing that can exhibit intelligence? Intelligence has to be exhibited in solving problems, and it is not obvious that human or other animal societies have sufficiently determinate goals to be considered problem-solvers. Human social groups smaller than entire societies clearly do act as problem-solvers, often being set up by co-operating individuals in order to achieve particular goals, although in all but trivial cases the possibility of clashes of interests between group members must be allowed for.
I turn now to Conte's paper, the only one in the issue concerned with the cognitive capacities an individual needs in order to take part in a complex society. Conte's starting point is that:
"It is impossible to understand social intelligence without resorting to a general theory of intelligent individual action, as is provided by the science of artificial [sic - "sciences of the artificial"?] and especially by the AI discipline". (p. 203)
However, Conte also considers social intelligence to be a multiple agent property - one which applies to a set of individual autonomous agents in a common world - because the objective effects of social action extend the powers and means of individual agents, and place additional cognitive demands on them.
Near the start of the paper, Conte sets out (p. 204) a five-step argument, running from the cognitive architecture of individual agents to the emergence of social and organisational events.
The paper does not deal with these steps in strict succession. Conte first (section 2), discusses the concept of a socially situated autonomous agent, rejecting a number of possible accounts of the link between the individual and society. For example, she regards the strategic agents of game theory as erroneously static, neglecting the way that the social context requires the ability to adapt, and also as ignoring agents' need to take into account how the goals and actions of other agents relate to their own. All the approaches considered in this section, she contends, fail to account for the "micro foundations of social intelligence", and the mechanisms that generate social goals.
In section 3, Conte introduces her own model of an autonomous intelligent agent able to adapt to social demands. Conte describes it as a BDI agent (one endowed with representations of beliefs, desires and intentions, and the capacity to manipulate them) as described by Rao and Georgeff (1991), but with additional mechanisms for generating new goals and supporting social responsiveness. Crucially, such an agent must be able to reason about other agents' mental states (p. 210):
"A social intelligent action is based upon one's capacity to reason about another's mental states (social reasoning)."
Conte argues that such a capacity is necessary in social action, allowing agents to combine flexibility with autonomy; also, that agents which adapt only by reinforcement learning, and/or by mutation and selection - and not by reasoning - will lack a number of crucial capacities, including predicting and adapting to future events, and goal-driven rather than random innovation, as well as socially intelligent action. She defines an autonomous reasoning agent as one with an architecture allowing it to:
"filter inputs from the external (physical and social) world, acquire, modify and abandon internal representations (beliefs and goals) ... manipulate them in an integrated way, and act adaptively."
Conte's aim in the paper, as stated at a later point (p. 224) is to:
"show the emergence of social and organisational events among autonomous intelligent agents"
It is thus worth asking what range of real-world entities the definition covers. Unless common sense psychology is completely erroneous, cognitively normal human adults in their everyday environments meet the definition. Conversely, it would not cover social insects, if current accounts of their social and cognitive functioning such as Theraulaz and Bonabeau (1999) are correct. But whether babies, non-human mammals, or collectives such as firms and households do so may depend on how terms such as "filter", "internal representations" and "integrated" are interpreted.
The remainder of section 3 discusses goal-generation by autonomous agents: specifically, the conditions under which such an agent will adopt a goal from another agent, together with the key concepts (goal adoption and social commitment among them) on which the later sections build.
In section 4, Conte goes on to discuss how social commitment makes forms of interaction such as reciprocation possible, but argues that it is not sufficient to account for collective actions, in which a group pursue a common goal using a common multi-agent plan, to which all must conform if the goal is to be achieved. I am not sure such situations can be sharply distinguished from the group exchanges also described in which (say) agent x adopts y's goal, y adopts w's, and w adopts x's goal. In this case, the three agents might be considered to share the "super goal" that all the individual goals are met. The stated distinction is that in the case of a collective action, all participants want to achieve the common goal for reasons not derived from considerations of reciprocity. However, surely commitment to a goal, particularly one such as co-operating with other members of a group, may initially be purely instrumental, but come to be valued intrinsically? This possibility does not appear to be explicitly allowed for within Conte's model.
In the last two sections of the paper, Conte places the analytical work done in context, boldly asking:
"Given the complexity of the analysis, one may be led to wonder what is the use of such an investigation."
This question is posed in the light of work in which interesting collective effects emerge from systems of very simple, non-cognitive agents. Perhaps, Conte suggests, modelling of the more complex aspects of individual agents is not necessary to understand organisational phenomena, or is only needed for phenomena specific to human societies. Conceding that cognitive architectures are not needed for societies and organisations to exist, Conte argues that we still need to explain how societies of intelligent, autonomous agents function. In my view, too much is conceded here. Phenomena specific to human societies (commerce, war, science, art and religion) form the vast bulk of the domain of social science. While it is true that many aspects of insect societies, and some aspects of human societies, can be modelled effectively using very simple agents, there are surely many more aspects of human societies, and indeed of societies of other non-human animals, for which this is unlikely to be the case. To give one set of examples, consider the phenomenon of cultural transmission, which occurs in many non-human animals (Avital and Jablonka 2000) as well as (most obviously) in human societies. To understand cultural transmission within a particular kind of social group in any detail, we will surely need to model what members of those groups are capable of doing and of learning. More generally, it is indeed of interest that many social or collective phenomena can be modelled, and perhaps understood, on the basis of very simple behaviour and interaction by individual agents. Nonetheless, one task of computational social science is surely to identify those collective phenomena for which successive levels of cognitive complexity are required. Another is to account for the fact (if it is one) that our complex cognitive capabilities often make little difference to social dynamics.
One further aspect of the work presented in this paper is worth noting. Although Conte speaks of agents evolving social responsiveness, there is actually no specification of the steps by which social responsiveness, or the successive layers of emergent individual and collective properties of agents, come into existence. Yet one of the features of human social life evident to casual observation is that individuals, social groups, and whole societies have histories, which often involve becoming more complex and gaining broader problem-solving capacities. The only way we know of to produce a functioning member of human society is to start with an agent with a limited in-built repertoire of social responses (a baby), and encourage it to expand that repertoire by treating it as a social partner. Yet there has been very little, if any, computational work on individual social development of this kind. There has been somewhat more work on the growth of social complexity (e.g. Doran and Palmer 1995), but by no means as much as one might expect, given the importance of this process in producing the current human world.
In one specific instance, Conte's description of an autonomous reasoning agent appears incompatible with what we know of human development: she specifies (p. 212) that for such an agent to come to have a goal q, there must be some goal p which the agent already has, and which it believes achieving q will help it achieve. Yet this is prima facie incompatible with the apparent fact that children begin with a very limited range of types of goal, and expand this range as they mature, acquiring intrinsic, top-level goals that would not have attracted them, or even made sense to them, earlier in their development. It is conceivable that all such goals are initially adopted as instrumental to existing goals (e.g., of pleasing some adult), but this seems an unwarranted assumption, and the process by which such goals become "emancipated" would in any case require explanation.
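Conte's adoption condition can be stated as a one-line filter; the function and argument names here are hypothetical (mine, not Conte's formalism), with `beliefs` mapping a candidate goal to the set of existing goals the agent believes achieving it would serve.

```python
def adopt_goal(agent_goals, beliefs, q):
    """Adopt new goal q only if some already-held goal p is believed
    to be served by achieving q (my reading of Conte, p. 212)."""
    return any(p in agent_goals for p in beliefs.get(q, ()))
```

On this reading, a goal with no believed link to an existing goal can never be adopted, which is precisely the developmental difficulty just noted.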
The remaining paper by Heylighen argues that the World Wide Web (henceforth the Web) could be transformed into a "Collective Mental Map" (CMM) for humanity, using techniques inspired in large part by the stigmergic interactions of social insects (Theraulaz and Bonabeau 1999). The distinctive feature of stigmergy is that agents do not interact by direct one-to-one communication, but via alterations they make to a common environment, which may subsequently influence the behaviour of any other agent in the society concerned that happens to encounter those alterations. For example, ants leave pheromone trails, which their nest mates tend to follow, adding further pheromone. Simple feedback and averaging processes ensure that a network of trails develops that allows the efficient exploitation of the ants' territory, acting in effect as a CMM.
Heylighen notes that while insect societies appear to achieve considerably higher levels of problem-solving intelligence at the social or collective level than individual social insects display, there seem to be considerable problems in achieving high-level collective intelligence in groups of highly intelligent individual agents such as human beings. He does not regard the co-ordination displayed by a football team, for example, as an instance of such high-level collective intelligence, since its actions can be comprehended by individual team members - the team does not appear more intelligent than its members.
Heylighen suggests that "the main impediments to the emergence of collective intelligence in human groups" include lack of mutual understanding (between specialists in different disciplines, for example), and the prevalence of competitive "power games" (in committee meetings, for example). Such power games, exacerbated by the sequential nature of communication in meetings, result in "pecking orders" among individuals, impeding the open and honest communication and criticism of ideas. Heylighen notes the use of computer supported co-operative work (CSCW) techniques to avoid some such problems in small groups, then proposes the Internet as a possible basis for developing humanity's collective mental map (CMM) - an "exteriorised, shared cognitive system". Heylighen's main purpose is to investigate how such a system could be developed.
One of a CMM's key functions is to act as a shared memory, with a capacity much larger than that of an individual. Heylighen proposes three mechanisms as the basis of a CMM: averaging preferences, (positive) feedback, and division of labour. The ants' pheromone-laying system automatically averages the preferred directions of movement from any spot, and their attraction to pheromone and consequent trail-following behaviour provides the positive feedback (which must not be too strong - individuals must retain some possibility of wandering off the trail). Heylighen suggests that voting and discussion are the corresponding human social mechanisms. The division of labour is discussed only in a human context, where it implies cognitive specialisation, and differences between individuals' "mental maps" (their internal representations of the environment, and associated problem-solving procedures). Such differences can allow the total domain covered by the union of mental maps to be much larger than that covered by any one map, but also generate communication problems.
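The interaction of averaging, positive feedback and bounded exploration can be seen in a toy simulation (my own construction, loosely inspired by the account above, not code from Heylighen's paper):

```python
import random

def run_trails(strengths, n_ants, rng, explore=0.1, deposit=1.0, decay=0.05):
    """Each ant follows a trail with probability proportional to its
    pheromone strength (averaging the choices of earlier ants) and
    deposits more pheromone there (positive feedback); with probability
    `explore` it wanders onto a random trail instead, and evaporation
    gradually discounts old choices."""
    trails = list(strengths)
    for _ in range(n_ants):
        if rng.random() < explore:
            i = rng.randrange(len(trails))                      # wander
        else:
            i = rng.choices(range(len(trails)), weights=trails)[0]
        trails[i] += deposit                                    # reinforce
        trails = [s * (1 - decay) for s in trails]              # evaporate
    return trails
```

With `explore` set to zero the feedback locks in an early winner; raising it keeps alternative trails alive, matching Heylighen's point that the feedback must not be too strong.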
Heylighen draws proposals for developing the Web as a CMM from his three basic mechanisms. The Web's enormous storage capacity, easy read and write access, and facilities for commenting and cross-referencing are considerable advantages, but little problem-solving support is yet available. Search engines tend to return too much data, and while there are useful methods for ranking (and returning) web pages on the basis of the Web's connectivity, these do not allow the Web to adapt its pattern of connections to the way it is used. Heylighen then discusses techniques for making this possible, in which the strengths of links between pages are adjusted in response to the paths users actually follow.
Experiments carried out with human subjects and "toy" web domains confirmed that these mechanisms could produce a meaningfully and usefully structured network.
Heylighen also suggests the use of software agents to search out relevant documents for a web user, employing a "spreading activation" process based on keywords, link strengths, an initial set of documents and possibly more sophisticated semantic approaches. This process, too, has been tested on toy web domains.
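A minimal version of such a search might look like the following; the link structure, damping scheme and all names here are my assumptions, not Heylighen's actual algorithm. Activation starts at a seed set of documents and flows along links in proportion to their strength, damped at each step.

```python
def spread_activation(links, seeds, steps=2, damping=0.5):
    """Rank documents by activation spread from the seed set.
    `links` maps a document to its outgoing {target: strength} links."""
    activation = dict(seeds)
    for _ in range(steps):
        nxt = dict(activation)
        for src, outgoing in links.items():
            a = activation.get(src, 0.0)
            if a == 0.0 or not outgoing:
                continue
            total = sum(outgoing.values())
            for dst, weight in outgoing.items():
                nxt[dst] = nxt.get(dst, 0.0) + damping * a * weight / total
        activation = nxt
    return sorted(activation, key=activation.get, reverse=True)
```

Documents reachable from the seeds only through strong links end up near the top of the ranking, even when they are several steps away.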
This paper is a significant and insightful contribution to a much-needed debate: how can the Internet and the Web be developed in ways which enhance users' problem-solving capabilities? An alternative to Heylighen's approach (not necessarily a mutually exclusive one) is the proposed development of the "Semantic Web" (Berners-Lee, Hendler and Lassila 2001), which would involve annotating the web with large amounts of highly-structured semantic data, designed for automatic processing by software agents using standard collections of inference rules. This approach emphasises using the content of the Web, and particularly metadata, to direct searches for the information desired, while Heylighen stresses the use of information about web topology and the possible advantages of making that topology adapt to the actions of users. Consideration should be given to how these two approaches to enhancing the Web's capabilities might interact.
Heylighen draws on social insects' stigmergic interactions as a major source of ideas but says very little, at least in this article, about the technologies and institutions that have been devised to enhance collective human intelligence. He does mention the invention of writing, which he regards as having been "the first step toward the development of a CMM" - but only a first step, as not all books can be accessed by all individuals, and readers' comments cannot be added to them for other readers to access.
Yet social insects' trail networks are certainly not the only available source of information helpful in finding the best ways to develop the Web. The development of the Internet and the Web is just the latest phase in the long historical (and prehistoric) development of "exteriorised, shared cognitive systems" for human social groups. Donald (1991) argues that the development of external symbolic storage systems fundamentally alters individual human cognitive capabilities (learning to use such systems influences the development of the brain) as well as enhancing collective ones. The Internet's potential to change human beings individually and collectively - and for better or worse - might be easier to understand if placed in the context of the effects of previous "information technology revolutions" such as figurative art, writing, printing, the telegraph, photography, and television. Moreover, many of these technologies will continue to be important. (Sellen and Harper 2002 argue that paper provides a number of "affordances" online information does not and thus is not going to disappear in the near future.) So will many of the institutions (universities, research institutes and departments, libraries, theatre companies, art galleries) designed at least in part to enhance human collective intelligence.
Interestingly, Heylighen also passes over the fact that any system such as the one he proposes is likely to be the target of attempts to distort its results: many web pages are controlled by people who want as many users as possible to visit them, and some might have more sinister purposes. Moreover, the Internet and the Web are the objects of contests for economic and political control, and any proposal which would make a radical difference to the way they work is likely to become involved in these conflicts.
This special issue should be a valued and useful addition to any computational social scientist's library. I found the papers by Conte and by Heylighen the most stimulating, but this reflects my particular interests and preferences, and I will be rereading all the papers after sending off this review. The choice of papers probably reflects the current distribution of research effort, with the MAS "cluster" (work on ways to use multiple interacting agents to enhance problem-solving performance) predominant.
There is a surprising absence of work dealing with conflicts within societies and social groups: all is sweetness and light between the agents of this special issue. The special issue also exemplifies an important gap in studies of social intelligence within the computational social science tradition: the absence of work on the development of social intelligence during the lives of individuals. In the processes of individual development, the dependence of the individual on the computational resources made available by their social environment (on which the editorial comments) is particularly evident: we are taught to become social beings, and know no other way a social being can be produced. This process should surely be a focus of computational research into social intelligence. Somewhat similar questions can be asked about the "careers" of organisations such as firms, states, and political parties: how does organisations' ability to use their members' cognitive resources, and their handling of conflict between members, vary as the organisation "ages", for example?
AVITAL E. and E. Jablonka 2000. Animal Traditions: Behavioural Inheritance in Evolution. Cambridge University Press, Cambridge.
BERNERS-LEE T., J. Hendler and O. Lassila 2001. The semantic web. Scientific American, 284:28-37.
CONTE R. and C. Castelfranchi 1995. Cognitive and Social Action. UCL Press, London.
CONTE R., C. Castelfranchi and F. Dignum 1998. Autonomous norm acceptance. In J. Mueller, editor, Intelligent Agents V. Springer-Verlag, Berlin.
DONALD M. 1991. Origins of the Modern Mind. Harvard University Press, Cambridge, MA.
DORAN J. and M. Palmer 1995. The EOS project: Integrating two models of palaeolithic social change. In N. Gilbert and R. Conte, editors, Artificial Societies: The Computer Simulation of Social Life. UCL Press, London.
GOLDBERG D. E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA.
GOTTS N. M., J. G. Polhill and A. N. R. Law in press. Agent-based simulation in the study of social dilemmas. Artificial Intelligence Review.
HOBBES T. 1914. Leviathan. J. M. Dent and Sons, London.
HOLLAND J. H. 1992. Adaptation in Natural and Artificial Systems, second edition. The M. I. T. Press, Cambridge, MA.
OSTROM E., R. Gardner and J. Walker 1994. Rules, Games, and Common-Pool Resources. University of Michigan Press, Ann Arbor, MI.
RAO A. S. and M. P. Georgeff 1991. Modelling rational agents within a BDI architecture. In J. Allen, R. Fikes, and E. Sandewall, editors, Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning (KR'91). Morgan Kaufmann, San Mateo, CA.
ROUCHIER J., F. Bousquet, O. Barreteau, C. L. Page and J.-L. Bonnefoy 2001. Multi-agent modelling and renewable resource issues: The relevance of shared representations for interacting agents. In S. Moss and P. Davidsson, editors, Multi-Agent Based Simulation. Springer-Verlag, Berlin.
SELLEN A. J. and R. H. R. Harper 2002. The Myth of the Paperless Office. The M. I. T. Press, Cambridge, MA.
THERAULAZ G. and E. Bonabeau 1999. A brief history of stigmergy. Artificial Life, 5:97-116.
© Copyright Journal of Artificial Societies and Social Simulation, 2002