©Copyright JASSS


Maria Fasli (2004)

Formal Systems ∧ Agent-Based Social Simulation = ⊥?

Journal of Artificial Societies and Social Simulation vol. 7, no. 4

To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary

Received: 16-Dec-2003    Accepted: 05-Jun-2004    Published: 31-Oct-2004

* Abstract

This paper discusses some of the merits of the use of formal logic in multi-agent systems and agent-based simulation research. Reasons for the plethora of formal systems are discussed as well as how formal systems and agent-based social simulation can work together. As an example a formal system for describing social relationships and interactions in a multi-agent system is presented and how this could benefit from agent-based social simulation as well as make a contribution is discussed.

Keywords: Formal Systems, Social Interactions, Social Agents, Commitments, Roles, Obligations

* Introduction

This paper discusses some of the merits of the use of formal logic in multi-agent systems and in particular it aims to show the value of formal systems for agent-based social simulation systems. It gives a view of the potential benefits, albeit this is from a subjective point of view and clearly reflects the author's biases. The concerns raised in (Edmonds 2003) are understandable and to some degree justified, but in principle this paper is in agreement with the views expressed in (Dignum and Sonenberg 2003) regarding the usefulness of formal systems in agent-based social simulation research and those arguments will not be repeated here.

The structure of the paper is as follows. The following section starts by giving some of the reasons for the proliferation of formal frameworks for multi-agent systems in the literature. Next how formal systems and agent-based social simulation researchers could benefit from working together is discussed. The rest of the paper is devoted to a formal framework for describing social relationships and interactions within a multi-agent system. The paper closes with some suggestions on how the author's own strand of research could be brought closer to and benefit from agent-based social simulation.

* On the Plethora of Formal Theories of Agents

As increasingly sophisticated systems are built based on the notions of an agent and a multi-agent system, the need for adequate theories that will be able to describe, predict, and explain the behaviour of such systems is increasing accordingly. These theories can serve as specification and validation tools for the designers and developers of multi-agent systems. Such theories view agents as intentional systems (Dennett 1987) which are ascribed propositional attitudes. Although this approach is debatable (McCarthy 1979) and not problem-free (Montague 1973; Salmon 1988; Thomason 1980; Fasli 2003b), the intentional stance without doubt provides us with a powerful abstraction tool for explaining the behaviour of such systems.

In developing formalisms for representing the properties of agents, agent theorists are faced with, among other things, two key issues. The first is to decide which combination of attitudes is appropriate for characterising an agent. Assuming that the agent's cognitive state consists of the information, motivation and deliberation states, which are the correct attitudes to represent them? There is no definitive answer to this question. As a result a number of approaches have emerged in the literature, ranging from those that use knowledge or belief to represent the information state of the agent (Cohen and Levesque 1990; van der Hoek, van Linder and Meyer 1998; van der Hoek 1990; Kraus and Lehmann 1988), to those that use desires, goals, wishes and preferences for the motivational state (Cohen and Levesque 1990; van Linder, van der Hoek and Meyer 1996; Rao 1998; van der Hoek 1998), while the deliberation state is usually represented by intentions, persistent goals or commitments (Cohen and Levesque 1990; Rao 1998). This is by no means an exhaustive list of all the available theories or logical frameworks in the literature. Hence, one reason for the plethora and variation of formal frameworks for agents and multi-agent systems is the fact that there is no universally accepted view on the attitudes that characterise an individual agent's behaviour. What is more, in developing a theory of reasoning agents one has to give an account of the relationships and dynamics between these ingredients. In particular, a complete agent theory would have to explain how an agent's cognitive ingredients lead it to select sequences of actions (plans) and act upon them. However, yet again, we do not have a universally accepted view to draw upon; the dynamics and interrelationships between the various attitudes are far from clear. The greater the number of attitudes one considers, the more complicated their interrelations.
The same problem arises when one attempts to describe social relationships and interactions between agents within the bounds of a formal framework. Views on issues that have to do with, for instance, obligations, commitments and roles, to name just a few, vary significantly among researchers.

The second fundamental question one has to address is which logic to choose. There is an abundance of formal logics with different expressive power, ranging from classical propositional logic and predicate logic to modal logic, dynamic logic and higher order logics. The decision as to which logic to use as a base in one's theory may depend on what one wants to achieve. For instance, modal logics have been traditionally used for reasoning about necessity and possibility (Hughes and Cresswell 1968), but in the last few decades they have become very popular for formalising knowledge, belief and other propositional attitudes. Hence, it seems that a second reason for the increasing number of papers appearing on formal systems is because researchers choose different kinds of logics as well as develop variations when they feel that this is necessary.

Although at present there may be too many of these theories around, this is not necessarily a bad thing, nor need it impede progress in the area. Some of these theories may well turn out to be complementary, and people can only realise this and merge them if they have access to one another's work. However, different approaches need to be tried out, and this needs to be done methodically and ideally with the emphasis on progressing the field.

* Agent-based Social Simulation and Formal Systems Working Together

The objective of agent-based social simulation (ABSS) systems is to study and understand the dynamics of social phenomena as these arise in a society of agents. Over the last few years researchers working in the field of multi-agent systems have taken an increased interest in formalising social relationships and interactions in such systems (Cavedon and Sonenberg 1998; Dignum, Kinny and Sonenberg 2002; Dignum, Meyer and Weigand 2002; Royakkers and Dignum 2000), the purpose being to understand and explain social phenomena (or some of their aspects). The question that has been put forward is whether or not such formal systems are useful at all in agent-based social simulation, and how.

Formal models can be regarded as providing the requirements and specification for an ABSS system or a multi-agent system in general. Such a formal model may be based on a philosophical or sociological theory, on observations and data of a particular phenomenon (Edmonds 2002), on intuitions, or on a mixture of the above. For instance, if one's goal is to model a society of agents that operate under certain constraints, such as an e-market where the participants are artificial agents, and one wishes to impose restrictions on the behaviour of the agents, then the role of the formal system would be to provide the constraints for the ABSS model. However, although expressive, logical formalisms have some restrictions. In some cases, it is very difficult, if not altogether impossible, to capture the dynamic nature of things such as social relationships that may evolve over time. Such formalisms can provide a "snapshot" of the system at a particular moment in time, but cannot show the process over time. Nonetheless, one can use an additional formal model, such as a transition model (Dignum 2003), to capture the development of the system over time, while using the formal (logic) model to check a static description of the system. By building an ABSS system based on a formal model, running experiments and using the results of the experiments, the formal model can be verified. It may be the case that the restrictions initially imposed by the formal model are found to result in undesirable behaviour or have undesirable effects in the society as a whole. The formal model can be revisited in light of this; it can be amended and the changes fed back into the ABSS system. As in classical software engineering, one starts with the requirements and proceeds with the specification and design, and then moves on to the implementation and testing. But these processes are not strictly sequential; they may overlap and one informs the other, sometimes in several cycles.
The process is finished when the resulting system satisfies the requirements or the results of the ABSS verify the formal model.

Of course one can build an ABSS system without any reference to a formal model at all, or simply write down a very informal description of the basic intuitions underlying the ABSS system. And one may be successful in doing so. However, the advantages of using a formal system are that one can gain a better understanding of the processes, easily trace problems with the underlying intuitions, check the validity of the ABSS model, and specify which aspects of the formal system (if any) can be verified or not, as well as the reasons why. Also, it is much easier to "tweak" the ABSS system if one has a clear understanding of the formal model underpinning it. Research in formal systems that attempt to capture complex interactions may still be in its early stages, but it has the potential to make a useful contribution to ABSS as well as benefit from it.

* Multi-agent Social Relationships and Interactions

The following sections present a formal approach to describing social relationships and interactions. The formal system is based on the idea that a multi-agent system is an aggregation of social agents whose activity and behaviour are regulated via commitments, obligations and rights. Social agents can be individuals or aggregations of agents (groups). Such agents do not act in isolation; on the contrary, their decisions and actions affect other agents, although sometimes unintentionally, and they are themselves affected by others. While in pursuit of their own objectives, agents may join social agents and thus engage in teamwork and cooperative problem solving. When an agent joins a social agent it assumes or is given a specific role. This role entails a set of social commitments, obligations and rights, and specifies the position of this agent in the social agent as well as its commitments towards the group and the rest of the social agents. Each member of the social agent knows its place and acts accordingly, and furthermore each knows the implications of exercising rights and breaking commitments. However, when an agent adopts a role, it does not necessarily mean that it has to adhere to it forever. Circumstances may arise in which an agent may decide to abandon a role, although this may not be without consequences. Moreover, agents may hold different roles in different groups and as a result conflicts of interest may arise. Stability and fairness in a multi-agent system are crucial and as a consequence some form of general rules and norms should restrict the behaviour of agents.

Social Agents and Multi-agent Systems

A social agent can be an individual agent (BDI[1]) or an aggregation of agents (group). There are different types of social agents: a football team is a social agent, but so is a department within a university and an individual agent such as a lecturer. Social agents can play a variety of roles in a multi-agent system, and these roles and the relationships between them are what defines the structure of a multi-agent system. Moreover, agents within a social agent may play a variety of roles, but they will have at least one role, while a social agent itself will have at least one role in relation to other social agents. The roles that can be played by agents within a social agent, as well as their structure and their interactions, i.e. the relationships between roles, are what identifies and characterises a social agent.

Given a particular problem domain, the sort Roles represents all the possible roles that can be played by social agents and the sort RelTypes includes all the valid generic relationship types that can exist between roles, RelTypes ⊆ Roles × Roles. A social agent structure is a generic description of the roles and the relationships between them in a particular type of social agent. Formally, a social agent structure is a tuple SASi = < Ri, RIi > where Ri is a finite set of roles, Ri ⊆ Roles, i.e. Ri is the set of all possible roles that can be played by agents within a social agent of this type, and RIi is the relationship interaction graph that specifies all the valid generic relationship types between roles, RIi: Ri × Ri → RelTypes. Each edge of the graph represents a relationship type (a,b) between roles a, b ∈ Ri. For the special case that a social agent is a singleton, the set of roles is the empty set Ri = ∅ and the interaction graph has no edges. The definition of a social agent structure is a flexible one. It only describes possible roles and relationships between them. The exact form of a social agent depends on how agents interact with one another as they come to form the social agent. Some of the roles and relationships may not be instantiated, for example. Moreover, this definition does not place conditions on how a social agent is held together in terms of commitments.
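The definition above can be sketched in code. The following is a minimal, illustrative Python rendering of a social agent structure SASi = < Ri, RIi >; the class and role names are assumptions for the sake of the example, not part of the formal semantics.

```python
# A minimal sketch of a social agent structure SAS_i = <R_i, RI_i>:
# R_i is a finite set of roles and RI_i maps ordered pairs of roles
# to a generic relationship type.  All names here are illustrative.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class SocialAgentStructure:
    roles: frozenset                                   # R_i, a subset of Roles
    rel_graph: dict = field(default_factory=dict)      # RI_i: (role, role) -> RelType

    def __post_init__(self):
        # Every edge of the interaction graph must connect roles in R_i.
        for (a, b) in self.rel_graph:
            assert a in self.roles and b in self.roles

    @property
    def is_singleton(self):
        # A singleton social agent has no roles and no edges.
        return not self.roles and not self.rel_graph

team = SocialAgentStructure(
    roles=frozenset({"coach", "goalkeeper", "defender"}),
    rel_graph={("coach", "goalkeeper"): "trains", ("coach", "defender"): "trains"},
)
print(team.is_singleton)                               # False
print(SocialAgentStructure(frozenset()).is_singleton)  # True
```

Note that, as in the definition, the structure only constrains which roles and relationships are *possible*; nothing forces a concrete social agent to instantiate all of them.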

The sort SATypes is a set of constants representing the generic social agent structures within a domain. If SAS is the set of all generic social agent structures, then the function ST assigns a unique SATypes constant to every social agent structure SASi ∈ SAS, that is ST: SAS → SATypes. For instance, consider the generic social agent structure for a football team SASFT = < RFT, RIFT > . The set of roles for such a type of social agent may include roles such as coach, goalkeeper, defender, ..., while the interaction graph describes the generic relationships between these roles. If FT is a constant that identifies this particular type of social agent structure and Aces is an instantiation of a football team structure, the predicate TypeOf is used in order to express that the social agent Aces is of type FT (TypeOf(Aces,FT)).

A multi-agent system is a tuple MA = < SAgents, Roles, SG > where Roles is as above, SAgents is the set of all social agents and SG is the structure interaction graph in which for each edge Sa → Sb we have Sa, Sb ∈ SAgents and a, b ∈ Roles. For instance, a multi-agent system representing a university is MAUNI = < SAgentsUNI, RolesUNI, SGUNI > where RolesUNI is the set of roles that can be played by social agents within this multi-agent system. Among the possible roles may be Finance, Accommodation, Library, Lecturer, etc. SAgentsUNI are all the social agents and SGUNI is the interaction graph. For instance, there is an edge between SectionAccommodation and SectionFinance, since these two types of social agents clearly interact within an establishment such as a university.
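The university example can be written out as a small well-formedness check, assuming illustrative agent and role names; the point is only that every edge of the structure interaction graph SG must connect known social agents via known roles.

```python
# Sketch of MA = <SAgents, Roles, SG> for the university example.
# The structure interaction graph SG links social agents via the
# roles they play toward one another.  Names are illustrative.
roles = {"Finance", "Accommodation", "Library", "Lecturer"}
sagents = {"SectionFinance", "SectionAccommodation", "MainLibrary"}

# Each edge S_a -> S_b is annotated with the pair of roles (a, b) involved.
sg_edges = {
    ("SectionAccommodation", "SectionFinance"): ("Accommodation", "Finance"),
}

# Well-formedness: every edge connects known agents via known roles.
for (sa, sb), (a, b) in sg_edges.items():
    assert sa in sagents and sb in sagents
    assert a in roles and b in roles
print("university MAS is well formed")
```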

The Building Blocks

The exposition that follows is by no means complete and it is mainly concentrated on identifying and describing the key concepts of the framework. Semantics, proofs and other features have been left out, but the reader interested in the formal aspects is referred to (Fasli 2003c; Fasli 2003d; Fasli 2003e).

Within the Belief-Desire-Intention (BDI) paradigm agents are considered to have beliefs, desires and intentions. The logical language L includes, apart from the usual connectives and quantifiers, three modal operators B, D, and I for expressing beliefs, desires and intentions respectively. The framework includes a branching temporal component based on CTL* logic, in which the belief-, intention-, and desire-accessible worlds are themselves branching time structures. The temporal operators used are optional, inevitable, next, eventually(◊), always, until. Furthermore the operators: succeeds(e), fails(e), does(e), succeeded(e), failed(e) and done(e), express the present and past success or failure of an event e. Semantics is given in terms of possible worlds relativised to time points. In terms of the axiomatisation the KD45 system is adopted for belief, the D for intentions and the K system for desires. The interrelations between the three attitudes are described by a variation of the strong realism axioms (Fasli 2003a):
I(i, γ ) ⇒ B(i, γ )        D(i, γ ) ⇒ ¬B(i, ¬ γ )
Group Attitudes

Social agents may be individuals or aggregations of agents. The fact that an agent i is a member of a social agent si is expressed simply as (i ∈ si). In order to be able to reason about a social agent's information state, two modal operators EB( si, φ ) and MB( si, φ ) are introduced for "Every member of social agent si believes φ " and " φ is a mutual belief among the members of social agent si" respectively (Fagin et al. 1995). In the same way there are two modal operators EI( si, φ ) and MI( si, φ ) to express what every member of the social agent intends and what is mutually intended by the social agent. If the social agent is a singleton, then the MB and MI operators reduce to their individual constituents.
Obligations and Rights

Obligations arise between pairs of agents, a bearer and a counterparty, from interactions such as promises. An obligation φ is expressed as O( si, sj, φ ) and read as "Social agent si (bearer) is obligated to sj (counterparty) to bring about φ ". The same operator is used to express norms or rules/obligations that apply to all agents. To this end, a special constant s0 is used to denote the set of all agents, the society of the domain or in other words the entire multi-agent system. Once a social agent si has managed to bring about the desired state of affairs for agent sj, or it has come to its attention that the state of affairs is not an option any more, it needs to take some further action in order to ensure that the counterparty agent is aware of the situation. The social agent successfully de-commits itself from a relativised obligation in the following way (communicate is used in a generic way to indicate that the agent needs to communicate with the other agent involved):
succeeded( decommit ( si, sj, inevitable φ ) ) ⇒ (¬ O( si, sj, inevitable φ ) ∧ done( communicate ( si, sj, MB( si, φ ) ) ) ) ∨ (¬ O( si, sj, inevitable φ ) ∧ done( communicate( si, sj, ¬ MB( si, optional φ ) ) ) ∧ done( communicate( si, sj, ¬ O( si, sj, inevitable φ ) ) ) )
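The two ways of successfully de-committing (φ achieved, or φ no longer an option) can be sketched operationally. The sets and message log below are naive stand-ins for the modal operators; all names are illustrative.

```python
# Hedged sketch of the de-commitment rule: agent si drops the obligation
# O(si, sj, inevitable φ) either because it believes φ is achieved, or
# because it believes φ is no longer an option -- and in each case it
# communicates with the counterparty.  The log stands in for
# done(communicate(...)); all names are illustrative.
def decommit(obligations, beliefs, si, sj, phi, log):
    """Remove (si, sj, phi) from obligations if si may de-commit."""
    if (si, sj, phi) not in obligations:
        return False
    if phi in beliefs.get(si, set()):                    # MB(si, φ): achieved
        obligations.discard((si, sj, phi))
        log.append((si, sj, f"achieved {phi}"))
        return True
    if ("not-optional", phi) in beliefs.get(si, set()):  # ¬MB(si, optional φ)
        obligations.discard((si, sj, phi))
        log.append((si, sj, f"{phi} no longer an option"))
        log.append((si, sj, f"dropped obligation for {phi}"))
        return True
    return False                                         # may not de-commit yet

obligations = {("J", "A", "playgame")}
beliefs = {"J": {("not-optional", "playgame")}}
log = []
assert decommit(obligations, beliefs, "J", "A", "playgame", log)
assert ("J", "A", "playgame") not in obligations
```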

Obligations seem to arise pairwise with rights. If an obligation is not honoured, the counterparty agent may reserve the right to impose sanctions on the bearer agent. The fact that a social agent sj has the right ψ over another social agent si is expressed as Right(sj,si, ψ ). The formula ψ may express the form of the sanction that sj has the right to impose on si. If an agent si drops a previously adopted relativised obligation towards sj, and sj has a right over si, then sj may decide to exercise this right. Agents may have a lenient or a harsh/stringent policy of exercising their rights. An agent has a lenient strategy if it keeps its options open as to whether or not it will exercise its right over another agent:
Right( sj, si, ψ ) ∧ MB( sj, ¬ O(si, sj, inevitable φ ) ) ∧ ¬ MB( sj, φ ) ⇒ optional( MI( sj, optional ψ ) )

On the other hand, a harsh/stringent policy means that an agent will always exercise its rights on the deviating agent:
Right( sj, si, ψ ) ∧ MB(sj, ¬ O( si, sj, inevitable φ )) ∧ ¬ MB(sj, φ ) ⇒ inevitable( MI( sj, inevitable ψ ) ) U MB( sj, ψ )

The agent will keep trying to bring about ψ until it actually comes to believe that it has managed to do so. An agent may or may not reveal its policy on exercising rights to the other agents.
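The contrast between the two rights-exercising policies can be summarised in a small decision function. This is a simplification of the modal formulas above (the temporal "until" is collapsed into a description of the resulting stance); the names are illustrative.

```python
# Sketch of lenient vs. harsh/stringent rights policies.  A lenient agent
# merely gains the *option* of exercising its right ψ; a stringent agent
# intends ψ until it believes ψ has been brought about.  Illustrative only.
def exercise_right(policy, right_psi, obligation_dropped, phi_believed):
    """Return the counterparty's resulting stance toward sanction ψ."""
    if not obligation_dropped or phi_believed:
        return "no grounds"            # the right is not triggered
    if policy == "lenient":
        return f"optionally intend {right_psi}"
    if policy == "stringent":
        return f"intend {right_psi} until believed achieved"
    raise ValueError(policy)

print(exercise_right("lenient", "exclude", True, False))
print(exercise_right("stringent", "exclude", True, False))
print(exercise_right("stringent", "exclude", True, True))   # φ held after all
```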

Agents express preferences when they are presented with a dilemma; when they are in a situation in which not every state of affairs that they would like to bring about is feasible at the same time. For instance, when somebody asks you what you would like to drink, coffee or tea, this means that you can drink either coffee or tea, not both (of course, being an autonomous agent, you may decide to have both, but then there is no reason to express a preference). In this sense, preferences express an agent's choice between two states of the world that cannot both be realisable at the same time. This is how preferences should be understood in the context of this paper.

In order to be able to express that an agent prefers φ to ψ the language is extended by adding a modal operator Pref. Pref(i, φ , ψ ) means that agent i prefers φ to ψ . Semantics to this modality is given in terms of a world preference based on von Wright's conjunction expansion principle (von Wright 1963; Bell and Huang 1997). It seems that when an agent prefers φ to ψ , it believes that φ is an option or can be realised, but it does not believe that ψ can be realised or is an option at the same time:
Pref(i, φ , ψ ) ⇒ B( i, optional ◊ φ ) ∧ ¬ B( i, optional ◊ ψ )

Individual agents can express their preferences between states of affairs, but it also seems possible that groups of agents or a social agent can express a preference. EPref(si, φ , ψ ) is read as "everyone in social agent si prefers φ to ψ ":
EPref( si, φ , ψ ) ≡def ∀ i ( i ∈ si) ⇒ Pref( i, φ , ψ )
Then a mutual preference among the members of a social agent si is defined as:
MPref( si, φ , ψ ) ≡def EPref( si, φ , ψ ) ∧ MB( si, EPref( si, φ , ψ ) )
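The EPref and MPref definitions can be checked directly with a naive model in which mutual belief is a shared set of facts; the encoding is illustrative only.

```python
# Sketch of group preferences: EPref holds when every member prefers
# φ to ψ, and MPref additionally requires this to be mutually believed.
# Mutual belief is modelled naively as a shared set of facts.
def epref(members, prefs, phi, psi):
    """EPref(si, φ, ψ): every i in si has Pref(i, φ, ψ)."""
    return all((phi, psi) in prefs.get(i, set()) for i in members)

def mpref(members, prefs, mutual_beliefs, phi, psi):
    """MPref(si, φ, ψ) = EPref(si, φ, ψ) ∧ MB(si, EPref(si, φ, ψ))."""
    return epref(members, prefs, phi, psi) and \
           ("EPref", phi, psi) in mutual_beliefs

team = {"i1", "i2"}
prefs = {"i1": {("win", "draw")}, "i2": {("win", "draw")}}
assert epref(team, prefs, "win", "draw")
assert mpref(team, prefs, {("EPref", "win", "draw")}, "win", "draw")
assert not mpref(team, prefs, set(), "win", "draw")   # no mutual belief
```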

Commitments come in two kinds: social and collective. The former always involve two social agents, a bearer and a counterparty, and may arise in promises or contracts, for instance. The latter are the internal commitments of a social agent and can be viewed as expressing the purpose or objectives of the whole group. Following Castelfranchi (1995), the view that commitments hold a group of agents together is endorsed. However, the exact mechanism that enables this is not entirely clear, since there are different types of social agents. Two broad categories of social agents can be identified:

Tightly-coupled social agents. Such agents have a common collective commitment or objective. This is known by every constituent agent. This collective commitment is supported by social commitments between the constituent agents and the social agent as a whole. Additional social commitments bound to roles that the agents adopt contribute to the overall collective commitment.

Loosely-coupled social agents. There may be a commitment that expresses the social agent's objective or goal which may not necessarily be known by all member agents. It is known at least by one who plays a pivotal role in the social agent, perhaps that of the manager/delegator and who coordinates the efforts of the other agents. The social agent's activity as a whole is supported by the social commitments that its members undertake towards each other.

This distinction extends to social agents of the same generic type. Consider the example of a research team consisting of a Professor and two Ph.D. students. The research team may be loosely-coupled or tightly-coupled. In the first case the Professor may have a clear idea of what needs to be accomplished and how. But he may only delegate parts of the original goal to his students who can then work independently, without having knowledge of the overall objective or even knowing about the existence and the work of one another. In the second case, the Professor may make the objective known to the entire team. In the former case, the overall objective of the team is supported by the social commitments that the students will take towards the Professor, while in the latter the research team will actually have a collective commitment towards the objective which will be supported by the social commitments that each of the constituent members will then take towards each other. Thus social agents are flexible entities that can adapt according to the needs of their constituent agents and the tasks set before them. The formal definition of a social agent structure supports such flexibility.

Social commitments are expressed via an operator SCom(si,sj, φ ) which is read "social agent si is committed to social agent sj to bring about φ ". Since social commitments can arise between both individual and social agents a definition that covers all four cases is required. Moreover, adopting a commitment is a rights and obligations producing act:
SCom( si, sj, φ ) ⇔ O( si, sj, φ ) ∧ Right( sj, si, ψ ) ∧ MI( si, φ ) ∧ MB( {si,sj}, (O( si, sj, φ ) ∧ MI( si, φ ) ∧ Right(sj,si, ψ ) ) )
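Since adopting a social commitment is a rights- and obligations-producing act, it can be sketched as a state update that records all four resulting attitudes at once. The state layout and names are assumptions for illustration.

```python
# Sketch of a social commitment as a bundle of attitudes: adopting
# SCom(si, sj, φ) with sanction ψ produces an obligation, a right for
# the counterparty, an intention, and a mutual belief about all three.
def adopt_scom(state, si, sj, phi, psi):
    """Record the attitudes produced by SCom(si, sj, φ) with sanction ψ."""
    state["obligations"].add((si, sj, phi))                        # O(si, sj, φ)
    state["rights"].add((sj, si, psi))                             # Right(sj, si, ψ)
    state["intentions"].add((si, phi))                             # MI(si, φ)
    state["mutual_beliefs"].add((frozenset({si, sj}), "SCom", si, sj, phi))
    return state

state = {"obligations": set(), "rights": set(),
         "intentions": set(), "mutual_beliefs": set()}
adopt_scom(state, "J", "S", "writepaper", "inhibitprogress")
assert ("J", "S", "writepaper") in state["obligations"]
assert ("S", "J", "inhibitprogress") in state["rights"]
```

Because the definition covers individual and group agents uniformly, the same update works whether si and sj name singletons or aggregations.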

Intuitively, there should be conditions under which an agent is allowed to drop its social commitments (Dunin-Keplicz 1999). A social agent has a blind social commitment strategy if it maintains its commitment until actually it believes that it has been achieved (Fasli 2003c):
SCom( si, sj, inevitable φ ) ⇒ inevitable( SCom( si, sj, inevitable φ ) U MB( si, φ ))

A social agent follows a reliable strategy if it keeps its commitment towards another agent as long as it believes that it is still an option (Fasli 2003c):
SCom( si, sj, inevitable φ ) ⇒ inevitable( SCom( si, sj, inevitable φ ) U (MB( si, φ ) ∨ ¬ MB(si, optional φ ) ) )
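The difference between the blind and reliable maintenance strategies comes down to which belief revisions allow the commitment to be dropped; a minimal sketch, with the temporal "until" reduced to a per-step predicate:

```python
# Sketch of the two commitment-maintenance strategies.  A *blind* agent
# keeps SCom(si, sj, inevitable φ) until it believes φ achieved; a
# *reliable* agent also drops it once it believes φ is no longer an
# option.  Illustrative, not the paper's possible-worlds semantics.
def maintains(strategy, believes_phi, believes_optional_phi):
    """Does the agent still maintain SCom(si, sj, inevitable φ)?"""
    if strategy == "blind":
        return not believes_phi
    if strategy == "reliable":
        return not believes_phi and believes_optional_phi
    raise ValueError(strategy)

# φ stops being an option: the reliable agent drops the commitment,
# the blind one stubbornly keeps it.
assert maintains("blind", believes_phi=False, believes_optional_phi=False)
assert not maintains("reliable", believes_phi=False, believes_optional_phi=False)
# φ believed achieved: both drop it.
assert not maintains("blind", believes_phi=True, believes_optional_phi=True)
assert not maintains("reliable", believes_phi=True, believes_optional_phi=True)
```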

A collective commitment is the internal commitment undertaken by all constituent agents of a social agent. Such a commitment seems to involve first of all social commitments on behalf of the individual members of the group towards the group, a mutual intention of the group to achieve φ , and finally a mutual belief that the social agent has the mutual intention φ :
CCom( si, φ ) ⇔ ∀ i (i ∈ si) ⇒ SCom( i, si, φ ) ∧ MI( si, φ ) ∧ MB( si, MI( si, φ ))

Roles are related to relationship types via a predicate RoleOf(a, R) which describes that a is one of the roles in relationship of type R (Cavedon 1998). The predicate In(si, a, r) asserts that social agent si is in role a of relationship r. RoleSCom( a, φ ) expresses that a role a involves the adoption of a social commitment φ . If role a involves the social commitment φ and social agent si has the role a in relationship r, then there exists another social agent sj (different to si) that has the role b in relationship r towards whom agent si has the social commitment φ :
RoleSCom( a, φ ) ∧ In( si, a, r) ⇒ ∃ sj,b In( sj, b, r) ∧ SCom( si, sj, φ ) ∧ ¬ (si = sj)
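The RoleSCom rule can be read as a derivation: role assignments plus role-level commitments yield the concrete social commitments between agents. The following sketch derives them by pairing agents that share a relationship; names are illustrative.

```python
# Sketch of how RoleSCom propagates: if role a carries commitment φ and
# si plays a in relationship r, then si owes φ to whichever agent sj
# plays the counterpart role in r.  Names are illustrative.
def commitments_from_roles(role_scoms, in_role):
    """Derive SCom(si, sj, φ) triples from role assignments."""
    derived = set()
    for (si, a, r) in in_role:
        for (sj, b, r2) in in_role:
            if r2 == r and sj != si and a in role_scoms:
                for phi in role_scoms[a]:
                    derived.add((si, sj, phi))
    return derived

role_scoms = {"student": {"followadvice"}, "supervisor": {"advise"}}
in_role = {("J", "student", "r1"), ("S", "supervisor", "r1")}
scoms = commitments_from_roles(role_scoms, in_role)
assert ("J", "S", "followadvice") in scoms
assert ("S", "J", "advise") in scoms
```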

An agent may decide to drop a role if it comes to believe that it has fulfilled its social commitments (e.g. a supervisor may drop its role once its Ph.D. student has succeeded in the examination), or when it believes it can no longer fulfil the commitments of its role. This may happen for a variety of reasons: for instance, the agent may decide that it should no longer adhere to a role since it may not be to its benefit, or because a second, perhaps more important, role is in conflict with the first one. However, an agent that decides to drop a role needs to communicate this to the other agent:
succeeded( droprole( si, sj, a )) ⇒ (¬ In( si, a, r ) ∧ ¬ SCom( si, sj, φ ) ∧ done( communicate( si, sj, ( ¬ In( si, a, r ) ∧ ¬ SCom( si, sj, inevitable φ ) ) ) ) ) ∨ (¬ In( si, a, r ) ∧ ¬ SCom( si, sj, φ ) ∧ done( communicate( si, sj, ( ¬ In( si, a, r ) ∧ MB( si, φ ) ) ) ) ) ∨ (¬ In( si, a, r ) ∧ ¬ SCom( si, sj, φ ) ∧ done( communicate( si, sj, ( ¬ In( si, a, r ) ∧ ¬ MB( si, optional φ ) ) ) ) )
An agent may also decide to drop a commitment which is part of its role without dropping the role itself and perhaps accepting that a form of sanction will have to be imposed.

Working Example

The formal framework introduced above will be used to analyse interactions among agents involved in the following scenario: John (J) is a Ph.D. student supervised by Sandy (S). John is also a member of the University football team the Aces (A) for which he plays on Sunday mornings. The situation regarding John's roles and relationships is described below:
In( J, student, r1 ),   In( S, supervisor, r1 )
RoleSCom( student, followadvice ) ∧ In( J, student, r1 ) ⇒ In( S, supervisor, r1 ) ∧ SCom( J, S, followadvice )
In( J, player, r2 ),   In( A, team, r2 )
RoleSCom( player, playgame ) ∧ In( J, player, r2 ) ⇒ In( A, team, r2 ) ∧ SCom( J, A, playgame )

On Friday morning Sandy asks John to finish writing a paper which needs to be sent to a very prestigious conference on Monday morning. If he does not, then this will have consequences for his progress. John realises that this needs to be done over the weekend. His commitments are:
SCom( J, S, writepaper ) ⇔ O( J, S, writepaper ) ∧ I( J, writepaper ) ∧ Right( S, J, inhibitprogress( S, J ) ) ∧ MB( {J,S}, O( J, S, writepaper ) ∧ I( J, writepaper ) ∧ Right( S, J, inhibitprogress( S, J ) ) )
SCom( J, A, playgame ) ⇔ O( J, A, playgame ) ∧ I( J, playgame ) ∧ Right( A, J, exclude( A, J) ) ∧ MB({J,A}, O( J, A, playgame ) ∧ I( J, playgame ) ∧ Right( A, J, exclude( A, J ) ) )

Given the fact that John plays for the Aces every Sunday, it is clear to him that not both of his commitments can be honoured. Thinking of the consequences of dropping each of his commitments and the possible repercussions, John's preferences are as follows:
Pref( J, writepaper, playgame )
Pref(J, exclude( A, J ), inhibitprogress( S, J ) )

Although he does not want to disappoint his team, he decides that it is impossible to play in the game while finishing the paper at the same time:
B( J, optional ◊ writepaper ) ∧ ¬ B( J, optional ◊ playgame )

Since John follows a reliable strategy regarding his commitments, the belief that playing in the game is no longer an option leads him to drop his commitment. He de-commits and also lets the team know about this:
succeeded( decommit( J, A, inevitable ◊ playgame ) ) ⇒ (¬ O( J, A, inevitable ◊ playgame ) ∧ done( communicate( J, A, ¬ MB( J, optional ◊ playgame ) ) ) ∧ done( communicate( J, A, ¬ O( J, A, inevitable ◊ playgame ) ) ) )

Now, luckily for John, the team has a lenient policy and this time he does not get excluded. Notice that although John did not fulfil his commitment towards the team, which was part of his role in the team, he did not drop the role itself.
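The whole episode can be replayed as a small runnable sketch: John holds two commitments, believes only the paper remains an option, follows a reliable strategy, and the lenient team imposes no sanction. The predicates here are simplified Python stand-ins for the modal operators, and all names are illustrative.

```python
# The John/Sandy/Aces scenario as a runnable sketch.
commitments = {("J", "S", "writepaper"), ("J", "A", "playgame")}
prefs = {("writepaper", "playgame")}       # Pref(J, writepaper, playgame)
beliefs = {"optional writepaper"}          # playgame is no longer an option

def reliable_drop(commitments, beliefs, bearer, counter, phi):
    """Reliable strategy: drop SCom and notify once φ stops being an option."""
    if f"optional {phi}" not in beliefs:   # ¬MB(J, optional φ)
        commitments.discard((bearer, counter, phi))
        return [f"tell {counter}: {phi} not an option",
                f"tell {counter}: obligation dropped"]
    return []

messages = reliable_drop(commitments, beliefs, "J", "A", "playgame")
team_policy = "lenient"                    # the Aces keep their options open
excluded = bool(messages) and team_policy != "lenient"

assert ("J", "A", "playgame") not in commitments   # de-committed from the game
assert ("J", "S", "writepaper") in commitments     # still committed to Sandy
assert not excluded                                # lenient Aces keep John
```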

Let the football team Aces (A) consist of the team of players and a coach for simplicity. The collective commitment that characterises the team is that they win the X cup. This is a collective commitment of the whole football team. According to the definition then we have:
CCom( A, wincup ) ⇔ ∀ i (i ∈ A) ⇒ SCom( i, A, wincup ) ∧ MI( A, wincup ) ∧ MB( A, MI( A, wincup ) )

Accordingly, the football team has a collective commitment to win the cup iff every member of the social agent has a social commitment towards the social agent to win the cup, it is a mutual intention among the members to do so, and it is also a mutual belief among the football team that the team has the mutual intention to win the cup. The social commitments involved in this definition give rise to relativised obligations and personal intentions towards the state of affairs, which is to win the cup. The structure of the social agent "football_team" is described by the relationship between a team of players and a coach with the corresponding roles. Each of the roles prescribes a set of social commitments which come in support of the collective commitment of the football team.

* Concluding Remarks

In this last part of the paper, the beginnings of a formal analysis of social relationships and interaction within a multi-agent system were presented. The examples given above are simple, and a paper describing this formal model along with these examples would be rejected under the criteria given in Edmonds (2003).

To begin with, one may argue that this paper is inundated with modal operators. A closer inspection of the proposed framework reveals that the operators suggested here are divided into three levels: individual, bilateral and collective (Figure 1). The way the attitudes are arranged in the pyramid shows how they are interwoven and interrelated. At the individual level, agents have intentions, beliefs, desires and preferences over states of affairs. At the bilateral level, agents hold attitudes in relation to one other agent: obligations, rights, social commitments and roles. Social commitments involve obligations, rights and individual intentions as well as mutual beliefs, and in that sense they can be synthesized from other attitudes; as a result, one could drop the operator for social commitments. Roles are associated with social commitments. At the collective level, a group (or social agent) is held together by attitudes such as collective commitments and mutual intentions. An organization would be held together perhaps by a number of collective commitments undertaken by its constituent agents; these would include some mutual intentions and beliefs. The constituent members would have certain roles within the organization, which require social commitments, which in turn are based on intentions, obligations and rights. Collective commitments are again synthesized from other attitudes, and as such the operator for collective commitments could also be dropped. The operators for social and collective commitment can thus be seen as "syntactic sugar", used to provide a better understanding of the underlying intuitions.

Figure 1. The three levels of operators

Most traditional work in the area of logical formalisms for cooperative activity and teamwork has concentrated on commitments (Castelfranchi 1995; Dunin-Keplicz and Verbrugge 1999), which are considered to link an individual agent's activity with the group's objectives. Works addressing collective attitudes, such as joint intentions that lead to teamwork and social plans, include (Cohen and Levesque 1991; Grosz and Kraus 1996; Rao, Georgeff and Sonenberg 1992). Works that consider normative concepts such as obligations and rights, as well as commitments, include (Dignum, Kinny and Sonenberg 2002; Ma and Shi 2000; van der Torre and Tan 1999).

These studies are informative and have offered important insights into cooperative problem solving and teamwork. However, the analysis of commitment provided, which is the cornerstone of collective activity, is insufficient and unsatisfactory. There seems to be confusion between the concepts of social and collective commitment: the terms are used interchangeably to describe two different concepts, while some researchers use one term when they actually mean the other. These theoretical models tackle the problem from different perspectives and offer complementary, but not comprehensive, views of collective activity; none of them covers the full spectrum of social and collective attitudes that are typically involved in teamwork. For instance, works that deal with commitments very often ignore the relevance of normative concepts such as obligations, while other works that examine joint intentions or goals do not explicitly consider commitments or obligations. Although concepts such as roles and relationships have been used in design methodologies for multi-agent systems (Ferber and Gutknecht 1998; Wooldridge, Jennings and Kinny 2000; Zambonelli, Jennings and Wooldridge 2001) and agent theories (Cavedon and Sonenberg 1998; Dignum, Meyer and Weigand 2002; Royakkers and Dignum 2000), they are not often considered in relation to commitments and obligations.

The framework presented here attempts to provide a unified approach to teamwork, accounting both for stability and for regulation of behaviour. It adds to current work in the literature in four distinctive ways. First of all, multi-agent systems are considered to be aggregations of social agents, which in turn may be individuals or aggregations of agents. The structure of such systems, as well as that of their constituent social agents, is formally defined in terms of roles and the relationships between them. However, the definition of a social agent structure is a generic and flexible one, as it allows social agents of the same type to vary; i.e. some of the roles and relationships may not be instantiated. Moreover, commitments are considered to be the attitudes that hold social agents together. The type of commitment that holds a social agent together depends upon the particular instantiation of a social agent structure and its conditions of creation. Secondly, this framework offers a more comprehensive view of social and collective activity, since such activity is described in terms of organisational, normative, and social and collective concepts. Multi-agent systems and social agents are tied together with roles, commitments and obligations, down to individual intentions and beliefs. In Ferber and Gutknecht (1998), Wooldridge, Jennings and Kinny (2000) and Zambonelli, Jennings and Wooldridge (2001), roles and other organisational concepts have been used to describe the structure of multi-agent systems. Here such concepts are used to relate the macro (societal) level of interaction to the micro (individual) level in terms of the agents' cognitive state. Thirdly, stability and regulation of activity within multi-agent systems and social agents are explained in a unified way. Agents are free to join social agents while in pursuit of their own objectives, but at the same time they have to balance their commitments, and not honouring them may have consequences. Preferences play an important role in an agent's decision making: social agents, be they individuals or aggregations, have to weigh their commitments and obligations in deciding what to do next. Finally, roles, obligations, commitments and preferences have been studied on their own before, but within different logical frameworks. Here all these concepts are brought together and formulated within the BDI paradigm. Although this framework has built on previous work, the extensions and re-formulations made are important in their own right; for instance, the concept of a mutual preference among the members of a social agent is distinctive of this framework.

This formalism is by no means a complete characterisation of social dynamics; there are several directions in which it needs to be extended. Firstly, this work does not account for how social agents are created in the first place; the conditions under which social agents are formed need to be described. This is an essential extension which will ultimately shed some light on how different social agents are bound by different types of commitment. Moreover, roles within a multi-agent system and social agents are closely related to authority relations. Another issue to be investigated is what happens when a social agent (aggregation) si is socially committed towards another social agent sj: who performs the action on behalf of si, and who bears the responsibility if such a commitment or obligation fails. These are issues and questions of great interest that will not be resolved by writing down axioms and theorems. Experimentation and simulation will have to play a big role here. In particular, although one may be able to write down some conditions for the instantiation or creation of social agents, this cannot be done using formal logic alone. When and why agents decide to join forces with others, or how social agent structures emerge in the first place, are not yet well understood, and the input from ABSS would be invaluable. Moreover, as is evident from the example, one can provide a static description of the system, but changes that happen over time cannot be expressed. A transition model could be used to show the changes from one state to the other. Here one had to resort to informal descriptions of the sort "On Friday morning....". A lot of work still needs to be done, but it is envisaged that at a later stage, when some of the details have been resolved, the formal system (or parts of it) could be used as a specification for an agent-based social simulation system.

Finally, having undergone this exercise of expressing one's ideas on why formal systems can be useful in agent-based social simulation, the conclusion reached is that there is a lot of scope for cooperation between formal-systems and agent-based social simulation researchers. Dogmatism on either side can only lead to isolation and stagnation. Formal systems and agent-based social simulation are not in competition: their aims are complementary; only the tools differ.

* Notes

1 Although the paper uses the BDI paradigm, it does not focus on the merits of this particular approach itself. Elsewhere we have argued that the BDI (and other similar frameworks) is not sufficiently expressive as a theory of agents as among other things it lacks the ability to express quantification over the propositional attitudes and self-referential statements (Fasli 2000; Fasli 2003b).

* References

BELL, J. and Huang, Z. (1997), "Dynamic goal hierarchies". In Cavedon, A.R. L. and Wobcke W., (Eds) Intelligent Agent Systems: Theoretical and Practical Issues, Springer-Verlag. pp. 88-103.

CASTELFRANCHI, C. (1995), "Commitments: From individual intentions to groups and organizations". In Proceedings of the First ICMAS Conference. pp. 41-48.

CAVEDON, L. and Sonenberg, L. (1998), "On social commitments, roles and preferred goals". In Proceedings of the Third ICMAS Conference. pp. 80-87.

COHEN, P. R. and Levesque, H. J. (1991), Teamwork. Nous, 25. pp. 485-512.

COHEN, P. R. and Levesque, H. J. (1990), Intention is choice with commitment. Artificial Intelligence, 42. pp. 213-261.

DENNETT, D. C. (1987), The Intentional Stance. The MIT Press.

DIGNUM, F., Kinny, D. and Sonenberg, L. (2002), "Motivational attitudes of agents: On desires, obligations and norms". In From Theory to Practice in Multi-Agent Systems, Proceedings of the Second International CEEMAS Workshop, Springer-Verlag. pp. 83-92.

DIGNUM, F., and Sonenberg, L. (2003), "A dialogical argument for the usefulness of logic in MAS". Available at: http://cfpm.org/logic-in-abss/papers/Dignum&Sonenberg-reply.pdf

DIGNUM, V., Meyer, J.-J. Ch., and Weigand, H. (2002), "Towards an organizational model for agent societies using contracts". In Proceedings of the First AAMAS Conference. pp. 694-695.

DUNIN-KEPLICZ, B. and Verbrugge, R. (1999), "Collective motivational attitudes in cooperative problem solving". In Proceedings of the First International CEEMAS Workshop. pp. 22-41.

EDMONDS, B. (2003), "How formal logic can fail to be useful for modelling or designing MAS".

EDMONDS, B. (2002), "The purpose and place of formal systems in the development of science".

FAGIN, R., Halpern, J. Y., Moses, Y. and Vardi, M.Y. (1995), Reasoning about Knowledge, MIT Press, Cambridge, MA.

FASLI, M. (2000), Commodious Logics of Agents, PhD thesis, University of Essex, UK.

FASLI, M. (2003a), Heterogeneous BDI agents. Cognitive Systems Research, 4(1), pp.1-22.

FASLI, M. (2003b), Reasoning about knowledge and belief: A syntactical treatment. Logic Journal of the IGPL, 11(2), pp. 245-282.

FASLI, M. (2003c), "From social agents to multi-agent systems: Preliminary report". In Proceedings of the 2003 CEEMAS Conference. pp.111-121.

FASLI, M. (2003d), "Reasoning about the dynamics of social behaviour". In Proceedings of the AAMAS 2003 Conference. pp. 988-989.

FASLI, M. (2003e), "Social interactions in multi-agent systems: A formal approach". In Proceedings of the IEEE/WIC 2003 Intelligent Agent Technology Conference, pp. 240-246.

FERBER, J. and Gutknecht, O. (1998), "A meta-model for the analysis and design of organizations in multi-agent systems". In Proceedings of the Third ICMAS Conference, pp. 128-135.

HUGHES, G.E. and Cresswell, M.J. (1968), An Introduction to Modal Logic. Methuen & Co Ltd.

GROSZ, B. and Kraus, S. (1996) Collaborative plans for complex group action. Artificial Intelligence, 86(2). pp. 269-357.

KRAUS, S. and Lehmann, D. (1988), Knowledge, belief and time. Theoretical Computer Science, 58, pp.155-174.

MA, G. and Shi, C. (2000), "Modelling social agents in BDO logic". In Proceedings of the Fourth ICMAS Conference. pp. 411-412.

MCCARTHY, J. (1979), "Ascribing mental qualities to machines". In Ringle M. (Ed), Philosophical Perspectives in Artificial Intelligence. The Harvester Press Limited. pp. 161-195.

MONTAGUE, R. (1973), "The proper treatment of quantification in ordinary English". In Thomason R. (Ed), Formal Philosophy, Selected Papers of Richard Montague, Yale University Press. pp. 247-270.

RAO, A. and Georgeff, M. (1998), Decision procedures for BDI logics. Journal of Logic and Computation, 8(3). pp. 293-343.

RAO, A., Georgeff, M. and Sonenberg, E. (1992), "Social plans: A preliminary report". In Decentralised A.I.-3. pp. 57-76.

ROYAKKERS, L. and Dignum, F. (2000), "Organisations and collective obligations". In Proceedings of the Database and Expert Systems Applications Conference. pp. 302-311.

SALMON, N. and Soames, S. (1988), Propositions and Attitudes. Oxford University Press.

THOMASON, R. (1980). A note on syntactical treatments of modality. Synthese, 44. pp. 391-395.

van der HOEK, W. (1990), "Systems for knowledge and beliefs". In Proceedings of the European Workshop in Logics in Artificial Intelligence (JELIA '90), volume LNAI: 478, Springer-Verlag. pp. 267-281.

van der HOEK, W., van Linder, B. and Meyer, J.-J.Ch. (1998), "An integrated modal approach to rational agents". In Wooldridge M. and Rao A. (Eds), Foundations of Rational Agency, Applied Logic Series 14, Kluwer. pp. 133-168.

van LINDER, B., van der Hoek, W., and Meyer, J.-J.Ch. (1996), "Formalising motivational attitudes of agents: On preference, goals and commitments". In Wooldridge M., Muller J.P. and Tambe M. (Eds), Intelligent Agents II - Agent Theories, Architectures and Languages, LNAI: 1037, Springer-Verlag. pp. 17-32.

van der TORRE, L. and Tan Y.-H. (1999), "Rights, duties and commitments between agents". In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI'99). pp.1239-1244.

von WRIGHT G.H. (1963), The Logic of Preference. Edinburgh University Press, Edinburgh.

WOOLDRIDGE, M., Jennings, N.R. and Kinny, D. (2000), The Gaia methodology for agent-oriented analysis and design. Autonomous Agents and Multi-Agent Systems, 3. pp.285-312.

ZAMBONELLI, F., Jennings, N.R. and Wooldridge, M. (2001) "Organisational abstractions for the analysis and design of multi-agent systems". In Cinacarini P. and Wooldridge M. (Eds), Agent-Oriented Software Engineering. pp. 127-141.

