© Copyright JASSS


Jaime Simão Sichman (1998)

DEPINT: Dependence-Based Coalition Formation in an Open Multi-Agent Scenario

Journal of Artificial Societies and Social Simulation vol. 1, no. 2, <https://www.jasss.org/1/2/3.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 14-Jan-1998      Accepted: 15-Feb-1998      Published: 31-Mar-1998

----

* Abstract

This paper presents the main features and some simulation results for the DEPINT system, a multi-agent system conceived to illustrate some essential aspects of a social reasoning mechanism (Sichman, 1995), based on the notion of social dependence (Castelfranchi et al., 1992). This social reasoning mechanism is considered to be an essential building block of really autonomous agents, immersed in an open multi-agent system (MAS) context, i.e., where agents may dynamically enter or leave the society, without any global control. Concerning the adaptation of an agent in such a scenario, dependence relations allow an agent to know which of his goals are achievable and which of his plans are feasible at any moment. This way, an agent may dynamically choose a goal to pursue and a plan to achieve it, being sure that every skill needed to accomplish the selected plan is available in the society. Concerning coalition formation, this model introduces the notion of dependence situation, which allows an agent to evaluate the susceptibility of other agents to adopt his goals, since agents are not supposed to be benevolent and therefore do not automatically adopt the goals of each other. Finally, as regards belief revision, the social reasoning mechanism allows an agent to detect that his representation of the others is inconsistent. Because agents' interactions are guided by their information about the others, it is exactly during these interactions that they may detect that this information is either incorrect or incomplete, and then revise it.

Keywords:
emergent organizations, dependence-based interaction, social behaviour, open systems

* Introduction

1.1
In Conte and Castelfranchi (1992), two different approaches to modelling social interaction are presented:
  1. top-down models: in these models, agents are considered to have a global problem to be solved "a priori". Hence, cooperation is taken for granted. Social interactions are usually constrained by some pre-established organizational structure, which "guides" agents in order to achieve the global goal. These models are often used in multi-agent systems (MAS) which adopt a problem solving perspective;
  2. bottom-up models: in these models, agents do not have common goals "a priori". Social interactions are produced as a result of their efforts to achieve their own goals. Neither cooperation nor any other kind of organizational structure is pre-established at the start. These models are used in most MAS which adopt a social simulation perspective.

1.2
The notion of an organizational structure which is dynamically built by the agents in bottom-up models is called a coalition. It is shown in Conte and Sichman (1995) that these models may also be classified into two main approaches:
  • utility-based models: in these, as in game theory (Luce and Raiffa 1957, Axelrod 1984), the social world is viewed as essentially consistent with the bellum omnium contra omnes principle. It is considered a domain of social interaction between agents, where the latter must coordinate themselves for there to be a coherent global behaviour. This is usually carried out by developing conventions and constraints. The existence of other agents limits the autonomy, power and achievements of individual agents;
  • complementarity-based models: these propose a different perspective on social interaction, by taking into account the fact that agents may have complementary skills, which may be needed to achieve the agents' own goals. In that manner, the existence of other agents enhances the autonomy and power of individual agents. Even if an agent cannot achieve some goal on his own, he may achieve it by asking others for help.

1.3
The latter approach was adopted in the work reported in this paper. A social reasoning mechanism (Sichman, 1995), here seen as an essential building block of really autonomous agents, was developed. A social mechanism is one that uses information about others in order to infer some properties. Consequently, the existence of such a mechanism within an agent means that (i) an agent must explicitly represent some properties of the others, which may change dynamically; (ii) an agent must exploit this representation, and thus optimize his behaviour according to the evolution of the society and (iii) an agent must revise this representation when he detects that his beliefs about others are either incorrect or incomplete. This revision must be done in an autonomous way, without pre-established global control.

1.4
This social reasoning mechanism is based on the notion of social dependence (Castelfranchi et al., 1992). In summary, an agent is said to be dependent on another if the latter may facilitate or prevent the achievement of one of his goals. The notion of dependence is dual to that of social power (Castelfranchi, 1990): if an agent is dependent on another, the latter gains power over him. Dependence relations may be unilateral or bilateral. As regards bilateral relations, mutual dependence is the case when two agents depend on one another for the same goal, while reciprocal dependence is the case when two agents depend on one another, but for different goals. In Castelfranchi et al. (1992), it is shown that mutual dependence leads to cooperation, while reciprocal dependence leads to social exchange. A complete formal description of the model may be found in Sichman (1995).

1.5
One may find in the literature other approaches to modelling dependence and power relations (see Skvoretz and Willer (1993) for a review). While this paper does not propose to compare such theories, the model on which it is based (Castelfranchi et al., 1992) has some advantages: (i) the dependence network is built dynamically by the agents themselves, and it is not created by the system's designer in an "ad-hoc" manner; (ii) the dependence network itself is just a representation of a formal theory; and (iii) the model is predictive rather than descriptive. Indeed, it is precisely the fact that each agent constructs his private dependence network "on-the-fly" that justifies the cognitive complexity of the model: one cannot use reactive approaches in an open MAS scenario[1].

1.6
This paper presents the main features and some significant simulation results from the DEPINT system, a multi-agent system conceived to illustrate some essential aspects of an agent's social reasoning mechanism, particularly the following:
  • concerning the adaptation of an agent, dependence relations allow an agent to know which of his goals are achievable and which of his plans are feasible at any moment. In short, a plan is said to be feasible if all the actions needed to accomplish this plan can be performed by at least one current member of the society. An achievable goal is a goal that has at least one feasible plan which, when completed, achieves this goal. A more detailed definition of these terms can be found in Sichman (1996). As a result, an agent may use his social reasoning mechanism to dynamically choose a goal to pursue and a plan to achieve it, being sure that every skill needed to accomplish the selected plan is available in the society;
  • concerning coalition formation, this model introduces the notion of dependence situation, which allows an agent to evaluate the susceptibility of other agents to adopt his goals (Sichman and Demazeau, 1995b). This being the case, the choice of partners is done more efficiently, because agents are not necessarily supposed to be benevolent, i.e., they do not automatically adopt the goals of each other;
  • concerning belief revision, the social reasoning mechanism allows an agent to detect that his representation of others is inconsistent (Sichman and Demazeau, 1995a). Because agents' interactions are guided by their information about the others, it is during interactions that they may detect that this information is either incorrect or incomplete. In the general case of open MAS, complete and totally correct information about each other is clearly an exception, because agents have only partial descriptions of one another and of the environment. By detecting the incorrect/incomplete beliefs he has about others, an agent may revise them to restore consistency (Sichman and Demazeau, 1996).

1.7
The rest of this paper is organized as follows. Section 2 presents some basic principles for the agents. Section 3 briefly presents the core notions of the social reasoning mechanism: external description, dependence network, goal situations and dependence situations. A multi-agent development environment called MASENV is described briefly in section 4. The major aspects of the DEPINT system are detailed in section 5 and the results of some runs are presented in section 6. Finally, conclusions and further work are presented in section 7.

* Basic Principles

2.1
The following principles about agents were adopted:

Principle 1 (Non-Benevolence): agents are not presumed to help each other; they decide autonomously whether or not to cooperate with others.

Principle 2 (Sincerity): agents do not try to exploit each other; they never offer erroneous information deliberately and always communicate information in which they believe.

Principle 3 (Self-Knowledge): agents have a complete and correct representation of themselves: their goals, their expertise etc. However, agents may have beliefs about others that are either incorrect or incomplete.

Principle 4 (Consistency): agents do not maintain contradictory beliefs about others. Once an inconsistency is detected, they revise their beliefs in order to reestablish a consistent state.

* Social Reasoning Mechanism

3.1
The following subsections briefly present the core notions of the social reasoning mechanism: external description, dependence network, goal situations and dependence situations. A more detailed description, including the complete formal model, may be found in Sichman (1995).

External Description

3.2
We call external description the representation one agent has about the others. This representation is a private one, which is acquired from different information sources: perception, communication and inference. An external description consists of several entries, each containing some information about one particular agent. An external description entry contains the goals the agent is trying to achieve, the actions he is able to perform, the resources over which he has control and the plans he has in order to achieve his goals. A plan is a sequence of instantiated actions, each composed of a single action plus the collection of resources needed to perform it.
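
To make this structure concrete, the following sketch shows one possible encoding of an external description entry. Python is used here purely for illustration: DEPINT was not implemented this way, and the type and field names are inventions of this sketch, not those of the original system. The entry instantiated at the end is ag8's, taken from Table 1 below.

from dataclasses import dataclass, field

@dataclass
class InstantiatedAction:
    """A single action together with the resources needed to perform it."""
    action: str
    resources: tuple = ()

@dataclass
class Plan:
    """A plan achieves a goal through a sequence of instantiated actions."""
    goal: str
    steps: list          # list of InstantiatedAction

@dataclass
class ExternalDescriptionEntry:
    """What the reasoning agent believes about one particular agent."""
    agent: str
    goals: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    plans: list = field(default_factory=list)

# Entry for ag8, taken from Table 1: one goal, one action, no plans.
ag8_entry = ExternalDescriptionEntry(
    agent="ag8",
    goals=["write_mas_paper"],
    actions=["process_latex"],
)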


Table 1: External description
Agent | Goals              | Actions            | Plans
ag5   | write_mas_paper    | write_mas_section  | write_mas_paper() := write_mas_section(), process_latex().
      | write_ss_mas_paper | analyse_mas_paper  | write_ss_mas_paper() := write_ss_section(), write_mas_section(), process_latex().
      | review_oop_paper   | analyse_oop_paper  | review_oop_paper() := analyse_oop_paper().
      |                    |                    | review_mas_paper() := analyse_mas_paper().
ag6   | write_tel_paper    | write_tel_section  | write_tel_paper() := write_tel_section(), process_latex().
      | review_sig_paper   | analyse_tel_paper  | review_sig_paper() := analyse_sig_paper().
      | review_se_paper    | process_latex      | review_se_paper() := analyse_se_paper().
      |                    |                    | review_tel_paper() := analyse_tel_paper().
ag7   | write_sig_paper    | write_sig_section  | write_sig_paper() := write_sig_section(), process_latex().
      | review_tel_paper   | analyse_sig_paper  | review_tel_paper() := analyse_tel_paper().
      | review_se_paper    | process_latex      | review_se_paper() := analyse_se_paper().
      |                    |                    | review_sig_paper() := analyse_sig_paper().
ag8   | write_mas_paper    | process_latex      | ---
ag9   | write_ss_mas_paper | ---                | ---

3.3
In order to illustrate the notion of external description[2], let us take as an example a computer and electronic engineering research laboratory, with 5 researchers (see Table 1):

  • ag5, who is interested in MAS and wants to write two articles: one about the common interests of MAS and social simulation, and a second one about MAS's foundations. He must also review a third article about object-oriented programming, a domain with which he is quite well acquainted;
  • ag6, an expert in telecommunications, who intends to write an article in this domain and review two others, one about software engineering and the other about signal processing;
  • ag7, a researcher in signal processing, who also aims to write an article in his domain and review two others, one about software engineering and the other about telecommunications;
  • ag8, a young researcher in the Distributed Artificial Intelligence (DAI) field, who wants to write an article about MAS's foundations, but who has not got any plan to achieve this goal;
  • ag9, a researcher in social simulation, who wants to write an article about MAS and social simulation, but who does not know how to accomplish this goal.

Agents ag6, ag7 and ag8 are well acquainted with the LaTeX language, while agents ag5 and ag9 are not. The external description of this society, containing the agents' goals, actions and plans, is shown in Table 1.

Dependence Network

3.4
In the implementation of the social reasoning mechanism, an agent stores all his dependence relations in a single structure called a dependence network. Taking the previous scenario as an example, the dependence network of agent ag6 would be:
ag6
<ag6>
----------  write_tel_paper 
         |----------  write_tel_paper:=write_tel_section(),
         |         |                   process_latex().
         |         |----------  A-AUTONOMOUS
         |                   |----------
         |  review_sig_paper 
         |----------  review_sig_paper:=analyse_sig_paper().
         |         |----------  analyse_sig_paper
         |                   |----------  ag7 
         |                             |----------
         |  review_se_paper 
         |----------  review_se_paper:=analyse_se_paper().
                   |----------  analyse_se_paper
                             |----------  UNKNOWN  
                                       |----------

3.5
As seen in the example, agent ag6 is able to perform all actions needed in the plan that achieves goal write_tel_paper. Hence, he is autonomous[3] for this goal. Concerning the goal review_sig_paper, agent ag6 depends on agent ag7, as the latter can perform action analyse_sig_paper. Finally, goal review_se_paper is not achievable, as action analyse_se_paper is not currently available in the society: there is no agent that can perform such an action.
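
As an illustration only, the sketch below reproduces this reasoning in Python: given the agent's own plans and the actions he believes each member of the society can perform, it labels every needed action with its possible performers, marking plans the agent can complete alone and actions nobody can currently perform. The data structures are deliberate simplifications (one plan per goal, no resources) and do not reproduce the actual DEPINT representation; the agent and action names are those of the ag6 example above.

# My own plans, as lists of action names (resources omitted, as in the example).
my_plans = {
    "write_tel_paper": ["write_tel_section", "process_latex"],
    "review_sig_paper": ["analyse_sig_paper"],
    "review_se_paper": ["analyse_se_paper"],
}

# Actions each agent is believed able to perform (my own entry included).
believed_actions = {
    "ag6": {"write_tel_section", "analyse_tel_paper", "process_latex"},
    "ag7": {"write_sig_section", "analyse_sig_paper", "process_latex"},
}
my_name = "ag6"

def dependence_network(plans, actions, me):
    """For every goal, map each needed action to its believed performers."""
    network = {}
    for goal, needed in plans.items():
        providers = {}
        for action in needed:
            if action in actions.get(me, set()):
                continue                      # I can do this myself
            who = [a for a, acts in actions.items() if a != me and action in acts]
            providers[action] = who or ["UNKNOWN"]
        network[goal] = providers or "A-AUTONOMOUS"
    return network

print(dependence_network(my_plans, believed_actions, my_name))
# {'write_tel_paper': 'A-AUTONOMOUS',
#  'review_sig_paper': {'analyse_sig_paper': ['ag7']},
#  'review_se_paper': {'analyse_se_paper': ['UNKNOWN']}}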

Goal Situations

3.6
A goal situation relates an agent to a certain goal. The four possible goal situations are:
  • no goal (NG): the agent has not got the goal in his goal list;
  • no plans (NP): the agent has got the goal in his goal list, but he has not got any plan to achieve it;
  • autonomous (AUT): the agent has got the goal and some plans to achieve it. Moreover, at least one of these plans is such that he can perform all the needed actions on his own, without needing to ask others for help;
  • dependent (DEP): the agent has got the goal and some plans to achieve it. However, he cannot perform all the actions needed on his own in any of these plans.

3.7
In table 2, the goal situations of the research laboratory scenario are shown. One may observe that agents ag8 and ag9 have not got any plans for their goals: consequently, they will be inclined to accept any proposal of coalition sent to them in order to achieve their goals, as will be shown in paragraph 6.9.


Table 2: Goal situations
Agent Goal G-SIT
ag5 write_mas_paper DEP
  write_ss_mas_paper DEP
  review_oop_paper AUT
ag6 write_tel_paper AUT
  review_sig_paper DEP
  review_se_paper DEP
ag7 write_sig_paper AUT
  review_tel_paper DEP
  review_se_paper DEP
ag8 write_mas_paper NP
ag9 write_ss_mas_paper NP
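
To show how the goal situations of Table 2 could be derived mechanically from an external description, here is a minimal sketch, again in Python and again only as an illustration; it assumes a flat list of plans per goal and ignores resources, which is a simplification of the formal definitions in Sichman (1995).

def goal_situation(goal, my_goals, my_plans, my_actions):
    """Classify a goal as NG, NP, AUT or DEP (cf. paragraph 3.6).

    my_plans maps a goal to a list of plans, each plan being a list of actions.
    """
    if goal not in my_goals:
        return "NG"                   # I do not have the goal
    plans = my_plans.get(goal, [])
    if not plans:
        return "NP"                   # I have the goal but no plan
    for plan in plans:
        if all(action in my_actions for action in plan):
            return "AUT"              # at least one plan I can carry out alone
    return "DEP"                      # every plan needs help from someone else

# ag8's situation for write_mas_paper, with the data of Tables 1 and 2.
print(goal_situation("write_mas_paper",
                     my_goals={"write_mas_paper"},
                     my_plans={},
                     my_actions={"process_latex"}))    # -> NP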


Dependence Situations

3.8
If an agent depends on another one for a certain goal, he may wish to calculate whether the latter also depends on him for some of his goals. This being the case, a proposal of cooperation or social exchange could be sent to this agent, with a chance of acceptance. On the other hand, as agents are heterogeneous, their planning mechanisms may differ[4]. An agent may thus infer a mutual dependence relating him and a possible partner, whereas the latter does not infer the same bilateral dependence.

3.9
In order to capture this possible awareness of the partners[5], a notion called dependence situation, which relates two agents and a goal, was defined. To introduce this notion, the locality of a dependence is first established, i.e., which external description entries are used by an agent to infer possible bilateral dependence relations. An agent infers a locally believed dependence, either mutual or reciprocal, if he can infer this bilateral dependence using only his own plans, but not his beliefs about his partners' plans. When he uses his beliefs about his partner's plans, an agent infers a mutually believed dependence[6].

3.10
Consider two agents i and j, where the reasoning agent is i. If i infers the goal situation DEP for goal g, six different dependence situations between him and j may occur (a small classification sketch follows the list):
  1. Independence (IND): by using his own plans, he infers that he does not depend on j for goal g;
  2. Locally believed mutual dependence (LBMD): by using his own plans, he infers a mutual dependence between him and j for goal g, but he cannot infer the same conclusion using the plans he believes j has got;
  3. Mutually believed mutual dependence (MBMD): by using both his own plans and the plans he believes j has got, he infers a mutual dependence between them;
  4. Locally believed reciprocal dependence (LBRD): by using his own plans, he infers a reciprocal dependence between him and j for goals g and g', but he cannot infer the same conclusion using the plans he believes j has got;
  5. Mutually believed reciprocal dependence (MBRD): by using both his own plans and the plans he believes j has got, he infers a reciprocal dependence between them;
  6. Unilateral dependence (UD): by using his own plans, he infers that he depends on j for goal g, but the latter does not depend on him for any of his goals.
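
The sketch below renders this classification as a simple decision procedure. The boolean arguments stand in for the formal predicates of Sichman (1995), which are not reproduced here, and the order in which mutual and reciprocal dependences are tested is an assumption of the sketch.

def dependence_situation(i_depends_on_j, mutual_local, mutual_mutual,
                         reciprocal_local, reciprocal_mutual):
    """Classify the relation between the reasoning agent i and agent j for a goal.

    The flags say whether i depends on j for the goal, and whether a mutual or
    reciprocal dependence can be inferred using only i's plans (locally believed)
    or also the plans i believes j has got (mutually believed).
    """
    if not i_depends_on_j:
        return "IND"
    if mutual_mutual:
        return "MBMD"
    if mutual_local:
        return "LBMD"
    if reciprocal_mutual:
        return "MBRD"
    if reciprocal_local:
        return "LBRD"
    return "UD"

# ag6 reasoning about ag7 for goal review_sig_paper (cf. Table 3 below): MBRD.
print(dependence_situation(i_depends_on_j=True,
                           mutual_local=False, mutual_mutual=False,
                           reciprocal_local=True, reciprocal_mutual=True))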

3.11
In table 3, the dependence situations of the research laboratory scenario are shown for agents ag5, ag6 and ag7. This table presents only the goals of agents whose goal situation is DEP, in accordance with table 2.


Table 3: Dependence situations (D-SIT of agent "me" towards ag5, ag6 and ag7)
me  | goal               | ag5  | ag6  | ag7
ag5 | write_mas_paper    | --   | UD   | UD
    | write_ss_mas_paper | --   | UD   | UD
ag6 | review_sig_paper   | IND  | --   | MBRD
    | review_se_paper    | IND  | --   | IND
ag7 | review_tel_paper   | IND  | MBRD | --
    | review_se_paper    | IND  | IND  | --

3.12
The notions of goal and dependence situations are used by an agent in order to choose a partner to whom a coalition proposal is to be sent, when the agent cannot achieve the intended goal on his own. In Sichman (1995) and Sichman and Demazeau (1995b) a criterion of choice of partner based on dependence situations is proposed. In short, this criterion states that (i) a mutual dependence (either mutually or locally believed) is always a better choice, since the reciprocation problem does not arise and (ii) a mutually believed dependence (either mutual or reciprocal) is always a better choice, since the problem of convincing the other does not arise. A more detailed description of these points may be found in Sichman (1995) and Sichman and Demazeau (1995b). In the example, one may notice that a social exchange between agents ag6 and ag7 will occur, with respect to goals review_sig_paper and review_tel_paper.
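
Read as a preference ordering over dependence situations, this criterion can be sketched as follows. The relative position of LBMD and MBRD in the list below is fixed only for illustration; the authoritative ordering is the one given in Sichman (1995) and Sichman and Demazeau (1995b).

# Illustrative preference ordering over dependence situations (best first).
# Mutual is preferred to reciprocal and mutually believed to locally believed;
# the relative position of LBMD and MBRD is an assumption of this sketch.
PREFERENCE = ["MBMD", "LBMD", "MBRD", "LBRD", "UD", "IND"]

def choose_partner(candidates):
    """Pick the candidate whose dependence situation ranks best.

    candidates: list of (agent_name, dependence_situation) pairs.
    """
    return min(candidates, key=lambda c: PREFERENCE.index(c[1]))

# From the run in paragraph 6.9: ag8 (LBMD) is preferred to ag6 and ag7 (UD).
print(choose_partner([("ag6", "UD"), ("ag7", "UD"), ("ag8", "LBMD")]))
# -> ('ag8', 'LBMD')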

* The MASENV Environment

4.1
The DEPINT system was built using a MAS software development environment called MASENV, which is based on active objects (Cardozo et al., 1993) and interaction protocols (Demazeau, 1995). This environment aims to offer the MAS application designer a set of several prototypes of agents, organizations, physical environments and interaction modes, thus enabling more efficient development[7].

4.2
In the design of the DEPINT system, the agent interaction language proposed by Demazeau (1995) was used. This language defines the common vocabulary and syntax of all the agents in the system. Each message in this language is composed of three fields, <communication>, <multi-agents> and <application>, which are described below.

The <communication> field defines the implementation aspects of the adjacent distributed system layer: the identification of the sender and receiver, the message identification etc.

The <multi-agents> field is linked specifically to the multi-agent dimension. Based on speech act theory (Searle, 1969), it defines the type, nature and illocutionary force of the message and the identification of the interaction protocol being used (Demazeau, 1995). For the type of interaction, the primitives proposed in Gaspar (1991) were used: request, answer and inform. For the illocutionary force, a subset of the communication tones proposed in Campbell and D'Inverno (1990) was used, ranging from commanding (maximal priority) to informing, which characterizes a simple information exchange. For the nature of the interaction (Boissier, 1993), the status of the information being sent was specified in terms of dec (goals), ada (plans), com (actions) and obs (working hypothesis).

Finally, the <application> field must be instantiated for each application, containing the terms of the application domain.
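
As an illustration, the coalition proposal that appears in the run logs of section 6 can be decomposed along these three fields roughly as follows. This is a schematic rendering in Python, not the actual MASENV message syntax; the field names are inventions of the sketch, and the illocutionary force shown is only an example value.

# Schematic decomposition of the proposal sent by ag5 to ag6 in section 6.
proposal_message = {
    "communication": {              # distributed-system layer
        "sender": "ag5",
        "receiver": "ag6",
        "host": "polaris.imag.fr",
        "message_id": 13892,
    },
    "multi_agents": {               # multi-agent dimension
        "type": "request",          # request / answer / inform
        "force": "informing",       # illustrative tone, from commanding to informing
        "nature": "dec",            # dec = goals, ada = plans, com = actions, obs = hypotheses
        "protocol": "DEPINTPROPOSITION",
    },
    "application": {                # domain terms of DEPINT
        "goal": "write_mas_paper",
        "needed_action": "process_latex",
        "dependence_situation": "UD",
    },
}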

4.3
Once an interaction language is chosen, one has to define how the interactions occur during the execution of the system. The model proposed in Populaire et al. (1993) and Demazeau et al. (1994) to represent interaction protocols was adopted. The main idea, which is inspired by so-called "law-governed systems" (Minsky, 1989), is to constrain the possible inter-agent interactions. An agent cannot send any information to any other agent whenever he likes: a certain structure in the information exchange is thus imposed. A more detailed description of these interaction protocols may be found in Populaire et al. (1993) and Demazeau et al. (1994).

4.4
Three interaction protocols for the DEPINT system were developed: a presentation protocol, an exit protocol and a coalition formation protocol. Whenever a new agent enters the system, he uses the presentation protocol in order to inform the others about his goals, actions, resources and plans. The other agents send back their own capabilities, allowing the new agent to update his external description. Similarly, whenever an agent leaves the society, he sends a message to the others, who take out the corresponding entry from their external descriptions. The coalition formation protocol is formed by propositions, acceptances, refusals and revision messages. The last is justified by the fact that a possible reason for an agent to refuse to take part in a coalition is because the sender has a false belief about his capabilities: the sender may believe that the agent can perform an action, when in fact this is not true. This is perfectly possible in the scenario adopted here, since the information sources, such as perception, inference and communication, may lead to errors. The complete formal description of the protocols of the DEPINT system may be found in Sichman (1995). The use of the coalition formation protocol will be illustrated in section 6.
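
The receiver's side of the coalition formation protocol can be sketched as a choice among these three answers, mirroring the revision, refusal and acceptance transitions visible in the logs of section 6. The decision rules below are simplified, and the attractiveness of a proposal is abstracted into a single predicate.

def answer_proposal(needed_action, my_actions, proposal_is_attractive):
    """Decide the reply to a coalition proposal.

    Returns 'revision' when the sender wrongly believes we can perform the
    needed action, 'acceptance' when the proposal is judged attractive, and
    'refusal' otherwise. proposal_is_attractive abstracts the acceptance
    criterion (favourite partner, or goal situation NP for the offered goal).
    """
    if needed_action not in my_actions:
        return "revision"           # the sender has an incorrect belief about us
    if proposal_is_attractive:
        return "acceptance"
    return "refusal"

# ag9 in paragraph 6.15: asked for write_ss_section, which he cannot perform.
print(answer_proposal("write_ss_section", my_actions=set(),
                      proposal_is_attractive=False))    # -> revision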

* The DEPINT System

5.1
The DEPINT system aims at showing how agents can dynamically establish dependence-based coalitions, how they can adapt to the changing conditions of the society by choosing different goals, plans and partners and how they can detect and revise incorrect or incomplete information they may have about one another.

General Features

5.2
The agent model used in the system is based on the ASIC model (Boissier, 1993, Boissier and Demazeau, 1994) and is presented in detail in Sichman (1995). In this model an agent may gather information about the others by perception, communication or inference. In the current version of the DEPINT system, the following simplifications were adopted:
  • the agent's perception mechanism is simulated by a user interface;
  • the evolution of knowledge by inference is also simulated by a user interface;
  • coalitions are limited to two partners[8];
  • concurrent coalition proposals are not handled. Therefore, there are two basic behaviours for the agents: active and passive.

In active behaviour, the agent tries to propose a coalition to some other agent when he is not able to achieve his selected goal on his own; he uses his social reasoning mechanism to choose a goal, a plan and partners. In passive behaviour, the user simulates the information gathering (perception and/or inference) and the agent is limited to accepting or rejecting the coalition proposals sent to him by the other agents. The belief revision procedure is activated in both behaviours. It is up to the user of the system to decide which behaviour each agent of the system will have in a given run.

Internal Control Cycles

5.3
For both of the agent's possible behaviours, a sequence of activation of the internal mechanisms is defined, as described next.
Active behaviour

5.4
An agent in the active behaviour mode performs the following steps:
  1. first, the agent chooses a goal to pursue, taking into account its importance and whether or not it is achievable (Sichman, 1995, Sichman, 1996), i.e., the agent chooses the most important goal (which is quantitatively valued) among those which are currently achievable;
  2. after choosing a goal, the agent chooses a plan to achieve it. Similarly to the previous case, this choice is made in such a way that the least costly plan (which is also quantitatively valued) among those which are currently feasible is chosen;
  3. if the agent can perform all actions in the chosen plan on his own, he does so. Otherwise, using his dependence situations as a criterion (Sichman, 1995, Sichman and Demazeau, 1995b), he chooses a partner to whom a coalition proposal is to be sent.

After sending a proposal, the agent waits until he receives a reply. This reply may be of three different types:

  (a) the agent may receive a revision message. This means that his partner has detected that he has either an incorrect or an incomplete belief about him (Sichman and Demazeau, 1995a). If this is the case, the agent revises his beliefs about the partner. A context choice criterion for this revision was proposed in Sichman and Demazeau (1996). When the revision procedure is completed, the reasoning agent takes the partner out of the list of possible partners and tries to contact another possible partner;
  (b) the agent may receive a message of refusal. In this case, the partner is taken out of the list of possible partners and the agent tries to find another possible partner;
  (c) the agent may receive a message of acceptance. In this situation, a coalition is formed and the system returns to its initial state.

5.5
If the agent receives a message of refusal or revision, the list of possible partners might become empty. This being the case, the agent restarts reasoning about plans, because this means that the previously selected plan is no longer feasible. He then tries to find another feasible plan for the selected goal. If the list of feasible plans also becomes empty, the agent restarts reasoning about goals, because the previously selected goal is no longer achievable. Finally, if there is no other achievable goal, the system returns to its initial state.
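
The nested choice and backtracking just described can be summarized by the following sketch; the helper functions passed as arguments are placeholders invented for the sketch, not DEPINT's actual interfaces.

def active_cycle(achievable_goals, feasible_plans, partners_for, send_proposal,
                 can_do_alone):
    """Sketch of the active behaviour: goal -> plan -> partner, with backtracking.

    achievable_goals: list of (goal, importance); feasible_plans: goal -> list of
    (plan, cost); partners_for(plan): ordered list of candidate partners;
    send_proposal(partner, plan): 'acceptance', 'refusal' or 'revision';
    can_do_alone(plan): True when no help is needed.
    """
    for goal, _ in sorted(achievable_goals, key=lambda g: -g[1]):        # most important first
        for plan, _ in sorted(feasible_plans.get(goal, []), key=lambda p: p[1]):  # cheapest first
            if can_do_alone(plan):
                return ("alone", goal, plan)
            for partner in list(partners_for(plan)):
                if send_proposal(partner, plan) == "acceptance":
                    return ("coalition", goal, plan, partner)
                # refusal or revision: drop this partner and try the next one
            # no partner left: the plan is no longer feasible, try another plan
        # no plan left: the goal is no longer achievable, try another goal
    return ("idle",)    # no achievable goal remains; back to the initial state

# ag5 in paragraph 6.8: ag6 and ag7 both refuse, so he falls back to review_oop_paper.
print(active_cycle(
    achievable_goals=[("write_mas_paper", 20), ("review_oop_paper", 10)],
    feasible_plans={"write_mas_paper": [("write_mas_section+process_latex", 1)],
                    "review_oop_paper": [("analyse_oop_paper", 1)]},
    partners_for=lambda plan: ["ag6", "ag7"] if "process_latex" in plan else [],
    send_proposal=lambda partner, plan: "refusal",
    can_do_alone=lambda plan: plan == "analyse_oop_paper"))
# -> ('alone', 'review_oop_paper', 'analyse_oop_paper')
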
Passive behaviour

5.6
An agent in the passive behaviour mode performs the following steps:
  1. the agent asks the user if he wants to simulate the inference mechanism[9]. If so, the user enters new data, specifying the agent, the type of information (goal, action, resource or plan), and the action to be taken (insert or remove the information). The belief revision procedure is then activated;
  2. the same procedure is used to simulate perception. The only difference lies in the data being tagged differently, corresponding to the information source used;
  3. the next step corresponds to message handling. In this state, four different types of messages may be received (a dispatch sketch follows the list):
  (a) a presentation message, after which the agent updates his external description by adding a new entry corresponding to the new agent. He also replies to the message, sending back to the new agent his own capabilities, goals etc;
  (b) an answer to a presentation message, previously sent by the agent when he himself entered the system. In this case, as in the previous one, the agent updates his external description, but he does not send any message in return;
  (c) an exit message, after which the agent removes the corresponding entry from his external description;
  (d) a coalition proposal, upon which the agent reasons and decides whether or not he should accept it. In summary, the agent will accept a proposal if the proponent is one of his favourite partners, chosen using the same criterion as in the active behaviour. He will also accept it if his goal situation is NP for the offered goal. A complete description of this criterion may be found in Sichman (1995);
  4. the message handling is repeated until there are no more messages to be treated.
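
The message handling of step 3 can be sketched as a simple dispatch. The revision answer of the coalition protocol (see the earlier sketch) is omitted here, the external description is reduced to a dictionary, and the acceptance test follows the criterion stated in item (d).

def handle_message(message, external_description, my_goal_situation,
                   is_favourite_partner):
    """Dispatch one incoming message; returns the reply to send, or None.

    external_description maps agent names to their entries; my_goal_situation
    and is_favourite_partner stand in for the agent's own reasoning.
    """
    kind, sender, payload = message
    if kind == "presentation":
        external_description[sender] = payload            # a new agent joins
        return ("presentation_answer", "own goals, actions, resources, plans")
    if kind == "presentation_answer":
        external_description[sender] = payload            # reply to our own entry
        return None
    if kind == "exit":
        external_description.pop(sender, None)             # an agent leaves
        return None
    if kind == "proposal":
        goal = payload["offered_goal"]
        if is_favourite_partner(sender) or my_goal_situation(goal) == "NP":
            return ("acceptance", sender)
        return ("refusal", sender)
    raise ValueError("unknown message kind: " + kind)

# ag8 in paragraph 6.11 accepts because his goal situation for write_mas_paper is NP.
print(handle_message(("proposal", "ag5", {"offered_goal": "write_mas_paper"}),
                     external_description={},
                     my_goal_situation=lambda g: "NP",
                     is_favourite_partner=lambda a: False))
# -> ('acceptance', 'ag5')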

* Simulation Results

6.1
In the following subsections, results from running the DEPINT system are presented. The main goal is to show (i) that, within this system, agents can effectively establish coalitions dynamically, based on the notion of dependence, (ii) that they can adapt to the changing conditions of an open MAS by choosing different goals, plans and partners as agents leave or enter the society and (iii) that they can revise the incorrect/incomplete information they have about one another, as a result of their interactions.

6.2
In order to illustrate these points, the example of the research environment shown in section 3 will be used.

Example Of Autonomy

6.3
Suppose that initially the only agent in the system is ag5. If this agent is in the active behaviour mode, the simulation results are the following:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): y

===== Reasoning about goals ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
         |         |                    process_latex().
         |         |----------  process_latex
         |                   |----------  UNKNOWN
         |                             |----------
         |  review_oop_paper (10)
         |----------  review_oop_paper:=analyse_oop_paper().
         |         |----------  A-AUTONOMOUS
         |                   |----------
         |  write_ss_mas_paper (30)
         |----------  write_ss_mas_paper:=write_ss_section(), 
                   |                      write_mas_section(), 
                   |                      process_latex().
                   |----------  write_ss_section
                             |**********  UNKNOWN
                             |         |----------
                             |  process_latex
                             |**********  UNKNOWN
                                       |----------

My current list of possible goals is :
     write_mas_paper(20) non achievable
     review_oop_paper(10) achievable
     write_ss_mas_paper(30) non achievable

===== Deciding about goals ...

The goal selected is : review_oop_paper (10) 

===== Reasoning about plans ...

My dependence network is:
  ag5
 <ag5>
----------  review_oop_paper (10)
         |----------  review_oop_paper:=analyse_oop_paper().
                   |----------  A-AUTONOMOUS
                             |----------

My current list of possible plans is:
     review_oop_paper:=analyse_oop_paper().(10) feasible

===== Deciding about plans ...

The plan selected is :
     review_oop_paper:=analyse_oop_paper(). (10) 

===== Reasoning about partners ...

My dependence network is:
  ag5
 <ag5>
----------  review_oop_paper (10)
         |----------  review_oop_paper:=analyse_oop_paper().
                   |----------  A-AUTONOMOUS
                             |----------

My goal situation is AUT
I do not need any actions in the committed plan

===== Deciding about partners ...

I am autonomous for the committed plan, no need of partners

6.4
One may notice that in this scenario, the agent has chosen the only goal which is achievable at the moment (review_oop_paper), even though the importance of this goal is the lowest among all his goals. As the agent is autonomous for this goal, no coalition proposal is sent to any other agent.

Example Of Unsuccessful Coalition

6.5
Suppose now that agents ag6 and ag7 enter the system, sending a presentation message[10]. On receiving these messages, ag5 updates his external description, adding two entries corresponding respectively to the two new members. The results concerning the active phase of agent ag5 are presented below:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): y

===== Reasoning about goals ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
         |         |                   process_latex().
         |         |----------  process_latex
         |                   |----------  ag6
         |                             |----------
         |                             |  ag7
         |                             |----------
         |
         |  review_oop_paper (10)
         |----------  review_oop_paper:=analyse_oop_paper().
         |         |----------  A-AUTONOMOUS
         |                   |----------
         |  write_ss_mas_paper (30)
         |----------  write_ss_mas_paper:=write_ss_section(), 
                   |                      write_mas_section(), 
                   |                      process_latex().
                   |----------  write_ss_section
                             |**********  UNKNOWN
                             |         |----------
                             |  process_latex
                             |**********  ag6
                                       |----------
                                       |  ag7
                                       |----------
My current list of possible goals is :
     write_mas_paper(20) achievable
     review_oop_paper(10) achievable
     write_ss_mas_paper(30) non achievable

===== Deciding about goals ...

The goal selected is : write_mas_paper (20) 

===== Reasoning about plans ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
                   |                   process_latex().
                   |----------  process_latex
                             |----------  ag6
                                       |----------
                                       |  ag7
                                       |----------

My current list of possible plans is:
     write_mas_paper:=write_mas_section(), process_latex().(20) feasible

===== Deciding about plans ...

The plan selected is :
     write_mas_paper:=write_mas_section(), process_latex(). (20) 

===== Reasoning about partners ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
                   |                   process_latex().
                   |----------  process_latex
                             |----------  ag6
                                       |----------
                                       |  ag7
                                       |----------

My goal situation is DEP
My needed action  is process_latex
My current list of partners is :
     (ag6) UD NONE NONE 
     (ag7) UD NONE NONE 

===== Deciding about partners ...

The partner selected is : (ag6) UD NONE NONE 

===== Sending a message ...

---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec,put,moi,proposal,

There is only one possible transition in the protocol

===== Trying to receive a message ...

6.6
This example shows that the social reasoning mechanism allows an agent to adapt effectively to an open MAS. Compared to the results presented in paragraph 6.3, ag5 now chooses to achieve another goal (write_mas_paper), which has a higher importance than the previously chosen goal (review_oop_paper) -- 20 instead of 10. This new goal has become achievable with the arrival of agents ag6 and ag7.

6.7
Suppose that ag6 has the passive behaviour presented next:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): n

Do you want the agent to leave the society? (y/n): n

===== Inferring properties about other agents ...

Do you want the agent to infer in this cycle? (y/n): n

===== Perceiving properties of other agents ...

Do you want the agent to perceive in this cycle? (y/n): n

===== Trying to receive a message ...

The message received is:
     ( PROPOSAL < ag5 polaris.imag.fr 13892 > 
     (write_mas_paper process_latex UD NONE NONE )

===== Reasoning about messages ...

I have received a proposal of coalition: 
     ( PROPOSAL < ag5 polaris.imag.fr 13892 >
     (write_mas_paper process_latex UD NONE NONE )
The partner has not offered any goal

===== Deciding about proposals ...

I will refuse the proposal, because there is nothing being proposed to me

===== Sending a message ...

Si error
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=obs,put,moi,revision,
Si !error&&!best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,refusal,
Si !error&&best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
----> Type : answer
---> Ressource : matter=dec,put,moi,acceptance,

The transition chosen is !error&&!best_option

6.8
Agent ag6 has refused to take part in the coalition, because ag5 has not offered anything in exchange[11]. Once the message of refusal is received, ag5 takes ag6 out of the list of possible partners. He will then try to form a coalition with ag7. The latter will also refuse to take part in the coalition, for the same reasons as ag6. Following the description of the active behaviour presented in paragraph 5.4, ag5 will restart reasoning about plans, because his list of possible partners is now empty. Since he has only one plan for the committed goal, he will then restart reasoning about goals. As a result, he will choose to pursue the only achievable goal (review_oop_paper), as in the scenario presented in paragraph 6.3:
===== Trying to receive a message ...

The message received is:
     ( REFUSAL < ag6 polaris.imag.fr 13893 >  )

===== Reasoning about messages ...

The partner has refused to form a coalition

===== Reasoning about partners ...

The committed partner has refused to form a coalition
Removing the partner (ag6) from the list of possible partners
My current list of partners is :
     (ag7) UD NONE NONE 

===== Deciding about partners ...

The partner selected is : (ag7) UD NONE NONE 

===== Sending a message ...

---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec put moi proposal 

There is only one possible transition in the protocol

===== Trying to receive a message ...

The message received is:
     ( REFUSAL < ag7 polaris.imag.fr 13894 >  )

===== Reasoning about messages ...

The partner has refused to form a coalition

===== Reasoning about partners ...

The committed partner has refused to form a coalition
Removing the partner (ag7) from the list of possible partners
My current list of possible partners is empty

===== Deciding about partners ...

There are no more partners for the committed plan

===== Reasoning about plans ...

The committed plan is no longer feasible
Removing the plan 
     write_mas_paper:=write_mas_section(), process_latex(). 
from the list of possible plans
My current list of possible plans is empty

===== Deciding about plans ...

I do not have any more plans to achieve the committed goal

===== Reasoning about goals ...

The committed goal is no longer achievable
Removing the goal write_mas_paper from the list of possible goals
My current list of possible goals is :
     review_oop_paper(10) achievable

===== Deciding about goals ...

The goal selected is : review_oop_paper (10)

Example Of Successful Coalition

6.9
Suppose now that agent ag8 enters the society. Once the presentation protocol is over, the new active behaviour of ag5 is the following:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): y

===== Reasoning about goals ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
         |         |                   process_latex().
         |         |----------  process_latex
         |                   |----------  ag6
         |                             |----------
         |                             |  ag7
         |                             |----------
         |                             |  ag8
         |                             |----------
         |  review_oop_paper (10)
         |----------  review_oop_paper:=analyse_oop_paper().
         |         |----------  A-AUTONOMOUS
         |                   |----------
         |  write_ss_mas_paper (30)
         |----------  write_ss_mas_paper:=write_ss_section(), 
                   |                      write_mas_section(), 
                   |                      process_latex().
                   |----------  write_ss_section
                             |**********  UNKNOWN
                             |         |----------
                             |  process_latex
                             |**********  ag6
                                       |----------
                                       |  ag7
                                       |----------
                                       |  ag8
                                       |----------
My current list of possible goals is :
     write_mas_paper(20) achievable
     review_oop_paper(10) achievable
     write_ss_mas_paper(30) non achievable

===== Deciding about goals ...

The goal selected is : write_mas_paper (20) 

===== Reasoning about plans ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
                   |                   process_latex().
                   |----------  process_latex
                             |----------  ag6
                                       |----------
                                       |  ag7
                                       |----------
                                       |  ag8
                                       |----------

My current list of possible plans is:
     write_mas_paper:=write_mas_section(), process_latex().(20) feasible

===== Deciding about plans ...

The plan selected is :
     write_mas_paper:=write_mas_section(), process_latex(). (20) 

===== Reasoning about partners ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
                   |                   process_latex().
                   |----------  process_latex
                             |----------  ag6
                                       |----------
                                       |  ag7
                                       |----------
                                       |  ag8
                                       |----------
My goal situation is DEP
My needed action  is process_latex
My current list of partners is :
     (ag6) UD NONE NONE 
     (ag7) UD NONE NONE 
     (ag8) LBMD write_mas_paper write_mas_section 

===== Deciding about partners ...

The partner selected is : (ag8) LBMD write_mas_paper write_mas_section 

===== Sending a message ...

---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec put moi proposal 

There is only one possible transition in the protocol

===== Trying to receive a message ...

6.10
ag5 now chooses to interact with ag8 since, according to the criterion of choice of partners (Sichman and Demazeau, 1995b), an LBMD is a better dependence situation than a UD. Once more, the social reasoning mechanism enables the agent to adapt to the changing conditions of the society.

6.11
The passive behaviour of ag8 is the following:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): n

Do you want the agent to leave the society? (y/n): n

===== Inferring properties about other agents ...

Do you want the agent to infer in this cycle? (y/n): n

===== Perceiving properties of other agents ...

Do you want the agent to perceive in this cycle? (y/n): n

===== Trying to receive a message ...

The message received is:
     ( PROPOSAL < ag5 polaris.imag.fr 13892 > 
     (write_mas_paper process_latex LBMD write_mas_paper write_mas_section )

===== Reasoning about messages ...

I have received a proposal of coalition: 
     ( PROPOSAL < ag5 polaris.imag.fr 13892 > 
     (write_mas_paper process_latex LBMD write_mas_paper write_mas_section )

My dependence network is:
  ag8
 <ag8>
----------  write_mas_paper (10)
         |----------  NO-PLANS
                   |----------

My goal situation is NP

===== Deciding about proposals ...

I will accept the proposal, because I do not have a plan for this goal

===== Sending a message ...

Si error
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=obs,put,moi,revision,
Si !error&&!best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,refusal,
Si !error&&best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,acceptance,

The transition chosen is !error&&best_option

6.12
As explained before in paragraph 3.7, ag8 agrees to take part in the coalition, since his goal situation for this goal is NP. The acceptance message handling of agent ag5 is shown below:
===== Trying to receive a message ...

The message received is:
     ( ACCEPTANCE < ag8 polaris.imag.fr 13895 >  )

===== Reasoning about messages ...

*** The partner has accepted to form a coalition ***

Example Of Belief Revision

6.13
Suppose now that agents ag6, ag7 and ag8 leave the society, and that agent ag9 enters the system. Also suppose that ag5 infers that ag9 is able to perform actions write_ss_section and process_latex. For instance, considering action write_ss_section, the inference may be carried out as follows:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): n

Do you want the agent to leave the society? (y/n): n

===== Inferring properties about other agents ...

Do you want the agent to infer in this cycle? (y/n): y

Type the name of the agent to be selected: ag9

Inference may be about goals, actions, resources or plans

Type the selected option (G/A/R/P): a

The current actions of agent < ag9 polaris.imag.fr 14015 > are:

Entries may be inserted or removed

Type the selected option (I/R): i

Type the INCOMPLETE ACTION: write_ss_section

Type the cost  of the ACTION: 10

===== Reasoning about the others ...

I must revise the following information:
     ( (ag9) ACTION INCOMPLETE write_ss_section )

===== Deciding about the others ...

Incomplete information is always updated

===== Revising information about the Others ...

Updating the external description
Action write_ss_section was included 
in the external description entry of agent
     < ag9 polaris.imag.fr 14015 >

6.14
The corresponding inference for action process_latex is similar and, therefore, will not be presented here. Now, ag5 will choose the most important of his goals to achieve (write_ss_mas_paper), since he believes that the latter has become achievable, as shown below:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): y

===== Reasoning about goals ...

My dependence network is:
  ag5
 <ag5>
----------  write_mas_paper (20)
         |----------  write_mas_paper:=write_mas_section(), 
         |         |                   process_latex().
         |         |----------  process_latex
         |                   |----------  ag9
         |                             |----------
         |  review_oop_paper (10)
         |----------  review_oop_paper:=analyse_oop_paper().
         |         |----------  A-AUTONOMOUS
         |                   |----------
         |  write_ss_mas_paper (30)
         |----------  write_ss_mas_paper:=write_ss_section(), 
                   |                      write_mas_section(), 
                   |                      process_latex().
                   |----------  write_ss_section
                             |**********  ag9
                             |         |----------
                             |  process_latex
                             |**********  ag9
                                       |----------

My current list of possible goals is :
     write_mas_paper(20) achievable
     review_oop_paper(10) achievable
     write_ss_mas_paper(30) achievable

===== Deciding about goals ...

The goal selected is : write_ss_mas_paper (30) 

===== Reasoning about plans ...

My dependence network is:
  ag5
 <ag5>
----------  write_ss_mas_paper (30)
         |----------  write_ss_mas_paper:=write_ss_section(), 
                   |                      write_mas_section(), 
                   |                      process_latex().
                   |----------  write_ss_section
                             |**********  ag9
                             |         |----------
                             |  process_latex
                             |**********  ag9
                                       |----------

My current list of possible plans is:
     write_ss_mas_paper:=write_ss_section(), write_mas_section(), 
                         process_latex().(30) feasible

===== Deciding about plans ...

The plan selected is :
     write_ss_mas_paper:=write_ss_section(), write_mas_section(), 
                         process_latex(). (30) 

===== Reasoning about partners ...

My dependence network is:
  ag5
 <ag5>
----------  write_ss_mas_paper (30)
         |----------  write_ss_mas_paper:=write_ss_section(), 
                   |                      write_mas_section(), 
                   |                      process_latex().
                   |----------  write_ss_section
                             |**********  ag9
                             |         |----------
                             |  process_latex
                             |**********  ag9
                                       |----------

My goal situation is DEP
My needed action  is write_ss_section
My current list of partners is :
     (ag9) LBMD write_ss_mas_paper write_mas_section 

===== Deciding about partners ...

The partner selected is : (ag9) LBMD write_ss_mas_paper write_mas_section 

==== Sending a message ...

---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec,put,moi,proposal,

There is only one possible transition in the protocol

===== Trying to receive a message ...

6.15
When ag9 receives this coalition proposal, he becomes aware that ag5 has a false belief regarding his capability to perform action write_ss_section. Therefore, he will inform ag5 of this fact, allowing the latter to revise his beliefs:
===== Initial state ...

Do you want the agent to be active in this cycle? (y/n): n

Do you want the agent to leave the society? (y/n): n

===== Inferring properties about other agents ...

Do you want the agent to infer in this cycle? (y/n): n

===== Perceiving properties of other agents ...

Do you want the agent to perceive in this cycle? (y/n): n

===== Trying to receive a message ...

The message received is:
     ( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_ss_mas_paper 
     write_ss_section LBMD write_ss_mas_paper write_mas_section )

===== Reasoning about messages ...

I have received a proposal of coalition: 
     ( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_ss_mas_paper 
     write_ss_section LBMD write_ss_mas_paper write_mas_section )

My dependence network is:
  ag9
 <ag9>
----------  write_ss_mas_paper (10)
         |----------  NO-PLANS
                   |----------

My goal situation is NP

===== Deciding about proposals ...

I will refuse the proposal, because I do not have the needed action

===== Sending a message ...

Si error
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=obs,put,moi,revision,
Si !error&&!best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,refusal,
Si !error&&best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,acceptance,

The transition chosen is error

6.16
In order to show the close relation between social action and belief revision, a context choice criterion was defined in Sichman (1995) and Sichman and Demazeau (1996), based on both the sincerity (P2) and self-knowledge (P3) principles. Briefly, an agent will prefer to believe the information about agent i gathered by communication with agent i himself. A more comprehensive description of this context choice criterion may be found in Sichman (1995) and Sichman and Demazeau (1996).

Adopting this criterion of context choice, ag5 will prefer to drop action write_ss_section from the external description entry of ag9, because the new information source is more credible than the previous one. This procedure is shown next:

===== Trying to receive a message ...

The message received is:
     ( REVISION < ag9 polaris.imag.fr 14015 > 
     ( (ag9) ACTION INCORRECT write_ss_section )

===== Reasoning about messages ...

The partner has asked me to do a revision

===== Reasoning about the others ...

I must revise the following information:
     ( (ag9) ACTION INCORRECT write_ss_section )

===== Deciding about the others ...

Topic is (ag9)
Previous source was: <Inference>
New source is <Communication(ag9)>
New source is preferable

===== Revising information about the Others ...

Updating the external description
Action write_ss_section was removed from 
the external description entry of agent
     < ag9 polaris.imag.fr 14015 >

6.17
After updating his external description, ag5 would successively try to: (i) get a new partner for this plan, (ii) get a new plan for this goal and (iii) get a new goal to pursue. Consequently, as in paragraph 6.5, he will finally choose to pursue goal review_oop_paper.
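
The context choice illustrated in this example can be sketched as follows. The numeric ranking of the information sources is an assumption of the sketch: the paper states only that information about an agent communicated by that agent himself is the most credible, and that incomplete information is always added.

def source_rank(source, topic_agent):
    """Illustrative credibility ranking of information sources about topic_agent."""
    if source == ("communication", topic_agent):
        return 3                       # the agent talks about himself: most credible
    if source[0] == "perception":
        return 2
    return 1                           # inference

def revise(entry, kind, item, new_source, old_source, topic_agent):
    """Revise one piece of an external description entry (a set of names).

    kind is 'INCOMPLETE' (item missing from the entry) or 'INCORRECT'
    (item wrongly present); ties between sources are resolved here in favour
    of the new source, which is another assumption of the sketch.
    """
    if kind == "INCOMPLETE":
        entry.add(item)                # incomplete information is always updated
    elif kind == "INCORRECT":
        if source_rank(new_source, topic_agent) >= source_rank(old_source, topic_agent):
            entry.discard(item)        # the new source is preferable
    return entry

# ag5's revision in paragraph 6.16: drop write_ss_section from ag9's entry.
ag9_actions = {"write_ss_section", "process_latex"}
print(revise(ag9_actions, "INCORRECT", "write_ss_section",
             new_source=("communication", "ag9"),
             old_source=("inference", None),
             topic_agent="ag9"))
# -> {'process_latex'}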

* Conclusions

7.1
The main features of the DEPINT system were presented in this paper. It is a multi-agent system conceived to illustrate some essential aspects of a social reasoning mechanism (Sichman, 1995), based on the notion of social dependence (Castelfranchi et al., 1992). This mechanism enables an agent to adapt to the changing conditions of an open MAS by choosing different goals, plans and partners according to the circumstances of the society. It was also shown that the notion of dependence situations may be used to guide agents in the search for the most susceptible partner to form a coalition. Finally, concerning belief revision, the social reasoning mechanism allows an agent to detect that his representation of the others is inconsistent and to revise it. In this way, as also shown in Conte and Castelfranchi (1992), social action and belief revision are highly interrelated mechanisms: an agent uses his beliefs about the others to interact with the others and the results of this interaction may lead him to revise his beliefs.

7.2
No other system is known to use the notion of social dependence to form coalitions dynamically in an open MAS context. The great majority of cooperative problem solving methods found in the literature (e.g. Wooldridge and Jennings, 1994) are restricted to a formal level. Moreover, the DEPINT system is the first one to implement a subjective representation of agents' dependence networks, within their "minds", in contrast to other approaches which represent similar notions in an objective way (e.g. Yu and Mylopoulos, 1993, Carle et al., 1994).

7.3
Comparing the dependence-based coalition formation model presented in the DEPINT system with the contract net model (Smith, 1980), one may notice that the former involves a lower volume of global communication. Even though each DEPINT agent broadcasts a presentation message to every other agent at the moment of his entry, this communication is done only once; agents may then take this information into account to update the information they have about one another and to limit the set of agents to whom a coalition proposal is sent whenever they cannot achieve one of their goals by themselves.
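
The difference in communication volume can be seen in the way the recipients of a proposal are selected. The fragment below is an illustrative sketch, assuming a hypothetical layout for the external description (a mapping from agent identifiers to the actions they are believed to perform); it is not the DEPINT data structure itself.

    # Hypothetical sketch: instead of broadcasting a task announcement as in
    # the contract net, send the proposal only to the agents believed to
    # have the missing action, according to the external description.
    def proposal_recipients(external_description, needed_action):
        return [agent for agent, actions in external_description.items()
                if needed_action in actions]

    # Purely illustrative external description built from the one-shot
    # presentation messages received when each agent entered the society:
    ext = {"ag1": {"a1", "a2"}, "ag2": {"a2"}, "ag3": {"a3"}}
    proposal_recipients(ext, "a2")                      # -> ["ag1", "ag2"]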

7.4
The effects of relaxing some of the model's basic principles will be studied in the near future. As a first example, relaxing the principle of self-knowledge (P3) could enable the agents to have a kind of learning mechanism: an agent could be told by the others that some of his capabilities (i.e., actions/resources) could be useful to achieve different goals. As a second example, relaxing the principle of sincerity (P2) could enable the study of some additional social behaviours, such as manipulation and cheating. However, in order to do so, the belief revision criterion, which is partially based on this principle (Sichman and Demazeau, 1996), would need to be changed.

7.5
Several aspects of the DEPINT system could be improved. For instance, an extension to coalitions with more than two partners is quite easy to implement (Sichman, 1995): one has to store partial "engagements" to a certain goal. If some needed action is not available, these partial engagements must be broken. The very notion of engagement can be taken into account in the criterion for the acceptance of partners: an agent may refuse to take part in a coalition because he is already taking part in another one for the same goal. In addition, one can better exploit internal mechanisms such as inference and perception within the agents of the system.
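
As an illustration of the multi-partner extension, the sketch below stores partial engagements per goal; it is a hypothetical structure suggested by the description above, not part of the current implementation. Partial engagements are dropped when a needed action turns out to be unavailable, and an agent already engaged for a goal can use the book to refuse further proposals concerning that goal.

    # Hypothetical sketch of partial "engagements" for multi-partner coalitions.
    class EngagementBook:
        def __init__(self):
            self._engagements = {}                 # goal -> set of (partner, action)

        def engage(self, goal, partner, action):
            self._engagements.setdefault(goal, set()).add((partner, action))

        def break_engagements(self, goal):
            # Some needed action is unavailable: drop the partial engagements.
            self._engagements.pop(goal, None)

        def already_engaged(self, goal):
            # Possible criterion for refusing a proposal concerning the same goal.
            return bool(self._engagements.get(goal))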

* Acknowledgements

The work described in this paper was partially developed between 1991 and 1995, during the author's PhD program at LIFIA Laboratory, Grenoble, France, when he was supported by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo, Brazil), grant number 91/1943-5. Currently, the author is partially supported by CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, Brazil), grant number 301041/95-4. The author would like to thank Yves Demazeau (LEIBNIZ Institute, Grenoble, France), Rosaria Conte and Cristiano Castelfranchi (Istituto di Psicologia del CNR, Rome, Italy) for the useful discussions carried out over this period. Finally, the author would like to thank the anonymous referees and Suely Pfeferman Kagan for their invaluable comments.

* Notes

1 We call an MAS 'open' when agents may enter or leave the society at any moment, without any global control. This framework is developed to cope with this kind of system.

2 For simplicity's sake, the notion of resource will not be used in this example.

3 For simplicity's sake, the term a-autonomous is here used as a synonym of autonomous. A more comprehensive definition of these terms may be found in Sichman (1995).

4 In this framework, agents do not perform on-line planning; they use predefined plans, in a case-based reasoning style.

5 One must remember that the external description is a private structure and, as a consequence, agents may have incorrect beliefs about others.

6 The term mutual belief as used in this context does not denote the notion found in the literature (e.g. Levesque et al., 1990). Here it denotes the fact that the reasoning agent believes that his partner is also aware of their bilateral dependence relation.

7 Currently, only some parts of this development environment are fully implemented.

8 The extension to multi-partners is quite simple and is detailed in Sichman (1995).

9 Currently, there is no domain level inference in the system. Both the inference and perception mechanisms are restricted to the information stored in the external description.

10 For the sake of clarity, the results of this simulation phase are not presented.

11 This fact is justified by the non-benevolence principle (P1).

----

* References

AXELROD, R. 1984. The Evolution of Cooperation. New York: Basic Books.

BOISSIER, Olivier. 1993 (January). Problème du Contrôle dans un Système Integré de Vision. Utilisation d'un Système Multi-Agents. Thèse de Doctorat, Institut National Polytechnique de Grenoble, Grenoble, France.

BOISSIER, Olivier and Demazeau, Yves. 1994 (August). ASIC: An Architecture for Social and Individual Control and its Application to Computer Vision. Pages 107-118 of: Proceedings of the 6th European Workshop on Modelling Autonomous Agents in a Multi-Agent World.

CAMPBELL, John A. and D'Inverno, Mark P. 1990. Knowledge Interchange Protocols. Pages 63-80 of: Demazeau, Yves and Muller, Jean-Pierre (eds.), Decentralized A. I. Amsterdam, NL: Elsevier Science Publishers B. V.

CARDOZO, Eleri and Sichman, Jaime Simão and Demazeau, Yves. 1993 (November). Using the active object model to implement multi-agent systems. Pages 70-77 of: Proceedings of the 5th IEEE International Conference on Tools with Artificial Intelligence.

CARLE, Patrice and Collinot, Anne and Zeghal, Karim. 1994 (December). Concevoir des Organisations: La Méthode Cassiopée. In: Actes de la 3ème Journée Systèmes Multi-Agents du PRC-GDR Intelligence Artificielle.

CASTELFRANCHI, Cristiano. 1990. Social Power: A Point Missed in Multi-Agent, DAI and HCI. Pages 49-62 of: Demazeau, Yves and Muller, Jean-Pierre (eds.), Decentralized A. I. Amsterdam, NL: Elsevier Science Publishers B. V.

CASTELFRANCHI, Cristiano and Micelli, Maria and Cesta, Amedeo. 1992. Dependence Relations Among Autonomous Agents. Pages 215-227 of: Werner, Eric and Demazeau, Yves (eds.), Decentralized A. I. 3. Amsterdam, NL: Elsevier Science Publishers B. V.

CONTE, Rosaria and Sichman, Jaime Simão. 1995. DEPNET: How to benefit from social dependence. Journal of Mathematical Sociology, 20(2-3), 161-177.

CONTE, Rosaria and Castelfranchi, Cristiano. 1992 (April). Mind is not Enough: Precognitive Bases of Social Interaction. Pages 93-110 of: Proceedings of 1992 Symposium on Simulating Societies.

DEMAZEAU, Yves and Boissier, Olivier and Koning, Jean-Luc. 1994 (October). Using interaction protocols to control vision systems. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics.

DEMAZEAU, Yves. 1995 (March). From interactions to collective behaviour in agent-based systems. In: Proceedings of the 1st European Conference on Cognitive Science.

GASPAR, Graça. 1991. Communication and Belief Changes in a Society of Agents: Towards a Formal Model of an Autonomous Agent. Pages 245-255 of: Demazeau, Yves and Muller, Jean-Pierre (eds.), Decentralized A. I. 2. Amsterdam, NL: Elsevier Science Publishers B. V.

LEVESQUE, Hector J. and Cohen, Philip R. and Nunes, José H. T. 1990. On acting together. Pages 94-99 of: Proceedings of the 8th National Conference on Artificial Intelligence. Boston: Morgan Kaufmann Publishers, Inc.

LUCE, R. D. and Raiffa, H. 1957. Games and Decisions: Introduction and Critical Survey. John Wiley & Sons Ltd.

MINSKY, Naftaly H. 1989 (April). The Imposition of Protocols over Open Distributed Systems. Technical report LCSR-TR-154. Laboratory for Computer Science Research, Rutgers University, New Jersey, USA.

POPULAIRE, Philippe and Boissier, Olivier and Sichman, Jaime Simão. 1993 (April). Description et Implementation de Protocoles de Communication en Univers Multi-Agents. Pages 241-252 of: Actes des 1ères Journées Francophones Intelligence Artificielle Distribuée & Systèmes Multi-Agents.

SEARLE, John. 1969. Speech Acts. Cambridge University Press.

SICHMAN, Jaime Simão. 1995. Du Raisonnement Social Chez les Agents: Une Approche Fondée sur la Théorie de la Dépendance. Thèse de Doctorat, Institut National Polytechnique de Grenoble, Grenoble, France.

SICHMAN, Jaime Simão. 1996 (October). On achievable goals and feasible plans in open multi-agent systems. Pages 16-30 of: Proceedings of the 1st Ibero-American Workshop on DAI/MAS.

SICHMAN, Jaime Simão and Demazeau, Yves. 1995. Exploiting Social Reasoning to Deal with Agency Level Inconsistency. Pages 352-359 of: Proceedings of the 1st International Conference on Multi-Agent Systems. San Francisco, USA: MIT Press.

SICHMAN, Jaime Simão and Demazeau, Yves. 1995. Exploiting Social Reasoning to Enhance Adaptation in Open Multi-Agent Systems. Pages 253-263 of: Wainer, Jacques and Carvalho, Ariadne (eds.), Advances in AI. Lecture Notes in Artificial Intelligence, vol. 991. Berlin, DE: Springer-Verlag.

SICHMAN, Jaime Simão and Demazeau, Yves. 1996. A model for the decision phase of autonomous belief revision in open multi-agent systems. Journal of the Brazilian Computer Society, 3(1), 40-50.

SKVORETZ, John and Willer, David. 1993. Exclusion and power: A test of four theories of power in exchange networks. American Sociological Review, 58(December), 801-818.

SMITH, Reid G. 1980. The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, 29(12), 1104-1113.

WOOLDRIDGE, Michael and Jennings, Nicholas R. 1994 (August). Towards a Theory of Cooperative Problem Solving. Pages 15-26 of: Proceedings of the 6th European Workshop on Modelling Autonomous Agents in a Multi-Agent World.

YU, Eric S. K. and Mylopoulos, John. 1993. An Actor Dependency Model of Organizational Work with Application to Business Process Reengineering. Pages 258-268 of: Proceedings of the Conference on Organizational Computing Systems (COOCS'93). Milpitas, CA: ACM Press.

----


© Copyright Journal of Artificial Societies and Social Simulation, 1998