- When designing an agent-based simulation, an important question to answer is how to model the decision making processes of the agents in the system. A large number of agent decision making models can be found in the literature, each inspired by different aims and research questions. In this paper we provide a review of 14 agent decision making architectures that have attracted interest. They range from production-rule systems to psychologically- and neurologically-inspired approaches. For each of the architectures we give an overview of its design, highlight research questions that have been answered with its help and outline the reasons for the choice of the decision making model provided by the originators. Our goal is to provide guidelines about what kind of agent decision making model, with which level of simplicity or complexity, to use for which kind of research question.
- Keywords: Decision Making, Agents, Survey
Introduction: Purpose & Goals
In computational social science in general and in the area of agent-based
social simulation (ABSS) in particular, there is an ongoing
discussion on how best to model human decision making.
The reason for this is that although human decision making is
very complex, most computational models of it are rather simplistic.
As with any good scientific model, when modelling humans, the
modelled entities should be equipped with just those properties and behavioural
patterns of the real humans they represent that are relevant to the
given scenario, and no more.
The question therefore is “What is a good (computational) model of a human (and
its decision making) for what kind of research question?”
A large number of
architectures and models that try to represent human decision making
have been designed for ABSS. Despite their common goal, each architecture
has slightly different aims and as a
consequence incorporates different assumptions and simplifications. Being aware
of these differences is therefore important when selecting an agent
decision making model in an ABSS.
Due to their number, we are not able to review all existing
models and architectures for human decision making in this paper. Instead we
have selected examples of established models and architectures as well as
some others that have attracted attention. We have aimed
to cover a diversity of models and architectures, ranging from simple production
rule systems (Section 3) to complex
psychologically-inspired cognitive ones (Section
7). In our selection we focussed on
(complete) agent architectures, which is why we did not include any
approaches (e.g. learning algorithms) which focus only on parts of an agent's decision making process.
For each architecture, we outline research questions that have been answered
with its help and highlight reasons for the choices made by the authors.
Using the overview of existing systems, in Section
8 we aim to fulfil our overall goal, to
provide guidelines about which kind of agent decision making model, with which level
of simplicity or complexity, to use for which kind of research question or
application. We hope that these guidelines will help researchers to identify where the
particular strengths of different agent architectures lie, and provide an
overview that will help researchers decide which agent architecture to pick when in doubt.
The paper is structured as follows: In the next section we provide a
discussion of the most common topics and foci of ABSS. The discussion
will be used to determine dimensions for classifying different agent
decision making models as well as outlining their suitability for particular
research questions. We then present production rule systems (Section
3) and deliberative agent
models, in particular the belief-desire-intention idea and its derivatives (Section 4).
Section 5 is on models that
rely on norms as the main aspect of their deliberative component. Sections 6 and
7 review cognitive agent decision
making models. Section 6 focuses on “simple”
cognitive architectures, while Section 7 has a
closer look at psychological and neurological-inspired models.
The paper closes with a discussion of the findings and provides a
comparison of the models along a number of dimensions. These dimensions
include the applications and research questions the different models might
be used for. The section also points out issues that are not covered
by any of the architectures, in order to encourage research in these areas.
- In order to be able to discuss the suitability of different agent
architectures for different kinds of ABSS, a question to be answered is what
kinds of ABSS exist and are of interest to the ABSS community.
Starting from general ABSS overview articles (e.g. Gilbert (2004),
Meyer et al. (2009) or Helbing & Balletti (2011)) we filter out current research topics
and derive agent properties that might be of interest to
answer typical research questions within these general topics. The derived
properties are then used to define dimensions that are used in the following
sections to compare the different agent architectures.
One of the earlier attempts to categorize ABSS was conducted
by Gilbert (2004, p.6). He outlines five high-level dimensions by which
ABSS in general can be categorised, including for example the degree to which
ABSS attempt to incorporate the detail of particular targets. The last of his dimensions deals with agents (and indirectly
their decision making), by comparing ABSS by means of the complexity of
the agents they involve. According to Gilbert this complexity of agents might
range from “production system architectures” (i.e. agents that follow simple
IF-THEN rules) to agents with sophisticated cognitive architectures such as SOAR
or ACT-R. Looking at the suitability of these different architectures for
different research questions/topics, Gilbert cites Carley et al. (1998), who
describe a number of experiments comparing models of organizations using agents
of varying complexity. In these cited experiments Carley et al. (1998) conclude
that simpler models of agents are better suited if the objective of the ABSS is
to predict the behaviour of the organization as a whole (i.e. the macro-level
behaviour), whereas complex and more cognitively accurate architectures were
needed to accurately represent and predict behaviour at the individual or small-group level.
A slightly different distinction is offered by Helbing & Balletti (2011), who
propose three categories:
- Physical models that assume that individuals are mutually reactive to current (and/or past) interactions.
- Economic models that assume that individuals respond to their future expectations and take decisions in a selfish way.
- Sociological models that assume that individuals respond to their own and other people's expectations (and their past experiences as well). Helbing & Balletti (2011, p.4)
In Helbing's classification, simple agent architectures such as rule-based
production systems would be best suited for the physical models and the
complexity and ability of the agents would need to increase when moving to the
sociological models. In these sociological models, the focus on modelling social
(human) interaction might necessitate that agents can perceive the social
network in which they are embedded, that they are able to communicate, or even
that they are capable of more complex social concepts such as the Theory of
Mind or We-intentionality.
Summarising, we identify two major dimensions that seem
useful for distinguishing agent architectures:
- The Cognitive Level of the agents, i.e. whether they are purely reactive, have some form of deliberation, simple cognitive components or are psychologically or neurologically inspired (to represent human decision making as closely as possible), and
- The Social Level of the agents, i.e. the extent to which they are able to distinguish social network relations (and status), what levels of communication they are capable of, whether they have a theory of mind, and to what degree they are capable of complex social concepts such as we-intentionality.
Another way of categorizing ABSS is in terms of application areas. Axelrod & Tesfatsion (2006) list:
(i) Emergence and Collective behaviour, (ii) Evolution, (iii) Learning, (iv) Norms, (v) Markets, (vi) Institutional Design, and (vii) (Social) Networks as examples of application areas.
Other candidates for dimensions to distinguish agent architectures include:
- Whether agents are able to reason about (social) norms, institutions and organizational structures; what impact norms, policies, institutions and organizational structures have on the macro-level system performance; and how to design normative structures that support the objectives of the systems designer (or other stakeholders); and
- Whether agents can learn and, if so, on what kind of level they are able to learn; e.g. are the agents only able to learn better values for their decision functions, or can they learn new decision rules?
The final dimension we shall use is the affective level an
agent is capable of expressing. This dimension results from a discussion of current
publication trends in the Journal of Artificial Societies and Social Simulation
(JASSS) provided by Meyer et al. (2009). Meyer et al. use a
co-citation analysis of highly-cited JASSS papers to visualize the thematic
structure of social simulation publications. Most of the
categories found thereby are similar to Axelrod & Tesfatsion (2006). However, they also
include emotions as an area of research. We cover this
aspect with a dimension indicating the extent that an agent architecture
can be used to express emotions. Research questions
that might be answered with this focus on affective components are, for example,
how emotions can be triggered, how they influence decision making and what a
change in the decision making of agents due to emotions implies for the system
as a whole.
Summing up, the five main dimensions shown in Table
1 can be used to classify ABSS work in general and
are therefore used for distinguishing agent architectures in this paper.
Table 1: Dimensions for Comparison
- Cognitive: What kind of cognitive level does the agent architecture allow for: reactive agents, deliberative agents, simple cognitive agents or psychologically or neurologically-inspired agents?
- Affective: What degree of representing emotions (if any at all) is possible in the different architectures?
- Social: Do the agent architectures allow for agents capable of distinguishing social network relations (and status), what levels of communication can be represented, and to what degree can one use the architectures to represent complex social concepts such as the theory of mind or we-intentionality?
- Norm Consideration: To what degree do the architectures allow the modelling of agents which are able to explicitly reason about formal and social norms as well as the emergence and spread of the latter?
- Learning: What kind of agent learning is supported by the agent architectures?
- Production rule systems—which consist primarily of a set of behavioural “if-then” rules
(Nilsson 1977)—are symbolic systems (Chao 1968) that have their
origin in the 1970s, when artificial intelligence researchers began to
experiment with information processing architectures based on the matching of
patterns. Whereas many of the first production rule systems were applied
to rather simple puzzle or blocks world problems, they
soon became popular for more “knowledge-rich” domains such as automated
planning and expert systems.
Figure 1 shows the typical decision making cycle of a
production rule system. It determines what actions (output) are chosen by an
agent based on the input it perceives. Production rule systems
consist of three core components:
- A set of rules (also referred to as productions) that have the form Ci → Ai where Ci is the sensory precondition (also referred to as IF statement or left-hand side (LHS)) and Ai is the action that is to be performed if Ci holds (the THEN part or right-hand side (RHS) of a production rule).
- One or more knowledge databases that contain information relevant to the problem domain. One of these databases is the working memory, a short-term memory that maintains data about the current state of the environment or beliefs of the agent.
- A rule interpreter which provides the means to determine in which order rules are applied to the knowledge database and to resolve any conflicts between the rules that might arise.
The decision making cycle depicted in Figure 1 is referred
to as a forward chaining recognise-act cycle (Ishida 1994).
It consists of four basic stages in which all of the above-mentioned components are involved:
- Once the agent has made observations of its environment and these are translated to facts in the working memory, the IF conditions in the rules are matched against the known facts in the working memory to identify the set of applicable rules (i.e. the rules for which the IF statement is true). This may result in the selection of one or more production rules that can be “fired”, i.e. applied.
- If there is more than one rule that can be fired, then a conflict resolution mechanism is used to determine which to apply. Several algorithms, designed for different goals such as optimisation of time or computational resources, are available (McDermott & Forgy 1976). If there are no rules whose preconditions match the facts in the working memory, the decision making process is stopped.
- The chosen rule is applied, resulting in the execution of the rule's actions and, if required, the updating of the working memory.
- A pre-defined termination condition such as a defined goal state or some kind of resource limitation (e.g. time or number of rules to be executed) is checked, and if it is not fulfilled, the loop starts again with stage 1.
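The four stages above can be sketched in a few lines of code. The following is a minimal, illustrative forward-chaining recognise-act cycle; all names (`Rule`, `salience`, the salience-based conflict resolution) are our own assumptions and do not correspond to any particular production-rule engine.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    condition: frozenset   # facts that must all hold (the IF part / LHS)
    add: frozenset         # facts the action asserts (the THEN part / RHS)
    salience: int = 0      # priority used during conflict resolution

def run(rules, working_memory, goal, max_cycles=100):
    """Forward-chaining recognise-act loop over a set of rules."""
    wm = set(working_memory)
    for _ in range(max_cycles):
        if goal <= wm:                      # termination condition reached
            return wm
        # Stage 1: match IF parts against the working memory
        conflict_set = [r for r in rules
                        if r.condition <= wm and not r.add <= wm]
        if not conflict_set:                # no applicable rules: stop
            return wm
        # Stage 2: conflict resolution (here: highest salience wins)
        chosen = max(conflict_set, key=lambda r: r.salience)
        # Stage 3: fire the rule, updating the working memory
        wm |= chosen.add
    return wm                               # Stage 4: resource limit hit

rules = [
    Rule("wet-road",  frozenset({"raining"}),  frozenset({"road-wet"})),
    Rule("slow-down", frozenset({"road-wet"}), frozenset({"drive-slowly"}),
         salience=1),
]
print(run(rules, {"raining"}, goal={"drive-slowly"}))
```

Running the example fires "wet-road" first and then "slow-down", leaving the derived facts in the working memory; with more rules, the conflict-resolution step would matter more, as discussed above.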
One reason for the popularity of production rule
systems is their simplicity in terms of understanding the link between the
rules and their outcomes. In addition, the existence of convenient graphical
means to present decision processes (e.g. decision trees) has contributed to
their continuing use.
However, production rule systems have often been criticised as
inadequate for modelling human behaviour. The criticisms can be considered in relation to the dimensions introduced in the previous section.
The agents one can model with production rule systems
react with predefined rules to environmental events,
with no explicit deliberation or cognitive processes being available to them.
As such, production rule system agents typically are not capable of affective
behaviour, the understanding of and reaction to norms, the consideration of
social structures (including communication) or learning new rules or updating existing ones.
Of course, since a production rule system is Turing complete, it is in principle possible to
model such behaviours, but only at the cost of great complexity and using many rules.
This is a problem because the more rules in the system, the more
likely are conflicts between these rules and the more computation will be needed to resolve
these conflicts. This can result in long compute
times, which make production-rule systems difficult to use in
settings with large numbers of decision rules.
- With respect to implementation, Prolog, a general purpose logic programming language, and LISP are popular for production rule systems. In addition, several specialised software tools based on these languages are available. Two examples are JESS, a software tool for building expert systems (Friedman-Hill 2003), and JBOSS Drools, a business rule management system and an enhanced rule engine implementation.
- We next focus on a conceptual framework for human decision making that was developed roughly a decade after the production rule idea: the beliefs-desires-intentions (BDI) model.
- The Beliefs-Desires-Intentions (BDI) model, which was originally based on ideas
expressed by the philosopher Bratman (1987), is one of the most popular
models of agent decision making in the agents community (Georgeff et al. 1999).
It is particularly popular for constructing reasoning
systems for complex tasks in dynamic environments (Bordini et al. 2007).
In contrast to the production rule system presented earlier, the basic idea
behind BDI is that agents have a “mental state” as the basis for their
reasoning. As suggested by its name, the
BDI model is centred around three mental attitudes, namely beliefs, desires and,
especially, intentions (Figure 2). It is therefore
typically referred to as an “intentional system”.
Beliefs are the internalized information that the agent has about the world. These beliefs do not need to correspond with reality (e.g. the beliefs could be based on out-dated or incorrect information); it is only required that the agent considers its beliefs to be true. Desires are all the possible states of affairs that an agent might like to accomplish. They represent the motivational state of the agent. The notion of desires does not imply that the agent will act upon all these desires; rather, they present options that might influence an agent's actions. BDI papers often also refer to goals. Goals are states that an agent actively desires to achieve (Dignum et al. 2002). An intention is a commitment to a particular course of action (usually referred to as a plan) for achieving a particular goal (Cohen & Levesque 1990).
These three components are complemented by a library of plans.
The plans define procedural knowledge about low-level actions that are expected to
contribute to achieving a
goal in specific situations, i.e. they specify plan steps that define how to do things.
In the BDI framework, agents are typically able to reason about
their plans dynamically. Furthermore, they are able to reason about their own
internal states, i.e. reflect upon their own beliefs, desires and
intentions and, if required, modify these.
At each reasoning step, a BDI agent's beliefs are updated, based on its
perceptions. Beliefs in
BDI are represented as Prolog-like facts, that is, as atomic formulae
of first-order logic. The intentions to be achieved are pushed onto a stack,
called the intention stack. This stack contains all the intentions that
are pending achievement (Bordini et al. 2007). The agent then searches through
its plan library for any plans with a post-condition that matches the intention on top of the
intention stack. Any of these plans that have their
pre-conditions satisfied according to the agent's beliefs are considered
possible options for the agent's actions and intentions. From these options,
the agent selects the plans of highest relevance to its beliefs and intentions.
This is referred to as the deliberation process.
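The plan-selection part of this deliberation process can be sketched as follows. This is a hedged illustration of the matching described above, not the PRS implementation; the names (`Plan`, `precondition`, `postcondition`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Plan:
    name: str
    precondition: frozenset   # beliefs required for the plan to be applicable
    postcondition: str        # the intention/goal the plan achieves
    body: list                # plan steps (low-level actions)

def deliberate(beliefs, intention_stack, plan_library):
    """Return the plans that are possible options for the top-most
    pending intention: their post-condition matches the intention and
    their pre-condition is satisfied by the current beliefs."""
    if not intention_stack:
        return []
    top = intention_stack[-1]
    return [p for p in plan_library
            if p.postcondition == top and p.precondition <= beliefs]

library = [
    Plan("walk",  frozenset({"has-time"}), "at-work", ["leave", "walk"]),
    Plan("drive", frozenset({"has-car"}),  "at-work", ["leave", "drive"]),
]
beliefs = {"has-car"}
options = deliberate(beliefs, ["at-work"], library)
print([p.name for p in options])   # only "drive" is applicable here
```

A full BDI interpreter would then rank these options and push the chosen plan's sub-goals back onto the intention stack; this sketch only covers the matching step.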
In the Procedural Reasoning System (PRS) architecture—one of the first
implementations of BDI—this is done with the help of domain-specific meta-plans as well as information
about the goals to be achieved. Based on these goals and the plans'
information, intentions are generated, updated and then
translated into actions that are executed by the agents.
The initial applications using BDI were embedded
in dynamic and real-time environments (Ingrand et al. 1992) that required
features such as asynchronous event handling, procedural representation of
knowledge, handling of multiple problems, etc. As a result of these features,
one of the seminal applications to use an implementation of BDI
was a monitoring and fault detection system for the reaction
control system on the NASA space shuttle Discovery (Georgeff & Ingrand 1990).
For this purpose 100 plans and over 25 meta-level
plans (involving over 1000 facts about the system)
were designed. This demonstrates the size and complexity of the systems that BDI is
capable of dealing with. Another complex application using BDI
ideas is a network management monitor called the Interactive Real-time
Telecommunications Network Management System (IRTNMS) for Telecom Australia
(Rao & Georgeff 1991). Rao & Georgeff (1995) generalised BDI to be especially appropriate
for systems that are required to perform high-level management and control tasks in
complex dynamic environments as well as for systems where agents are required to
execute forward planning (Rao & Georgeff 1995).
Besides these rather technical applications, BDI has also been used for
more psychologically-inspired research. It formed the basis for a computational model
of child-like reasoning, CRIBB (Wahl & Spada 2000) and has also been used to develop
a rehabilitation strategy to teach autistic children to reason about other
people (Galitsky 2002).
A detailed discussion on modelling human behaviour with BDI agents can be found in Norling (2009).
Looking at BDI using the dimensions introduced in Section 2, at the cognitive level, BDI
agents can be both reactive and actively deliberate about intentions
(and associated plans). Hence, in contrast to production
rule systems, BDI architectures do not have to follow classical
IF-THEN rules, but can deviate if they perceive this is appropriate for the
intention. Although BDI agents differ conceptually in this way from production-rule systems, most BDI
implementations do not allow agents to deliberate actively about intentions
(Thangarajah et al. 2002). However, what is different to production rule systems is
that BDI agents are typically goal persistent. This means that if an agent for
some reason is unable to achieve a goal through a particular intention, it is able to reconsider the goal in the
current context (which is likely to have changed since it chose the original
course of action). Given the new context, a BDI agent is then able to attempt
to find a new course of action for achieving the goal. Only once a
goal has been achieved, or is deemed to be no longer relevant, does an agent abandon it.
A restriction of the traditional BDI approach is that it assumes
agents to behave (boundedly) rationally (Wooldridge 2000).
This assumption has been criticised for being “out-dated”
(Georgeff et al. 1999), resulting in several extensions of BDI, some of which
will be discussed below. Another point of criticism is that the traditional BDI model
does not provide any specification of
agent communication (or any other aspects at the social level)
and that—in its initial design—it does not provide an explicit mechanism
to support learning from past behaviour (Phung et al. 2005). Furthermore, normative
or affective considerations are not present in the traditional BDI model.
BDI itself is a theoretical model, rather than an implemented architecture. Numerous
implementations (and extensions) have been developed since its origins in
the mid-1980s. One of the first explicitly embodying the BDI
paradigm was the Procedural Reasoning System (PRS) architecture
(Georgeff & Lansky 1987; Ingrand et al. 1992).
PRS was initially implemented by the Artificial Intelligence Center at SRI
International during the 1980s. After it had been applied to the reaction control
system of the NASA Space Shuttle Discovery, it was
developed further at other institutes and re-implemented several
times, for example as the Australian Artificial
Intelligence Institute's distributed Multi-Agent Reasoning (dMARS) system
(d'Inverno et al. 1998; d'Inverno et al. 2004), the University of Michigan's C++
implementation UM-PRS (Lee et al. 1994) and their Java version called
JAM! (Huber 1999). Other
examples of BDI implementations include AgentSpeak (Rao 1996; Machado & Bordini 2001),
AgentSpeak(L) (Machado & Bordini 2001), JACK Intelligent Agents
(Winikoff 2005), the SRI Procedural Agent Realization Kit (SPARK)
(Morley & Myers 2004), JADEX (Pokahr et al. 2005) and 2APL (Dastani 2008).
- Emotional BDI (eBDI) (Pereira et al. 2005; Jiang & Vidal 2006) is one extension of the BDI
concept that tries to address the
rational agent criticism mentioned above. It does so by
incorporating emotions as one decision criterion into the agent's decision making process.
eBDI is based on the idea that in order to model human behaviour properly, one
needs to account for the influence of emotions (Kennedy 2012).
The eBDI approach, initially proposed by
Pereira et al. (2005), was implemented and extended with emotional maps for
emotional negotiation models by Jiang (2007) as part of her PhD work.
Although the idea of incorporating emotion into the agent
reasoning itself had been mentioned before eBDI by Padgham & Taylor (1996),
eBDI is the first architecture that accounts for emotions as mechanisms
for controlling the means by which agents act upon their environment.
As demonstrated by Pereira et al. (2008), eBDI includes an
internal representation of an agent's resources and capabilities, which,
according to the authors, allows for a better resource allocation in
highly dynamic environments.
The eBDI architecture is
considered to be a BDI extension by Pereira et al. (2005). They chose to extend the
BDI framework because of its high acceptance in the agent
community as well as its logical underpinning. The eBDI architecture is depicted in Figure 3.
Figure 3: The eBDI Architecture, reproduced from Pereira et al. (2005)
eBDI is centred around capabilities and resources as the basis for the internal
representation of emotions. Capabilities are abstract plans of actions which the
agent can use to act upon its environment.
In order to become specific plans, the abstract plans
need to be matched against the agent's ideas of ability and opportunity. This is done
with the help of resources, which can be either physical or virtual. The agents in
eBDI are assumed to have information
about only a limited part of their environment and themselves.
This implies that an agent might not be aware of all its resources and
capabilities. In order for an
agent to become aware of these capabilities and resources, they first need to
become effective. This is
done with the help of the Effective Capabilities and Effective Resources
revision function (EC-ER-rf) based on an agent's perceptions as well as
various components derived from the BDI architecture.
In addition to capabilities and resources, two further new components are
added: a Sensing and Perception Module and an
Emotional State Manager. The former is intended to
“make sense” of the environmental stimuli perceived by the agent. It filters
information from all perceptions and other sensor stimuli and—with the help
of semantic association rules—gives a meaning to
this information. The Emotional
State Manager is the component responsible for controlling the resources and
the capabilities used in the information processing phases of the architecture.
None of the papers describing the eBDI architecture (i.e.
Pereira et al. (2005); Jiang et al. (2007); Jiang (2007); Pereira et al. (2008)) give a precise specification of
the Emotional State Manager, but they provide three
general guidelines that they consider fundamental for the component:
- It should be based on a well defined set of artificial emotions which relate efficiently to the kind of tasks the agent has to perform,
- extraction functions that link emotions with various stimuli should be defined, and
- all emotional information should be equipped with a decay rate that depends on the state of the emotional eliciting source (Pereira et al. 2005).
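Since the eBDI papers provide only these guidelines and no precise specification, the third guideline might be realised along the following lines. The exponential-decay model and the rate values are entirely our own assumptions, used here purely for illustration.

```python
def decay(intensity, source_active, rate_active=0.05, rate_inactive=0.3):
    """One time-step of decay of an emotion's intensity.

    The decay rate depends on the state of the emotional eliciting
    source: emotions fade slowly while the source is still present
    and much faster once it has disappeared (our assumption).
    """
    rate = rate_active if source_active else rate_inactive
    return intensity * (1.0 - rate)

fear = 1.0
for step in range(3):
    fear = decay(fear, source_active=False)   # the threat has disappeared
print(round(fear, 3))
```

After three steps without the eliciting source, the intensity has dropped to 0.7³ ≈ 0.343 of its initial value; with the source still active it would only have fallen to 0.95³ ≈ 0.857.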
Both the Sensing and Perception
Module and the Emotional State Manager are linked to the EC-ER-rf, which
directly interacts with the BDI component in the agent's reasoning steps. On
each level it adds an emotional input to the BDI process that selects the
action that the agent executes. Detailed information on the
interaction of EC-ER-rf and the BDI module is given in
Pereira et al. (2005); Pereira et al. (2008) and Jiang (2007).
The authors of the eBDI framework envision it
to be particularly useful for applications that require agents to be self-aware.
They illustrate this with several thought examples, such as a static environment
of a simple maze with energy sources and obstacles, and a hunting scenario.
One larger application using eBDI is an e-commerce negotiation protocol presented
by Jiang (2007).
eBDI is similar to BDI,
in that at the cognitive level it allows for reflective agents, but
does not provide any consideration of learning, norms or social relations.
As indicated by its name, however, it does account
for the representation of emotions on the affective level.
To our knowledge the only complete implementation of the eBDI architecture is
by Jiang (2007), who integrated eBDI with the OCC model,
a computational emotion model developed by Ortony et al. (1990). In the future work
section of their paper Pereira et al. (2005) point out that they intend to implement
eBDI agents in dynamic and complex environments, but
we were not able to find any
of these implementations or papers about them online.
- The Beliefs-Desires-Obligations-Intentions (BOID) architecture is an extension
of the BDI idea to account for normative concepts, and in particular
obligations (Broersen et al. 2002; Broersen et al. 2001).
BOID is based on ideas described by Dignum et al. (2000). In addition
to the mental attitudes of
BDI, (social) norms and obligations (as one component of norms) are required to
account for the social behaviour of agents. The authors of BOID argue that in order
to give agents “social ability”, a multi-agent
system needs to allow the agents to deliberate about whether or not
to follow social rules and contribute to collective interests. This deliberation
is typically achieved by means of argumentation between obligations, the actions
an agent “must perform” (for the social good), and the
individual desires of the agents.
Thus, it is not surprising that the majority of works on BOID are within the
agent argumentation community (Dastani & van der Torre 2004; Boella & van der Torre 2003).
The decision making cycle in BOID is very similar to that of BDI
and differs only with respect to the agents' intention (or goal) generation.
When generating goals, agents also account for internalized social
obligations. The outcome of this deliberation
depends on the agent's attitudes towards social obligations and its own goals
(i.e. which one it perceives to be of the highest priority).
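The attitude-dependent outcome of this deliberation can be illustrated with a small sketch. The priority mechanism and the `attitude` labels are hypothetical simplifications of BOID's formal argumentation machinery, not taken from the BOID papers.

```python
def generate_goals(desires, obligations, attitude):
    """Resolve conflicts between desires and obligations by a priority
    ordering fixed by the agent's attitude: 'social' agents let
    obligations override conflicting desires, 'selfish' agents do the
    reverse. Goals are (issue, stance) pairs; two goals conflict when
    they address the same issue."""
    if attitude == "social":
        dominant, dominated = obligations, desires
    else:
        dominant, dominated = desires, obligations
    # keep every dominant goal, plus dominated goals on other issues
    claimed = {issue for issue, _ in dominant}
    return list(dominant) + [g for g in dominated if g[0] not in claimed]

desires     = [("speed", "drive-fast")]
obligations = [("speed", "keep-limit"), ("tax", "pay-tax")]
print(generate_goals(desires, obligations, attitude="social"))
# → [('speed', 'keep-limit'), ('tax', 'pay-tax')]
print(generate_goals(desires, obligations, attitude="selfish"))
# → [('speed', 'drive-fast'), ('tax', 'pay-tax')]
```

Note that the non-conflicting obligation ("pay-tax") survives under both attitudes; only the contested issue ("speed") is decided by the priority ordering.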
With regard to the dimensions of
comparison, BOID has the same properties as BDI (and therefore
similar advantages and disadvantages for modelling ABSS), but in contrast to BDI,
it allows the modelling of social norms (i.e. it differs on the norm dimension). In
BOID, these norms are expressed solely in terms of obligations and other
aspects of social norms are neglected.
So far, most work on BOID has concentrated on the
formalisation of the idea (and in particular the deliberation process). As a
result of this rather formal focus, at present no separate implementation of the
BOID architecture exists and application examples are limited to the process in
which agents deliberate about their own desires in the
light of social norms.
- The BRIDGE architecture of Dignum et al. (2009)
extends the social norms idea in BOID and aims to provide “agents with
constructs of social awareness, 'own' awareness and reasoning update”
(Dignum et al. 2009).
It was first mentioned as a model for agent reasoning and decision making in the
context of policy modelling. The idea is that one needs to take into account
that macro-level policies relate to people with different
needs and personalities who are typically situated in different cultural
settings. Dignum et al. (2009) reject the idea that human behaviour is typically (boundedly) rational
and economically motivated, and advocate the
dynamic representation of “realistic social interaction
and cultural heterogeneity” in the design of agent decision making. They
emphasise that an architecture needs to take into account those
features of the environment an agent is located in (such as policies) that
influence its behaviour from the top down.
According to Dignum et al. (2009), one of the main reasons for basing the BRIDGE
architecture (Figure 4) on BDI
was its emphasis on the deliberation process
in the agent's reasoning. The architecture introduces
three new mental components (ego, response and goals) and
modifies the function of some of the BDI components.
Figure 4: The BRIDGE Agent Architecture, reproduced from Dignum et al. (2009)
Ego describes an agent's priorities in decision making with the help of different filters and ordering preferences. It also includes the personality type of the agent, which determines its choice of mode of reasoning (e.g. backward- or forward reasoning).
Response refers to the physiological needs of the entity that is represented by the agent (e.g. elementary needs such as breathing, food, water). It implements the reactive behaviour of the agent to these basic needs. Additionally the response component is used to represent fatigue and stress coping mechanisms. Items in the response components directly influence goals and can overrule any plans (e.g. to allow for an immediate change of plan and the execution of different actions in life-threatening situations).
In contrast to other work on BDI, Dignum et al. (2009) distinguish goals
and intentions. To them, goals are generated from desires (as well as from the
agent's ego and response) and deficiency needs. Intentions are the
possible plans to realise the goals. The choice of goals (and therefore
indirectly also intentions) can be strongly influenced by response factors such
as fatigue or stress. Agents can change the order in which goals are chosen in
favour of goals aimed at more elementary needs.
The component of desires is extended in the BRIDGE architecture to also
consider ego (in addition to beliefs). They are complemented by maintenance
and self-actualization goals (e.g. morality) that do not go away by being
fulfilled. Beliefs are similar to the classical BDI idea, with the only
adaption being that beliefs are influenced by the cultural and normative
background of the agent.
In the reasoning cycle of a BRIDGE agent, all
components work concurrently to allow continuous processing of sensory
information (consciously received input from the environment) and other
“stimuli” (subconsciously received influences). These two input streams are
first of all interpreted according to the personality characteristics of the agent
by adding priorities and weights to the beliefs that result from the inputs. These
beliefs are then sorted. The sorted beliefs function as a filter on the desires
of the agent. Based on the desires, candidate goals are selected and ordered
(again based on the personality characteristics). Then, for the candidate
goals, the appropriate plans are determined with consideration of the ego
component. In addition, desires are generated from the agent's beliefs based on
its cultural and normative setting. As in normal BDI, in a last step, one of the
plans is chosen for execution (and the beliefs updated accordingly)
and the agent executes the plan if it is not overruled by the response component (e.g. fatigue).
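The reasoning cycle just described can be condensed into a runnable toy. All data structures, thresholds and the `fatigue` quantity below are our own illustrative assumptions, not part of Dignum et al.'s (2009) specification.

```python
# Toy sketch of the BRIDGE cycle: interpret inputs into weighted beliefs,
# filter desires, select a goal and plan, and let the response component
# overrule everything in critical states.

def bridge_step(beliefs, desires, plans, fatigue, fatigue_limit=0.8):
    """beliefs: {name: weight}; desires: {goal: min_belief_weight};
    plans: {goal: plan}."""
    # Response component: basic physiological needs can overrule any plan.
    if fatigue > fatigue_limit:
        return "rest"
    # Sorted beliefs act as a filter on desires: a desire survives only if
    # some sufficiently weighted belief supports it.
    ranked = sorted(beliefs.items(), key=lambda kv: kv[1], reverse=True)
    goals = [g for g, need in desires.items()
             if any(w >= need for _, w in ranked)]
    # The first admissible goal with a known plan becomes the intention.
    for g in goals:
        if g in plans:
            return plans[g]
    return None

act = bridge_step({"shop_open": 0.9}, {"buy_food": 0.5},
                  {"buy_food": "go_to_shop"}, fatigue=0.2)
assert act == "go_to_shop"
assert bridge_step({}, {}, {}, fatigue=0.9) == "rest"
```

The point of the sketch is the control flow: deliberation happens last, after the reactive response check, mirroring the overruling behaviour described above.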
In terms of our dimensions, although it is based on BDI, the BRIDGE
architecture has several differences. In contrast to, for example, eBDI, BRIDGE
does not explicitly represent emotions, although using the ego component, it is
possible to specify types of agents and their
different emotional responses to various stimuli. Furthermore, some social concepts are
accounted for, including consideration of social interaction, the
social concept of culture and a notion of self-awareness (with the resulting
differentiation between oneself and other agents). On the norm dimension,
Dignum et al. (2009)
envision that BRIDGE can be used to model reactions to policies, which can be understood as a normative
concept. One of the key ideas of Dignum et al. (2009) is that the components they
introduced on top of BDI (e.g. culture and ego) influence social norms and their
internalization by agents. BRIDGE
nevertheless considers norms solely from an obligation perspective.
To the best of our knowledge, the BRIDGE architecture has not yet been implemented.
In Dignum et al. (2009) the authors mention their
ambition of implementing BRIDGE agents in an extension of 2APL in a Repast
environment. Currently the BRIDGE architecture has only
been applied in theoretical examples, such as helping to conceptualise the
emergence and enforcement of social behaviour in different cultural settings
(Dignum & Dignum 2009).
In Section 4.3, we introduced the BOID architecture as a
first step towards the integration of (social) norms in the agent decision
making process. This section extends this notion of normative architectures,
and moves away from intentional systems such as
BDI to externally motivated ones. In BDI, the agents act because of a
change in their set of beliefs and the establishment of desires to achieve a
specific state of affairs (for which the agents then select specific intentions
in form of plans that they want to execute). Their behaviour is purely
driven by their internal motivators such as beliefs and desires. Norms are an
additional element to influence an agent's reasoning. In contrast to beliefs and
desires, they are external to the agent, established within the society/environment
that the agent is situated in. They are therefore regarded as external motivators
and the agents in the system are said to be norm-governed.
Norms as instruments to influence an agent's behaviour have been
popular in the agents community for the past decade with several works
addressing questions such as (Kollingbaum 2004):
- How are norms represented by an agent?
- Under what circumstances will an agent adopt a new norm?
- How do norms influence the behaviour of an agent?
- Under what circumstances would an agent violate a norm?
As a result of these works, several models of normative systems
(e.g. Boissier & Gâteau (2007); Dignum (2003); Esteva et al. (2000); Cliffe (2007); López y López et al. (2007)) and
models of norm-autonomous agents (e.g.
Kollingbaum & Norman (2004); Castelfranchi et al. (2000); Dignum (1999); López y López et al. (2007)) have been proposed.
However, most of the normative system models focused on the norms
rather than the agents and their decision making, and most of the
norm-autonomous agent-architectures have remained rather abstract. This is why we have selected
only three to present in this paper.
The Deliberate Normative Agents of Castelfranchi et al. (2000)
is a cognitive research-inspired abstract model of norm-autonomous agents.
Despite being one of the earlier works, their norm-aware
agents do not limit social norms to being mere triggers
for obligations, but also allow for a more fine-grained deliberation process.
The EMIL architecture (EMIL project consortium 2008; Andrighetto et al. 2007b) is an
agent reasoning architecture designed to account for the internalization of norms by agents.
It focuses primarily on decisions about which norms to accept and internalize
and the effects of this internalization. It also introduces the idea
that not all decisions have to be deliberated about11.
The NoA agent architecture presented in
Kollingbaum & Norman (2004); Kollingbaum (2004); Kollingbaum & Norman (2003)
extends the notion of norms to include legal and social norms
(Boella et al. 2007).
Furthermore, it is one of the few normative architectures that presents a
detailed view of its computational realisation.
- The idea of deliberate normative agents was developed before BOID. It is based
on earlier works in cognitive science (e.g. Conte et al. (1999); Conte & Castelfranchi (1999)) and
it focuses on the idea that social norms need to be involved in the decision making process of an agent.
Dignum (1999) argues that autonomous entities such
as agents need to be able to reason, communicate and negotiate about
norms, including deciding whether to violate social norms if they are
unfavourable to an agent's intentions.
As a result of the complexity of the tasks associated with social norms,
Castelfranchi et al. (2000) argue that they cannot simply be implicitly
represented as constraints or external fixed rules in an agent architecture.
They suggest that norms should be represented as mental objects that
have their own mental representation (Conte & Castelfranchi 1995) and that interact
with other mental objects (e.g. beliefs and desires) and plans of an agent.
They propose the agent architecture shown in
Figure 5: Deliberative Normative Agents, reproduced from Castelfranchi et al. (2000)
The architecture consists of six components which are loosely grouped into three layers. These layers
are: (i) an interaction management layer that handles the interaction of an agent with other agents (through communication) as well as the general environment; (ii) an information maintenance layer that stores the agent's information about the environment (world information), about other agents (agent information) and about the agent society as a whole (society information); and (iii) a process control layer where the processing of the information and reasoning takes place.
To reflect semantic distinctions between different kinds of information,
Castelfranchi et al. (2000) distinguish three different information levels: one
object level and two meta levels. The object level includes the information that the
agent believes. All the information maintenance layer components are at this object level.
The first meta-level contains information about how to handle
input information based on its context.
Depending on the agent's
knowledge about this context (e.g. the reliability of the information source), this meta-level then
specifies rules about how to handle the information. In the examples
given by Castelfranchi et al. (2000), only reliable information at the interaction
management level is taken on as a belief at the
information maintenance layer. There is also meta-meta-level reasoning (i.e.
information processing at the second meta-level). The idea behind this second
meta level is that information (and in particular norms) could have effects on
internal agent processes. That is why meta-meta-information
specifies how the agent's internal processes can be changed and under which circumstances.
At its core, the agent reasoning cycle is the same
as the BDI reasoning cycle. Based on their percepts (interaction management
layer), agents select intentions from a set of desires (process control layer)
and execute the respective plans to
achieve these intentions.
However, the consideration of norms in the architecture adds an additional
level of complexity, because the norms that an agent has internalized can
influence the generation as well as the selection of intentions. Thus, in addition
to desires, social norms specifying what an agent should/has to do can generate
new norm-specific intentions. Norms can also have an impact by providing
preference criteria for selecting among intentions.
Norm internalization is a separate
process in the agent reasoning cycle. It starts
with the recognition of a social norm by an agent (either through observation
or via communications).
This normative information is evaluated based on its context and stored in
the information maintenance layer. It is then processed in the
process control layer with the help of the two meta-levels of information. In
particular, the agent determines which norms it wants to adopt for itself and
which ones it prefers to ignore, as well as how it wants its behaviour to be
influenced by norms. Based on this, meta-intentions are created that
influence the generation and selection of intentions as outlined above.
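The two ways in which adopted norms enter the cycle, generating norm-specific intentions and providing a preference criterion among intentions, can be sketched as follows. The encoding and the preference rule (norm-derived intentions first) are our illustrative reading, not Castelfranchi et al.'s (2000) formalisation.

```python
# Toy sketch: internalized norms both add candidate intentions and
# bias the ordering in which candidates are considered.

def deliberate(desires, adopted_norms):
    # Desires generate ordinary candidate intentions ...
    candidates = [{"goal": d, "normative": False} for d in desires]
    # ... and adopted norms generate additional, norm-specific ones.
    candidates += [{"goal": n, "normative": True} for n in adopted_norms]
    # Norms also act as a preference criterion; here they simply
    # rank norm-derived candidates ahead of desire-driven ones.
    return sorted(candidates, key=lambda c: not c["normative"])

assert deliberate(["watch_tv"], ["pay_taxes"])[0]["goal"] == "pay_taxes"
```

A real deliberative normative agent would of course weigh norms against desires rather than always preferring them; the strict ordering is only for brevity.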
Evaluating the deliberative normative agents against our dimensions, they
exhibit similar features to BOID agents, but are enhanced on the
social and the learning dimensions. Not only does the deliberative
normative agent architecture include an explicit separate norm internalization
reasoning cycle, but its notion of (social) norms goes beyond obligations.
Castelfranchi et al. (2000) furthermore recognize the need for communication in
their architecture. Concerning the learning dimension, agents have limited
learning capabilities, being able to learn new norm-specific intentions.
- The EMIL-A agent architecture13
(Andrighetto et al. 2007b) was developed
as part of an EU-funded FP6 project which focused on “simulating the two-way
dynamics of norm innovation”
(EMIL project consortium 2008).
What this byline refers to is the idea of extending the classical micro-macro link
usually underlying ABMs to include both top-down and bottom-up
links (as well as the interaction between the two).
EMIL-A models the process of agents learning about norms in a society, the internalization
of norms and the use of these norms in the agents' decision making.
Figure 6: The EMIL-A architecture, reproduced from Andrighetto et al. (2007a)
Figure 6 shows the normatively focused part of the EMIL-A architecture.
The authors make a distinction between
factual knowledge (events) and normative knowledge (rules). The agent has a
separate interface to the environment for each of these knowledge types.
The agent also has two kinds of memories,
one for each knowledge type:
(i) an event board for facts and events, and (ii) a normative frame for inferring and storing rules from the event board.
In addition to these components, the EMIL-A architecture consists of:
- four different procedures:
- norm recognition, containing the above mentioned normative frame,
- norm adoption, containing goal generation rules,
- decision making, and
- normative action planning
- three different mental objects that correspond to the components of the BDI model:
- normative beliefs,
- normative goals, and
- normative intentions
- an inventory:
- a memory containing a normative board and a repertoire of normative action plans, and
- an attitudes module (capturing the internalized attitudes and morals of an agent) which acts on all procedures, activating and deactivating them directly based on goals (Andrighetto et al. 2007a).
The first step of the normative reasoning cycle is the recognition of a norm using
the norm recogniser module.
This module distinguishes two different scenarios:
(i) information it knows about and had previously classified as a norm and (ii) new (so far unknown) normative information.
In the former case, the normative input is entrenched on the normative board
where it is ordered by salience. By 'salience' Andrighetto et al. (2007a)
refer to the degree of activation of a norm, i.e. how often the respective norm
has been used by the agent for action decisions and how often it has been
invoked. The norms stored on the normative board are then considered in the
classical BDI decision process as restrictions on the goals and intentions of
the agent. The salience of a norm is important, because in case of
conflict (i.e. several norms applying to the same situation), the norm with the
highest salience is chosen.
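The salience-based conflict resolution described above is easy to state as code. The field names and the salience values are our own illustrative assumptions, not from the EMIL-A specification.

```python
# Toy sketch: among the norms on the normative board that apply to the
# current situation, the one with the highest salience is chosen.

def select_norm(normative_board, situation):
    applicable = [n for n in normative_board
                  if situation in n["applies_to"]]
    return max(applicable, key=lambda n: n["salience"]) if applicable else None

board = [{"rule": "queue_up", "applies_to": {"bus_stop"}, "salience": 5},
         {"rule": "push_in",  "applies_to": {"bus_stop"}, "salience": 1}]
assert select_norm(board, "bus_stop")["rule"] == "queue_up"
```

In EMIL-A the salience values themselves are dynamic, increasing with use and invocation of a norm, which this static example omits.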
When the external normative input is new, i.e. not known to the agent,
the agent first needs to internalize it. To do this, first the normative frame is activated.
The normative frame is equipped with a dynamic schema (a frame of reference)
with which to recognise and categorise an
external input as being normative, based on its properties. Properties that the
normative frame takes into account include deontic specifications,
information about the locus from which the norm emanates, information about legitimate
reactions or sanctions for transgression of the norm, etc.14. The recognition of a norm by an agent does not
imply that the agent will necessarily agree with the norm or that it understands
it fully. It only means that the agent has classified the new information as a
norm. After this initial recognition of the external input as a norm, the normative
frame is used to find an interpretation of the new norm, by
checking the agent's knowledge for information about properties of the normative frame.
Once enough information has been gathered about the new norm
and the agent has been able to determine its meaning and implications, the newly
recognised norm is turned into a normative belief. Again, normative beliefs
do not require that the agent will follow the norm.
In EMIL-A, agents follow a “why not” approach. This means
that an agent has a slight preference to adopt a new norm if it cannot find
evidence that this new norm conflicts with its existing mental objects. Adopted
normative beliefs are stored as normative goals.
These normative goals are considered in the agent's decision making. An agent
does not need to follow all its normative goals when making a decision.
When deciding whether to follow a norm, EMIL-A agents diverge from
the general classical utility-maximising modality of reasoning. An
agent will try to conform with its normative
goals if it does not have reasons for not doing so (e.g. if the benefits of
following a norm do not outweigh its costs).
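The "why not" adoption heuristic amounts to adopting by default and rejecting only on positive evidence of conflict. The sketch below is our reading of the description above; the predicate and names are invented for illustration.

```python
# Toy sketch of "why not" adoption: a normative belief becomes a normative
# goal unless it conflicts with one of the agent's existing mental objects.

def adopt(normative_belief, mental_objects, conflicts_with):
    """Adopt by default; reject only if a conflict is found."""
    if any(conflicts_with(normative_belief, m) for m in mental_objects):
        return None
    return {"normative_goal": normative_belief}

no_conflict = lambda norm, obj: False
assert adopt("stop_at_red", ["arrive_on_time"], no_conflict) == \
       {"normative_goal": "stop_at_red"}
```

This inverts the burden of proof relative to utility-maximising reasoning: the agent does not need a reason to adopt a norm, only the absence of a reason not to.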
As well as the reasoning cycle taking into account the normative goals
and intentions of an agent, the EMIL-A architecture also recognises
that not all human actions result from an extensive deliberation process.
For example, Andrighetto et al. (2007a) note that
car drivers do not normally think about stopping their car at a red
traffic light, but instead react instinctively to it. To implement this in the
EMIL-A architecture, Andrighetto et al. (2007a) introduce shortcuts, whereby internalised
norms can trigger behaviour directly in reaction to external stimuli.
Evaluating EMIL-A according to our dimensions, on the cognitive level a deliberation
architecture is used, extended by short-cuts similar to the ones
in the BRIDGE architecture, in addition to a norm deliberation and internalization
process. Emotions or other affective elements are not included in EMIL-A. Instead,
(social) norms play a central role. Similar to
deliberative normative agents, EMIL considers the social aspects of
norms, and uses a blackboard to communicate norm candidates.
There are several implementations of the EMIL-A architecture, including EMIL-S
(EMIL project consortium 2008) and EMIL-I-A (Villatoro 2011; Andrighetto et al. 2010).
EMIL-S was developed within the EMIL project,
whereas EMIL-I-A, which focuses primarily on norm internalization, was developed
afterwards by one of the consortium partners in collaboration
with other researchers.
The EMIL-A architecture has been applied to several
scenarios15, including self-regulated distributed web service provisioning, and
discussions between a group of borrowers and a bank in micro-finance scenarios.
It has also been used for simpler models such as the behaviour of people
waiting in queues (in particular, whether people of
different cultural backgrounds would line up and wait their turn).
- The Normative Agent (NoA) architecture of Kollingbaum & Norman (2003) is one of the few that
specifically focuses on the incorporation of norms into agent decision
making while also using a broader definition of norms. Over the
last 12 years of research into normative multi-agent systems, the definition of
norm has changed from mere social obligations to
a much more sophisticated idea strongly inspired by disciplines such as sociology, psychology,
economics and law. Kollingbaum (2004) himself speaks of norms as
concepts that describe what is “allowed, forbidden or
permitted to do in a specific social context”. He thus keeps the social aspect
of norms, but extends it to include organisational concepts and ideas from
formal legal systems. That is why to him norms are not only linked to obligations,
such as in BOID, but also “hold explicit mental concepts representing
obligations, privileges, prohibitions, [legal] powers, immunities etc.”
(Kollingbaum 2004, p. 10). In the NoA architecture, norms governing the
behaviour of an agent refer to either actions or states of affairs that are
obligatory, permitted or forbidden.
The NoA architecture has an explicit representation
of a “normative state”. A normative state is
a collection of norms (obligations, permissions and prohibitions) that
an agent holds at a point in time. This normative state is consulted when
the agent wants to determine which plans
to select and execute. NoA agents are equipped with an
ability to construct plans to achieve their goals. These plans should fulfil
the requirement that they do not violate any of the
internalized/instantiated norms of the agent, i.e. norms that the agent has decided to adopt.
Figure 7: The NoA architecture, reproduced from (Kollingbaum 2004, p. 80)
Figure 7 shows the NoA architecture.
The main elements that influence the behaviour of a NoA agent are
(i) a set of beliefs, (ii) a set of pre-specified plans and (iii) a set of norms.
In contrast to if-then-rules or BDI plans, a NoA plan
not only specifies when the plan is appropriate for execution (preconditions)
and the action to be taken, but also defines what states of affairs it
will achieve (effects). Norm specifications carry
activation and termination conditions that determine when a norm
becomes active and therefore relevant to an agent and when a
norm ceases to be active (Kollingbaum & Norman 2003).
A typical reasoning cycle of a NoA agent starts with a
perception that might alter some of its beliefs (which are symbolically
represented). As in conventional production systems, there are two sources
of change for the set of beliefs:
(i) external “percepts” of the environment, and (ii) internal manipulations resulting from the execution of activities (plan steps) by the agent.
As with the EMIL-A architecture, NoA not only looks at knowledge gained from
percepts but also considers external norms.
That is why in addition to the normal percepts, the agent can obtain normative
specifications from the environment. The agent's
reasoning cycle then follows two distinct operations:
- the activation of plan and norm declarations and the generation of a set of instantiations of both plans and norms,
- the actual deliberation process including the plan selection and execution. This process is dependent on the activation of plans and norms (Kollingbaum 2004).
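The two operations can be sketched as follows: norm declarations become active when their activation condition holds (and lapse on their termination condition), and only plans whose action and effects violate no active prohibition remain candidates for selection. The encoding below is ours, not the NoA language itself.

```python
# Toy sketch of NoA-style norm activation and norm-consistent plan selection.

def active_norms(norms, beliefs):
    """Norms whose activation condition holds and termination condition does not."""
    return [n for n in norms
            if n["activation"](beliefs) and not n["termination"](beliefs)]

def admissible_plans(plans, norms, beliefs):
    prohibited = {n["target"] for n in active_norms(norms, beliefs)
                  if n["kind"] == "prohibition"}
    # A plan is admissible if neither its action nor its effects are prohibited,
    # covering both event-based and state-based readings of a norm.
    return [p for p in plans
            if p["action"] not in prohibited
            and prohibited.isdisjoint(p["effects"])]

norms = [{"kind": "prohibition", "target": "enter_area",
          "activation": lambda b: b["alarm"],
          "termination": lambda b: not b["alarm"]}]
plans = [{"action": "enter_area", "effects": set()},
         {"action": "wait", "effects": set()}]
assert [p["action"] for p in admissible_plans(plans, norms, {"alarm": True})] == ["wait"]
```

Checking both the action and its effects against the prohibition set mirrors NoA's distinction between being responsible for performing an action and for achieving a state of affairs.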
The NoA agent architecture is based on the NoA
language, which has close similarities with AgentSpeak(L) (Rao 1996).
It was implemented by Kollingbaum for his PhD
dissertation and he has used it for simple examples such as moving boxes.
We could not find any other application of the NoA architecture. Nevertheless,
NoA is far more sophisticated than other normative architectures such as BOID,
especially in terms of representing
normative concepts. This sophistication comes at the cost of increased complexity.
One conceptually interesting facet of the NoA
architecture is that it implements a distinction between an agent being
responsible for achieving a state of affairs and being responsible for
performing an action. This allows for both state- and event-based reasoning
by the agent, which is a novelty for normative systems, where typically only
one of these approaches can be found.
In terms of the comparison dimensions, NoA has the same features as EMIL-A.
Its main difference and the reason for its inclusion here is its
extended notion of norms.
Having reviewed production rule systems, BDI-inspired and
normative architectures, in the next two sections we consider
cognitively inspired models. As Sun (2009) remarks, cognitive models and
social simulation models—despite often having the same aim
(i.e. representing the behaviour of decision-making actors)—tend to have
a different view of what is a good model for representing human decision making.
Sun (2009), a cognitive scientist, remarks that
except for a few such as Thagard (1992), social simulation
researchers frequently focus only on agent models custom-tailored to the task at
hand. He calls this situation
unsatisfying and emphasises that it limits realism and
the applicability of social simulation. He argues that to overcome these
shortcomings, it is necessary to include cognition as an integral part of an agent
architecture. This and the next section present models that follow this suggestion
and take their inspiration from cognitive research. This section focuses on
“simple” cognitive models that are inspired by cognitive ideas but that still have
strong resemblance to the models presented earlier. Section
7 focuses on the models
that are more strongly influenced by psychology and neurology.
The first architecture with this focus on cognitive ideas and processes that we shall
discuss is PECS (Urban 1997). PECS
stands for Physical conditions, Emotional state, Cognitive
capabilities and Social status, which refers to the authors' aim to “enable an integrative modelling of physical,
emotional, cognitive and social influences within a component-oriented
agent architecture” (Urban & Schmidt 2001). They wanted
to design a reference model for modelling human behaviour that could replace the
BDI architecture (Schmidt 2002b). According to them, BDI is only to a “very
limited degree sensible and useful” for modelling humans because of its focus
on rational decision-makers. Instead, they advocate using the Adam model
(Schmidt 2000), on which PECS is based.
Figure 8: The PECS Architecture, reproduced from Schmidt (2002a)
The architecture of PECS is depicted in Figure 8. It is divided
into three layers:
(i) an input layer (consisting of a sensor and a perception component)
responsible for the processing of input data,
(ii) an internal layer which is structured in several sub-components, each of which
is responsible for modelling a specific required functionality and might
be connected to other components, and
(iii) a behavioural layer (consisting of a behaviour and an actor component)
in which the actions of the agent are determined.
In the input and the behavioural layers, the sensor and the actor components act
as interfaces with the environment, whereas the perception and the
behaviour components are responsible for handling data to and from the interface
components and for supporting the decision making processes.
The authors model each of the properties that the name
PECS is derived from as a separate component.
Each component is characterized by an internal state (Z),
defined by the current values for the given set of model quantities at
each calculated point in time (Schmidt 2001). The transitions of Z over time
can be specified with the help of
time-continuous as well as time-discrete transition functions (F). Each
component can generate output based on its pre-defined dynamic behaviour.
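A minimal, time-discrete reading of this component description can be put into code: each component holds a state Z that is advanced by a transition function F and emits output derived from that state. The concrete functions, thresholds and the "energy" quantity are our illustrative assumptions, not part of PECS itself.

```python
# Toy sketch of a PECS component: state Z, transition function F, and an
# output function over the current state, stepped in discrete time.

class Component:
    def __init__(self, z0, f, output):
        self.z = z0            # internal state Z at the current time step
        self.f = f             # time-discrete transition function F
        self.output = output   # output derived from the current state

    def step(self, inputs):
        self.z = self.f(self.z, inputs)
        return self.output(self.z)

# A toy "physis" component whose energy decays unless the agent eats.
physis = Component(
    z0={"energy": 1.0},
    f=lambda z, inp: {"energy": min(1.0, z["energy"] - 0.1
                                         + (0.5 if "eat" in inp else 0.0))},
    output=lambda z: "hungry" if z["energy"] < 0.5 else "ok")

assert physis.step([]) == "ok"       # energy decays slightly
for _ in range(5):
    physis.step([])                  # energy decays below 0.5
assert physis.step(["eat"]) == "ok"  # eating restores energy
```

Time-continuous transitions would replace `step` with an integration scheme over F, which the reference model also permits.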
As with the other architectures described so far, the agent decision-making cycle
starts with the perception of the environment, which is translated to agent
information and processed by the components mentioned above. Information flows and
dependencies between the different components are depicted in
Figure 8 with dotted and solid arrows.
Urban (1997) notes that both
simple reactive behaviour that can be described by
condition-state-action rules and
more complex deliberative behaviour, including planning
based on the goals the agent has in mind,
are possibilities for the agent's decision-making. An example of
the latter is presented by Schmidt (2002b), who extends the internal
reasoning in the four components of the internal layer to include reflective and
deliberative information flows.
As a reference model, the PECS architecture mainly provides concepts and a partial
methodology for the construction of agents representing humans
and their decision making (as well as concepts for the supporting communication
infrastructure and environment). According to its developers, PECS
should be useful for constructing a wide range of models for agents
whose dynamic behaviour is determined by physical, emotional, cognitive and
social factors and which display behaviour containing reactive and deliberative
elements. Despite this general focus, we were only able to find one paper
presenting an application of the PECS reference model, Ohler & Reger (1999),
in which PECS is used to analyse role-play and group formation among children.
PECS covers many of the comparison dimensions, although
because it is a reference model, only conceptually rather than in terms of an
actual implementation. On the cognitive level, reaction-based architectures as
well as deliberative ones and hybrid architectures combining the two are envisioned
by Urban (1997). Following Schmidt (2002b), PECS covers issues on the
affective and the social level, although again few specifics can be found
about actual implementations, which is why it is hard to judge to what level
issues in these dimensions are covered. Norms and learning are the two
dimensions which are either not represented in the architecture or represented only to a limited degree.
The transition functions in PECS are theoretically
usable for learning, but only within the bounds of pre-defined update functions.
The Consumat model of Jager & Janssen was initially developed to model the behaviour of
consumers and market dynamics (Janssen & Jager 2001; Jager 2000), but has since been
applied to a number of other research topics (Jager et al. 1999).
The Consumat model builds on three main considerations:
(i) that human needs are multi-dimensional,
(ii) that cognitive as well as time resources are required to make decisions, and
(iii) that decision making is often done under uncertainty.
Jager and Janssen argue that humans have various (possibly conflicting)
needs that can diminish with consumption or over time,
which they try to satisfy when making a decision. They point out
that models of agent decision making should take this into account, rather than
trying to condense decision making to a single utility value.
They base their work on the pyramid of needs from Maslow (1954) as well as
the work by Max-Neef (1992), who distinguishes nine different human needs:
subsistence, protection, affection, understanding, participation, leisure,
creation, identity and freedom. Due to the complexity of modelling nine
different needs as well as their interactions in an agent architecture,
Jager & Janssen (2003) condense them to three: personal needs, social needs and a
status need19. These three
needs may conflict (one can, for example, imagine an agent's personal needs
do not conform with its social ones) and as a consequence an agent
has to balance their fulfilment.
Jager and Janssen note that the resources available for decision making
are limited and thus constrain the number of alternatives an
agent can reason about at a point of time. Their argument
is that humans not only try to optimize the outcomes of decision
making, but also the process itself20.
That is why they favour the idea of “heuristics” that simplify complex
decision problems and reduce the cognitive effort involved in a
decision. Jager and Janssen suggest
that the heuristics people employ can be classified along two dimensions:
(i) the amount of cognitive effort involved for the individual agent, and
(ii) the individual or social focus of information gathering
(Jager & Janssen 2003, Figure 1).
The higher the uncertainty of an agent, the more likely
it is to use social processing for its decision making. In the Consumat model,
uncertainty is modelled as the difference between expected and actual outcomes.
An agent's current level of need satisfaction and its uncertainty level
determine which heuristic it will apply when trying to make a decision.
Jager & Janssen (2003) define two agent attributes:
the aspiration level and the uncertainty tolerance. The aspiration level
indicates at what level of need the agent is satisfied and the
uncertainty tolerance specifies how well an agent can deal with its own
uncertainty before looking at other agents' behaviour. A low aspiration
level implies that an agent is easily satisfied and therefore will not
invest in intense cognitive processing using a lot of cognitive effort,
whereas agents with a higher aspiration level are more often dissatisfied, and
hence are also more likely to discover new behavioural opportunities.
Similarly, agents with a low uncertainty tolerance are more likely to look at
other agents to make a decision, whereas agents with a high tolerance rely more
on their own perceptions (Jager & Janssen 2003).
Based on the two dimensions of uncertainty and cognitive effort, the Consumat
approach distinguishes six different heuristics an agent can use to make a
decision (Figure 9).
Figure 9: The Six Heuristics used in the Consumat Approach, reproduced from Jager & Janssen (2003)
Starting from the right hand side of Figure 9, agents with a very low current
level of need satisfaction (i.e. agents that are likely to be dissatisfied) are
assumed to put more cognitive effort into their decision making and to
deliberate, i.e. to determine the consequences of all possible decisions
for a fixed time horizon and to act according to what they perceive as the
“best” possible way. Moving from the right-hand side of the figure to the
left, the level of need satisfaction increases and the dissatisfaction
decreases, resulting in less need for intense cognitive effort spent on decision
making. Thus, in the case of a medium low (rather than very low) need satisfaction
and a low level of uncertainty, the agents engage in a strategy where they
determine the consequences of decisions one by one and stop
as soon as they find one that satisfies their needs. Jager and
Janssen call this strategy satisficing after a concept first described
by Simon (1957).
With the same level of need satisfaction but a higher uncertainty level, agents
engage in social comparison, i.e. they compare their own performance
with those that have similar abilities. With a higher level of
need satisfaction and low uncertainty, the agent compares options until it
finds one that is improving its current situation. In contrast, in the
case of high uncertainty, it will try to imitate the behaviour of agents
with similar abilities. Finally, when there is high need satisfaction and low
uncertainty, the agent will simply repeat what it has been doing so far,
because this seems to be a successful strategy.
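The heuristic-selection logic just described can be sketched as a simple lookup over the two dimensions. This is an illustrative reconstruction, not code from Jager & Janssen; the function name and the numeric thresholds separating "very low", "medium" and "high" satisfaction are assumptions.

```python
def choose_heuristic(satisfaction: float, aspiration: float,
                     uncertainty: float, tolerance: float) -> str:
    """Pick one of the six Consumat heuristics from the two dimensions:
    (dis)satisfaction (cognitive effort) and uncertainty (social focus).
    The 0.4/0.7 band boundaries are purely illustrative."""
    uncertain = uncertainty > tolerance        # social processing?
    if satisfaction < 0.4 * aspiration:        # very dissatisfied
        return "deliberation"
    if satisfaction < 0.7 * aspiration:        # medium-low satisfaction
        return "social comparison" if uncertain else "satisficing"
    if satisfaction < aspiration:              # medium-high satisfaction
        return "imitation" if uncertain else "improving"
    return "repetition"                        # aspiration level reached

print(choose_heuristic(0.5, 1.0, 0.9, 0.5))  # social comparison
```

Note how the aspiration level and uncertainty tolerance act purely as thresholds: the same objective situation leads different agents to different heuristics.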
The Consumat model also includes a memory component (referred to as a mental map)
that stores information on the abilities, opportunities and characteristics of the
agent. This mental map is updated every time the agent engages
in cognitively effortful processing, i.e. social comparison, satisficing or deliberation.
Summing up, on the cognitive dimension, the Consumat approach goes beyond the
approaches presented so far by allowing for different heuristics in the agent's
decision making (and by showing actual implementations for them).
Although the model is not capable of simulating elaborate cognitive
processes, logical reasoning or morality in agents, it does represent
a number of key processes that capture human decision making in a
variety of situations (Jager & Janssen 2012). As a result, it has been used to study
the effects of heuristics in comparison to the extensive deliberation
approaches of other architectures. With respect to the other comparison
dimensions, on the affective level values and morality are considered;
however, emotions are not directly mentioned. Jager (2000, p. 97) considers
norms and institutions as input for the behavioural model, in
particular as one of the study foci of Consumat was the impact of different
policies on agent behaviour. Although Jager (2000) mentions norms in his dissertation,
legal norms (or laws), policies and social norms are not directly
addressed. On the social level, Consumat puts a lot of emphasis on
comparison of the agent's own success and that of its peers. As such, Consumat
has some idea of sociality in terms of agents being able to reason
about the success of their own actions in relation to the success resulting
from the actions of others (which they use for learning better behavioural
strategies). Nonetheless, they are not typically designed to see beyond this success
comparison and for example account for the impact of the behaviour of others on
their own actions.
Furthermore, in the original Consumat, it was difficult to compare the effects
of different peer groups.
Recently the authors of the Consumat model have presented an updated version which they
refer to as Consumat II (Jager & Janssen 2012). Changes include
(i) accounting for different agent capabilities in estimating the future,
(ii) lessening the distinction between repetition on the one
hand and deliberation on the other,
(iii) accounting for the expertise of agents in the social-oriented heuristics, and
(iv) consideration of several different network structures.
At the time of writing, the Consumat II model has not yet been formalized and Jager & Janssen
first want to focus on a few stylized experiments to explore the effects of their
rules. In the long run, their aim is to apply Consumat II to study the parameters
in serious games, which in turn could be used to analyse the effects of policy decisions.
Having described “simple” cognitive decision making models, we now turn
our attention to architectures inspired by psychology and neurology. These are
often referred to as cognitive architectures. However, as
they have a different focus from the “simple” cognitive models we have just
presented, we group them separately. The main difference is that the architectures in this section take into account the presumed structural
properties of the human brain. We chose four to present in this section: MHP, ACT-R/PM, CLARION and
SOAR, because of their
popularity within the agent community. There are many more,
some of which are listed in Appendix A.
- The Model Human Processor (MHP) (Card et al. 1983) originates from studies of
human-computer interaction (HCI). It is based on a synthesis of research in
cognitive psychology and human-computer interaction
and is described by Byrne (2007) as an influential cornerstone for the
cognitive architectures developed afterwards.
MHP was originally developed to support the calculation of how long it
takes to perform certain simple manual tasks. The advantage of the Model Human
Processor is that it includes detailed
specifications of the duration of actions and the cognitive processing of
percepts and breaks down complex human actions into detailed small
steps that can be analysed. This allows system designers to predict the
time it takes a person to complete a task, avoiding the need to perform experiments with human participants
(Card et al. 1986).
Card et al. (1983) sketch a framework based on
a system of several interacting memories and processors. The main three processors are
(i) a perceptual processor,
(ii) a cognitive processor and
(iii) a motor processor.
Although not explicitly modelled as processors in other architectures, these
three components are common to all psychology/neurology-inspired models.
In most of the examples in Card et al. (1983), the processors are envisioned to
work serially. For example, in order
to respond to a light signal, the light must first be detected by the
perceptual component. This perception can then be processed
in the cognitive module and, once this processing has taken place, the
appropriate motor command can be executed. Although this serial processing is
followed for simple tasks, for more complex tasks Card et al. (1983) suggest the three
processors could work in parallel. For this purpose they lay out some general
operating principles providing quantitative and qualitative specifications for
different applications. These specifications include the timings for the different
processors, their connectivity with each other, restrictions on their
application and their integration with memory modules.
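As a rough illustration of how MHP-style predictions work, the serial light-response example can be costed by summing processor cycle times. The nominal values below (perceptual ≈ 100 ms, cognitive ≈ 70 ms, motor ≈ 70 ms) are the figures commonly quoted from Card et al. (1983); the function itself is only a sketch, not part of the MHP specification.

```python
# Nominal MHP processor cycle times in milliseconds (Card et al. 1983
# also give ranges around these values, omitted here for brevity).
PERCEPTUAL_MS = 100  # detect the stimulus (e.g. the light)
COGNITIVE_MS = 70    # decide on the response
MOTOR_MS = 70        # execute the motor command (e.g. press the button)

def simple_reaction_time(n_cognitive_steps: int = 1) -> int:
    """Serial MHP estimate: perceive -> think (n steps) -> act."""
    return PERCEPTUAL_MS + n_cognitive_steps * COGNITIVE_MS + MOTOR_MS

print(simple_reaction_time())  # prints 240
```

Such back-of-the-envelope sums are exactly what lets designers estimate task completion times without running experiments with human participants.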
MHP typically considers two types of memory, long-term and short-term, the latter
referred to as “working memory” or “declarative memory”.
Agent reasoning in MHP is based on a production system.
The production rules can focus on actions which should be performed
(after the respective perception and cognitive processing) or on changing
the contents of declarative memory. The latter usually implies that in the next cycle
different production rules will be triggered.
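A minimal sketch of such a production cycle, with invented rules, might look as follows: one rule fires per cycle, and rewriting declarative (working) memory causes a different rule to match on the next cycle.

```python
# Each rule is a (condition, effect) pair over a set-based working memory.
# The rule contents are invented for illustration only.
rules = [
    # perceiving the light rewrites memory, enabling the second rule
    (lambda m: "light" in m, lambda m: m - {"light"} | {"seen"}),
    # the second rule emits an action
    (lambda m: "seen" in m, lambda m: m | {"action:press-button"}),
]

memory = {"light"}
for _ in range(2):                   # two production cycles
    for condition, effect in rules:
        if condition(memory):
            memory = effect(memory)
            break                    # fire one rule per cycle

print(sorted(memory))  # ['action:press-button', 'seen']
```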
The major applications of MHP have been along the lines of its
initial intention: product studies that focused on temporal aspects of
human-computer interaction. A
recent study by Jastrzembski & Charness (2007) used the MHP framework to analyse the
information processing time of older mobile phone users (in comparison to
younger ones). The findings can be used by designers for understanding
age-related performance using existing interfaces, and could support the
development of age-sensitive technologies.
In terms of our comparison dimensions, on the cognitive
level, MHP uses a production system. Due to its focus on learning about the time
it takes to process tasks and information in the brain, no focus was given
to normative information or the social level. Although emotions are mentioned in
Card et al. (1983, 1986), and it is recognized that they can influence brain
processes, it remains unclear how emotions are to be modelled.
Although learning values is
possible in MHP, learning of the rules themselves was not considered.
MHP forms the basis for a wide variety of cognitive
architectures, especially in the area of human-computer interaction. Despite its influence on
other architectures, Card et al. did not implement the MHP as a running
cognitive system, but only presented a framework. Byrne (2007) suggests two reasons:
(i) a lack of interest in computational modelling in cognitive psychology
at the time the framework was developed, and
(ii) the assumption that the properties of an architecture
are more useful for guiding HCI researchers and practitioners than computational artefacts.
The relatively recent CLARION (Connectionist Learning with Adaptive Rule Induction ON-line)
architecture (Sun et al. 1998; Sun et al. 2001b) uses hybrid
neural networks to simulate tasks in cognitive psychology and social
psychology, as well as implementing intelligent systems in artificial
intelligence. According to Sun (2006), it differs from other cognitive architectures in that
(i) it contains built-in motivational structure and meta-cognitive
constructs (Sun et al. 2005),
(ii) it considers two dichotomies: explicit versus implicit representation,
and action-centered versus non-action-centered representation, and
(iii) it integrates both top-down and bottom-up learning.
The first and the last of these points are of particular interest for
applying CLARION to social simulation, as they not only
allow for an in-depth modelling of learning processes, but also give rise to the
explanation of cognition-motivation-environment interaction.
Sun argues that the biological (basic) needs of agents arise
prior to cognition and are in fact a foundation for it. To Sun, cognition is a
process that is used to satisfy needs and follows motivational forces, taking
into account the conditions of the environment the agent is (inter)acting within.
The CLARION architecture consists of a number of functional subsystems; Figure 10
gives an overview.
The basic subsystems of CLARION include:
- the action-centred subsystem (ACS) whose task is to control all action, regardless of whether these are for external physical movement or internal mental processes;
- the non-action centred subsystem (NACS) which is responsible for maintaining knowledge, both implicit and explicit;
- the motivational subsystem (MS) which—through impetus and feedback— provides the underlying motivations for perception, action, and cognition; and
- the meta-cognitive subsystem (MCS), whose role is to monitor, direct and modify the operations of all subsystems dynamically (Sun 2006).
Each of these subsystems has a dual representation structure: the top level
encodes explicit knowledge and the bottom level holds implicit knowledge. Due to its
relatively inaccessible nature, the latter is
captured by subsymbolic and distributed structures (e.g. back-propagation
networks), whereas the former is stored using a symbolic or localist representation.
The authors of CLARION consider learning in their architecture. The learning of
implicit knowledge is done with the help of neural networks (Sun et al. 2001b) or
reinforcement learning mechanisms (especially Q-learning) using back-propagation. Explicit
knowledge can also be learned in different ways. Because of its symbolic or
localist representation, Sun et al. (1998) suggest one-shot learning techniques.
In addition to the learning for each separate representation component,
Sun et al. (2005) suggest that the explicit and implicit representation components
should also be able to acquire knowledge from one another. Thus, in CLARION the
implicit knowledge gained from interacting with the world is used for refining
explicit knowledge (bottom-up learning) and the explicit knowledge can also be
assimilated to the bottom level (top-down learning).
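The bottom-up part of this dual-level learning can be illustrated schematically: an implicit level learns action values (a simple update standing in here for CLARION's networks and Q-learning), and an explicit rule is extracted once a state-action pair proves sufficiently successful. All names, the update rule and the extraction threshold below are illustrative assumptions, not CLARION's actual algorithms.

```python
from collections import defaultdict

q = defaultdict(float)       # implicit, sub-symbolic level: (state, action) values
explicit_rules = set()       # explicit, symbolic level
ALPHA, THRESHOLD = 0.5, 0.6  # illustrative learning rate and extraction threshold

def learn(state: str, action: str, reward: float) -> None:
    # implicit learning: move the value estimate toward the observed reward
    q[(state, action)] += ALPHA * (reward - q[(state, action)])
    # bottom-up learning: extract an explicit rule once the implicit
    # value of this state-action pair is high enough
    if q[(state, action)] > THRESHOLD:
        explicit_rules.add((state, action))

for _ in range(4):
    learn("thirsty", "drink", reward=1.0)

print(("thirsty", "drink") in explicit_rules)  # True
```

Top-down learning would run the other way: an explicit rule supplied to the agent biases or initializes the implicit values until practice assimilates it.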
Summing up, the CLARION architecture integrates reactive routines, generic
rules, learning, and decision making to develop versatile agents that learn in
situated contexts and generalize resulting knowledge to different environments.
It is the architecture that has the most complex
learning focus of all those reviewed so far.
- ACT-R (Adaptive Control of Thought-Rational) and its extension ACT-R/PM
(Byrne & Anderson 1998; Byrne 2000) take ideas from Newell and Simon
(Simon & Newell 1971) with the aim of developing a fully unified
cognitive architecture, combining models of cognitive psychology with
perceptual-motor modules such as those found in EPIC (hence the “/PM” in the name).
They are the successors to previous ACT production systems (ACT*)
(e.g. Anderson (1983)) and put their emphasis on activation-based
processing as the mechanism for relating a production system to a declarative
memory (Pew & Mavor 1998).
Both ACT-R and ACT-R/PM were originally developed as models of
higher-level cognition and have mainly been applied to modelling the results of
psychological experiments in a wide range of domains such as the
Towers of Hanoi puzzle, mathematical problem solving in classrooms and human memory
(Anderson & Lebiere 1998).
This focus is still dominant today, although more recently an interest in
its application to GUI-style interactions can be detected
(Taatgen et al. 2006).
ACT-R/PM is built around a production system whose production rules
mediate the communication between all components.
In comparison with other psychologically inspired architectures, it is restricted to firing
only one production rule per cycle
(i.e. if multiple production rules match on a cycle, a conflict resolution
mechanism is used) and the (declarative) memory has a higher degree of
complexity. Figure 11 depicts the general
architecture of ACT-R/PM.
Figure 11: Overview of the ACT-R/PM Architecture, reproduced from (Taatgen et al. 2006, p. 31)
Central to ACT-R/PM (and ACT-R) is the (architectural) distinction between two
long-term memory/knowledge stores: a declarative memory for facts and goals, and
a procedural memory for rules. The procedural memory is implemented in the
“Productions” components (also referred to as a production system) which takes
the centre position in the architecture connecting all major components.
Both types of memory have two
layers of representation: a symbolic and a sub-symbolic layer.
In addition to this long-term memory, ACT-R/PM also has short-term or working memory,
which is considered to be the proportion of the declarative knowledge that can
be accessed directly. The declarative knowledge is represented by structures called
“chunks” (Servan-Schreiber 1991). These chunks are schema-like
structures with a number of pointers specifying their category as well as their contents.
Furthermore, chunks have different levels of activation according to their use.
Chunks that have been
used recently or often have a high level of activation, which
decreases over time if chunks are not being used. Procedural knowledge is also
represented by production rules (Pew & Mavor 1998).
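The recency- and frequency-driven activation just described is commonly expressed by ACT-R's base-level learning equation, B_i = ln(Σ_j t_j^(−d)), where the t_j are the times since each past use of a chunk and d (conventionally 0.5) is a decay parameter. A sketch:

```python
import math

def base_level_activation(times_since_use, d: float = 0.5) -> float:
    """ACT-R base-level learning: activation of a chunk given the
    ages (in seconds) of its past uses.  d is the decay parameter."""
    return math.log(sum(t ** -d for t in times_since_use))

recent = base_level_activation([1.0, 2.0])  # used twice, recently
stale = base_level_activation([100.0])      # one old use
print(recent > stale)  # True: recent, frequent chunks are more active
```

The sum makes frequently used chunks more active, while the negative exponent makes each use's contribution decay over time, matching the behaviour described above.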
The core element in the reasoning of ACT-R/PM agents is the cognitive layer,
which receives input from cues from the environment via the perception-motor
layer. In the cognitive layer, it is an interplay between the declarative and
procedural memory that advances the decision-making of the agent.
ACT-R/PM includes several learning mechanisms for the declarative and the
procedural memories. Declarative knowledge can be created and altered either
directly from input from the perception-motor layer (e.g.
information via the vision module) or as a result of a production rule.
New procedural knowledge (productions) is learned through inductive inference
from existing procedural rules and case studies (Pew & Mavor 1998).
The majority of applications of ACT-R and ACT-R/PM have been in modelling
single human users in psychology-based or HCI-inspired experimental settings
(see e.g. Schoelles & Gray 2000; Byrne 2001).
These experiments range from colour and text recognition (Ehret 1999) to
automobile driving simulators (Salvucci 2001). Looking for applications
closer to social simulation, we could only find one. Best & Lebiere (2006) use the
ACT-R architecture (without the PM components) to analyse teamwork between virtual and human players on
the Unreal Tournament Gaming Platform. They present reasons for the importance
of considering cognition for their teamwork study.
It allows them to abstract from the focus on
low-level details of the environment and
concentrate on the cognitive processes involved in team-formation and
teamwork. They state that their implementation is partially independent of the
environment and even allows for reusing the cognitive component across
different environments. They demonstrate that some social concepts can be
implemented using ACT-R. Teams of human players and agents are formed, and in these
teams agents are capable of understanding the notion of a team. The agents also
understand the idea of a common goal and the division of tasks between the teams.
However, the common goals and the team composition were predefined in Best & Lebiere (2006),
rather than negotiated by the agents.
Summing up, in terms of the comparison dimensions ACT-R/PM, like the previously
described psychologically and neurologically inspired models, has
its focus on the modelling of cognitive decision making and learning.
Other dimensions are therefore not the centre of attention. There is
little mention of affective components or norms.
ACT-R/PM tends to focus on single human representation, which is why the
social level has not been explored in detail.
The currently supported versions of ACT-R are ACT-R 6 and ACT-R 5. ACT-R
is written in Lisp, which makes it easily extensible and able to run on several
platforms: Windows, Unix and Macintosh (Pew & Mavor 1998). At the time of writing,
versions of ACT-R 6 for Windows and Mac OS X 10.5 or newer are available for
download from the ACT-R website. In
addition to these standalone versions, the website also offers a graphical user interface component,
a number of modules for visual and auditory
perception, motor action and speech production (the modules required for
ACT-R/PM), and comprehensive manuals and tutorials for the tools.
- SOAR (Laird et al. 1987) is
a symbolic cognitive architecture that implements decision making as goal-oriented
behaviour involving search through a problem space and learning of the
results. It has been used for a wide range of applications including
routine tasks and the open ended problem solving typically considered in the artificial
intelligence community as well as interaction with the outside world, either simulated
or real (Pew & Mavor 1998). One reason for this wide range of applications is that both cognitive
scientists interested in understanding human behaviour and artificial
intelligence researchers interested mainly in efficient problem solving were
involved in the development of SOAR.
Figure 12: Overview of the Classical SOAR and SOAR 9 Architecture, reproduced from Laird (2012a). The Classical SOAR components are highlighted by red boxes, whereas SOAR 9 consists of all displayed components.
Figure 12 shows the architectures of both the classical SOAR as
well as the current version, SOAR 9.
The classical SOAR architecture consists of two types of memory:
(i) a symbolic long-term memory that is
encoded with the help of production rules, and
(ii) a short-term (working) memory encoded as a graph structure to allow
the representation of objects with properties as well as relations.
As in other architectures, the working memory of the agent is used for assessing the
agent's current situation. For this it uses the perception
information it receives through its sensors and the information stored
in the long-term memory. The working memory is also responsible for creating motor commands or actions
chosen by the decision procedure module that selects
operators and detects possible impasses.
The decision making process in SOAR
is similar to most other systems: it consists of matching and firing rules that
are a context-dependent representation of knowledge. Their conditions
describe the current situation of the agent and the bodies of the rules define actions
that create structures relevant to the current situation in
the working memory.
Rules in SOAR act primarily as associative memory to retrieve
knowledge relevant to the current situation.
Whereas most decision making algorithms permit only one rule to
fire and so the actual decision making is primarily concerned with which rule to pick,
SOAR allows for rules to fire in parallel, retrieving several pieces of
knowledge at the same time. It is argued that in
uncertain situations with limited knowledge it is difficult to select only one rule,
and better to rely on as much knowledge as possible.
That is why the authors of SOAR introduce additional context-dependent knowledge
for the decision making process. They use operators that act as controls for
selecting and using rules as well as for evaluating and applying operators.
In contrast to the usual understanding of operators in AI, the SOAR operators
are not monolithic data structures, but are distributed across several rules.
This allows a flexible representation of knowledge about operators as well as the
continuous updating of knowledge structures for operators, allowing the redefinition
of operators if circumstances require it (Laird 2012a; Laird 2012b).
If there is not sufficient information for selecting or
applying an operator, an impasse arises and a sub-state is created to resolve that impasse.
To achieve this sub-goal, the same process is started
to select and apply operators. If it is still the case that insufficient information is
available, another sub-state is spawned, and so on.
The classical SOAR has a learning mechanism: chunking (Laird et al. 1986).
The chunking algorithm converts the result from the sub-goal problem solving
into new rules that can be used in the later reasoning process.
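The impasse-substate-chunking loop can be caricatured in a few lines; the state names and the `deliberate` stand-in are invented, and real SOAR sub-states are of course far richer than a single function call.

```python
rules = {}  # state -> operator: the learned (chunked) rule knowledge

def deliberate(state: str) -> str:
    """Stand-in for problem solving within a sub-state."""
    return "operator-for-" + state

def select_operator(state: str) -> str:
    if state in rules:
        return rules[state]          # knowledge available: a rule fires
    # impasse: no rule proposes an operator, so a sub-state is created
    # in which the problem is worked out by deliberation ...
    operator = deliberate(state)
    # ... and chunking converts the sub-goal result into a new rule
    rules[state] = operator
    return operator

first = select_operator("blocked-door")   # resolved via impasse + sub-state
second = select_operator("blocked-door")  # now answered directly by a rule
print(first == second, "blocked-door" in rules)  # True True
```

The key point the sketch preserves is that chunking makes the sub-goal's result directly available, so the same impasse does not recur.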
In SOAR 9, the classical SOAR architecture is extended in two ways:
(i) the authors introduce new learning and memory modules which allow
capturing knowledge more easily, and
(ii) they add non-symbolic knowledge representation as well as associated
processing, learning and memory modules.
SOAR 9 has a reinforcement learning component linked to the procedural knowledge
as well as the working memory (the latter via an appraisal component).
The reinforcement learning in SOAR 9 is rather straightforward. It
adjusts the action selection based on numeric environmental rewards.
Numeric preferences specify the expected
value of an operator for the current state. When an operator is selected, all the
rules which determine the expected values for this operator are updated based on any
new reward this operator achieves as well as the expected future reward its
successor operator might yield. This evaluation of operators is applied across all
goals and sub-goals in the system, allowing a fast identification of good and bad
operators for specific situations.
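A hedged sketch of such a numeric-preference update, written here as a SARSA-style rule over operators (the parameter values are illustrative and this is not SOAR's actual implementation):

```python
from collections import defaultdict

ALPHA, GAMMA = 0.3, 0.9    # illustrative learning rate and discount
pref = defaultdict(float)  # (state, operator) -> numeric preference

def update_preference(state, op, reward, next_state, next_op):
    """Adjust an operator's expected value after it was applied, using
    the observed reward plus the discounted value of the successor."""
    target = reward + GAMMA * pref[(next_state, next_op)]
    pref[(state, op)] += ALPHA * (target - pref[(state, op)])

update_preference("s0", "advance", reward=1.0,
                  next_state="s1", next_op="advance")
print(round(pref[("s0", "advance")], 2))  # 0.3
```

As described above, the same update can be applied across goals and sub-goals, so good and bad operators for specific situations are identified quickly.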
Linked to this reinforcement learning component is the appraisal component, which
links the reinforcement learning with the working memory.
The appraisal component is a first attempt to capture the idea of
emotions in the SOAR cognitive architecture. The idea of emotions in
SOAR is that an agent continuously evaluates the situations it is facing along
multiple dimensions such as goal relevance, goal conduciveness, causality etc.
The evaluations lead to appraisals of how well the goal is met, which in turn
affects the agent's emotions. These emotions
express the intensity of a feeling and thereby work as intrinsic reward for the reinforcement
learning. At present, emotions in SOAR only link the reinforcement
learning and the working memory, but in future the SOAR authors
want to explore the effect of emotions on other modules (Laird 2012a).
The final new component of SOAR 9 focuses on visual imagery. So far, an underlying
assumption has been that knowledge can be expressed and processed by some form of symbolic structure.
This assumption is problematic with respect to visual imagery, where other forms of
data representation seem more appropriate. SOAR therefore tries to model
human visual-spatial reasoning with the help of a long-term (LT) memory containing images that can be retrieved
into the short-term memory. The short-term (ST) memory allows for
the construction and manipulation of images as well as their translation into
symbolic structures. SOAR has been used for a number
of large-scale simulations of human behaviour. The largest of these is
TacAir-Soar (Jones et al. 1993), a SOAR system that models the behaviour of pilots in
beyond-visual-range, tactical air combat. The TacAir-SOAR domain is simulated,
although human pilots may also interact with the SOAR agents through simulators.
According to Jones et al., in TacAir, the architecture was being
pushed to the extreme by forcing the integration of many capabilities that
have been fully or partially demonstrated in SOAR but never combined in a
single system. From an ABSS perspective, TacAir is of interest because, despite its strong focus
on modelling individual fighter agents, coordination between the
different fighters is considered. Cooperation in TacAir occurs between a lead plane and
its wingman. Generally these two planes fly and execute missions together,
using radio communication to coordinate their activity. Coordination is also
driven from air (or ground) control, which informs agents about enemy planes
beyond their local radar range and may order them to intercept specific agents.
Thus, agents in this domain may act according to some pre-specified mission, as
well as recognise and act autonomously to threats. Furthermore, they may be
tasked by other agents to accomplish some specific action (such as an interception).
The coordination and collaboration implementation in TacAir is the most extensive we could
find that uses the SOAR architecture. One reason for the lack of coordination models might be the background
of SOAR in the cognitive sciences. The aim there is primarily to understand the human
brain and the human behaviour (i.e. the individual agent), rather than
social phenomena resulting from the interaction of several agents.
Although social reasoning in SOAR can theoretically be
implemented in the memories and rules, the authors of SOAR do not mention the possibility
in their work. The focus of SOAR can also be seen in Laird (2012a), which
identifies the study of the interaction of the SOAR components as
the next priority. Because of its roots in the cognitive sciences, like other
psychologically and neurologically inspired models, the
affective and normative dimensions are not central to SOAR.
Version 9 of SOAR is available for several operating systems, including Windows,
OS X and Linux, and can be obtained from the SOAR website. This
website also provides two different IDEs for developing SOAR agents as well as
code examples of existing agents, previous versions of SOAR and a large
body of documentation and resources. In addition to these materials, there are
forums and groups for SOAR users as well as a SOAR mailing
list where users can ask for support.
In this paper, we compared 14 agent decision making models, ranging from
conceptual reference models that lacked any suggestions about how they might be implemented, to
complete architectures that have been developed for several operating systems
and are supported with user communities, documentation, tutorials, summer schools, etc. These models
came with different ideas, intentions and assumptions; for example, some of them assume
rational agent behaviour (e.g. BDI), whereas others try to add
“irrational” components such as emotions.
By pointing out specific features, we do not intend to make value judgements about
whether they are advantages or disadvantages, because what
constitutes a (dis)advantage always depends on the problem
or phenomenon to be modelled. For example, consider learning in SOAR. When
attempting to model people improving their use of an interface,
including a learning mechanism is crucial.
However, there are many
applications for which modelling learning is not critical and for which a learning
mechanism might be an unnecessary overload that slows down the simulation or
causes undesired side effects (Jager & Janssen 2003; Byrne 2007).
Tables 2–9 summarize the
models we have reviewed in terms of our dimensions of comparison.
Table 2: Contrasting the Reasoning Architectures (i)

| | Production Rule System | BDI |
| --- | --- | --- |
| Original Focus | information processing, pattern matching | embedded applications in dynamic and real-time environments |
| Main User Community | used by all communities, fundamental for other architectures | agents/ABSS community in general |
| Cognitive Level | reactive agent (production cycle) | reactive and deliberative agents possible, though most implementations do not make use of the deliberation option |
| Architectural Goal Management | goals indirectly expressed by rules | stored as desires, when activated turned into intentions; use of intention stack |
| Symbolic or activation-based? | symbolic | symbolic |
| Affective Level | none | none |
| Social Level | no communication and/or inclusion of complex social structures | no communication and/or inclusion of complex social structures in the original model |
| Norm Consideration | no explicit norm consideration | none in the original model |
| Learning | none | none in the original model |
| Supported Operating Systems | n.a. | n.a. |
| Resources | general literature on production rule systems, specific resources for existing implementations mentioned in Sec. 3 | general literature on BDI and multiple resources for mentioned implementations; for sample implementations see end of Sec. 4.1 |
Table 3: Contrasting the Reasoning Architectures (ii)

| | eBDI | BOID |
| --- | --- | --- |
| Original Focus | adding emotions to BDI | adding social obligations to BDI |
| Main User Community | no large community at this point | normative MAS community |
| Cognitive Level | decision cycle with deliberation process possibility | decision cycle with deliberation process possibility |
| Architectural Goal Management | when activated turned into intentions; use of intention stack | when activated (under consideration of internalized social obligations) turned into intentions; use of intention stack |
| Symbolic or activation-based? | symbolic | symbolic |
| Affective Level | emotions considered via Emotional State Manager | none |
| Social Level | no communication and/or inclusion of complex social structures | social norms are considered in the form of obligations, no communication and/or inclusion of complex social structures |
| Norm Consideration | none | social norms considered in the form of obligations deriving from them |
| Learning | none | none |
| Supported Operating Systems | n.a. | n.a. |
| Resources | in particular Jiang (2007) | few scientific articles (see Sec. 4.3) |
Table 4: Contrasting the Reasoning Architectures (iii)

| | BRIDGE |
| --- | --- |
| Original Focus | agents with own and social awareness as well as reasoning update for modelling decisions in the policy context |
| Main User Community | so far little user community (mainly SEMIRA project), possibly normative MAS community |
| Cognitive Level | decision cycle with deliberation process (concurrent processing of input) as well as short-cuts (based on response factors) |
| Architectural Goal Management | ordered list of candidate goals for which plans are generated, order can be overridden by response factors |
| Symbolic or activation-based? | symbolic |
| Affective Level | not explicitly part of the architecture, but emotions could be represented using the EGO component |
| Social Level | self-awareness (distinction of self and others), consideration of culture and the need for social interaction |
| Norm Consideration | architecture explicitly developed to reason about policies and policy-aware agents; consideration of (social) norms, which can depend on culture; implementation of norms in the form of obligations |
| Learning | none |
| Supported Operating Systems | intended implementation in a Repast environment; Repast is available for Windows, Mac as well as Unix |
| Resources | few scientific articles (see Sec. 4.4) |
Table 5: Contrasting the Reasoning Architectures (iv)

| | Del. Norm. Agents | EMIL-A |
|---|---|---|
| Original Focus | social norms in decision making | norm innovation and internalization |
| Main User Community | normative MAS community | mainly ABSS community |
| Cognitive Level | deliberative agents, separate norm-internalization cycle | general deliberation and deliberation-based norm-internalization cycle as well as stimuli short-cuts |
| Architectural Goal Management | similar to BDI, norms can influence intention selection | normative board |
| Symbolic or activation-based? | symbolic | both |
| Affective Level | none | none |
| Social Level | agent communication considered; inclusion of the social norm concept; distinction of oneself and others | agent communication considered; inclusion of the social norm concept; distinction of oneself and others |
| Norm Consideration | norms considered | norms considered |
| Learning | learning of norm-specific intentions mentioned | learning of norms and related change of intentions considered |
| Supported Operating Systems | n.a. | EMIL-S implementation in Repast available; Repast is available for Windows, Mac and Unix |
| Resources | few scientific articles (see Sec. 5.1) | websites of project partners (http://cfpm.org/emil/) and project deliverables |
Table 6: Contrasting the Reasoning Architectures (v)

| | NoA | PECS |
|---|---|---|
| Original Focus | norms in agent decision making | consideration of physis, emotions, cognition and social status; meta-model |
| Main User Community | normative MAS community | so far little community |
| Cognitive Level | deliberative decision cycle which is externally & internally motivated | decision process with both reactive as well as deliberation option |
| Architectural Goal Management | sub-goal structure and activity stack | not described |
| Symbolic or activation-based? | symbolic | symbolic |
| Affective Level | none | mentioned, however no explicit description |
| Social Level | agent communication considered; inclusion of the social norm concept; distinction of oneself and others | inclusion of some social concepts such as communication, but no implementation specifications |
| Norm Consideration | norms considered; utilization of broader norm definition | norms not considered |
| Learning | conceptually mentioned, but not elaborated on in detail | very little; static learning using pre-defined rules in transition functions possible |
| Supported Operating Systems | n.a. | n.a. (meta-model) |
| Resources | mainly scientific papers, especially Kollingbaum (2004) | few scientific papers (see Sec. 6.1) |
Table 7: Contrasting the Reasoning Architectures (vi)

| | Consumat | MHP |
|---|---|---|
| Original Focus | study of consumer behaviour and market dynamics | prediction of performance times of humans |
| Main User Community | ABSS, social science, marketing | primarily psychology and HCI |
| Cognitive Level | deliberation and reactive decision making, as well as mixed heuristics possible | |
| Architectural Goal Management | mental maps | none |
| Symbolic or activation-based? | both | both |
| Affective Level | values and morality considered, emotions not directly mentioned | no specification of possible implementations |
| Social Level | culture considered as one input parameter; main social focus on success comparison with peers | not considered |
| Norm Consideration | (non-social) norms and institutions mentioned as input for the agent behavioural model | not considered |
| Learning | learning of decision heuristics based on success of peers, inclusion of uncertainty metric | learning can be indirectly added in the processing time metrics; no learning on rules possible |
| Supported Operating Systems | n.a. | n.a. |
| Resources | mainly scientific applications, in particular Jager (2000) | some scientific publications |
Table 8: Contrasting the Reasoning Architectures (vii)

| | CLARION | ACT-R/PM |
|---|---|---|
| Original Focus | study of cognitive agents, special focus on learning | modelling of higher-level cognition (esp. memory and problem-solving) |
| Main User Community | primarily cognitive and social psychology, as well as artificial intelligence | primarily psychology and HCI |
| Cognitive Level | cognitive architecture relying on production rule based decision cycle | cognitive architecture with underlying production cycle (serial) |
| Architectural Goal Management | goal stack | goal stack |
| Symbolic or activation-based? | both | both |
| Affective Level | not explicitly mentioned | not mentioned |
| Social Level | not considered | not main focus; one example (Best & Lebiere 2006) using the notion of teams and common team goals |
| Norm Consideration | not considered | not considered |
| Learning | specific focus; learning both top-down as well as bottom-up | yes, for declarative and procedural memories |
| Supported Operating Systems | Windows, Mac, Linux | Windows, Mac OS X 10.5 or newer, (UNIX) |
| Resources | website with examples and scientific publications (http://www.cogsci.rpi.edu/~rsun/clarion.html) | extensive tutorial materials & summer school (http://act-r.psy.cmu.edu/actr6/) |
Table 9: Contrasting the Reasoning Architectures (viii)

| | SOAR |
|---|---|
| Original Focus | problem solving and learning |
| Main User Community | mainly HCI and artificial intelligence |
| Cognitive Level | cognitive decision cycle |
| Architectural Goal Management | universal sub-goaling |
| Symbolic or activation-based? | symbolic; non-symbolic knowledge representation added in SOAR 9 |
| Affective Level | not strongly considered |
| Social Level | concept of teams and team coordination implemented in TacAir-Soar |
| Norm Consideration | not considered |
| Learning | yes, for all long-term memory components; by chunking, reinforcement learning (with SOAR 9) |
| Supported Operating Systems | Windows, Mac OS, Unix |
| Resources | website (http://sitemaker.umich.edu/soar/home) with documentation and examples as well as mailing list |
- We can observe from the tables that the 'cognitive' aspect of the models
ranges from production-rule systems,
via deliberation ideas and heuristics, to complex cognitive architectures.
Which one is chosen can be quite important in designing social simulations.
For example, Wooldridge & Jennings (1995) argue from a (multi-)agent systems perspective
that deliberative agents are particularly useful for
planning and symbolic reasoning, whereas reactive agents allow for a fast
response to changes in the environment.
Reactive architectures have the advantage of being easy to program and
understand and seem the obvious choice for simple simulations.
However, they require that agents are
capable of mapping local knowledge to appropriate actions and that the agents have
pre-defined rules to cater for all possible situations. Deliberative architectures can
be more flexible, but at the cost of computational complexity.
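The trade-off can be illustrated with a minimal sketch (all rules, states and goals here are invented for illustration and do not come from any of the surveyed architectures): a reactive agent maps percepts directly to actions via pre-defined rules, while a deliberative agent searches over action sequences, gaining flexibility at a computational cost.

```python
# Minimal contrast between a reactive and a deliberative agent.
# All rules, states and goals here are illustrative inventions.

from collections import deque

def reactive_agent(percept):
    """Map the current percept directly to an action via fixed rules."""
    rules = {"obstacle": "turn", "clear": "forward", "goal_visible": "approach"}
    return rules.get(percept, "wait")  # every situation needs a pre-defined rule

def deliberative_agent(state, goal, actions):
    """Breadth-first search over action sequences: flexible but costlier."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        current, plan = frontier.popleft()
        if current == goal:
            return plan
        for name, effect in actions.items():
            nxt = effect(current)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None  # no plan found

plan = deliberative_agent(0, 3, {"inc": lambda s: s + 1, "dec": lambda s: s - 1})
print(reactive_agent("obstacle"))  # -> turn
print(plan)                        # -> ['inc', 'inc', 'inc']
```

The reactive agent answers in constant time but fails silently ("wait") on unforeseen percepts, while the deliberative agent copes with novel goals at the price of a search.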
As Dolan et al. (2012) point out, humans sometimes make decisions neither by
following simple rules nor by lengthy deliberation, but rather by taking into account the source of the information.
Moreover, human behaviour often results from habitual patterns rather than
explicit decision making. For ABSS dealing
with habitual human behaviour, hybrid approaches that allow for heuristics
as well as deliberation and reactive production rules might be more suitable.
A typical example of such a simulation is the energy behaviours of
households. Although potentially interested in energy saving, households often
perform tasks such as cooking or washing without explicitly thinking about the energy they require, i.e.
the task is not associated with energy to them, but is “just performed”
without any deliberation. This cannot easily be captured by either
deliberative or rule-based systems, because these architectures are
founded on the assumption that behaviour is intentional.
Among those architectures we surveyed, two adopted a hybrid approach:
Consumat and BRIDGE. Consumat allowed for
modelling habitual behaviour by introducing five
heuristics based on uncertainty and cognitive effort that can be utilised
instead of complete deliberation (see Figure 9).
BRIDGE, like Consumat, introduces the idea of the basic needs of the agent.
These needs can overrule any deliberate decision-making process via the BRIDGE
response component, to ensure that agents can react quickly when needed (e.g.
in life-threatening situations).
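The idea of switching between habit and deliberation can be sketched as follows. This is a loose illustration inspired by Consumat's use of satisfaction and uncertainty to select among its heuristics; the thresholds and mode names below are invented, not taken from the original specification.

```python
# Hedged sketch of a hybrid decision-mode selector, loosely inspired by
# Consumat's satisfaction/uncertainty distinction. Thresholds and mode
# names are illustrative inventions.

def choose_mode(satisfaction, uncertainty, sat_min=0.5, unc_max=0.3):
    """Pick a decision heuristic from current satisfaction and uncertainty."""
    if satisfaction >= sat_min and uncertainty <= unc_max:
        return "repetition"        # habitual: repeat last behaviour cheaply
    if satisfaction >= sat_min:
        return "imitation"         # uncertain but satisfied: copy peers
    if uncertainty <= unc_max:
        return "optimisation"      # unsatisfied but certain: deliberate fully
    return "social_comparison"     # unsatisfied and uncertain: compare with peers

print(choose_mode(0.8, 0.1))  # -> repetition (e.g. cooking dinner as usual)
print(choose_mode(0.2, 0.9))  # -> social_comparison
```

A selector of this kind lets an agent "just perform" a habitual task without deliberation, and fall back to costlier reasoning only when satisfaction drops or uncertainty rises.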
At the start of our survey, we presented production-rule systems as a “simple”
form of agent decision making and then reviewed increasingly complicated
models, ending with some psychologically and neurologically inspired
designs. Nevertheless, many of the more complicated
architectures such as ACT-R/PM still employ production rules as a basic
component for at least some part of the decision making process. This is
not surprising if one considers production rule systems in general.
As pointed out in Section 3, it is possible to
design rules to represent almost any feature of a system.
However, one may then face a problem of complexity. To avoid this, the more
complicated architectures introduce new components that encapsulate ideas,
instead of expressing them with a large number of rules. With regard to
production rules, a further point worth noting is that, as mentioned in Section
4.1, although many architectures allow for the possibility of deliberation
(the intentional weighing of options in order to arrive at an action to be performed),
often such deliberation is not in fact employed in practice.
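As a reminder of how compactly production rules can express behaviour, and how quickly such rule sets grow, here is a minimal forward-chaining interpreter; the rules themselves are invented for illustration.

```python
# Minimal forward-chaining production system: a rule fires whenever all
# of its condition facts are in working memory, adding its conclusion,
# until no rule produces anything new. The rules are illustrative.

def forward_chain(facts, rules):
    """Apply (conditions, conclusion) rules to a fact set until fixpoint."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("hungry", "food_available"), "eat"),
    (("eat",), "energy_restored"),
]
result = forward_chain({"hungry", "food_available"}, rules)
print(sorted(result))  # -> ['eat', 'energy_restored', 'food_available', 'hungry']
```

Almost any behaviour can be encoded this way, but covering every situation means adding a rule for it, which is exactly the complexity problem the more elaborate architectures try to avoid by encapsulating ideas in dedicated components.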
- Only a small number of the architectures reviewed considered emotions. These include
PECS, eBDI and to some extent BRIDGE. The first of these is a meta-model,
and little specific information can be found about actual implementations, so it
is hard to judge to what extent the affective level is covered. eBDI has one implementation,
but it is not widely used. BRIDGE does not explicitly
address emotions, but they could be implemented via the ego component.
BRIDGE has only been applied once and this did not include emotions.
In summary, few
implementations and architectures using emotions can be found, despite the fact
that there is a body of work focussing on emotions in (BDI)
agent reasoning (Adam 2007; Steunebrink et al. 2010). We conclude that
when it comes to the affective level, it might be beneficial to use some of the
existing theoretical work to inform agent architectures.
- Moving on to the social level, there is a clear divide between the psychologically and
neurologically inspired architectures and the rest: the former have very little focus on
social aspects (TacAir SOAR being the one exception), whereas in the latter
group some aspects can be found.
This should not come as a surprise, as psychologically and neurologically inspired
models tend to focus on representing the human brain, whereas the majority of
the other models have been designed with the intention of analysing social dynamics
and the global-level patterns emerging from the
interactions of agents at a local level. That is why
the models presented in Section 7 seem
more suited for analysing the decision making processes of single agents,
and the other models for studying social phenomena.
However, even among these latter models, there is variation in
the extent to which they include social concepts,
from the mere understanding of the notions of oneself and others to
the inclusion of communication, culture and some form of understanding of teams
and coordination. Among those surveyed, there were five using the notion of
oneself, others and group membership, namely BRIDGE, Deliberative Normative
Agents, EMIL-A, NoA and Consumat, although Consumat viewed other agents only
as a means for performance comparison.
All of these also consider (social) norms, a concept that has strong roots in the social domain.
Nevertheless, the social aspects considered in these
architectures remain rather simple and, although coordination is, for
example, mentioned in TacAir-Soar, we are far from what
Helbing & Balietti (2011, p. 4) describe as social models, that is, models that assume
that individuals respond to their own and other people's expectations.
A number of
ideas about how this could be done have
not yet got beyond the stage of exploration and conceptual development,
although they have the potential to make a major impression on decision making
models. None of the agent decision making models reviewed above implements a Theory of Mind (ToM),
although it can be argued that understanding other agents' intentions is
crucial for cooperative action. Hiatt & Trafton (2010) suggest how ToM might be added
to ACT-R, but with the aim of testing alternative theories about how ToM
develops in humans, rather than in the context of designing agent models.
Klatt et al. (2011) implement a simple ToM using a decision-theoretic approach and
show how two agents can negotiate about a joint enterprise by reasoning about
the beliefs and behaviour of the other agent. Their implementation is based on
PsychSim (Pynadath & Marsella 2005), an agent-based simulation tool for modelling
interactions and influence.
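The core of a first-order ToM can be conveyed in a few lines. The following toy sketch is not PsychSim's actual mechanism or API; the behaviour rule, belief contents and names are all invented to illustrate the idea of predicting another agent's action from a model of *its* beliefs rather than from one's own.

```python
# Toy illustration of a first-order Theory of Mind: agent A keeps a model
# of agent B's beliefs and predicts B's action from that model rather than
# from A's own view of the world. All names and rules are invented.

def predict_action(beliefs):
    """What an agent holding these beliefs would do (shared behaviour rule)."""
    return "cooperate" if beliefs.get("partner_reliable", False) else "defect"

a_own_beliefs = {"partner_reliable": True}
a_model_of_b = {"partner_reliable": False}   # A thinks B distrusts it

# Without ToM, A projects its own beliefs onto B:
print(predict_action(a_own_beliefs))  # -> cooperate
# With ToM, A predicts B's action from B's (modelled) beliefs:
print(predict_action(a_model_of_b))   # -> defect
```

The crucial ingredient is the second belief store: the agent reasons over a representation of the other's mental state, which may differ from its own, exactly the capacity the ToM concept requires.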
- We-intentionality, collective intentionality, or shared intentionality,
refers to collaborative interactions in which participants
have a shared goal and coordinate their actions in pursuing
that goal. This is more than just understanding others' intentions, and
involves recognising other agents' understanding of one's own intentions. One
important aspect of we-intentionality is 'shared attention': attending to some
object while recognising that the other is doing the same, for example, one
agent pointing out an object in both agents' field of view.
The defining characteristic of we-intentionality is that the goals and
intentions of each participant must include in their content something of the
goals and the intentions of the other (Elsenbroich & Gilbert 2013). One way of
achieving this is to enable the agents to negotiate an agreement on a shared
goal, and allow them to merge their plans and declare a commitment to the goal.
Such a negotiation would be relatively easy to add to a BDI architecture, but
it would differ from what people usually do, resembling a commercial contract
more than a shared commitment to a joint plan of action, because of the very
explicit nature of the negotiation. In human societies, shared intentionality
emerges from joint action. A revealing example is the emergence of a common
vocabulary or lexicon between actors for discussing some novel situation or
object, as discussed in Salgado (2012). In cases such as this, the joint
intentionality is not itself intentional, but rather the product of an inbuilt
disposition towards we-intentionality.
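The explicit-negotiation route mentioned above can be sketched in the spirit of a BDI extension. Everything here — the agents' desire lists, plan libraries and the naive merge — is an invented illustration, and its contract-like explicitness is precisely the limitation discussed in the text.

```python
# Sketch of explicit goal negotiation as a hypothetical BDI extension:
# agents adopt the first goal both desire and merge their individual
# plans into a joint one. All agent internals are invented.

def negotiate_shared_goal(desires_a, desires_b):
    """Return the first goal (in A's preference order) that both desire."""
    for goal in desires_a:
        if goal in desires_b:
            return goal
    return None

def merge_plans(goal, plan_library_a, plan_library_b):
    """Naively concatenate both agents' plan steps for the shared goal."""
    steps_a = plan_library_a.get(goal, [])
    steps_b = plan_library_b.get(goal, [])
    return [("A", s) for s in steps_a] + [("B", s) for s in steps_b]

goal = negotiate_shared_goal(["build_shelter", "find_food"],
                             ["find_food", "build_shelter"])
joint = merge_plans(goal, {"build_shelter": ["gather_wood"]},
                          {"build_shelter": ["raise_frame"]})
print(goal)   # -> build_shelter
print(joint)  # -> [('A', 'gather_wood'), ('B', 'raise_frame')]
```

Note what is missing: nothing in this exchange makes each agent's goal *contain* the other's goals and intentions, so it captures a contract rather than genuine we-intentionality.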
- All the models we have reviewed in previous sections have made a rigid
distinction between the agent's 'mind', as represented by a set of
computational modules, including sensor and output generators, and a
surrounding environment. An intriguing idea, the 'Extended Mind
Hypothesis', proposes that the 'mind' is not confined, as Clark & Chalmers (1998) put
it, “to the demarcations of skin and skull”, but can extend to items in the
agent's environment, for example, a notebook, a computer, one's fingers when
used for counting and so on. Furthermore, the extension might include the minds
of other agents, for example when one agent reminds another of something that
the first has forgotten (Froese et al. 2013). This perspective has been developed in cognitive
science as 'enactivism', the central tenet of which is that “Organisms do not passively receive information from their environments, which they then translate into internal representations. Natural cognitive systems... participate in the generation of meaning... engaging in transformational and not merely informational interactions: they enact a world” (Paolo et al. 2014).
- We found that two types of norms were considered by the architectures:
social norms, and legal norms and policies. Most of the six architectures
considering norms (BOID, BRIDGE, Deliberative Normative Agents, EMIL-A, NoA and
Consumat) treated them in terms of obligations (in particular BOID
and BRIDGE) and required all norms to be specified and known to the agent at the
beginning of a simulation (BOID, BRIDGE, Deliberative Normative Agents,
Consumat). Two of the architectures (EMIL-A, NoA) allowed for an active learning
of norms, which might make them particularly suitable for research
questions dealing with the evolution and spreading of norms.
- There is a wide variation in learning approaches among the architectures, ranging from
simple updating of variable values in the agents' decision rules based on
environmental cues (e.g. MHP), via learning about successful decision
strategies (e.g. Consumat) to the learning of new rules for
decision making (via pre-defined transition structures, e.g. PECS, or
without such guidance, e.g. EMIL-A). Excepting the psychologically and
neurologically inspired models, most of the learning focussed on the
learning of norms30 or the
learning of better decision heuristic selections. The architectures having the
most elaborate models of learning were the psychologically
and neurologically inspired models, including architectures such as CLARION
with its specific focus on learning, and SOAR which in its latest version allows
several learning approaches to be used within one architecture.
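The simplest end of this range — updating values attached to decision options from experienced success — can be sketched generically. The learning rate and option names below are invented and not tied to any one of the surveyed architectures.

```python
# Generic sketch of experience-based value updating for decision options,
# the simplest form of learning mentioned above. Parameters are invented.

def update_value(current, reward, learning_rate=0.1):
    """Move the stored value of an option towards the observed reward."""
    return current + learning_rate * (reward - current)

values = {"heuristic_A": 0.0, "heuristic_B": 0.0}
for _ in range(50):                       # heuristic_A keeps paying off more
    values["heuristic_A"] = update_value(values["heuristic_A"], 1.0)
    values["heuristic_B"] = update_value(values["heuristic_B"], 0.2)

best = max(values, key=values.get)
print(best)  # -> heuristic_A
```

Architectures such as Consumat learn at roughly this level (which heuristic to prefer), whereas CLARION and SOAR additionally learn new rules and memory structures rather than merely re-weighting existing options.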
- In our survey we have often distinguished between the psychologically and
neurologically inspired models and the others, which is why for the
final part of this “lessons learnt” section, we take a closer look at this
distinction. One difference we have not already commented on is the contrast in supporting infrastructure.
ACT-R/PM and SOAR are embedded into complete suites that are provided for
different operating systems on their own websites (accompanied by extensive
tutorial material); none of the other architectures offers this,
making their deployment more difficult.
As well as all the differences, a number of conceptual similarities
between the architectures can be found. For example, SOAR and BDI
share features, as described in detail by Tambe (quoted in Georgeff et al. 1999):
SOAR is based on operators, which are similar to reactive plans, and states (which include its highest-level goals and beliefs about its environment). Operators are qualified by preconditions which help select operators for execution based on an agent's current state. Selecting high-level operators for execution leads to sub-goals and thus a hierarchical expansion of operators ensues. Selected operators are reconsidered if their termination conditions match the state. While this abstract description ignores significant aspects of the SOAR architecture, such as (i) its meta-level reasoning layer, and (ii) its highly optimised rule-based implementation layer, it suffices for defining an abstract mapping between BDI architectures and SOAR as follows:
- intentions are selected operators in SOAR;
- beliefs are included in the current state in SOAR;
- desires are goals (including those generated from sub-goaled operators); and
- commitment strategies are strategies for defining operator termination conditions. For instance, operators may be terminated only if they are achieved, unachievable or irrelevant.
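Tambe's mapping can be made concrete with a small sketch: an operator carries preconditions (governing selection, cf. beliefs and the current state) and a termination condition (cf. a commitment strategy). This is an illustrative toy, not SOAR's actual implementation; the operator and state contents are invented.

```python
# Toy rendering of the BDI <-> SOAR mapping sketched above: an operator is
# selected when its preconditions hold (becoming an 'intention') and
# dropped when its termination condition matches the state. Illustrative only.

class Operator:
    def __init__(self, name, precondition, termination):
        self.name = name
        self.precondition = precondition    # when the operator may be selected
        self.termination = termination      # when the 'intention' is dropped

def decision_cycle(state, operators, selected=None):
    """One cycle: drop a finished operator, otherwise keep or select one."""
    if selected and selected.termination(state):
        selected = None                     # commitment released
    if selected is None:
        for op in operators:
            if op.precondition(state):
                selected = op               # operator selection ~ intention
                break
    return selected

ops = [Operator("refuel",
                precondition=lambda s: s["fuel"] < 10,
                termination=lambda s: s["fuel"] >= 10)]
current = decision_cycle({"fuel": 5}, ops)
print(current.name)                         # -> refuel
current = decision_cycle({"fuel": 20}, ops, selected=current)
print(current)                              # -> None
```

Read through BDI glasses, the selected operator is the current intention and its termination test encodes the "achieved, unachievable or irrelevant" commitment strategy.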
- Tambe also points out that both SOAR and BDI use similar ideas of sub-goaling and have even been applied to similar large-scale application examples such as air-combat simulations. He also highlights differences between the two approaches and explains that, in his view, they complement each other. Thus, whereas SOAR is typically used by cognitive scientists to understand theory by taking an empirical approach to architecture design, BDI is a favourite tool of logicians and philosophers who typically build models only after understanding their theoretical underpinnings. Based on these observations Tambe concludes that he sees opportunities for cross-fertilisation between the communities, but that, due to a lack of communication between them, both “could end up reinventing each others' work in different disguises”. The conclusion that, despite their different origins and communities, cognitive and non-cognitive models can be mutually informative is the primary lesson of this paper.
Humans are not simple machines, which is why modelling them and their decision
making in an ABSS is a difficult task. As with every model, an ABSS is a simplification
of the system it is supposed to represent. It is therefore the model designer's
task to decide which aspects of the real system to include in the model and
which to leave out. Given the complex nature of human decision making, the
decision about what to model and how to model it is a major challenge.
In this paper we have presented 14 different agent decision making architectures and discussed their aims, assumptions and suitability for modelling agents in computational simulations.
We started by reviewing production-rule
systems as a 'simple' form of agent decision making and then increased
the complexity of the architectures presented, ending with some psychologically and neurologically inspired
designs. Despite this increase in complexity, many of the more complicated architectures such as ACT-R/PM still employed production rules as a basic component, for at least some parts of the decision making process.
When comparing the different architectures, it became apparent that the extent to which different dimensions of decision making are covered varied greatly. The differences have been described, sorted by dimensions, in the previous subsections and are summarized in Tables 2-9.
- What becomes apparent from these tables is that issues not yet receiving much attention include a better representation of the Theory of Mind, the inclusion of We-intentionality and a consideration of the extended mind. Other interesting issues hardly covered are concepts such as priming (our acts are often influenced by subconscious cues) and awareness (Dolan et al. 2012). We expect that future agent architectures will gradually come to include these more cognitively realistic features, as well as allowing agents to develop a collective intelligence that is more than the sum of the cognitions of the individual agents.
Architectures found but not described (in alphabetical order):
- 4CAPS, developed at Carnegie Mellon University under Marcel A. Just, http://www.ccbi.cmu.edu/4CAPS/index.html
- Apex, developed under Michael Freed at NASA Ames Research Center, http://ti.arc.nasa.gov/m/pub-archive/1068h/1068%20(Freed).pdf
- CHREST, developed under Fernand Gobet at Brunel University and Peter C. Lane at the University of Hertfordshire, http://chrest.info/
- developed under Ron Sun at Rensselaer Polytechnic Institute and University of Missouri, http://www.cogsci.rpi.edu/~rsun/clarion.html
- Commercial Software, developed by CHI Systems Inc. http://www.hf.faa.gov/workbenchtools/default.aspx?rPage=Tooldetails&subCatId=29&toolID=5
- CoJACK, developed by the AOS group, http://aosgrp.com/products/cojack/
- Copycat, developed by Douglas Hofstadter and Melanie Mitchell at Indiana University, http://www.jimdavies.org/summaries/hofstadter1995.html
- DUAL, developed under Boicho Kokinov at the New Bulgarian University, http://alexpetrov.com/proj/dual/
- EPIC, developed at the University of Michigan under David E. Kieras and David E. Meyer, http://web.eecs.umich.edu/~kieras/epic.html
- FORR, developed by Susan L. Epstein at The City University of New York, http://www.compsci.hunter.cuny.edu/~epstein/html/forr.html
- GAIuS, developed by Sevak Avakians, http://www.gaius.com-about.com/
- ICARUS, developed at Stanford University, http://www.isle.org/~langley/papers/icarus.aaai06.pdf
- LICAI/CoLiDeS, developed by Muneo Kitajima at National Institute of Advanced Industrial Science and Technology (AIST), http://kjs.nagaokaut.ac.jp/mkitajima/English/Project(E)/CognitiveModeling(E)/LICAI(E)/LICAI(E).html, http://kjs.nagaokaut.ac.jp/mkitajima/CognitiveModeling/WebNavigationDemo/CoLiDeSTopPage.html
- LIDA, developed under Stan Franklin at the University of Memphis, http://ccrg.cs.memphis.edu/
- developed by Armstrong Laboratory, Logistics Research Division, http://www.hf.faa.gov/workbenchtools/default.aspx?rPage=Tooldetails&subCatId=29&toolID=193
- PREACT, developed under Dr. Norm Geddes at Applied Systems Intelligence, http://www.asinc.com/inside-preact/
- Prodigy, developed by Veloso et al., http://cogarch.org/index.php/Prodigy/Properties
- PSI, developed under Dietrich Dörner at the Otto-Friedrich University in Bamberg, http://www.cognitive-ai.com/page2/page2.html
- R-CAST, developed at the Pennsylvania State University, http://agentlab.psu.edu/
- Society of Mind, proposed by Marvin Minsky (with The Emotion Machine as its successor), http://web.media.mit.edu/~push/ExaminingSOM.html
1 The Theory of Mind is based on the idea that agents can attribute mental states such as beliefs, intents, desires, knowledge, etc. to themselves and others and also can understand that others can have beliefs, desires, and intentions that are different from their own.
2 An example where emotions can play a vital role are emergency settings. Strong emotions such as fear can change the behaviour of agents in the emergency situation, resulting in potential changes to possible rescue scenarios. Thus, ABSS for modelling rescue scenarios might need to account for emotions and their implications.
3 Symbolic systems are systems that use symbols to communicate and to represent information. Examples of symbolic systems include natural language (encapsulation of information with the help of letters), programming languages and mathematical logic. The symbolic systems presented in this paper mainly use the latter two.
4 Production rule systems typically do not elaborate on how the perceived information is translated into facts.
5 As well as forward chaining (or data-driven) inference, backward-chaining is possible. Backward-chaining works towards a final state by looking at the working memory to see which sub-goals need to be fulfilled and infers backwards from the final stage to its precondition, back from this precondition to its precondition, and so on.
9 BOID assumes that agents are aware of all social obligations, although they only internalize the ones they want to conform with.
11 The EMIL-A architecture has some features of a cognitive architecture, but due to its strong focus on norms and norm internalization we group it in this section on normative models.
12 Similar to BOID, the deliberate normative agents architecture draws strongly on the BDI idea. However, according to Dignum et al. (2000), the BDI architecture is not necessarily required. Any “cognitive” agent architecture that accounts for the representation of mental attitudes could be employed.
16 As with other normative architectures, NoA distinguishes between norms that are external to the agent and norms that the agent has internalized. It is assumed that the agent wants to fulfil all its internalized norms, whereas the external ones are considered as candidate norms for norm internalization.
17 Kollingbaum (2004) uses a Rete algorithm—a pattern matching algorithm often used to implement production rule systems, which checks each rule against the known facts in the knowledge base—for the activation and deactivation of plans and norms.
18 The plan is not completely deleted. The agent can activate it again whenever it wants to without having to relearn the norm (e.g. if it obtains the belief that its interaction partner acts honestly).
19 Personal needs specify how well a specific item satisfies the needs of an agent, social needs express an agent's need to belong to a neighbourhood or group of agents by consuming similar items, and the status need implies that an agent might gain satisfaction from possessing more of a specific item than its neighbours.
22 Accessibility here refers to the direct and immediate availability of mental content for operations.
23 A neural network is an interconnected group of natural or artificial neurons that uses a mathematical or computational model for information processing based on a connectionist approach to computation. In most cases a neural network can change its structure based on external or internal information that flows through the network.
24 In Q-learning an agent tries to learn the optimal decision or knowledge from its history of interaction with the environment. 'History' refers to a triple of state, action and reward at a given decision point.
30 This observation might be a result of specifically choosing normative models as one set of models for our analysis.
ADAM, C. (2007). Emotions: from psychological theories to logical formalization and implementation in a BDI agent. Ph.D. thesis, Institut de Recherche en Informatique de Toulouse.
ANDERSON, J. R. (1983). The Architecture of Cognition. Lawrence Erlbaum Associates, Inc., Publishers.
ANDRIGHETTO, G., CAMPENNÌ, M., CONTE, R. & PAOLUCCI, M. (2007a). On the immergence of norms: a normative agent architecture. In: AAAI Symposium, Social and Organizational Aspects of Intelligence. http://www.aaai.org/Papers/Symposia/Fall/2007/FS-07-04/FS07-04-003.pdf.
ANDRIGHETTO, G., CONTE, R., TURRINI, P. & PAOLUCCI, M. (2007b). Emergence in the loop: Simulating the two way dynamics of norm innovation. In: Normative Multi-agent Systems (BOELLA, G., VAN DER TORRE, L. & VERHAGEN, H., eds.), no. 07122 in Dagstuhl Seminar Proceedings. http://drops.dagstuhl.de/opus/volltexte/2007/907/pdf/07122.ConteRosaria.Paper.907.pdf.
ANDRIGHETTO, G., VILLATORO, D. & CONTE, R. (2010). Norm internalization in artificial societies. AI Communications 23(4), 325-339. http://www.iiia.csic.es/files/pdfs/AI%20Communications%2023%20%282010%29%20325%E2%80%93339.pdf
AXELROD, R. & TESFATSION, L. (2006). A guide for newcomers to agent-based modeling in the social sciences. In: Handbook of Computational Economics, Vol. 2: Agent-Based Computational Economics (TESFATSION, L. & JUDD, K. L., eds.), chap. Appendix A. Elsevier, pp. 1647-1659. http://www.econ.iastate.edu/tesfatsi/GuidetoABM.pdf.
BEST, B. J. & LEBIERE, C. (2006). Cognitive agents interacting in real and virtual worlds. In: Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation (SUN, R., ed.). Cambridge University Press, pp. 186-218. http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/12/622SDOC4694.pdf.
BOELLA, G. & VAN DER TORRE, L. (2003). BDI and BOID argumentation. In: Proceedings of the IJCAI Workshop on Computational Models of Natural Argument. http://icr.uni.lu/leonvandertorre/papers/cmna03.pdf.
BOELLA, G., VAN DER TORRE, L. & VERHAGEN, H. (2007). Introduction to normative multiagent systems. In: Normative Multi-agent Systems (BOELLA, G., VAN DER TORRE, L. & VERHAGEN, H., eds.), no. 07122 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany. http://drops.dagstuhl.de/opus/volltexte/2007/918/pdf/07122.VerhagenHarko.Paper.918.pdf.
BOISSIER, O. & GÂTEAU, B. (2007). Normative multi-agent organizations: Modeling, support and control, draft version. In: Normative Multi-agent Systems (BOELLA, G., VAN DER TORRE, L. & VERHAGEN, H., eds.), no. 07122 in Dagstuhl Seminar Proceedings. http://drops.dagstuhl.de/opus/volltexte/2007/902/pdf/07122.BoissierOlivier.Paper.902.pdf.
BORDINI, R. H., HÜBNER, J. F. & WOOLDRIDGE, M. (2007). Programming Multi-Agent Systems in AgentSpeak using Jason. Wiley Series in Agent Technology. John Wiley & Sons.
BRATMAN, M. E. (1987). Intention, Plans and Practical Reason. Center for the Study of Language and Information Publications, Stanford University. ISBN 1-57586-192-5.
BROERSEN, J., DASTANI, M., HULSTIJN, J., HUANG, Z. & VAN DER TORRE, L. (2001). The boid architecture: conflicts between beliefs, obligations, intentions and desires. In: Proceedings of the fifth international conference on Autonomous agents. ACM. http://www.staff.science.uu.nl/~broer110/Papers/agents2001.ps. [doi:10.1145/375735.375766]
BROERSEN, J., DASTANI, M., HULSTIJN, J. & VAN DER TORRE, L. (2002). Goal generation in the BOID architecture. Cognitive Science Quarterly 2(3-4), 428-447. http://www.staff.science.uu.nl/~dasta101/publication/goalgeneration.ps.
BYRNE, M. D. (2000). The ACT-R/PM project. In: Simulating Human agents: Papers from the 2000 AAAI Fall Symposium. AAAI Press, pp. 1-3. http://www.aaai.org/Papers/Symposia/Fall/2000/FS-00-03/FS00-03-001.pdf.
BYRNE, M. D. (2001). ACT-R/PM and menu selection: Applying a cognitive architecture to HCI. International Journal of Human-Computer Studies 55, 41-84. http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/12/161mdb_2001_a.pdf. [doi:10.1006/ijhc.2001.0469]
BYRNE, M. D. (2007). Cognitive architecture. In: The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications (SEARS, A. & JACKO, J. A., eds.). CRC Press, pp. 93-114. [doi:10.1201/9781410615862.ch5]
BYRNE, M. D. & ANDERSON, J. R. (1998). Perception and action. In: The Atomic Components of Thought (ANDERSON, J. R. & LEBIERE, C., eds.), chap. 6. Lawrence Erlbaum Associates, Inc., Publishers, pp. 167-200.
CARD, S. K., MORAN, T. P. & NEWELL, A. (1986). The model human processor: An engineering model for human performance. In: Handbook of Perception and Human Performance, vol. 2: Cognitive Processes and Performance. Wiley, pp. 1-35.
CARD, S. K., NEWELL, A. & MORAN, T. P. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ, USA: L. Erlbaum Associates Inc.
CARLEY, K. M., PRIETULA, M. J. & LIN, Z. (1998). Design versus cognition: The interaction of agent cognition and organizational design on organizational performance. Journal of Artificial Societies and Social Simulation 1(3), 4. https://www.jasss.org/1/3/4.html.
CASTELFRANCHI, C., DIGNUM, F., JONKER, C. M. & TREUR, J. (2000). Deliberate normative agents: Principles and architecture. In: Intelligent Agents VI, Agent Theories, Architectures, and Languages (Proceedings 6th International Workshop, ATAL'99, Orlando FL, USA, July 15-17, 1999) (JENNINGS, N. R. & LESPÉRANCE, Y., eds.), vol. 1757 of Lecture Notes in Computer Science. Springer.
CHAO, Y. R. (1968). Language and Symbolic Systems. Cambridge University Press. http://dcekozhikode.co.in/sites/default/files/chao,%20yuen%20ren%20-%20language%20and%20symbolic%20systems.pdf.
COHEN, P. R. & LEVESQUE, H. J. (1990). Intention is choice with commitment. Artificial Intelligence 42(2-3), 213-261. http://www-cs.stanford.edu/~epacuit/classes/lori-spr09/cohenlevesque-intention-aij90.pdf. Elsevier Science Publishers Ltd. [doi:10.1016/0004-3702(90)90055-5]
CONTE, R. & CASTELFRANCHI, C. (1995). Cognitive and Social Action. Taylor & Francis.
CONTE, R., CASTELFRANCHI, C. & DIGNUM, F. (1999). Autonomous norm acceptance. In: Proceedings of the 5th International Workshop on Intelligent Agents V, Agent Theories, Architectures, and Languages. Springer-Verlag. http://igitur-archive.library.uu.nl/math/2007-0223-200804/dignum_99_autonomous.pdf. [doi:10.1007/3-540-49057-4_7]
DASTANI, M. (2008). 2APL: A practical agent programming language. Autonomous Agents and Multi-Agent Systems 16(3), 214-248. http://www.cs.uu.nl/docs/vakken/map/2apl.pdf. Kluwer Academic Publishers. [doi:10.1007/s10458-008-9036-y]
DASTANI, M. & VAN DER TORRE, L. (2004). Programming BOID-plan agents: Deliberating about conflicts among defeasible mental attitudes and plans. In: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, vol. 2. Washington, DC, USA: IEEE Computer Society.
DIGNUM, F. & DIGNUM, V. (2009). Emergence and enforcement of social behavior. In: 18th World IMACS Congress and MODSIM09 International Congress on Modelling and Simulation (ANDERSSEN, R. S., BRADDOCK, R. D. & NEWHAM, L. T. H., eds.). Modelling and Simulation Society of Australia and New Zealand and International Association for Mathematics and Computers in Simulation. http://www.mssanz.org.au/modsim09/H4/dignum.pdf.
DIGNUM, F., DIGNUM, V. & JONKER, C. M. (2009). Towards agents for policy making. In: Multi-Agent-Based Simulation IX (DAVID, N. & SICHMAN, J. S. A., eds.), vol. 5269 of Lecture Notes in Computer Science. Springer-Verlag, pp. 141-153. http://link.springer.com/chapter/10.1007%2F978-3-642-01991-3_11. [doi:10.1007/978-3-642-01991-3_11]
DIGNUM, F., KINNY, D. & SONENBERG, L. (2002). From desires, obligations and norms to goals. Cognitive Science Quarterly 2(3-4), 407-430. http://www.staff.science.uu.nl/~dignu101/papers/CSQ.pdf. Hermes Science Publications.
DIGNUM, F., MORLEY, D., SONENBERG, E. & CAVEDON, L. (2000). Towards socially sophisticated BDI agents. In: Proceedings of the Fourth International Conference on Multi-Agent Systems (DURFEE, E., ed.). IEEE Press. http://www.agent.ai/doc/upload/200403/dign00_1.pdf. [doi:10.1109/ICMAS.2000.858442]
DIGNUM, V. (2003). A model for organizational interaction: based on agents, founded in logic. Ph.D. thesis, Utrecht University. http://igitur-archive.library.uu.nl/dissertations/2003-1218-115420/full.pdf.
D'INVERNO, M., KINNY, D., LUCK, M. & WOOLDRIDGE, M. (1998). A formal specification of dMARS. In: Proceedings of the 4th International Workshop on Intelligent Agents IV, Agent Theories, Architectures, and Languages (SINGH, M. P., RAO, A. S. & WOOLDRIDGE, M., eds.). London, UK: Springer-Verlag. [doi:10.1007/BFb0026757]
D'INVERNO, M., LUCK, M., GEORGEFF, M., KINNY, D. & WOOLDRIDGE, M. (2004). The dMARS architecture: A specification of the distributed multi-agent reasoning system. Autonomous Agents and Multi-Agent Systems 9(1-2), 5-53. http://www.csc.liv.ac.uk/~mjw/pubs/jaamas2004a.pdf. [doi:10.1023/B:AGNT.0000019688.11109.19]
DOLAN, P., HALLSWORTH, M., HALPERN, D., KING, D., METCALFE, R. & VLAEV, I. (2012). Influencing behaviour: The mindspace way. Journal of Economic Psychology 33(1), 264-277. http://www.sciencedirect.com/science/article/pii/S0167487011001668. [doi:10.1016/j.joep.2011.10.009]
EHRET, B. D. (1999). Learning where to look: The acquisition of location knowledge in display-based interaction. Ph.D. thesis, George Mason University. http://homepages.rpi.edu/~grayw/pubs/papers/2000/Ehret99_diss.pdf.
ELSENBROICH, C. & GILBERT, N. (2013). Modelling Norms. Springer.
EMIL PROJECT CONSORTIUM (2008). Emergence in the loop: Simulating the two-way dynamics of norm innovation - Deliverable 3.3, EMIL-S: The simulation platform. Tech. rep., Sixth Framework Programme, Project No. 033841.
ESTEVA, M., RODRÍGUEZ-AGUILAR, J. A., ARCOS, J. L., SIERRA, C. & GARCÍA, P. (2000). Formalizing agent mediated electronic institutions. In: Proceedings of the Congrès Català d'Intel.ligència Artificial.
FRIEDMAN-HILL, E. (2003). Jess in Action: Rule-Based Systems in Java. Manning Publications.
FROESE, T., GERSHENSON, C. & ROSENBLUETH, D. A. (2013). The dynamically extended mind - a minimal modeling case study. ArXiv ePrints. http://arxiv.org/pdf/1305.1958v1.
GALITSKY, B. (2002). Extending the BDI model to accelerate the mental development of autistic patients. In: Proceedings of the 2nd International Conference on Development and Learning (ICDL'02). http://www.researchgate.net/publication/3954049_Extending_the_BDI_model_to_accelerate_the_mental_development_of_autistic_patients/links/0deec5295e57383c05000000.
GEORGEFF, M., PELL, B., POLLACK, M., TAMBE, M. & WOOLDRIDGE, M. (1999). The belief-desire-intention model of agency. In: Intelligent Agents V: Agent Theories, Architectures, and Languages - Proceedings of the 5th International Workshop, ATAL'98 (MÜLLER, J. P., RAO, A. S. & SINGH, M. P., eds.), vol. 1555 of Lecture Notes in Computer Science. Springer, pp. 1-10. http://link.springer.com/chapter/10.1007%2F3-540-49057-4_1.
GEORGEFF, M. P. & INGRAND, F. F. (1990). Real-time reasoning: The monitoring and control of spacecraft systems. In: Proceedings of the Sixth Conference on Artificial Intelligence Applications. Piscataway, NJ, USA: IEEE Press. http://dl.acm.org/citation.cfm?id=96751.96782. [doi:10.1109/CAIA.1990.89190]
GEORGEFF, M. P. & LANSKY, A. L. (1987). Reactive reasoning and planning. In: Proceedings of the 6th National Conference on Artificial Intelligence (FORBUS, K. D. & SHROBE, H. E., eds.). Morgan Kaufmann. http://www.aaai.org/Papers/AAAI/1987/AAAI87-121.pdf.
GILBERT, N. (2004). Agent-based social simulation: dealing with complexity. Tech. rep., Centre for Research on Social Simulation, University of Surrey.
HELBING, D. & BALIETTI, S. (2011). How to do agent-based simulations in the future: From modeling social mechanisms to emergent phenomena and interactive systems design. Working Paper 11-06-024, Santa Fe Institute. http://www.santafe.edu/media/workingpapers/11-06-024.pdf.
HIATT, L. M. & TRAFTON, J. G. (2010). A cognitive model of theory of mind. In: Proceedings of the 10th International Conference on Cognitive Modeling (SALVUCCI, D. D. & GUNZELMANN, G., eds.).
INGRAND, F. F., GEORGEFF, M. P. & RAO, A. S. (1992). An architecture for real-time reasoning and system control. IEEE Expert: Intelligent Systems and Their Applications 7(6), 34-44. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=180407. [doi:10.1109/64.180407]
ISHIDA, T. (1994). Parallel, Distributed and Multiagent Production Systems, vol. 878 of Lecture Notes in Computer Science. Springer. http://link.springer.com/book/10.1007%2F3-540-58698-9. [doi:10.1007/3-540-58698-9]
JAGER, W. (2000). Modelling Consumer Behaviour. Ph.D. thesis, University of Groningen. http://dissertations.ub.rug.nl/faculties/gmw/2000/w.jager/?pLanguage=en.
JAGER, W. & JANSSEN, M. (2003). The need for and development of behaviourally realistic agents. In: Multi-Agent-Based Simulation II (SIMÃO SICHMAN, J., BOUSQUET, F. & DAVIDSSON, P., eds.), vol. 2581 of Lecture Notes in Computer Science. Springer. [doi:10.1007/3-540-36483-8_4]
JAGER, W. & JANSSEN, M. (2012). An updated conceptual framework for integrated modeling of human decision making: The Consumat II. In: Proceedings of the Complexity in the Real World Workshop, European Conference on Complex Systems.
JAGER, W., JANSSEN, M. A. & VLEK, C. A. J. (1999). Consumats in a commons dilemma - testing the behavioural rules of simulated consumers. Tech. Rep. COV 99-01, Rijksuniversiteit Groningen. http://clivespash.org/speer/simpaper.pdf.
JANSSEN, M. A. & JAGER, W. (2001). Fashions, habits and changing preferences: Simulation of psychological factors affecting market dynamics. Journal of Economic Psychology 22, 745-772. http://www.rug.nl/staff/w.jager/Janssen_Jager_JEP_2001.pdf. [doi:10.1016/S0167-4870(01)00063-0]
JASTRZEMBSKI, T. S. & CHARNESS, N. (2007). The model human processor and the older adult: Parameter estimation and validation within a mobile phone task. Journal of Experimental Psychology: Applied 13(4), 224-248. http://www.apa.org/pubs/journals/features/xap-13-4-224.pdf. [doi:10.1037/1076-898x.13.4.224]
JIANG, H. & VIDAL, J. M. (2006). From rational to emotional agents. In: Proceedings of the AAAI Workshop on Cognitive Modeling and Agent-based Social Simulation. AAAI Press.
JIANG, H., VIDAL, J. M. & HUHNS, M. N. (2007). EBDI: An architecture for emotional agents. In: Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems (AAMAS'07). [doi:10.1145/1329125.1329139]
JONES, R. M., TAMBE, M. & ROSENBLOOM, P. S. (1993). Intelligent automated agents for flight training simulators. In: Proceedings of the Third Conference on Computer Generated Forces and Behavioral Representation. http://www.dtic.mil/dtic/tr/fulltext/u2/a278641.pdf.
KENNEDY, W. G. (2012). Modelling human behaviour in agent-based models. In: Agent-Based Models of Geographical Systems (HEPPENSTALL, A. J., CROOKS, A. T., SEE, L. M. & BATTY, M., eds.). Springer Netherlands, pp. 167-179. [doi:10.1007/978-90-481-8927-4_9]
KLATT, J., MARSELLA, S. & KRÄMER, N. C. (2011). Negotiations in the context of AIDS prevention: An agent-based model using theory of mind. In: Intelligent Virtual Agents 2011 (VILHJÁLMSSON, H. H., ed.), vol. 6895. Springer-Verlag.
KOLLINGBAUM, M. J. (2004). Norm-Governed Practical Reasoning Agents. Ph.D. thesis, University of Aberdeen, Department of Computer Science. http://homepages.abdn.ac.uk/m.j.kollingbaum/pages/publications/Thesis_v1.5.pdf.
KOLLINGBAUM, M. J. & NORMAN, T. J. (2003). Norm adoption in the NOA agent architecture. In: Proceedings of the second international joint conference on Autonomous agents and multiagent systems. New York, NY, USA: ACM. http://www.csd.abdn.ac.uk/cgi-bin/betsie.pl/0003/www.csd.abdn.ac.uk/~mkolling/publications/KollingbaumNormanAAMAS2003.pdf. [doi:10.1145/860575.860784]
KOLLINGBAUM, M. J. & NORMAN, T. J. (2004). Norm adoption and consistency in the NOA agent architecture. In: Programming Multi-Agent Systems (DASTANI, M., DIX, J. & EL FALLAH-SEGHROUCHNI, A., eds.). Springer. http://homepages.abdn.ac.uk/m.j.kollingbaum/pages/publications/Kollingbaum_Norman_Springer_Promas03.pdf. [doi:10.1007/978-3-540-25936-7_9]
LAIRD, J. E. (2012a). The SOAR Cognitive Architecture. Cambridge, MA, USA: MIT Press.
LAIRD, J. E. (2012b). The SOAR cognitive architecture. AISB Quarterly (134), 1-4.
LAIRD, J. E., NEWELL, A. & ROSENBLOOM, P. S. (1987). Soar: an architecture for general intelligence. Artificial Intelligence 33(3), 1-64. [doi:10.1016/0004-3702(87)90050-6]
LAIRD, J. E., ROSENBLOOM, P. S. & NEWELL, A. (1986). Chunking in SOAR: The anatomy of a general learning mechanism. Machine Learning 1(1), 11-46. [doi:10.1007/BF00116249]
LEE, J., HUBER, M. J., DURFEE, E. H. & KENNY, P. G. (1994). UM-PRS: An implementation of the Procedural Reasoning System for multirobot applications. In: AIAA/NASA Conference on Intelligent Robots in Field, Factory, Service, and Space (CIRFFSS'94). https://ia700607.us.archive.org/18/items/nasa_techdoc_19950005140/19950005140.pdf.
LÓPEZ Y LÓPEZ, F., LUCK, M. & D'INVERNO, M. (2007). A normative framework for agent-based systems. In: Normative Multi-agent Systems (BOELLA, G., VAN DER TORRE, L. & VERHAGEN, H., eds.), no. 07122 in Dagstuhl Seminar Proceedings. Internationales Begegnungs- und Forschungszentrum fuer Informatik (IBFI), Schloss Dagstuhl, Germany. http://drops.dagstuhl.de/opus/volltexte/2007/933/pdf/07122.LopezyLopezFabiola.Paper.933.pdf.
MACHADO, R. & BORDINI, R. H. (2001). Running AgentSpeak(L) agents on SIM_AGENT. In: Intelligent Agents VIII, 8th International Workshop, ATAL 2001 Seattle, WA, USA, August 1-3, 2001, Revised Papers (MEYER, J.-J. C. & TAMBE, M., eds.), vol. 2333 of Lecture Notes in Computer Science. Springer. http://link.springer.com/chapter/10.1007/3-540-45448-9_12.
MASLOW, A. H. (1954). Motivation and Personality. Harper and Row.
MAX-NEEF, M. (1992). Development and human needs. In: Real-life economics: Understanding wealth creation (MAX-NEEF, M. & EKINS, P., eds.). London: Routledge, pp. 197-213. http://atwww.alastairmcintosh.com/general/resources/2007-Manfred-Max-Neef-Fundamental-Human-Needs.pdf.
MCDERMOTT, J. & FORGY, C. (1976). Production system conflict resolution strategies. Department of Computer Science, Carnegie-Mellon University.
MEYER, M., LORSCHEID, I. & TROITZSCH, K. G. (2009). The development of social simulation as reflected in the first ten years of jasss: a citation and co-citation analysis. Journal of Artificial Societies and Social Simulation 12(4), 12. https://www.jasss.org/12/4/12.html.
MORLEY, D. & MYERS, K. (2004). The SPARK agent framework. In: Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, vol. 2. Washington, DC, USA: IEEE Computer Society.
NILSSON, N. J. (1977). A production system for automatic deduction. Technical Note 148, Stanford University, Stanford, CA, USA. http://www.sri.com/sites/default/files/uploads/publications/pdf/743.pdf.
OHLER, P. & REGER, K. (1999). Emotional cooperating agents and group formation: A system analysis of role-play among children. In: Modelling and Simulation - A Tool for the Next Millennium, 13th European Simulation Multiconference (SZCZERBICKA, H., ed.). SCS Publication.
ORTONY, A., CLORE, G. L. & COLLINS, A. (1990). The Cognitive Structure of Emotions. Cambridge University Press. http://www.cogsci.northwestern.edu/courses/cg207/Readings/Cognitive_Structure_of_Emotions_exerpt.pdf.
PADGHAM, L. & TAYLOR, G. (1996). A system for modelling agents having emotion and personality. In: Proceedings of the PRICAI Workshop on Intelligent Agent Systems '96. http://goanna.cs.rmit.edu.au/~linpa/Papers/lnaibelagents.pdf.
PAOLO, E. A. D., ROHDE, M. & JAEGHER, H. D. (2014). Horizons for the enactive mind: Values, social interaction, and play. In: Enaction: Toward a New Paradigm for Cognitive Science (STEWART, J., GAPENNE, O. & PAOLO, E. A. D., eds.). MIT Press, pp. 33-87. http://pub.uni-bielefeld.de/luur/download?func=downloadFile&recordOId=2278903&fileOId=2473451.
PEREIRA, D., OLIVEIRA, E. & MOREIRA, N. (2008). Formal modelling of emotions in BDI agents. In: Computational Logic in Multi-Agent Systems (SADRI, F. & SATOH, K., eds.). Berlin, Heidelberg: Springer-Verlag, pp. 62-81. http://www.dcc.fc.up.pt/~nam/publica/50560062.pdf. [doi:10.1007/978-3-540-88833-8_4]
PEREIRA, D., OLIVEIRA, E., MOREIRA, N. & SARMENTO, L. (2005). Towards an architecture for emotional BDI agents. In: EPIA'05: Proceedings of 12th Portuguese Conference on Artificial Intelligence. Springer. http://www.dcc.fc.up.pt/~nam/publica/epia05.pdf. [doi:10.1109/epia.2005.341262]
PEW, R. W. & MAVOR, A. S. (eds.) (1998). Modeling Human and Organizational Behavior: Application to Military Simulations. The National Academies Press. http://www.nap.edu/catalog.php?record_id=6173#toc. Panel on Modeling Human Behavior and Command Decision Making: Representations for Military Simulations, National Research Council.
PHUNG, T., WINIKOFF, M. & PADGHAM, L. (2005). Learning within the BDI framework: An empirical analysis. In: Proceedings of the 9th international conference on Knowledge-Based Intelligent Information and Engineering Systems - Volume Part III (KHOSLA, R., HOWLETT, R. J. & JAIN, L. C., eds.), vol. 3683 of Lecture Notes in Computer Science. Berlin, Heidelberg: Springer-Verlag. http://www.cs.rmit.edu.au/agents/www/papers/kes05-pwp.pdf.
POKAHR, A., BRAUBACH, L. & LAMERSDORF, W. (2005). Jadex: A BDI reasoning engine. In: Multi-Agent Programming (BORDINI, R., DASTANI, M., DIX, J. & SEGHROUCHNI, A. E. F., eds.). Springer Science+Business Media Inc., USA. http://vsis-www.informatik.uni-hamburg.de/getDoc.php/publications/250/promasbook_jadex.pdf.
PYNADATH, D. V. & MARSELLA, S. C. (2005). PsychSim: Modeling theory of mind with decision-theoretic agents. In: Proceedings of the International Joint Conference on Artificial Intelligence. Morgan Kaufmann Publishers Inc.
RAO, A. S. (1996). AgentSpeak(L): BDI agents speak out in a logical computable language. In: MAAMAW '96: Proceedings of the 7th European workshop on Modelling autonomous agents in a multi-agent world: agents breaking away. Secaucus, NJ, USA: Springer-Verlag New York, Inc. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.3.8296&rep=rep1&type=pdf.
RAO, A. S. & GEORGEFF, M. P. (1991). Intelligent real-time network management. Technical Note 15, Australian Artificial Intelligence Institute.
RAO, A. S. & GEORGEFF, M. P. (1995). BDI-agents: from theory to practice. In: Proceedings of the First International Conference on Multiagent Systems. http://www.aaai.org/Papers/ICMAS/1995/ICMAS95-042.pdf.
SALGADO, M. (2012). More than words: Computational models of emergence and evolution of symbolic communication. Ph.D. thesis, University of Surrey, UK.
SALVUCCI, D. D. (2001). Predicting the effects of in-car interface use on driver performance: an integrated model approach. International Journal of Human-Computer Studies 55(1), 85-107. http://www.sciencedirect.com/science/article/pii/S1071581901904720. [doi:10.1006/ijhc.2001.0472]
SCHMIDT, B. (2000). The Modelling of Human Behaviour. SCS Publications.
SCHMIDT, B. (2001). Agents in the social sciences - modelling of human behaviour. Tech. rep., Universität Passau. http://www.informatik.uni-hamburg.de/TGI/forschung/projekte/sozionik/journal/3/masho-2.pdf.
SCHMIDT, B. (2002a). How to give agents a personality. In: Proceeding of the Third Workshop on Agent-Based Simulation. http://schmidt-bernd.eu/modelle/HowtogiveAgents.pdf.
SCHMIDT, B. (2002b). Modelling of human behaviour - the PECS reference model. In: Proceedings 14th European Simulation Symposium (VERBRAECK, A. & KRUG, W., eds.). SCS Europe BVBA. http://www.scs-europe.net/services/ess2002/PDF/inv-0.pdf.
SCHOELLES, M. J. & GRAY, W. D. (2000). Argus Prime: Modeling emergent microstrategies in a complex simulated task environment. In: Proceedings of the Third International Conference on Cognitive Modeling (TAATGEN, N. & AASMAN, J., eds.). Universal Press. http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/12/311mjs_wdg_2000_a.pdf.
SERVAN-SCHREIBER, E. (1991). The competitive chunking theory: models of perception, learning, and memory. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, PA, USA.
SIMON, H. A. & NEWELL, A. (1971). Human problem solving: The state of the theory in 1970. American Psychologist 26(2), 145-159. http://www.cog.brown.edu/courses/cg195/pdf_files/fall07/Simon%20and%20Newell%20%281971%29.pdf. [doi:10.1037/h0030806]
STEUNEBRINK, B. R., DASTANI, M. & MEYER, J.-J. C. (2010). Emotions to control agent deliberation. In: Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems - Volume 1, AAMAS '10. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems. http://dl.acm.org/citation.cfm?id=1838206.1838337.
SUN, R. (2002). Duality of the Mind: A Bottom-Up Approach Towards Cognition. Lawrence Erlbaum Associates, Inc., Publishers.
SUN, R. (2006). The CLARION cognitive architecture: Extending cognitive modeling to social simulation. In: Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation (SUN, R., ed.). Cambridge University Press, pp. 79-99. http://www.cogsci.rpi.edu/~rsun/sun.clarion2005.pdf.
SUN, R. (2009). Cognitive architectures and multi-agent social simulation. In: Multi-Agent Systems for Society (LUKOSE, D. & SHI, Z., eds.). Springer-Verlag, pp. 7-21. [doi:10.1007/978-3-642-03339-1_2]
SUN, R., MERRILL, E. & PETERSON, T. (1998). A bottom-up model of skill learning. In: Proceedings of the 20th Cognitive Science Society Conference. Lawrence Erlbaum Associates, Mahwah, NJ. http://www.cogsci.rpi.edu/~rsun/sun.cog98.ps.
SUN, R., MERRILL, E. & PETERSON, T. (2001a). From implicit skills to explicit knowledge: a bottom-up model of skill learning. Cognitive Science 25(2), 203-244. http://www.arts.rpi.edu/public_html/rsun/sun.CS99.pdf. [doi:10.1207/s15516709cog2502_2]
SUN, R., PETERSON, T. & SESSIONS, C. (2001b). Beyond simple rule extraction: acquiring planning knowledge from neural networks. In: Proceedings of WIRN'01. Springer. http://www.cogsci.rpi.edu/~rsun/sun.wirn01.pdf.
SUN, R., SLUSARZ, P. & TERRY, C. (2005). The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review 112(1), 159-192. http://www.arts.rpi.edu/public_html/rsun/sun-pr2005-f.pdf. [doi:10.1037/0033-295X.112.1.159]
TAATGEN, N. A., LEBIERE, C. & ANDERSON, J. R. (2006). Modeling paradigms in ACT-R. In: Cognition and Multi-Agent Interaction: From Cognitive Modeling to Social Simulation (SUN, R., ed.). Cambridge University Press, pp. 29-52. http://act-r.psy.cmu.edu/wordpress/wp-content/uploads/2012/12/570SDOC4697.pdf.
THAGARD, P. (1992). Adversarial problem solving: Modeling an opponent using explanatory coherence. Cognitive Science 16, 123-149. http://cogsci.uwaterloo.ca/Articles/adversarial.pdf. [doi:10.1207/s15516709cog1601_4]
THANGARAJAH, J., PADGHAM, L. & HARLAND, J. (2002). Representation and reasoning for goals in BDI agents. Australian Computer Science Communications 24(1), 259-265. http://crpit.com/confpapers/CRPITV4Thangarajah.pdf. IEEE Computer Society Press.
URBAN, C. (1997). PECS: A reference model for the simulation of multi-agent systems. In: Social Science Microsimulation: Tools for Modelling, Parameter Optimization, and Sensitivity Analysis (GILBERT, N., MÜLLER, U., SULEIMAN, R. & TROITZSCH, K., eds.), vol. 9719 of Dagstuhl Seminar Series.
URBAN, C. & SCHMIDT, B. (2001). PECS - Agent-based modeling of human behavior. AAAI Technical Report, University of Passau, Chair for Operations Research and System Theory. http://www.aaai.org/Papers/Symposia/Fall/2001/FS-01-02/FS01-02-027.pdf.
VILLATORO, D. (2011). Social Norms for Self-Policing Multi-agent Systems and Virtual Societies. Ph.D. thesis, Universitat Autònoma de Barcelona.
WAHL, S. & SPADA, H. (2000). Children's reasoning about intentions, beliefs and behaviour. Cognitive Science Quarterly 1, 5-34. http://cognition.iig.uni-freiburg.de/csq/pdf-files/Wahl_Spada.pdf.
WINIKOFF, M. (2005). JACK intelligent agents: An industrial strength platform. In: Multi-Agent Programming: Languages, Platforms and Applications (BORDINI, R. H., DASTANI, M., DIX, J. & FALLAH-SEGHROUCHNI, A. E., eds.), vol. 15 of Multiagent Systems, Artificial Societies, and Simulated Organizations. Springer, pp. 175-193.
WOOLDRIDGE, M. & JENNINGS, N. R. (1995). Agent theories, architectures, and languages: A survey. In: Proceedings of the Workshop on Agent Theories, Architectures, and Languages on Intelligent Agents, ECAI-94. New York, NY, USA: Springer-Verlag New York, Inc. http://www.csee.umbc.edu/~finin/papers/atal.pdf.