

Christian Hahn, Bettina Fley, Michael Florian, Daniela Spresny and Klaus Fischer (2007)

Social Reputation: a Mechanism for Flexible Self-Regulation of Multiagent Systems

Journal of Artificial Societies and Social Simulation vol. 10, no. 1
<https://www.jasss.org/10/1/2.html>


Received: 20-Jan-2006    Accepted: 20-Nov-2006    Published: 31-Jan-2007



* Abstract

In this paper, we use multiagent technology for the social simulation of sociological micro-macro issues in the domain of electronic marketplaces. We argue that allowing self-interested agents to employ social reputation as a mechanism for flexible self-regulation during runtime can improve the robustness and 'social order' of multiagent systems, enabling them to cope with various perturbations that arise when simulating open markets (e.g. dynamic modifications of task profiles, scaling of agent populations, agent drop-outs, deviant behaviour). Referring to the sociological theory of Pierre Bourdieu, we provide a multi-level concept of reputation that consists of three different types (image, social esteem, and prestige) and considers reputation as a kind of 'symbolic capital'. Reputation is regarded as being objectified as an observable property and as being incorporated into the agents' mental structures through social practices of communication on different aggregation levels of sociality. We present and analyse selected results of our social simulations and discuss the importance of reputation with regard to the robustness of multiagent simulations of electronic markets.

Keywords:
Reputation; Institution; Electronic Market; Self-Regulation; Multiagent System

* Introduction

1.1
A long-standing critical issue in both sociology and social simulation has been the 'micro-macro problem' (Malsch 2001: 166; Castelfranchi and Conte 1995: 4; Conte and Gilbert 1995: 11; Sawyer 2003). The sociological challenge is to provide a suitable basis for the integration or "linkage between micro and macro theories and levels of analysis" (Ritzer 1996: 223, 489). Up to now, the crucial question remains how "macro-social phenomena emerge from individual action, and then, in turn, constrain, limit, and influence future action" (Sawyer 2003: 346). Computational simulations of artificial societies that use multiagent systems (MAS) as an "analytical tool for representing and reasoning about sociality" (Panzarasa and Jennings 2001: 14) are considered a promising way to face the problem of micro-macro linkage by investigating (1) the impacts of individual and collective action on the generation and emergence of social structures, (2) the effects of social structures on agents' behaviour, as well as (3) the dynamics of the recursive process of "emergence and social causation" (Sawyer 2003).

1.2
The micro-macro problem is a research question that also occupies computer scientists in the area of distributed artificial intelligence (DAI). Although the perspective on this issue differs between sociology and DAI (cf. Fischer and Florian 2005), we are convinced that this topic offers a mutual basis in the sense of a shared research problem within the interdisciplinary research programme Socionics, combining sociology and DAI. This paper provides a sketch of our interdisciplinary research by presenting a theoretical concept (reputation in electronic markets) and corresponding simulation results in order to illustrate how the interplay between sociological concepts, computational modelling, and social simulation offers a promising new perspective on a basic problem of artificial societies: flexible self-regulation as a social process of linking the micro-to-macro impacts of individual and collective actions on the generation, activation and change of institutional rules with the macro-to-micro effects of these rules on agents' behaviour.

1.3
MAS, the prevailing software engineering paradigm in DAI, consist of heterogeneous agents with different goals, different rationales, and varying beliefs about appropriate behaviour. According to Jennings et al. (1998), agents have only incomplete information and limited capabilities for solving problems or carrying out tasks. Moreover, especially in the domain of open MAS, there is no global system control, data are decentralised, and computation is asynchronous. Therefore, the desired system-level behaviour as well as the expected outcomes of MAS usually depend on communication and social coordination among more or less autonomous and self-interested agents producing 'emergent' macro-social properties. This also applies to the application scenario of our interdisciplinary research: electronic marketplaces where software agents interact with each other in order to trade goods and services (tasks). In our model, they are also allowed to cooperate, form partnerships, and build organisations (cf. Schillo et al. 2004).

1.4
However, if agents act relatively autonomously and in a self-interested manner in an open electronic market, the economic processes are not necessarily resistant to perturbations, and the overall outcome of individual and collective actions is not reliably predictable. Therefore, the question of how to design (open) artificial societies is confronted with the problem of "engineering social order" (Castelfranchi 2000). We agree with Castelfranchi (2000: 2) that the application of rigid rules, constraining infrastructures, or security devices is not an appropriate way to design (open) artificial societies. Instead of relying on a "predetermined, 'hardwired' or designed social order", it is better to use a socially oriented approach with "decentralised and autonomous social control" where social order "has to be continuously restored and adjusted, dynamically produced by and through the action of the agents themselves." (Castelfranchi 2000: 2).

1.5
In multiagent-based electronic markets, the basic problem of flexible self-regulation comprises two dimensions: the sociological reference to 'social order' and the DAI-reference to 'robustness'. From a sociological point of view, the institutional regulation of markets tackles problems of social coordination that are related to collectively desirable characteristics (e.g. emergence of cooperative behaviour, avoidance of deviant behaviour, coping with social dilemmas, or other aspects of market failure like monopoly power and ruinous competition). In DAI/MAS, the notion of robustness is used as an analytical concept to detect whether agents are able to tackle dynamic environmental changes to allow a 'graceful degradation' of the system's performance under perturbation (cf. Schillo et al. 2001). Hence, the term is used with regard to verifiable quantitative criteria and measurable performance standards of MAS (cf. Schillo et al. 2001; Schillo et al. 2004: 71ff.). However, sociologists remain sceptical about objective measures of performance in the sense of efficiency or stability. Most of them consider efficiency as a 'social construction' (e.g. Fligstein 2001: 9, 190, 229) claiming that there are many ways to organise 'efficiently' and to produce 'enough stability' (cf. Fligstein 2001: 190, 23). Nevertheless, we have selected a scenario of perturbation (deviant behaviour) that is suitable to explore the flexible self-regulation of electronic marketplaces with respect to both dimensions: the sociological problem of macro-social 'order' as well as the technical and application-oriented problem of the robustness of agent-based electronic markets (see 4.8).

1.6
The main hypothesis we state in this paper is that reputation is an important mechanism that enables the flexible self-regulation of artificial societies. We have chosen this issue as an example to illustrate the interdependence between flexible self-regulation and the linkage of micro and macro social phenomena. Reputation, defined as properties and evaluations that are collectively ascribed to agents in a continuous discursive process, reduces an agent's local uncertainty by providing knowledge about other agents on the micro-level, and it generates collectively shared definitions and valuations of certain properties and skills on the meso- and macro-level. With regard to DAI, we therefore suggest that perturbations diminish the robustness of MAS less if computational models account for reputation to a greater extent. Reputation may provide built-in properties that are able to cope with dynamic open environments in which agents (1) have only local information about system states, (2) have different goals and corresponding strategies, (3) may not act benevolently, and (4) may freely enter and leave the system. We propose a concept of robustness consisting of four properties (i.e. reliability, flexibility, scalability, and stability) to assess particular system qualities (see 4.8). With regard to sociology, analysing the benefits of reputation on different levels of sociality may explain the emergence of macro-social phenomena out of individual actions and vice versa.

* Related work on the social simulation of reputation

2.1
During the last two decades, the study of reputation has obtained increasing attention in many scientific disciplines (e.g. sociology, economics, game theory, organisation and management science). With the rise of the Internet, it has also aroused growing interest in several domains of technical application (e.g. electronic commerce, computer-mediated communication). A vast body of sociological work has examined the effects of reputation (often associated with the property of trustworthiness) on the emergence of social cooperation and coordination, especially in the fields of markets and organisations (e.g. Granovetter 1985; Shapiro 1987; Coleman 1988; Raub and Weesie 1990; Kollock 1994; Shenkar and Yuchtman-Yaar 1997). Moreover, game theoretic, economic and management literature has highlighted the importance of reputation and trust in (on-line) markets under conditions of uncertainty and information asymmetries (e.g. Kreps and Wilson 1982; Axelrod 1984; Weigelt and Camerer 1988; Dasgupta 1988; Fombrun and Shanley 1990; Rao 1994; Fombrun 1996; Tadelis 1999; Resnick et al. 2000; Ba and Pavlou 2002; Dellarocas 2003; Pavlou and Gefen 2004). This strong interest is not surprising since reputation is supposed to "play a central role in the emergence of positive social behaviour and in the achievement of social order" (Conte and Paolucci 2002: xi).

2.2
In recent years, computational modelling and simulation of reputation has also become a main issue in the area of MAS, artificial societies, and social simulation (e.g. Saam and Harrer 1999; Rouchier et al. 2001; Hales 2002; Lepperhoff 2002; Yu and Singh 2002; Bhavnani 2003; Younger 2004). In previous computational studies, reputation is mainly viewed as a kind of information or knowledge ('belief') that agents have of other agents' behaviour (or 'properties') through mental perception, personal experience, or via social exchange of information (Castelfranchi et al. 1998). From a sociological perspective, these reputation models are agent-centred, ignoring important structural dimensions of reputation as a multi-level social phenomenon. The crucial micro-macro problem, namely that reputation is embedded in mutual relations between cognitive beliefs, individual actions, and macro-social structures, is not modelled adequately.

2.3
A socially more complex model of reputation, based on social relations and social network analysis, has been contributed by Sabater and Sierra (2002; Sabater 2003). While previous reputation approaches are limited to direct interaction (i.e. personal perception and experience) and to information provided by other members of the society (mainly 'neighbours'), Sabater and Sierra propose a multi-faceted concept of reputation that considers the social dimension of reputation more comprehensively in three aspects: firstly, the model completes the set of possible beliefs or information that an agent can have about social characteristics of another agent (e.g. agents' social relations in terms of social networks). Secondly, the social structure of the agent community serves as a new source of information. Thirdly, from a sociological and social simulation point of view, the major improvement is that the model takes different sources of information into account. The concept draws a distinction between witness (information coming from other agents), neighbourhood (information from the agent's social environment) and system reputation (a reputation value based on the 'role' an agent plays within institutional structures of an organisation or group). However, the model still lacks important sociological aspects of reputation such as power, status, and social inequality.

2.4
Following the work of Sabater and Sierra, Ashri et al. (2005) pay more attention to power relations. In their model, agents do not only identify whether they have a relationship with another agent, i.e. whether the other belongs to the same network or organisation or shares the same region of influence. Additionally, agents evaluate whether a relationship is characterised by dependency, competition, or cooperation. However, this incorporation of power relations can still be criticised as agent-centred because the agents only evaluate bilateral or tripartite relationships from their own individual perspective. Neither the model of Sabater and Sierra nor that of Ashri et al. takes the mutual relation between social and cognitive mechanisms into consideration in order to bridge the micro-macro gap. Reputation is not only a phenomenon of individual estimations and beliefs, but also a phenomenon of collective attributions, evaluations, and constructions.

2.5
In their basic research on reputation in artificial societies, Conte and Paolucci (2002: 20) criticise "vague definitions and insufficient theorising" in the study of reputation. As a first step towards an appropriate theoretical concept of reputation, they propose to distinguish between (1) an image as a "set of evaluative beliefs" (p. 67) about the characteristics of a given target (in the sense of a direct estimation based on personal perceptions, experiences, and knowledge) and (2) reputation in terms of a "meta-belief" about others' beliefs and mental evaluations that is indirectly acquired through social propagation and that involves several categories of agents such as evaluators, targets, beneficiaries, and third parties (72ff., 81). Unlike previous agent-centred views, Conte and Paolucci (2002: 72) define reputation as "the process and the effect of transmission of a target image". More precisely, reputation is understood as a mental object in the sense of a cognitive representation of a believed evaluation that is related to a perceptible 'objective emergent property' of an agent (the objective dimension of what the agent is believed to be). Simultaneously, this property is the outcome of a social transmission of beliefs (the social dimension of a 'population object' that results from believed evaluations propagated by communication). Moreover, a symbolic dimension is also involved because these beliefs contain an evaluation of the 'social desirability' [3] of agents' (past) behaviour (including a temporal dimension in terms of a prediction of future actions based on an assessment of past behaviour).

2.6
The great advantage of this process-oriented model is that Conte and Paolucci (2002: 10, 72ff., 189) provide a multi-level, bi-directional approach to the study of reputation, i.e. reputation emerges "from the level of individual cognition to the level of social propagation and from this level back to that of individual cognition again" (p. 72).[4] However, from a sociological point of view, we still miss two important aspects of reputation. Firstly, social structures like power and inequality, based on the agents' different social positions within the artificial society, are neglected (cf. explicitly Conte and Paolucci 2002: 1f.). Secondly, a multi-level approach to reputation requires accounting not only for the individual and the meso-level of agents' interactions and group-based network relations, but also for the macro-level of the global society, in order to give a suitable answer to the sociological problem of micro-macro linkage, viewing reputation as an institution-based category of public prestige embedded in a social structure of unequally distributed resources and assets of action. Moreover, the social process in which perception and information is acknowledged as genuine 'knowledge' is not only an operation of information transmission but rather a kind of symbolic transformation of individual data and opinions into socially accepted knowledge. This process depends on the 'symbolic' power of mighty agents to produce and impose their beliefs and evaluations as a kind of socially recognised and legitimated "symbolic capital" (cf. Bourdieu 1980, 1992, 1998, 2000: 166 and 241f.).

2.7
Furthermore, the social simulation of reputation needs some technological improvement to meet the sociological requirements of more elaborate concepts that capture the social complexity and dynamics of artificial societies. Conte and Paolucci have kept both the agent architecture and the model of society comparatively plain. Combining sociology and DAI on the common ground of Socionics, we intend to improve both the theoretical model of reputation and its multiagent-based implementation in order to explore the combined effects of reputation on different levels of sociality as well as the consequences for the system's self-regulation capacity, robustness, and global performance.

* Reputation: a sociological multi-level concept

3.1
We largely agree with the common definition that reputation is the attribution of certain properties and their evaluations to an individual agent, a collective agent (group), or a corporative agent (organisation). Moreover, in terms of Bourdieu's theory of capital, reputation can be considered as a kind of 'symbolic capital'. Capital denotes any kind of 'resource' that confers status and power to an agent. Hence, the behavioural options of an agent in a social field (e.g. a market) depend on his relational position, which is defined by his share of capital in relation to the amount of capital of others (Bourdieu and Wacquant 1992: 97).[5] Since Bourdieu assumes that a basic interest of agents is to improve their relational position within a field, these resources also become potential 'objects of desire', so that the interests of agents can be expressed in terms of different sorts of capital (cf. Bourdieu 1998; Bourdieu and Wacquant 1992: 75ff.). As a kind of symbolic capital, reputation is generated through the dissemination of information and evaluations about others. Reputation not only has the effect of providing collectively shared beliefs that guide the expectations and choices of agents, but also functions as a kind of resource that confers status and power, which derive from the "existence of others, from their perception and appreciation" (Bourdieu 2000: 241). Symbolic capital secures profits because it functions as a sign of importance and legitimates the possession of desirable characteristics (e.g. professional qualifications, honourable behaviour) as well as the competence to recognise and evaluate others in turn.

3.2
Comparable to a proposal of Castelfranchi and Conte (1995: 9) to view the micro-macro link as a 'three-faceted issue', we consider reputation as a sort of capital that presupposes (1) external social forces and structures as well as (2) the agents' cognition and that is (3) generated and reproduced by their actions. Therefore, reputation is not only a phenomenon that appears and has relevance in interactions, but refers to the structural dimension of social practice. The transformation of a certain characteristic into symbolic capital presupposes the construction of collectively shared symbolic representations and structures (cf. Bourdieu 1994: 122).

3.3
Against this background, symbolic capital and reputation respectively can be considered as a multi-level phenomenon. Therefore, we suggest a distinction between three types of reputation (image, social esteem, and prestige) in order to develop a model of reputation that captures the link between the structural dimensions and the multi-level transformation of reputation from interactions and communications into a macro-phenomenon.

3.4
Whether an image, social esteem, or prestige of an agent is generated depends on how and by whom attributions and evaluations about an agent are diffused and communicated. With respect to our market model, we distinguish between three corresponding diffusion processes.

3.5
Flexible self-regulation is confronted with the problem of how to change institutions appropriately, since market structures, agents' positions, and practices of competitors are in constant flux, so that institutions only provide meaning and stability for agents as long as they adapt flexibly to environmental changes. We consider reputation as a central mechanism in both processes: the generation and reproduction of institutions as well as their adaptation. Through the communication of attributes and evaluations of agents' properties and behaviours, agents constantly obtain information with which to evaluate their position, the market structure, or competitors' strategies. This may lead to changing beliefs and practices and, consequently, to adaptations of specific competitive styles. Hence, the diffusion of evaluations of other agents' properties and behaviour (reputation) functions as a kind of sanction (disapproval) in case of deviation from institutional rules.

* Computational model and research focus

4.1
In order to transfer the sociological concepts into a multiagent-based market model using reputation as a mechanism for self-regulation, we have chosen e-markets as the application scenario. Our market model consists of three different types of agents: customer, provider, and journalist agents.

4.2
Customer agents demand transportation services and select the most appropriate providers for further interactions.

4.3
Due to a lack of capacities, provider agents selected by customer agents to deliver a task may not be able to produce the requested service on their own. Consequently, they have to outsource the missing capacities by delegating them either to members of their organisation (if one exists) or to appropriate and trustworthy providers, using auction protocols that drive the price negotiation. Providers that have cooperated successfully in jointly offering a product which neither of them could offer individually can decide to build a formal organisation if they believe that strengthening the cooperation is an investment in the future. We refer to Schillo et al. (2004) for detailed information regarding the self-organisation process. Some provider agents may prefer to cheat (i.e. commit to but not deliver a product) and to intentionally spread false information (i.e. lie about other agents' qualities).

4.4
Another agent type is the 'journalist', which performs two kinds of actions: it interviews providers and customers to collect trustworthiness information, and it answers report requests from customers and providers (cf. 5.3).

4.5
Reputation can be considered as a bundle of valuations spread by some source (providers, customers, journalists). In our model, this bundle consists of three different attributes, among them the target's trustworthiness and credibility.

4.6
After these informal definitions, we explain in the following how credibility and trustworthiness are formally defined and used in the MAS to improve safe trading. Whether beliefs about reputation are the result of direct experience or hearsay, they exist in the form of models that contain all information (like the target's trustworthiness and credibility) relevant to both customer and provider.
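The text does not reproduce the data structures of these models. A minimal sketch in Java (the implementation language named in note 6), with all identifiers hypothetical, could pair the two values with the evidence they are derived from:

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the model an agent keeps about one target agent (hypothetical names). */
class TargetModel {
    final String target;                                        // the provider or customer being modelled
    final List<CredEvidence> credEvidence = new ArrayList<>();  // Rel_p: truthfulness evidence (see 4.6)
    final List<TrustReport> trustReports  = new ArrayList<>();  // M_p: reported trust values (see 4.7)

    TargetModel(String target) { this.target = target; }
}

/** One credibility tuple rel_p(x, r): x = 1 (true information) or 0 (false), collected in round r. */
record CredEvidence(double x, int round) {}

/** One trust triple m_p(r, w, x): witness w reported trust value x about the target in round r. */
record TrustReport(int round, String witness, double trust) {}
```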

The credibility cred_{p,t} of a provider p at time t is determined from evidence of provider p spreading true or false information about some target q. This evidence is collected in a set Rel_p of tuples rel_p(x, r), consisting of a time stamp r indicating the round in which the evidence was collected and a value x expressing whether the witness spread true or false information. This value is calculated after the agent has made direct experience with the target q: if q behaved as predicted by p, the value is set to 1, otherwise it is set to 0. If an agent has no evidence about a target agent, it uses the default value d_p of p, which depends on the witness's incentive to intentionally diffuse false information (journalist agents: 0.8, organisational colleagues: 0.7, providers: 0.6). Since organisational colleagues would not benefit from spreading false information, their default value is higher than that of any competing provider, who would benefit indirectly from a decreased trust value of the agent. We have assigned a high credibility value to journalists because they have no incentive at all to spread false information (they are not involved in the auction itself). The witness's credibility is determined in accordance with Equation 1. If the credibility value of p lies below 0.3, p is considered a liar; if it lies above 0.55, p is considered to answer questions truthfully. Between both values, p's attitude towards lying is set to unknown.
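As a hedged illustration of the default values and thresholds just described (a sketch, not the authors' code):

```java
/** Classification of a witness's attitude towards lying, as described in the text. */
enum LyingAttitude { LIAR, UNKNOWN, TRUTHFUL }

final class CredibilityRules {
    // Default values d_p by witness type, as given above.
    static final double DEFAULT_JOURNALIST = 0.8;
    static final double DEFAULT_COLLEAGUE  = 0.7;
    static final double DEFAULT_PROVIDER   = 0.6;

    /** Classify a witness p from its credibility value cred_{p,t}. */
    static LyingAttitude classify(double cred) {
        if (cred < 0.3)  return LyingAttitude.LIAR;      // below 0.3: considered a liar
        if (cred > 0.55) return LyingAttitude.TRUTHFUL;  // above 0.55: answers truthfully
        return LyingAttitude.UNKNOWN;                    // in between: attitude unknown
    }
}
```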

Equation (1)

In Equation 1, n denotes the number of rounds that are simulated (n is set to 100 for all configurations we present in Section 5). The credibility values are weighted according to the time stamp r at which they were collected. Consequently, evidence that was collected a long time ago does not affect the overall credibility value as much as evidence collected at t-1. This reflects the fact that providers may change their attitude towards lying during the simulation.
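The original equation appears only as an image and is not reproduced in this text version. As a hedged reconstruction from the definitions above (binary evidence values x, time stamps r, round horizon n, default value d_p), a recency-weighted average might read:

$$
\mathrm{cred}_{p,t} =
\begin{cases}
d_p & \text{if } \mathit{Rel}_p = \emptyset \\[6pt]
\dfrac{\sum_{\mathit{rel}_p(x,r) \in \mathit{Rel}_p} \frac{r}{n}\,x}{\sum_{\mathit{rel}_p(x,r) \in \mathit{Rel}_p} \frac{r}{n}} & \text{otherwise}
\end{cases}
$$

so that more recent evidence (larger r) carries more weight; the exact weighting function of the original implementation may differ.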

4.7
To cope with varying reliabilities, information about a provider p is stored as a triple m_p(r, w, x), consisting of a time stamp r and a trust value x of p reported by witness w. The trust value x reported by witness w may itself result from a combination of trust values reported by various witnesses and from direct experience. Instead of weighting all reports equally, the time stamp and the existing credibility value cred_{w,t} of witness w at the current time t are used to determine the relevance of a piece of information. Additionally, the agent evaluates the validity and completeness of reputation statements by comparing the reports of different witnesses: if the reputation values of an agent reported by a witness continuously deviate from the average, the witness is considered a liar and is banished from the list of potential reputation sources. The overall trustworthiness trust_{p,t} at time t of some target agent p is evaluated in accordance with Equation 2 as the weighted average over the set M_p of triples the agent has about provider p. To reduce dependencies between triples of the same witness, only the triple m_p with the most recent time stamp per witness is considered when evaluating p's overall trustworthiness.

Equation (2)
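As with Equation 1, the equation image is not reproduced here. From the description above (weights combining the time stamp r with the witness's credibility cred_{w,t}, restricted to each witness's most recent triple), a plausible reconstruction is:

$$
\mathrm{trust}_{p,t} =
\dfrac{\sum_{m_p(r,w,x) \in M_p^{*}} \frac{r}{t}\,\mathrm{cred}_{w,t}\,x}{\sum_{m_p(r,w,x) \in M_p^{*}} \frac{r}{t}\,\mathrm{cred}_{w,t}}
$$

where M_p^* ⊆ M_p contains only the most recent triple of each witness. Again, this is a sketch consistent with the text, not necessarily the exact formula used.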

Finally, the most reputable providers are offered a task. If an agent classifies a provider as a cheater, it stops assigning tasks to that provider altogether. We hypothesise that the composition of attributes illustrated in 4.5 and the computational model discussed in this section prevent negative effects of malicious behaviour and improve the cooperation and coordination between agents by reducing the risk of selecting infeasible strategies, including inappropriate organisation partners, caused by local knowledge and inappropriate beliefs.

4.8
The different reputation mechanisms give providers and customers the opportunity to complete their models before interacting with the corresponding agent directly. The more an agent knows about a provider, the better it can estimate the provider's attitude towards cheating. Whenever an agent has insufficient information, it sends report queries to credible, non-lying agents. Figure 1 describes the process of requesting reputation information from both perspectives: the initiator requesting information sends a request to the selected witness, which in the following acts as the participant in the interaction. The interaction protocol is initiated if a provider, acting as initiator, receives a call to produce a particular task. If the provider does not know as many trustworthy agents as defined by the auction-related message limit (ML), it selects for each agent to which a call should be sent a set of credible agents (evaluated in accordance with Equation 1) and sends a reputation request to each of them. The number of messages sent may not exceed the reputation-related ML, where each message contains the name of only one target agent. On receiving a request, the participant checks whether information about the target is available. If so, it evaluates the price of this information, which depends on the age of the piece of information, and sends a proposal; otherwise, it rejects. If the participant lies, it proposes an arbitrary price offer even if it does not have any relevant information about the target's behaviour with respect to cheating. Depending on the price and its current financial situation, the initiator either accepts by paying the demanded price or rejects. If the participant receives an 'accept-proposal', it sends its available reputation information. If the participant is lying, two cases are possible: (i) the participant already possesses information about the target agent, in which case it sends a modified trustworthiness value (i.e. non-cheating agents are defamed by decreasing their trustworthiness value), or (ii) the participant has no information available, in which case it randomly selects a value between 0.1 and 0.2 as the trustworthiness value. After receiving the new reputation information, the initiator updates its beliefs according to Equation 2 and starts the auction by sending calls for proposals to the most trustworthy provider agents. The agent's activities in the auction are very similar to the contract net protocol (cf. Davis and Smith 1983).

Figure
Figure 1. Activity diagram describing the agent's internal process and the message exchange regarding reputation
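As a complement to Figure 1, the participant-side decision logic can be sketched in Java. This is a reconstruction under assumptions: the concrete price function and the amount by which a liar lowers a reported value are not specified in the text and are invented here; TrustReport is the hypothetical record from the sketch in 4.6.

```java
import java.util.Optional;
import java.util.Random;

/** Sketch of the participant side of the reputation protocol in Figure 1. */
final class ReputationParticipant {
    private final boolean liar;
    private final Random rng = new Random();

    ReputationParticipant(boolean liar) { this.liar = liar; }

    /** React to a reputation request about one target; an empty Optional means 'reject'. */
    Optional<Double> proposePrice(Optional<TrustReport> knownInfo, int currentRound) {
        if (liar) {
            // A lying participant proposes even without information, at an arbitrary price.
            return Optional.of(rng.nextDouble() * 10.0);
        }
        // An honest participant rejects if nothing is known about the target;
        // otherwise the price depends on the age of the information.
        return knownInfo.map(info -> priceByAge(currentRound - info.round()));
    }

    /** The trustworthiness value sent after an 'accept-proposal'. */
    double reportedTrust(Optional<TrustReport> knownInfo) {
        if (!liar) return knownInfo.orElseThrow().trust();
        if (knownInfo.isPresent()) {
            // Case (i): defame non-cheaters by lowering their trustworthiness value
            // (the decrement of 0.5 is an assumption of this sketch).
            return Math.max(0.0, knownInfo.get().trust() - 0.5);
        }
        // Case (ii): no information available -> random value between 0.1 and 0.2.
        return 0.1 + rng.nextDouble() * 0.1;
    }

    /** Hypothetical pricing rule: older information is cheaper. */
    private double priceByAge(int age) { return Math.max(1.0, 10.0 - age); }
}
```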

4.9
Our main research focus in this paper is to examine the usefulness and benefits of reputation as a mechanism to overcome deviant behaviour as one form of perturbation. Moreover, a general concern of our reputation model is to explore the potential of reputation as a multi-level self-regulation mechanism with respect to various perturbation scenarios. The analysis of the robustness criteria gives information about the abilities as well as the limitations of a particular reputation type in satisfying certain robustness demands. In particular, the following criteria are important with respect to MAS: reliability (e.g. detecting and reducing fraud), scalability (coping with a growing agent population), stability (maintaining the economic performance of the market), and flexibility (adapting structures during runtime, e.g. through self-organisation).

4.10
Whether these criteria of robustness are achieved depends on parameters that may vary with the social mechanisms implemented and used in a system. We assume that three such parameters are (1) the extent to which certain information (characteristics and evaluations of agents, i.e. reputation) is quantitatively diffused within a population of agents, (2) how accurately this information reflects the factual environmental state, and (3) to what degree this information deviates among agents (equity or consistency of agents' beliefs). Moreover, we assume that these parameters depend on the type of reputation source, since these types influence the speed and scope of reputation spreading, the sources of reputation (experience, gossip, organisation membership), and the attitude of the reputing instance (individuals, organisation members, journalists) towards reputation diffusion. Therefore, we hypothesise that the different types of reputation affect the handling of perturbations differently.

* Empirical evaluation

5.1
The different simulation scenarios are based on a 'standard' configuration whose characteristics are summarised in Table 1. In the following, the results of seven different configurations covering six different scenarios (standard, scaling of message limit, scaling of cheater rate, lies, scaling of agent population, self-organisation) are discussed, each simulated with 180 (or 240) runs of 100 rounds each. [6]

5.2
Whether a configuration was run 180 or 240 times depended on the number of reputation types to be evaluated: in order to investigate the different impacts of the three reputation types on the four robustness criteria separately, 60 runs of each configuration used either the mechanism 'image', 'social esteem', 'prestige', or 'none'. In configurations where self-organisation was not allowed, no simulations for social esteem could be run. All runs (except for the scenario 'scaling of agent population') start with a population of 90 providers (which may become bankrupt during runtime) and 30 customers. Table 1 summarises the configurations and their settings.
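For reference, the shared settings just described might be collected in one place. This is a sketch with hypothetical names, not the authors' configuration code:

```java
/** Sketch of the 'standard' configuration described in the text (hypothetical names). */
final class StandardConfig {
    static final int ROUNDS = 100;              // rounds per run
    static final int RUNS_PER_MECHANISM = 60;   // runs each for image / social esteem / prestige / none
    static final int PROVIDERS = 90;            // providers (may become bankrupt during runtime)
    static final int CUSTOMERS = 30;
    static final int AUCTION_ML = 8;            // auction-related message limit
    static final int REPUTATION_ML = 1;         // reputation-related message limit (Configuration 1)
    static final double CHEATER_RATE = 0.10;    // 10% cheaters in Configuration 1
    static final double LIAR_RATE = 0.0;        // no liars in the standard configuration
    static final boolean SELF_ORGANISATION = false;
}
```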

Table 1: The configurations and their settings

| Performance criteria | Configuration | Round | Providers, Customers | Cheaters (%) | Message Limit (ML) | Self-organisation | Liars (%) | Figures |
|---|---|---|---|---|---|---|---|---|
| Reliability: Fraud handling | Configuration 1 | 0 | 90, 30 | 10 | 1 | No | 0 | 1, 2 |
| Reliability: Fraud handling | Configuration 2 | 0 | 90, 30 | 10 | 1/5/8/16 | No | 0 | 3, 4 |
| Reliability: Scaling of cheaters | Configuration 3 | 0 | 90, 30 | 10/20/30 | 5 | No | 0 | 5, 6, 7 |
| Reliability: Impact of lies | Configuration 4 | 0 | 90, 30 | 20 | 5 | No | 20 | 8, 9 |
| Scalability | Configuration 5 | 0, 35, 70 | (30, 30, 30), 30 | 20/20/20 | 5 | Yes | 0 | 10 |
| Stability | Configuration 6 | 0 | 90, 30 | 30 | 5 | No | 0 | 11 |
| Flexibility | Configuration 7 | 0 | 90, 30 | 20 | 5 | Yes | 0 | 12, 13 |

Reliability: Fraud handling

5.3
Four scenarios were concerned with the effect of reputation on the detection and reduction of fraud, i.e. the impact on the system's reliability. A general result is that the spreading of trustworthiness values between agents indeed helps to decrease the rate of fraud. However, this overall result depends on certain conditions. In a first step, the standard model (no lies, no self-organisation) was run with a low rate of cheating agents (10%). The agents' opportunity to learn about the trustworthiness of others was initially kept small by the message limit of one for reputation requests, i.e. agents allowed to use the reputation type image may ask at most one other provider for trustworthiness values in each round. Prestige was spread by a single journalist who was allowed to interview eight providers or customers, while requests by customers or providers were also restricted to one report per round. Already the results for this initial configuration show that prestige decreases the number of fraud cases more rapidly (e.g., the fraud level prestige reaches at round 30 is reached by image at round 42 and by none at round 59) and more effectively (after round 52, fraud is eliminated to a large extent) than image and none. Yet, in this configuration, the mere exchange of trustworthiness values between providers and customers (image) does not lead to a significant difference compared to simulations in which agents rely solely on their own experience (none), cf. Figure 2.

Figure
Figure 2. Reduction of fraud (Configuration 1)

Despite the similar reduction of fraud by none and image, image achieves much better results than none in terms of diffusing information about cheaters within the market, especially among providers. The number of providers that know less than 25% of the cheaters decreases for image much more and much faster (from 90 initially to 22 in round 100) in favour of the number of agents that know between 50% and 75% of the cheaters (from 0 to 56) and those that know more than 75% (from 0 to 4). For none, 61 providers still know only a few cheaters (less than 25%) in round 100. Using prestige, knowledge about deviant agents spreads even more effectively than for image: in the end, 74 providers know more than 75% of the cheaters. Figure 3 shows graphs that are rather typical for all configurations with respect to the spread of knowledge for prestige.

Figure
Figure 3. Number of providers knowing certain percentages of cheaters (Configuration 1, Prestige)

5.4
Hence, these results suggest that even a modest reduction of fraud requires a major increase of knowledge about deviance among the provider population. Fraud is not only facilitated by providers that assign tasks to other providers that are cheaters, but also by the limited knowledge of customers that assign tasks to deviantly acting providers. Although customers are allowed to request reputation values as well, the distribution of knowledge among them is significantly worse than within the provider population. This originates from the fact that the necessity of getting comprehensive information is reduced for customers, since the set of providers to which customers address proposals is comparably small (in all configurations, the auction-related message limit is set to 8) and is subject to only minor changes. In contrast, providers may choose to request reputation values as a general strategy (plan) and ask about agents independently of the immediate likelihood of assigning tasks to them. However, these results raise the question whether the similar occurrence of fraud for image and none is caused by the low message limit (ML). Moreover, might scaling the message limit for image even lead to a better reduction of fraud than prestige?

Figure
Figure 4. Variation of message limit (Configuration 2)

5.5
In consequence, the ML for image was scaled to five, eight, and 16 messages per round (Configuration 2). Figure 4 shows that the higher the ML, the stronger the decrease of fraud cases. However, the differences between ML 8 and ML 16 are quite marginal, as the agents (customers and providers) do not request more reputation reports at ML 16 than at ML 8 (see Figure 5). Correspondingly, the percentage of cheaters known by a certain number of providers per round does not differ between ML 8 and ML 16. This can be explained by the fact that agents do not try to explore the trustworthiness values of all providers. Instead, the number of reputation requests is restricted by (i) the number of providers they are interested in cooperating with in the next auction phase (without yet having sufficient information regarding their trustworthiness) and (ii) the number of reputation requests they can afford, because reputation information is not free of charge. Due to the fraud reduction, the average providers' profits increase during the simulation, which implies that providers can afford to buy more reputation information. The number of reputation requests increases until providers have collected enough information regarding potential trading partners. Since only reputation reports about the next trading partners are requested and this number is also restricted by the auction-related ML (which is set to 8), no more than 8 reputation-related messages are sent. Thus, the number of reputation reports behaves very similarly for ML 8 and ML 16. For both, the reduction of fraud is comparable to that for prestige in the standard configuration. Thus, a further scaling of the message limit for image will not improve the effect any further, whereas the spread of knowledge can still be improved for prestige by scaling the number of journalists.

Figure
Figure 5. Requested reputation reports (Configuration 2)

Reliability: Scaling of cheaters

5.6
As a third scenario, the effect of different cheater rates (10%, 20%, or 30%) was evaluated. In contrast to the standard configuration, image was simulated with a message limit of five. For each of the three configurations, the order of the reputation types concerning the effectiveness of fraud reduction discussed above remains valid. Moreover, the simulations showed that prestige and image are capable of eliminating fraud completely, independently of the initial cheater rate (cf. Figure 7). For prestige and image, the number of fraud cases drops in proportion to the initial rate only until round 51, while for none the number of fraud cases remains higher, the higher the initial cheater rate was (cf. Figure 6).

Figure
Figure 6. Scaling of cheater rate (none) (Configuration 3)

Figure
Figure 7. Scaling of cheater rate (image) (Configuration 3)

In fact, image and especially prestige are even more effective in comparison to none if the initial cheater rate is scaled up. Notably, for both reputation types, deviant behaviour can be banished within the same period independently of the initial fraud percentage. Figure 8 shows that at an initial cheater rate of 30%, the final difference regarding the number of tasks completed without fraud amounts to 3.5 tasks between prestige and none (and to 2 tasks between prestige and image). At an initial rate of 20%, the difference between none and prestige is around 2.3 tasks; at a rate of 10%, only 1.3 tasks.

Figure
Figure 8. Improvement of task completion (Configuration 3, 30% cheaters)

Reliability: Impact of lies

5.7
Nevertheless, these beneficial effects of reputation are only achieved if agents do not lie when communicating reputation values. The impact of lies on reliability was investigated in a further scenario (Configuration 4). The simulation runs started with 20% cheaters as well as 20% liars and an ML of five for image. Figure 9 illustrates the rate of lies for prestige and image. The results show that image cannot reduce the rate of lies, whereas prestige, after an initially high rate of lies, is able to banish the spread of false information. The initially bad performance results from the lack of information the journalist has to estimate an agent's credibility. However, Figure 10 shows that the number of fraud cases for both prestige and image exceeds the number of fraud cases of the same configuration without lies. Nevertheless, the difference varies only slightly, although the rate of lies could not be reduced for image. Consequently, the occurrence of lies does not affect the general performance with respect to fraud cases to a large extent. After the rate of lies falls below 20% (see Figure 9), prestige behaves similarly in terms of detecting fraud compared to prestige without lies (standard configuration).

Figure
Figure 9. Ratio of lies (Configuration 4)

Figure
Figure 10. Impact of lies on the fraud cases (Configuration 4)

Scalability

5.8
A further scenario on scalability (Configuration 5) showed that the effectiveness of reputation without lies improves not only with an increase of the cheater population, but also with an increase of the entire population. In this scenario, the simulations started with a provider population of 30, a cheater rate of 20%, and a message limit of five for image and social esteem. Additionally, we allowed the agents to self-organise. In rounds 35 and 70, 30 providers with the same ratio of cheaters and trustworthy providers were added each time. Figure 11 shows that after scaling the population of cheating agents, all reputation types are able to decrease the number of fraud cases much faster compared to none. Additionally, each time the provider population is scaled, the number of fraud cases drops more rapidly and more deeply for all three reputation types (i.e. cheating agents are isolated and no further tasks are assigned to them). For instance, after the first scaling, social esteem needs 16 rounds to reach the level of 5 fraud cases; after the second scaling, this level is reached after 6 rounds.

Figure
Figure 11. Scaling of agent population (Configuration 5)

Stability

5.9
All of the previous configurations showed that 'correct' reputation also has beneficial effects on the economic performance, i.e. the stability of the market. The more fraud cases are reduced, the more the cheaters' average profit per round diminishes, and the more cheaters become bankrupt and leave the market. Figure 12, showing the decrease of the average cheaters' profits for Configuration 6 (Configuration 3 with 30% cheaters), is exemplary for the differences between prestige, image, and none. The average cheaters' profits for prestige and none differ by 2 € in round 30, which is nearly 30% of the cheaters' profits in the first rounds, when their attitude towards cheating is still widely unknown. Additionally, after 30 rounds for prestige and after 35 rounds for image, cheating agents are forced to work without profit, whereas for none this state is reached for the first time after 48 rounds. With prestige and image, the identification of cheaters has stronger effects on profits and is realised much faster compared with none (a 60% or 37% faster decrease beneath the profit threshold). In contrast, the average profits of the trustworthy agents are not affected differently by the reputation types. Additionally, for all configurations and reputation types, some of the trustworthy agents became insolvent, since providers that are known as trustworthy earlier are preferred, leaving other honest agents a lower chance of being recognised as trustworthy. (This also happens if cheating behaviour is not allowed.)

Figure
Figure 12. Decrease of cheaters' profits (Configuration 6)

Flexibility: Self-Organisation and Reputation

5.10
A last simulation scenario (Configuration 7) was concerned with the interdependencies between self-organisation and reputation. In this scenario, providers are allowed to form organisations. The formation process is initiated by a provider sending offers to trustworthy agents that possess the demanded skills to build an organisation. As organisations prescribe a long-term commitment, it is necessary that potential members consider each other trustworthy. To ensure that no cheater joins the organisation, providers with a trustworthiness value below 0.8 are not accepted as organisational members. An agent receiving such a request evaluates the trustworthiness of the potential members and compares it with the minimal trustworthiness value. If one of the members has a trust value below 0.8 or the agent itself is already part of another organisation, it refuses; otherwise, it agrees to join the new organisation. After evaluating the participants' responses, the initiator informs all agents that accepted the formation request and attaches the list of organisational members. The newly created organisation prescribes exclusive membership and delegates tasks according to a hierarchical organisational structure. The agent that requested the formation of the organisation automatically becomes the coordinator that delegates sub-tasks to members. The reputation parameters were adjusted as in Configuration 4 (no lies, 20% cheaters, message limit of five for image). The results of this configuration are astonishing from two viewpoints: firstly, prestige fosters the formation of organisations (cf. Figure 13), since agents only agree to organise if they can ensure a certain level of trustworthiness of the potential members. Secondly, for image and social esteem, fewer organisations are formed in the beginning compared to none; at least image can compensate the difference with respect to the number of organisations (by round 90). Especially the second result differs from our expectations, as we had assumed that additional knowledge encourages the formation process. At least the high number of organisations formed with prestige confirmed our assumption. The low number of organisations formed with image and social esteem is mainly due to the fact that the additional knowledge does not encourage establishing the strong relationship between two parties that is necessary for organisational formation; instead, this additional knowledge leads to exploring the set of non-cheating agents. For none, provider agents are interested in establishing frequent interactions with detected non-cheaters, since additional information about deviantly acting agents is not available. Consequently, more organisations are formed.
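The acceptance decision in this formation protocol is simple enough to sketch directly. A minimal Java rendering, assuming unknown members count as untrusted (all names hypothetical):

```java
import java.util.List;
import java.util.Map;

/** Sketch of the membership check in the organisation formation protocol described above. */
final class OrganisationFormation {
    static final double MIN_TRUST = 0.8;  // minimal trustworthiness value for organisation members

    /**
     * Decide whether this agent accepts a formation request, given its own
     * trust values for the proposed members and its current membership status.
     */
    static boolean acceptFormation(List<String> proposedMembers,
                                   Map<String, Double> trustValues,
                                   boolean alreadyOrganised) {
        if (alreadyOrganised) return false;  // organisations prescribe exclusive membership
        for (String member : proposedMembers) {
            // Unknown members default to 0.0 here (an assumption of this sketch).
            if (trustValues.getOrDefault(member, 0.0) < MIN_TRUST) return false;
        }
        return true;  // all members meet the threshold: agree to join
    }
}
```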

Figure
Figure 13. Number of organisations

Figure 14 in combination with Figure 13 highlights the directly proportional relationship between the number of organisations and the number of fraud cases. This figure shows, as trend lines, the average number of fraud cases of Configuration 3 (20% cheaters) minus the fraud cases of the configuration allowing self-organisation. For both prestige and image, the difference between the average fraud cases remains positive, i.e., reputation in combination with self-organisation reduces fraud further with respect to our standard configuration. These effects are based on the trustworthiness of organisations and their members. Consequently, when self-organisation is allowed, none is more effective initially (until round 45). In comparison to image, none also reduces fraud more effectively until round 37; afterwards, the more comprehensive investment in information pays off for image.

Figure
Figure 14. Surplus of fraud cases without self-organisation

* Conclusion and future work

6.1
In this paper, we presented our multi-level approach to reputation. The main hypothesis has been that reputation is an important mechanism for the self-regulation of markets (i.e. to achieve 'social order'). Referring to sociological theory, we provided a multi-level concept of reputation that consists of three different types (image, social esteem, and prestige). With regard to multiagent-based electronic marketplaces, we assumed that reputation furthers the robustness of MAS in terms of reliability, scalability, stability, and flexibility. In order to evaluate this assumption, we conducted a simulation study with 'fraud' as the perturbation scenario and evaluated the results for 'image', 'social esteem', and 'prestige' as well as 'no reputation' with respect to the four robustness criteria. A general result is that the spreading of trustworthiness values between agents indeed helps to increase the robustness, and thus reputation can be considered a self-regulation mechanism for obtaining social order. We also showed that the different types of reputation affect the handling of perturbations differently. The most important results are summarised in Table 2. In future work, we would like to deepen our understanding by investigating further consequences of reputation on scalability and stability.

Table 2: Selected results of the simulations

| Perturbation | Configuration | Main results |
|---|---|---|
| Reliability: Fraud handling | Configuration 1 | Spreading of trustworthiness values helps to decrease the rate of fraud; prestige decreases fraud cases more rapidly and more effectively |
| Reliability: Fraud handling | Configuration 2 | Image: the higher the message limit (ML), the stronger the decrease of fraud cases; fraud cases for ML 8/16 are comparable to prestige (Configuration 1); further scaling of the ML does not decrease fraud cases further |
| Reliability: Scaling of cheaters | Configuration 3 | Prestige and image are capable of eliminating fraud completely, independently of the initial cheater rate |
| Reliability: Impact of lies | Configuration 4 | Image cannot reduce the rate of lies, whereas prestige is able to banish the spread of false information |
| Scalability | Configuration 5 | The higher the number of providers, the more effectively reputation, and especially prestige, can handle entering cheaters; each time the cheater population increases, the number of fraud cases drops more rapidly and more deeply for all three reputation types |
| Stability | Configuration 6 | The more fraud cases are reduced, the more the cheaters' average profit per round diminishes, and the more cheaters become bankrupt and leave the market |
| Flexibility | Configuration 7 | Prestige fosters the formation of organisations |


* Acknowledgements

This work was funded by the German Research Foundation (DFG) in the priority program Socionics under contracts FI 420/4-3 and FL 336/1-3. We are indebted to Rolf Schmidt for his support concerning implementation and simulation runs. We thank the three anonymous referees as well as the editor Nigel Gilbert for helpful comments.


* Notes

1 With this definition, we are partially following Castelfranchi (2000: 1) but do not accept his 'functional' approach. We also reject a sociological view that equates social order with stability caused by normative integration, i.e. institutionalisation and internalisation of shared normative standards (cf. e.g. Parsons 1951: 11f., 36ff.). Our view corresponds with Elster's assumption that the problem of social order centres on predictability (of stable, regular patterns of behaviour) and cooperation (cf. Elster 1989: 1ff.).

2 According to Scott (1995: 33), "institutions consist of cognitive, normative, and regulative structures and activities that provide stability and meaning to social behaviour." Institutions — in terms of "property rights, governance structures, conceptions of control, and rules of exchange — enable actors in markets to organize themselves, to compete and to cooperate, and to exchange" (Fligstein 1996: 658).

3 The notion of 'desirable conduct' refers to possible solutions to the problem of social order and may comprise social cooperation, altruism, reciprocity, or norm obedience (cf. Conte and Paolucci 2002: 1).

4 Conte and Paolucci (2002: 76f.) draw a clear distinction between the 'transmission of reputation (or gossip)' in terms of a dissemination of a cognitive representation and the 'contagion of reputation' (or 'prejudice') in the sense of the spreading of a property in cases of social proximity and common membership in a social group, network or organisation.

5 The notion of markets as 'social fields' (cf. Bourdieu 2005, Fligstein 2001) implies that markets are structured (1) by a distribution of desirable and scarce resources (capital) that define the competitive positions of agents and constrain their strategic options, (2) by institutional rules that guide and constrain the actions of agents, and (3) by practices of competing agents which engage in power struggles to control relevant market shares and improve their relational position within the market field.

6 The simulation software was implemented in JAVA and is available at: http://www.ags.uni-sb.de/~chahn/ReputationFramework.


* References

ASHRI R, Ramchurn S D, Sabater J, Luck M and Jennings N R (2005) Trust Evaluation Through Relationship Analysis. Proceedings of the 4th International Joint Conference on Autonomous Agents and Multi-Agent Systems. http://www.ecs.soton.ac.uk/~nrj/download-files/ashri-aamas05.pdf

AXELROD R (1984) The Evolution of Cooperation. New York: Basic Books.

BA S L and Pavlou P A (2002) Evidence of the effect of trust building technology in electronic markets: Price premiums and buyer behavior. MIS Quarterly, 26 (3). pp. 243-268.

BHAVNANI R (2003) Adaptive Agents, Political Institutions and Civic Traditions in Modern Italy. Journal of Artificial Societies and Social Simulation, vol. 6, no. 4. https://www.jasss.org/6/4/1.html

BOURDIEU P (1980) The production of belief: contribution to an economy of symbolic goods. Media, Culture and Society, 2. pp. 261-293.

BOURDIEU P (1992) Language and Symbolic Power. Oxford: Blackwell.

BOURDIEU P (1994) In Other Words. Essays Towards a Reflexive Sociology. Cambridge, Oxford: Polity Press.

BOURDIEU P (1998) The Forms of Capital. In Halsey A H, Lauder H, Brown P, Stuart Wells A (Eds.), Education. Culture, Economy, and Society. Oxford, New York: Oxford University Press. pp. 46-58.

BOURDIEU P (2000) Pascalian Meditations. Stanford/Ca.: Stanford University Press.

BOURDIEU P (2005) Principles of an Economic Anthropology. In Smelser N J, Swedberg R (Eds.), The Handbook of Economic Sociology. second edition, Princeton: Princeton University Press. pp. 75-89.

BOURDIEU P and Wacquant, L J D (1992) An Invitation to Reflexive Sociology. Cambridge, Oxford: Polity Press.

CASTELFRANCHI C (2000) Engineering Social Order. In Omicini A, Tolksdorf R, Zambonelli F (Eds.), Engineering Societies in the Agents World. First International Workshop, ESAW 2000, Berlin, Germany, August 2000. Revised Papers. Lecture Notes in Artificial Intelligence LNAI 1972. Springer-Verlag: Berlin, Heidelberg, New York. pp. 1-18.

CASTELFRANCHI C and Conte R (1995) Cognitive and Social Action. London: UCL Press.

CASTELFRANCHI C, Conte R and Paolucci M (1998) Normative reputation and the costs of compliance. Journal of Artificial Societies and Social Simulation 1 (3), https://www.jasss.org/1/3/3.html

COLEMAN J (1988) Social Capital in the Creation of Human Capital. American Journal of Sociology, 94 (Supplement). pp. 95-120.

CONTE R and Gilbert N (1995) Introduction: Computer simulation for social theory. In Gilbert N and Conte R (Eds.), Artificial Societies: The Computer Simulation of Social Life. London: UCL Press. pp. 1-15.

CONTE R and Paolucci M (2002) Reputation in Artificial Societies: Social Beliefs for Social Order. Dordrecht: Kluwer Academic Publishers.

DASGUPTA P (1988) Trust as a Commodity. In Gambetta D (Ed.), Trust: Making and Breaking Cooperative Relations. Oxford, New York: Basil Blackwell. pp. 49-72.

DAVIS R and Smith R G (1983) Negotiation as a metaphor for distributed problem solving. Artificial Intelligence (20). pp. 63-109.

DELLAROCAS C (2003) The digitization of word of mouth: Promise and challenges of online feedback mechanisms. Management Science 49 (10). pp. 1407-1424.

ELSTER J (1989) The Cement of Society. A Study of Social Order. Cambridge: University Press.

FISCHER K and Florian M (2005) Contribution of Socionics to the Scalability of Complex Social Systems: Introduction. In Fischer K, Florian M and Malsch T (Eds.), Socionics: Its Contributions to the Scalability of Complex Social Systems, Lecture Notes in Artificial Intelligence LNAI 3413. Berlin, Heidelberg, New York: Springer-Verlag.

FLIGSTEIN N (1996) Markets as politics: A political-cultural approach to market institutions. American Sociological Review 61. pp. 656-673.

FLIGSTEIN N (2001) The architecture of markets. An economic sociology of twenty-first-century capitalist societies. Princeton and Oxford: Princeton University Press.

FOMBRUN C J (1996) Reputation: Realizing Value from the Corporate Image. Boston, Mass.: Harvard Business School Press.

FOMBRUN C and Shanley M (1990) What's in a Name? Reputation-Building and Corporate Strategy. Academy of Management Journal 33. pp. 233-258.

GRANOVETTER M (1985) Economic Action and Social Structure: The Problem of Embeddedness. American Journal of Sociology 91. pp. 481-510.

HAHN C, Fley B, and Florian M (2006) Self-regulation through social institutions: A framework for the design of open agent-based electronic marketplaces. Computational & Mathematical Organization Theory 12 (2-3). pp. 181-204.

HALES D (2002) Group Reputation Supports Beneficent Norms. Journal of Artificial Societies and Social Simulation 5 (4). https://www.jasss.org/5/4/4.html

JENNINGS N R, Sycara K and Wooldridge M J (1998) A roadmap of agent research and development. Journal of Autonomous Agents and Multi-Agent Systems 1 (1). pp. 7-38.

KOLLOCK P (1994) The Emergence of Exchange Structures: An Experimental Study of Uncertainty, Commitment, and Trust. American Journal of Sociology 100 (2). pp. 313-345.

KREPS D M and Wilson R (1982) Reputation and Imperfect Information. Journal of Economic Theory 27. pp. 253-279.

LEPPERHOFF N (2002) SAM - Simulation of Computer-mediated Negotiations. Journal of Artificial Societies and Social Simulation 5 (4). https://www.jasss.org/5/4/2.html

MALSCH T (2001) Naming the Unnamable: Socionics or the Sociological Turn of/to Distributed Artificial Intelligence. Autonomous Agents and Multi-Agent Systems 4. pp. 155-186.

MERTON R K (1967) Social Theory and Social Structure. Revised and enlarged edition, New York: The Free Press.

PARSONS T (1951) The Social System. Glencoe, Ill. [etc.]: The Free Press.

PANZARASA P and Jennings N R (2001) The Organisation of Sociality: A Manifesto for a New Science of Multi-Agent Systems. Proceedings of the 10th European Workshop on Multi-Agent Systems (MAAMAW-01), Annecy, France http://www.ecs.soton.ac.uk/~nrj/download-files/maamaw01.pdf

PAVLOU P A and Gefen D (2004) Building effective online marketplaces with institution-based trust. Information Systems Research 15 (1). pp. 37-59.

RAO H (1994) The Social Construction of Reputation: Certification Contests, Legitimation, and the Survival of Organizations in the American Automobile Industry, 1895-1912. Strategic Management Journal 15. pp. 29-44.

RAUB W and Weesie J (1990) Reputation and Efficiency in Social Interactions: An Example of Network Effects. American Journal of Sociology 96 (3). pp. 626-654.

RESNICK P, Zeckhauser R, Friedman E and Kuwabara K (2000) Reputation Systems. Communications of the ACM 43 (12). pp. 45-48.

RITZER G (1996) Sociological Theory. Fourth Edition. New York etc.: McGraw-Hill.

ROUCHIER J, O'Connor M and Bousquet F (2001) The creation of a reputation in an artificial society organised by a gift system. Journal of Artificial Societies and Social Simulation 4 (2) https://www.jasss.org/4/2/8.html

SAAM N J and Harrer A (1999) Simulating Norms, Social Inequality, and Functional Change in Artificial Societies. Journal of Artificial Societies and Social Simulation 2 (1) https://www.jasss.org/2/1/2.html

SABATER J (2003) Trust and reputation for agent societies. Monografies de l'Institut d'Investigació en Intel.ligència Artificial, Number 20. Spanish Scientific Research Council. Universitat Autònoma de Barcelona Bellaterra, Catalonia, Spain (PhD thesis) http://www.iiia.csic.es/~jsabater/Documents/Thesis.pdf

SABATER J and Sierra C (2002) Reputation and Social Network Analysis in Multi-Agent Systems. Proceedings of First International Joint Conference on Autonomous Agents and Multiagent Systems. pp. 475-482.

SAWYER R K (2003) Artificial societies: Multi agent systems and the micro-macro link in sociological theory. Sociological Methods and Research 31 (3). pp. 325-363.

SCHILLO M, Bürckert H J, Fischer K and Klusch M (2001) Towards a Definition of Robustness for Market-style open Multiagent Systems. Proceedings of the Fifth International Conference on Autonomous Agents (AA'01). pp. 75-76.

SCHILLO M, Fischer K, Fley B, Florian M, Hillebrandt F and Spresny D (2004) FORM — A Sociologically Founded Framework for Designing Self-Organization of Multiagent Systems. In Lindemann G, Moldt D, Paolucci M and Yu B (Eds.), Regulated Agent-Based Social Systems. First International Workshop, RASTA 2002. Bologna, Italy, July 2002. Revised Selected and Invited Papers. Lecture Notes in Artificial Intelligence LNAI 2934. Berlin, Heidelberg, New York: Springer-Verlag. pp. 156-175.

SCOTT W R (1995) Institutions and Organizations. Thousand Oaks, London, New Delhi: Sage Publications.

SHAPIRO S (1987) The Social Control of Impersonal Trust. The American Journal of Sociology 93 (3). pp. 623-658.

SHENKAR O and Yuchtman-Yaar E (1997) Reputation, image, prestige, and goodwill: An interdisciplinary approach to organizational standing. Human Relations 50 (11). pp. 1361-1381.

TADELIS S (1999) What's in a Name? Reputation as a Tradeable Asset. American Economic Review 89 (3). pp. 548-563.

WEIGELT K and Camerer C (1988) Reputation and Corporate Strategy: A Review of Recent Theory and Applications. Strategic Management Journal 9. pp. 443-454.

YOUNGER S (2004) Reciprocity, Normative Reputation, and the Development of Mutual Obligation in Gift-Giving Societies. Journal of Artificial Societies and Social Simulation 7 (1) https://www.jasss.org/7/1/5.html

YU B and Singh M (2002) Distributed Reputation Management for Electronic Commerce. Computational Intelligence 18 (4). pp. 535-549.


© Copyright Journal of Artificial Societies and Social Simulation, [2007]