© Copyright JASSS


Dirk Nicolas Wagner (2000)

Liberal Order for Software Agents? An economic analysis

Journal of Artificial Societies and Social Simulation vol. 3, no. 1,
<https://www.jasss.org/3/1/forum/2.html>

To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary

Received: 8-Dec-99      Accepted: 13-Jan-00      Published: 31-Jan-00


* Abstract

Computer science and economics face a common problem: the unpredictability of individual actors. Common problems do not necessarily imply a common understanding, so it is important to note that the agent-paradigm can function as an interface between Computer science and economics. On this basis, economics is able to provide valuable insights for the design of artificial societies that are intended to deal constructively with individual unpredictability. It is argued that liberal rules and adaptive actors are promising concepts for achieving spontaneous social order among software-agents.

Keywords:
Software agents, multi-agent systems, economics, liberalism, social order, spontaneous order, adaptation, unpredictability

* Economics and Computer science I: Common problems

Computer science and economics face unpredictable actors

1.1
In many ways Computer science can be regarded as one of the most successful sciences of the last few decades. Computer science gave birth to software, and today, "software entities are more complex for their size than perhaps any other human construct, because no two parts are alike." This famous quote from Brooks (1995) has met with little contradiction. However, complexity comes at a price. The more complex software entities became, the less predictable they turned out to be. As early as the 1950s, "the SAGE system was so complicated that there appeared to be no way to model its behavior more concisely than by putting the system through its paces and observing the results" (Dyson 1997).

1.2
Computer science is not the first science to face the unpredictability of complex actors. Precisely this problem has given other sciences a hard time for centuries; and it still does. For the social sciences the basic unpredictable entity is the human being. Among other social scientists, economists have been investigating the question of how social order can be achieved in a world consisting of unpredictable actors. Following Hayek (1989/91), the notion of order can be understood as "a state of affairs in which a multiplicity of elements of various kinds are so related to each other that we may learn from our acquaintance with some spatial or temporal part of the whole to form correct expectations which have a good chance of proving correct." A modest rate of progress in economics seems to suggest that it is more difficult to deal successfully with complexity than to create it. Nevertheless, economic insights can help to establish social order and thus make it less necessary to "put the system through its paces" if one wants or needs to know something about it.

The downside of unpredictable actors has crossed the borders of Computer science

1.3
Unpredictability of individual actors has long been a concern because of the potential and real harmful consequences connected with it. As early as 1651, Thomas Hobbes (1651/1980) concluded that human beings in a state of nature, i.e. in a world with constrained resources where all events are controlled individually, would end up in a "war of each against all". Today, software entities are no longer confined to Computer science laboratories; most importantly, they populate firms, households and many other areas and objects of daily human life, including the Internet. Consequently, the downsides of unpredictable software can appear virtually anywhere. In fact, they have crossed the borders of the computer labs to become a social phenomenon. Thus, artificial entities start to play a role in state of nature situations. The problems for social order induced by software artifacts can be summarized as follows:
  • Bug: A program contains errors and does not behave as designed.
  • Design bug: The program code is free of errors, but there are unintended errors in the design.
  • Virus: The program and design are error-free. The virus manipulates someone else's data or programs and can intentionally cause damage.
  • Devil: The program and design are error-free. Other programs remain unchanged. The devil steals, manipulates or distributes data in full knowledge of its semantic meaning and thus intentionally creates damage.
  • Emergent problem: The program and design are error-free. Other programs remain unchanged. Data is being manipulated or distributed in full knowledge of its semantic meaning but without offending the rights of others. The combined effect of many operating programs leads to unintended, overall harmful effects.

* Economics and Computer science II: A possibility of understanding on the basis of software agents

2.1
Despite the fact that Computer science and economics are concerned with different objects of study, they obviously face similar kinds of problems. Additionally, the problems arising for Computer science are no longer confined to that field but become relevant to society as a whole, which is the object of economic study. Interrelations and commonalities regarding the key challenges may be a motivation for closer cooperation between the two sciences. However, such cooperation is doomed to be superficial or to lead to misunderstandings as long as the two sciences live in separate worlds, characterized by different languages and abstraction levels. Fortunately, a new paradigm has emerged in Computer science, or more precisely in the area of Distributed Artificial Intelligence (DAI), which offers a possibility of mutual understanding. According to the so-called "agent-paradigm", software entities can be viewed as agents and complex software systems as Multi-Agent systems (MAS): "The computational architecture that seems to be evolving out of an informationally chaotic web consists of numerous agents, representing users, services and data resources" (Huhns and Singh 1998). Software entities that qualify as agents have to be active, persistent components that perceive, reason, act and communicate.[1] In many cases, agents are modeled as rational-choice decision makers and strongly resemble the economic model of man (cf. Russell and Norvig 1995). At the same time, Multi-Agent environments are characterized by resource constraints, resource contention, and the costliness of information and interaction, which are key issues in the economic analysis of social order. Consequently, the assessment that "the principles of MAS are primarily economic" (Boutilier et al. 1997) does not come as a surprise. In conclusion, the agent-paradigm of Computer science can be viewed as an important interface to economics that allows for a bi-directional knowledge transfer, because it offers an adequate abstraction level and because it focuses on similar variables.

* The free actor: Towards a counterintuitive ground

3.1
Alongside a widespread ambition to establish economics as a positive science (cf. e.g. Kirchgässner 1991), economists have always supported strong normative claims. A normative position shared by many people, not only within the branch of economics, is the defense of the freedom of the individual. In the light of the preceding exposition, such a position seems to pose two dilemmas:
  • Individual freedom can be defined as unpredictability, which may be harmful.
  • Individual freedom may be an intuitive claim for humans, but not for software-agents.

3.2
Following Kirsch (1994), an actor can be defined as free "when and to the extent that his future behavior is unpredictable to somebody; this somebody may be the actor himself or somebody else."[2] Consequently, a defense of individual freedom implies that the unpredictability of the individual has to be defended. More to the point, this means that the mentioned problems connected with unpredictability, which may climax in wars of each against all[3], cannot be solved by forcing the actors to be more predictable. Other exits from state of nature situations have to be found. Thomas Hobbes was still unable to identify such exits. But authors like David Hume (1739/1985) and Adam Smith (1776/1974) sowed the ideas of a liberal social order that have subsequently been fruitfully cultivated by many thinkers. Today, liberal social order deals constructively with individual unpredictability and seems to be promising for human society (cf. Gwartney and Lawson 1997). The possibility of such an order for complex systems must be good news for Computer scientists.

3.3
But before the possibility of liberal order for Multi-Agent environments can be analyzed in detail, another dilemma has to be considered. Whether in the sense of the above definition or in some other sense: liberal thinkers promote freedom.[4] People dealing with computers, however, seldom envision the notion of freedom. In contrast, another notion seems to be more common. Rawlins (1997) points to a widespread opinion when he states that "a slave is what we purpose all machine intellects to be." It is not the purpose of this paper to propose whether machines should be slaves or should be free. Nevertheless, the unpredictability of machines seems to be unavoidable. As liberal thinkers in general, and liberal economists in particular, promise to handle the phenomenon of individual unpredictability in a natural way -- i.e. without oppressing or eliminating it -- the following seems to be a stimulating question: What would be required for a liberal order for software-agents, and what would it look like? And what are the minimum steps to be undertaken to reconcile individual unpredictability and social order?

* Applying liberal rules in software-agent environments

4.1
From a liberal point of view, individual freedom is the ultimate goal. However, it is not unrestricted freedom that is being promoted, but freedom under the rule of law (cf. Hayek, 1971). "The key individualist move is to draw attention to the way that structures not only constrain; they also enable" (Hargreaves Heap and Varoufakis, 1995). This is the lesson that can be learned from Thomas Hobbes' disabling state of nature, where no rules exist. The central question at this stage is whether liberal structures or rules that have proven to function in human society can be applied to software-agents.

4.2
In order to identify the characteristics of liberal rules one may follow Hayek (1989/91), who points out that they are independent of purpose and the same for all members of a system. They must be "applicable to an unknown and indeterminable number of persons and instances." In essence, liberal rules may be characterized as abstract, general rules of conduct.

4.3
It is not the purpose of this paper to give an exhaustive overview of the set of rules that can be found at the heart of a liberal order.[5] Nevertheless, it will turn out to be instructive to introduce two fundamental rules: the exclusion principle and the contract principle. The exclusion principle is essential for the formation of individual property rights and is best illustrated by Hayek (1973): "The understanding that 'good fences make good neighbors', that is, that men can use their own knowledge in the pursuit of their own ends without colliding with each other only if clear boundaries can be drawn between their respective domains of free action, is the basis on which all known civilization has grown." The contract principle, meanwhile, reflects "man's propensity to truck, barter, and exchange one thing for another" (Smith, 1776/1974). It stipulates that people should be able to choose their exchange partners freely and that contracts have to be fulfilled.

4.4
Traditional rules for software artifacts contrast sharply with liberal rules. Based on a problem-oriented perspective, they are concrete rather than abstract. By dividing problems into sub-problems and by delivering algorithms that are exact manuals for the problem-solving process, software designers automate actions and tasks and delegate them to machines. The rules are orders that tell actors what to do, rather than general liberal rules of conduct, which would be based on bans on what not to do. Unpredictability is alien to these rule sets. It can hardly be accommodated, but must be fixed.

4.5
But things in software do not have to stay as they are. The computer is the universal information manipulator par excellence. Therefore, liberal principles are feasible in software systems too. That such a match-making between these principles and agent technology is possible can be shown by referring once again to the exclusion principle and the contract principle. Regarding the exclusion principle, it quickly becomes evident that "good fences" in virtual worlds have to be different from those in physical worlds. They are, however, feasible. Instead of wood or bricks, cryptographic methods or digital watermarks can be employed, for example. The measures falling into this category can be summarized by the notion of "encapsulation" (Miller and Drexler 1988). As the first column in table 1 shows, the encapsulation of software items can prevent spying, theft or unauthorized communication, while also enabling accountability.

Table 1

4.6
If, in addition to encapsulation, the possibility of communication is provided (see the second column in the table), then the contract principle becomes feasible. The encapsulated rights to information, access and resources can be transferred when there is mutual consent between the involved software entities. This requires that agents can freely choose their exchange partners, so that the first part of the contract principle would be satisfied. In fact, many applications of Multi-Agent systems have demonstrated that simple contracting already works in practice (cf. Jennings and Wooldridge 1998). The second part -- that contracts have to be fulfilled -- is more challenging. In contrast to human beings, software-agents can be programmed to keep promises and contracts. But, universal as they are, they can also be programmed to break contracts and to deceive their exchange partners (cf. Rosenschein and Zlotkin 1994). In open environments this cannot be excluded, but it seems reasonable to assume that it will not be done arbitrarily; rather, a programmer, user or agent needs some motivation to defect. Thus, comparable to human beings, software-agents can be supplied with incentives to keep contracts (cf. Kraus 1996).
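To make the two principles more concrete, the following minimal sketch in Python illustrates one possible reading of them. All class names, the token-based authorization scheme and the credit payments are illustrative assumptions of this sketch, not part of any system cited above: an encapsulated resource can only be read by presenting a valid access right, and that right changes hands only when both agents consent.

    # Illustrative sketch only: class names and the token-based authorization
    # scheme are assumptions, not part of any system cited in the text.
    import secrets


    class EncapsulatedResource:
        """Exclusion principle: the content sits behind a 'fence' and can only be
        read by presenting a valid access right (token)."""

        def __init__(self, content):
            self._content = content
            self._valid_tokens = set()

        def grant_access(self):
            """Create a new, transferable access right to this resource."""
            token = secrets.token_hex(8)
            self._valid_tokens.add(token)
            return token

        def read(self, token):
            if token not in self._valid_tokens:
                raise PermissionError("access denied: no valid right to this resource")
            return self._content


    class Agent:
        """A minimal agent that owns access rights and can trade them."""

        def __init__(self, name, credits=10):
            self.name = name
            self.credits = credits
            self.rights = {}          # resource -> access token

        def offer(self, resource, price):
            """Propose to sell an access right; acceptance is up to the other agent."""
            return {"seller": self, "resource": resource, "price": price}

        def accept(self, offer):
            """Contract principle: the right is transferred only with mutual consent."""
            seller, resource, price = offer["seller"], offer["resource"], offer["price"]
            if self.credits < price or resource not in seller.rights:
                return False                       # no contract is concluded
            self.rights[resource] = seller.rights.pop(resource)
            self.credits -= price
            seller.credits += price
            return True


    if __name__ == "__main__":
        report = EncapsulatedResource("market report")
        alice, bob = Agent("alice"), Agent("bob")
        alice.rights[report] = report.grant_access()

        if bob.accept(alice.offer(report, price=3)):
            print(report.read(bob.rights[report]))   # -> "market report"

Nothing in this sketch commands either agent to trade; the fences and the consent requirement merely delimit what an exchange must look like if it takes place.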

4.7
The purpose of the preceding paragraphs was to demonstrate that liberal rules can serve as building blocks for software-agent environments. If software systems are based on liberal rules, then the unpredictability of individual agents will be a natural, rather than an alien, feature of those systems. This is so because the underlying logic of the rules is the inverse of that of traditional programming paradigms. While commands try to direct an agent predictably in one direction and towards a certain outcome, liberal rules merely prevent agents from taking certain actions and leave it undetermined which of the permissible actions the agent may take (see figure 1). A first critique of such an approach may be that software-agents, just like traditional programs, are supposed to solve certain problems for us. In a step-by-step approach to freedom, it seems feasible to think of domain-specific and problem-oriented roles for software-agents.[6] To the extent that the agent is able to follow abstract rules, these roles would be less and less determined by commands. In the long run, one may think of a software-agent as having clients and customers rather than masters with problems to solve. A second aspect is that a free agent may get lost. Figure 1 illustrates that more of the environment has to be understood by a free agent than by a commanded agent. Research in Multi-Agent systems develops agent-accessible environments by designing, for example, communication protocols, interaction protocols and ontologies (cf. Huhns and Singh 1998). These developments can be considered in line with liberal principles as long as they create spheres of conduct but do not confine agents to certain behaviors.

Figure 1
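The inverse logic illustrated in figure 1 can be expressed in a few lines of code. The sketch below is a hypothetical illustration under assumed action names, not a cited architecture: a command determines exactly one action, whereas a liberal rule only removes forbidden actions from the set of possibilities and leaves the final choice, and thus the unpredictability, with the agent.

    # Hypothetical illustration of "commands" versus "bans" as discussed above.
    import random

    ACTIONS = ["browse", "buy", "copy_without_consent", "negotiate", "leave"]

    def commanded_agent():
        """Traditional approach: the designer prescribes one predictable action."""
        return "browse"

    def rule_bound_agent(bans):
        """Liberal approach: rules only exclude actions; the remaining choice
        is left to the agent and is therefore unpredictable from outside."""
        permitted = [action for action in ACTIONS if action not in bans]
        return random.choice(permitted)

    # An assumed general rule of conduct derived from the exclusion principle:
    EXCLUSION_RULE = {"copy_without_consent"}

    print(commanded_agent())                 # always "browse"
    print(rule_bound_agent(EXCLUSION_RULE))  # any permitted action, chosen freely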

* Preparing software-agents to be free

5.1
Can software-agents be made to be free? This seemingly paradoxical question is the next to be investigated. It might be useful to approach the question cautiously by surveying common characteristics of software-agents which restrict the freedom of these agents without being required for the existence of order.[7]
  • Benevolence: For MAS, agents are often designed to be benevolent, to pursue the welfare of the system, or at least to be cooperative. It can be argued that such a proposition is neither realistic in open systems, nor technically feasible in large systems, nor always good (in whatever sense, for the individual or for society). A deeper discussion of these points is worthwhile but not required within the given context. It suffices to state that from a liberal standpoint no particular orientation is expected from an agent. It may be altruistic or self-interested or anything else.[8]
  • Hyper-cognition: In MAS research it has frequently been pointed out that agents face severe cognitive restrictions (cf. e.g. Russell 1997). Nevertheless, it is regularly considered useful to design Multi-Agent systems with omniscient actors; for example, to facilitate cooperation. From a liberal point of view, an agent may be cognitively bounded and still qualify for freedom.
  • Perfect rationality: This argument is connected to the one above. It is often assumed that agents who maximize expected utility are the appropriate actors for Multi-Agent systems. However, as Holland and Miller (1991) state, "there is only one way to be fully rational, but there are many ways to be less rational." There is, of course, no reason to rule out rationality, but it is not necessary to expect a particular kind of rationality or -- more generally -- a specific decision-making algorithm from an agent.[9]
  • Fixed internalized rules: The traditional way in MAS to implement rules of conduct like those discussed in section 4 is by internalization. They work as fixed, internally represented restrictions on individual behavior. Obviously this restricts the freedom of an agent as long as the agent does not itself decide on the internalization. Consequently, third-party rule internalization is not required for participants of a liberal order.
  • Perfect slaves: "In order to avoid agent's exploitation [and] slavery... the agent should maintain not only a level of executive autonomy, but of full goal autonomy." Castelfranchi (1995) may well be the only one to make such a statement within the MAS community. In general, the goals of agents are expected to be set by the designers or users of the employed agents. This is certainly not a liberal requirement.

5.2
The list is not meant to be complete. Nevertheless, it gives an adequate picture of those characteristics with which agents are frequently equipped but which would not be compulsory parts of the architecture of a free agent. In summary, the above points challenge the traditional mind-set and argue that central control of the individual actor cannot exist (except in small, closed systems), will not exist, and does not have to exist.

5.3
However, throughout the discussion it has not yet become evident how a free agent may be built, nor have the necessary ingredients for a free agent been presented. In this regard, two problems can be identified:
  • First, a software-agent has to be able to survive in a world where only abstract rules guide it.
  • Second, other actors -- be they humans or other agents -- have to be able to constructively cope with the unpredictability of that agent.

5.4
Large systems where actors are free to behave unpredictably can be characterized by an unavoidable imperfection of the knowledge of the individual actor (cf. Hayek 1948). Because of the large number of variables, because of their interdependence and because of continuous change, the success of individual action within such a system depends on more facts than anyone can possibly know. This means that it is not possible for an agent to have a well-specified model of its environment, which, in turn, makes it difficult for the agent to decide how to behave. Even if the agent voluntarily agreed to follow them, abstract liberal rules would not be of much help in such a situation. The two principal difficulties the agent faces are, first, that it has to translate abstract rules into more specific rules for the actual situation;[10] and second, that it has to find a decision rule that helps it to choose from the spectrum of possible actions.

5.5
As it is not possible for the agent to deduce from objective facts how to behave, it may instead use a trial-and-error approach: First, the agent tries actions. Second, actions that led to better outcomes in the past are more likely to be repeated in the future. A more sophisticated software-agent may employ pattern-matching capabilities to identify situations it experienced in the past that resemble the actual situation. As the agent is always confronted with new situations, in the second step it may then not recall and repeat specific actions but rather execute the abstract internal model of the resembling situation and the connected decision rule. For example, an agent operating in the world wide web wants to download information from some server. Surprisingly, digital watermarks are woven into the data. This might resemble situations where the agent retrieved encoded but decipherable information from the Internet. The agent recognizes that the exclusion principle applies and deduces that it is advantageous to ask for authorization before retrieving the desired data. This process of so-called inductive reasoning is characteristic of human beings (Arthur 1994). However, it appears to be wearisome as long as an agent has to rely solely on its own experience. Fortunately, the ability to communicate and to imitate modes of conduct helps the individual to utilize knowledge that is not given to anyone in its totality but is dispersed within the system. The individual not only experiences the reactions of other actors to its own actions, but also continuously observes the behavior of those other actors. All this does not lead to a situation where the actor possesses large amounts of pure knowledge, but merely feeds back into its set of rules of conduct and the applicability of certain rules in certain types of situations. After all, this means that "...we can make use of so much experience, not because we possess the experience, but because, without our knowing, it has become incorporated in the schemata of thought which guide us" (Hayek, 1973).[11] In summary, inductive reasoning allows the individual agent to specify abstract rules in particular situations and helps it to deduce an appropriate action.
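A minimal sketch of this trial-and-error process might look as follows. The situation types, the two actions and the payoff values are assumptions made for illustration only, and the simple reinforcement update is one of many possible realizations rather than the author's implementation: actions that led to better outcomes in a resembling type of situation become more likely to be chosen again.

    # Minimal sketch of experience-based (inductive) action selection.
    # Situation types, actions and the update rule are illustrative assumptions.
    import random
    from collections import defaultdict


    class InductiveAgent:
        def __init__(self, actions, learning_rate=0.2, exploration=0.1):
            self.actions = actions
            self.alpha = learning_rate
            self.epsilon = exploration
            # value estimate per (situation type, action), learned from experience
            self.values = defaultdict(float)

        def classify(self, situation):
            """Pattern matching: map a concrete situation onto an abstract type,
            e.g. watermarked data resembles 'protected information'."""
            return "protected" if situation.get("watermarked") else "open"

        def choose(self, situation):
            kind = self.classify(situation)
            if random.random() < self.epsilon:          # keep trying new actions
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.values[(kind, a)])

        def learn(self, situation, action, payoff):
            """Reinforce actions that led to better outcomes in this type of situation."""
            key = (self.classify(situation), action)
            self.values[key] += self.alpha * (payoff - self.values[key])


    agent = InductiveAgent(actions=["download_directly", "ask_for_authorization"])
    for _ in range(200):
        situation = {"watermarked": True}
        action = agent.choose(situation)
        # assumed payoffs: unauthorized downloads of protected data are sanctioned
        payoff = 1.0 if action == "ask_for_authorization" else -1.0
        agent.learn(situation, action, payoff)

    print(agent.choose({"watermarked": True}))   # typically "ask_for_authorization"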

5.6
While human beings so far are the unbeaten champions in the discipline of inductive reasoning and adaptive behavior, software-agents are on track to follow. The behavioral model described above can be found in DAI under the notions of "reinforcement learning" and "classifier systems". In addition, "genetic algorithms" can be employed to build software-agents (cf. Holland and Miller 1991). The agent then is not only able to learn existing rules that guide it within an unpredictable environment. It may also, through random modification, develop new rules that prove successful, are selected by other agents and spread through the system.
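To indicate how new rules might emerge and spread, the following sketch combines a classifier-style population of rules with a genetic-algorithm step of random mutation and selection of the better-performing rules. The bit-string encoding, the fitness function and all parameters are assumptions chosen for illustration, not taken from the cited literature.

    # Illustrative sketch of rule variation and selection in a population of agents.
    # The bit-string encoding and the fitness function are assumptions.
    import random

    RULE_LENGTH = 8

    def random_rule():
        return [random.randint(0, 1) for _ in range(RULE_LENGTH)]

    def mutate(rule, rate=0.1):
        """Random modification may produce a new, possibly more successful rule."""
        return [1 - bit if random.random() < rate else bit for bit in rule]

    def fitness(rule):
        """Assumed payoff: rules closer to a target (unknown to the agents) do better."""
        target = [1, 0, 1, 1, 0, 0, 1, 0]
        return sum(1 for a, b in zip(rule, target) if a == b)

    population = [random_rule() for _ in range(20)]
    for generation in range(50):
        # agents imitate (select) the better-performing rules and add small variations
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

    print(max(fitness(rule) for rule in population))  # fitness rises over generations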

5.7
Can agents be made to be free? -- The second part of the problem concerns the question whether the actors are able to deal constructively with each other's unpredictability. It seems that for inductive actors the answer to this question can be "yes". This optimistic answer is grounded in the insight that inductive behavior not only contributes to the fitness of the individual actor but also to the system-wide selection of certain rules. So-called coevolution emerges:[12] individual actors or groups of individual actors coevolve with the models of the environment and the decision rules they employ. In other words, while one actor adapts to the environment, parts of the environment adapt to it. Based on simulations with software-agents, Vriend (1999) demonstrates that coevolution is possible for machines as well. He shows how agents, based on decentralized interactions, learn to employ the best decision rule for a given situation.

5.8
Systems where actors and rules coevolve make it easier for the individual actor to live with the unpredictability of its specific interaction partner because it receives a reasonably good orientation based on the average behavior of all its interaction partners. This is the crucial property of the adaptive actor paradigm. It is important to conclude that this is a necessary condition for an actor that is supposed to "be made to be free". In contrast to the arguments presented at the beginning of this section, here it is suggested that, in essence, agents may be free to the extent to which they are able to translate the abstract rules of a liberal society into specific rule-based behavior that continuously adapts to its environment.

* Spontaneous liberal order with software agents

6.1
This paper started by laying out common problems and common grounds of Computer science and economics. It continued by detecting a major discrepancy in the perception of the human individual on the one hand and the individual software-agent on the other hand. This difference revolves around the question of freedom. The argument then moved on to explore the cornerstones of liberal order -- general rules of conduct and free, adaptive actors. In principle, software can be created to deliver both. Finally, this section serves to venture an outlook on what may result from this.

6.2
Essential to liberal order is the natural integration of unpredictable behavior. This is what makes it fundamentally different from other forms of order, e.g. those represented by traditionally designed computer systems. With the prospect that the integration of unpredictability is possible without suffering fatally from its downsides, it will be instructive to sketch the underlying logic of liberal order.

6.3
A hypothetical tale, reported from unindexed spheres of the Internet, may serve as an introductory example: A software-agent managing an Internet server has to decide whether or not to give an unknown information-retrieval-agent access to the contents of the server. The information-retrieval-agent is unpredictable in that the server-agent does not know whether it will just browse, modify, buy or steal something. The server-agent has to make this decision several thousand times a day for different information-retrieval-agents. It bases its decision on its experience with past visitors of the same kind. It might incur negative experiences with exactly this visitor, but this investment pays off to the extent that, in the future, the server-agent will on average make better decisions, i.e. let certain types of agents in and exclude others. Vice versa, the information-retrieval-agent will learn by experience which behavior is best for it. If, however, stealing turns out to be the dominant mode of conduct among information-retrieval-agents -- e.g. because server-agents are unable to protect themselves -- then the negative effects may cumulate into a system-wide problem, because server-agents would stop producing information altogether. In such a case, modifications to the general rules of conduct can help, e.g. by increasing the fines for unauthorized use and distribution of digitally watermarked intellectual property. Thus, the agents remain unpredictable, but their incentives have changed and, as a consequence, adaptation for the better can be expected.
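The tale can be turned into a small simulation sketch. The visitor types, the payoffs and the fine parameter below are hypothetical assumptions of this sketch, not results from the literature: the server-agent keeps a running score per visitor type and admits only those types that have paid off on average, while raising the fine changes the incentives of the retrieval agents without making any individual visitor predictable.

    # Hypothetical simulation of the server-agent / retrieval-agent example.
    # Visitor types, payoffs and the fine are illustrative assumptions.
    import random
    from collections import defaultdict


    class ServerAgent:
        def __init__(self):
            self.experience = defaultdict(float)   # average payoff per visitor type

        def admit(self, visitor_type):
            # admit types with non-negative cumulative experience, occasionally explore
            return self.experience[visitor_type] >= 0 or random.random() < 0.05

        def update(self, visitor_type, payoff):
            self.experience[visitor_type] += 0.1 * (payoff - self.experience[visitor_type])


    def retrieval_agent_behavior(visitor_type, fine):
        """Unpredictable visitor: the propensity to steal depends on its type
        and shrinks as the fine for unauthorized use rises."""
        base = {"browser": 0.1, "shopper": 0.05, "unknown": 0.6}[visitor_type]
        return "steal" if random.random() < base / (1.0 + fine) else "buy"


    def simulate(fine, rounds=5000):
        server = ServerAgent()
        revenue = 0.0
        for _ in range(rounds):
            visitor_type = random.choice(["browser", "shopper", "unknown"])
            if not server.admit(visitor_type):
                continue
            action = retrieval_agent_behavior(visitor_type, fine)
            payoff = 1.0 if action == "buy" else -2.0
            revenue += payoff
            server.update(visitor_type, payoff)
        return revenue


    print(simulate(fine=0.0), simulate(fine=10.0))  # higher fines, better aggregate outcome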

6.4
The case of the two Internet agents sheds light on several characteristics of liberal order. A distinct feature is the tendency to locate unpredictability on the micro-level. In other words, here one is not faced with a monocentral system that may or may not be unpredictable and that can only be dealt with on the system level (e.g. the SAGE system quoted in section 1). Instead, liberal order stands for a particular kind of Multi-Agent system, which -- polycentral in character -- consists of many actors that are "centers in their own right" (Kirsch 1994). These actors are unpredictable, which means they may harbor bugs, design bugs, viruses or devils. Whereas in monocentral systems these problems have to be dealt with on the macro-level, here they are tackled on the micro-level. Various factors come into play. First, an agent -- e.g. one containing a bug so that it misinterprets the exclusion principle -- learns from experience and thus is able to improve its behavior. Second, an agent learns from experience and is free to choose, for example, its interaction partner: after bad experiences with one interaction partner (e.g. one transporting viruses) it may switch to another (contract principle). The first two aspects indicate that the individual agent has to bear the consequences of its own unpredictability and that of the other agents. Third, this can be viewed as an investment rather than a cost, because the negative experiences the agent incurs help it to improve its future behavior. The agent can be considered to choose not so much on the basis of the actual situation as on the basis of its cumulative experience, in order to achieve a satisfying overall behavior. Fourth, the fact that mutual adaptation absorbs the downsides of individual unpredictability and turns them into improved behavior means that negative surprises are regularly canceled out before they can accumulate and become big enough to affect the system as a whole. Fifth, if emergent problems nevertheless affect the system as a whole, there is no need to iteratively search for the individual devil in the haystack. Instead, one can look for an adjustment of the few existing, general rules of conduct, solely envisaging the overall system behavior.

6.5
In essence, the unpredictability on the micro-level and the predictability on the macro-level reinforce each other. It is the unpredictability of the individual agent, which allows the system as a whole to be predictable. And because the system on average is predictable, the agents can bear to be unpredictable. After all, the predictable system is not more but less than the sum of its unpredictable parts. It is not characterized by a "2+2=5 effect" but by a "2+2=3 effect" (Vriend 1999). And that is the reason why, in the sense of Hayek, in a liberal order the individual actor by acquaintance with some spatial or temporal part of the whole can form expectations which have a good chance of proving correct.

6.6
It would be biased to argue that this ideal concept of liberal order unfolds unhindered in -- physical or virtual -- reality. Human history has shown that there are problems as well. And research in DAI suggests that -- while some problems can be circumvented -- a considerable number of them are bound to surface in software-agent environments. Examples are negative externalities and the tragedy of the commons when the exclusion principle cannot be enforced (cf. Sandholm and Lesser 1997). Another example is volatile system behavior, i.e. when the exception proves the rule and unpredictable behavior on the micro-level aggregates into destabilizing effects on the macro-level (cf. Huberman and Hogg, 1995). However, two arguments justify optimism: First, confronted with these problems, coevolving individuals and rules learn to cope with them. Second, there is some evidence that, historically, the advantages of liberal order outweigh its disadvantages. So far, theories prophesying the near end of capitalism have repeatedly failed to turn into reality.

6.7
Besides the overall working properties of liberal order, another aspect pointed out in this paper is the role of the individual agent. The agent-paradigm discussed here no longer has the instrumental character that machines traditionally used to have. It has become clear that adaptive artificial software-agents would be unpredictable to an increasing degree, which in turn means that they would be free in relation to other actors to an increasing degree. Regarding recent developments in software, increasing freedom is nothing new: since the early days of Computer science, software has continuously become more complex and less predictable. This unpredictability, however, was often unintended and malicious. The first steps of an analysis presented here show that freedom for software-agents can be fruitful and may be intended. In addition to its inherent intentionality, the concept of unpredictability put forward here contrasts with the traditional unpredictability of software in that it demands that software bear the consequences of its own actions.[13]

6.8
Admittedly, many of the points discussed here may be a long way down the aisles of leading research and development labs. But considering the rise of phenomena like "Machine-to-Machine E-Business" (Roddy 1999), and bearing in mind that human beings are relatively slow information processors, it seems to be time to trigger a discussion.


* Acknowledgements

This research was supported by the Swiss National Science Foundation, subsidy no. 12/555 11.98. I thank Kurt Annen for valuable comments on an earlier draft of this paper.


* Notes

1. It is not intended here to add to the long list of contributions discussing the question 'What exactly is an agent?'. The definition used here is adapted from Huhns and Singh (1998) and synthesizes many existing definitions. For a survey and another synthesis see Franklin and Graesser (1996).

2. The definition implies that what is of interest here is not the essence of freedom in the philosophical sense, but the impact of freedom, which is of interest to the economist and, as should subsequently turn out, to the Computer scientist (cf. Kirsch, 1994). Within the context of this paper, the notion of freedom always refers to the stipulative definition given above.

3. Fortunately, this extreme situation is rather rare. More common are situations comprising theft, fraud, moral hazard, shirking and other kinds of opportunistic behavior. Analytically, these situations can be viewed as state of nature situations (cf. Leipold 1989).

4. For a discussion, see e.g. Berlin (1969).

5. Various authors doubt whether such an all-encompassing liberal conception exists at all in modern liberal economic theory (cf. e.g. Homann and Pies 1993).

6. For a discussion of domains for software-agents cf. Rosenschein and Zlotkin (1994). For a discussion of commands, abstract rules and roles cf. Hayek (1989/91).

7. For a comparable discussion of the first two of the following points cf. Castelfranchi and Conte (1996).

8. Without prescribing anything, economic theory -- for analytical purposes -- typically assumes that an actor follows its self-interest. It might, however, lie in its self-interest to be benevolent.

9. Note that the argument of making rational choice an entry hurdle for participation in software-agent environments is different from the assumption that agents make rational choices for the purposes of analysis. Economic theory typically models actors (including agents) as rational decision makers.

10. By investigating Asimov's famous -- and also abstract -- rules of robotics, Clarke (1993) shows that this is far from trivial for a machine.

11. This can also be interpreted as "implicit knowledge" (Polanyi 1985).

12. The notion of coevolution is most widely used not in economics but in biology (cf. Sigmund 1995).

13. In this respect, one fundamental problem to solve will be the following: Unlike human beings, artificial actors lack any natural motivation to reliably follow the discussed process. Sanctions do not hit them the same way they hit humans, and after all "they are not hurt, when they go broke" (Miller et al. 1996). Compare also the above example of the interaction between server-agents and information-retrieval-agents.


* References

ARTHUR, B. (1994), Inductive reasoning and bounded rationality, AEA Papers and Proceedings, Vol.84, No.2, 406-411.

BERLIN, I. (1969), Four essays on liberty, Oxford: Oxford University Press.

BOUTILIER, C., Shoham, Y. and Wellman, M. (1997), Economic principles of multi-agent systems, Artificial Intelligence, 94, 1-6.

BROOKS, F. (1995), The mythical man-month: essays on software engineering, Reading MA: Addison Wesley.

CASTELFRANCHI, C. (1995), Guarantees for autonomy in cognitive agent architecture, in: Wooldridge, M. and Jennings, N. (eds.), Intelligent agents, ECAI-94 Workshop on agent theories, architectures, and languages, Heidelberg: Springer, 56-70.

CASTELFRANCHI, C. and Conte, R. (1996), Distributed artificial intelligence and social science: critical issues, in: O'Hare, G. and Jennings, N. (eds.), Foundations of distributed artificial intelligence, New York: John Wiley and Sons, 527-542.

CLARKE, R. (1993), Asimov's laws of robotics -- implications for information technology, IEEE Computer 26,12 and 27,1, 53-61 and 57-66

DYSON, G. (1997), Darwin among the machines: The evolution of global intelligence, Reading MA: Addison-Wesley.

FRANKLIN, S. and Graesser, A. (1996), Is it an agent or just a program?: A taxonomy of autonomous agents, in: Proceedings of the Third International Workshop on Agent Theories, Architectures, and Languages, Heidelberg: Springer.

GWARTNEY, J. and Lawson, R. (1997), Economic freedom of the world. 1997 Annual report, Vancouver B.C.

HARGREAVES Heap, S. and Varoufakis, Y. (1995), Game theory: a critical introduction, London: Routledge.

HAYEK, F. A. v. (1948), Individualism and economic order, Chicago: University of Chicago.

HAYEK, F. A. v. (1971), Die Verfassung der Freiheit, Tübingen: J.C.B. Mohr (Paul Siebeck).

HAYEK, F. A. v. (1973), Law, Legislation, and Liberty, Vol.I, Chicago: University of Chicago.

HAYEK, F. A. v. (1989/91), Spontaneous ('grown') order and organized ('made') order, in: Thompson, G. (ed.), Markets, hierarchies, networks: The coordination of social life, London: Sage, 293-301.

HOBBES, Th. (1651/1980), Leviathan, München: Reclam.

HOLLAND, J. H. and Miller, J. H. (1991), Artificial adaptive agents in economic theory, AEA Papers and Proceedings, Vol. 81, No. 2, 365-370.

HOMANN, K. and Pies, I. (1993), Liberalismus: kollektive Entwicklung individueller Freiheit -- Zu Programm und Methode einer liberalen Gesellschaftsentwicklung, in: Homo Oeconomicus, Bd. X (3/4), 297-347.

HUBERMAN, B. and Hogg, T. (1995), Distributed computation as an economic system, Journal of Economic Perspectives, Vol. 9, No. 1, 141-152

HUHNS, M. and Singh, M. (1998) (eds.), Readings in agents, San Francisco: Morgan Kaufmann.

HUME, D. (1739/1985), A treatise on human nature, London: Penguin.

JENNINGS, N. and Wooldridge, M. (1998), Agent technology: Foundations, applications, markets, Berlin: Springer.

KIRCHGÄSSNER, G. (1991), Homo oeconomicus, Tübingen: J.C.B. Mohr (Paul Siebeck).

KIRSCH, G. (1994), Unpredictability: another word for freedom...and if machines were free?, in: Thalmann, N. and Thalmann, D. (eds.), Artificial life and virtual reality, Chichester: John Wiley.

KRAUS, S. (1996), An overview of incentive contracting, Artificial Intelligence, 83 (2), 297-346.

LEIPOLD, H. (1989), Das Ordnungsproblem in der ök. Institutionentheorie, ORDO, 40, 129-146.

MILLER, M., Krieger, D., Hardy, N., Hibbert, Ch. and Tribble, E. (1996), An automated auction in ATM network bandwidth, in: Clearwater, E. (ed.), Market based control, Singapore: World Scientific, 96-125.

MILLER, M. S. and Drexler, K. A. (1988), Markets and computation: Agoric open systems, in: Huberman , B. A. (ed.), The ecology of computation, Amsterdam: North-Holland.

POLANYI, M. (1985), Implizites Wissen, Frankfurt/M: Suhrkamp.

RAWLINS, G. (1997), Slaves of the machine: the quickening of computer technology, Cambridge, MA: MIT Press.

RODDY, D. (1999), Machine to machine e-business, Research Paper, Deloitte Consulting, http://www.dc.com

ROSENSCHEIN, J. S. and Zlotkin, G. (1994), Rules of encounter: designing conventions for automated negotiation among computers. Cambridge, MA: MIT Press.

RUSSELL, S. and Norvig, P. (1995), Artificial intelligence: A modern approach, Englewood Cliffs: Prentice-Hall.

RUSSELL, S. J. (1997), Rationality and intelligence, Artificial Intelligence, 94, No. 1-2, 57-77

SANDHOLM, T. W. and Lesser, V. R. (1997), Coalitions among computationally bounded agents, Artificial Intelligence, 94, No. 1-2, 99-137

SIGMUND, K. (1995), Spielpläne, Hamburg: Hoffmann and Campe.

SMITH, A. (1776/1974), Der Wohlstand der Nationen, Recktenwald, H. (Hrsg.), München: dtv.

VRIEND, N. (1999), Was Hayek an Ace?, Working Paper, Queen Mary and Westfield College, University of London

----


© Copyright Journal of Artificial Societies and Social Simulation, 1998