The Ethics of Agent-Based Social Simulation

The academic study and the applied use of agent-based modelling of social processes have matured considerably over the last thirty years. The time is now right to engage seriously with the ethics and responsible practice of agent-based social simulation. In this paper, we first outline the many reasons why it is appropriate to explore an ethics of agent-based modelling and how ethical issues arise in its practice and organisation. We go on to discuss different approaches to standardisation as a way of supporting responsible practice. Some of the main conclusions are organised as provisions in a draft code of ethics. We intend for this draft to be further developed by the community before being adopted, formally or informally, by individuals and groups within the field.


Introduction
1.1 Discussions about research ethics have previously focussed mainly on research misconduct. Recently, however, the focus has widened to include concerns about integrity and responsible research (Owen et al. 2012; Horbach & Halffman 2017; Shaw 2019; Steneck 2006)1. By shifting attention from merely avoiding misconduct to ensuring integrity, ethics has become a constitutive element or guarantor of overall good science. As the use of agent-based social simulation grows and the method is increasingly recognised as an effective approach for social research, practitioners may benefit from engaging in collective critical reflection about the way ethics permeates their everyday practices and how disciplinary agreements on ethical compliance could help with the further maturation and consolidation of agent-based social simulation. This article takes a complementary approach. We enquire about the ethics that agent-based social simulation faces as a field of study and suggest that, from this perspective, ethical challenges arise from both its practice and its organisation. Our goal is twofold: first, to outline how challenges in each domain manifest and, second, to explore alternatives for disciplinary ethical standardisation.

Agent-based models are increasingly used by organisations (e.g., government, business, and NGOs) to inform decisions. Often organisations will commission consultants or researchers to develop an agent-based model of a topic, or, less often, they will develop one themselves in-house. Agent-based models may also be used as components of larger models that bring together different methodological approaches to represent physical, biological, ecological, technical, and environmental as well as socio-economic systems. Agent-based models are also part of a wider push for the use of 'complexity-appropriate' analysis in applied settings (Barbrook-Johnson et al. 2021), along with other complexity and systems science-inspired methods.

2.5
While the use of agent-based models is still not widespread, their growing influence necessitates a reassessment of the ethics of agent-based modelling from a practice point of view. It raises multiple questions for applied settings, such as: should the deployment of agent-based models be more formalised and standardised? Should the method follow an agreed set of technical and ethical standards? As use grows, developing a clearer ethics of agent-based modelling may increase the chances of the method being used more in decision-making processes and, in turn, improve the quality of those processes and of the final outcomes they deliver.

3.1
The ethics of computer simulation is a surprisingly underdeveloped topic, especially considering the increasing focus on the ethics of digital technologies (e.g., Tsamados et al. 2022). Several aspects of computer simulation with potentially relevant ethical implications have been identified in the literature. In most cases, however, they are discussed without elaborating on these implications. For example, while the simplifying nature of computational modelling has some distinctive ethical implications (Brey 2014), the literature frequently fails to acknowledge this.

3.2
In the general computer simulation literature, ethics has been accounted for in a multiplicity of ways. Some texts approach computer simulation relying on particular ethical accounts. Palmer (2017), for example, explicitly reframes the model evaluation process within consequentialist ethics. Others incorporate ethics more narrowly through specific morally relevant concepts, e.g., trust (Williamson 2010). There are, as well, discussions of ethics and standardisation at the professional or disciplinary level (e.g., Durán 2018; Ören et al. 2002; Tolk & Ören 2017). Regularly, however, ethics is discussed from the perspective of how values influence the modelling process, particularly the evaluation stage. Prior research has most prominently considered: (i) different types of values that are relevant in the context of computer modelling (Hirsch Hadorn & Baumberger 2019; Intemann 2015), (ii) the connection between values, uncertainty, and subjectivity (Morrison 2014; Parker 2014), and (iii) the contextual determinants and nature of computational evidence (Cassini 2022; Parker & Winsberg 2018).

3.3
The agent-based social simulation literature is somewhat different. In most cases, the reference to ethics is more explicit and elaborated. There is also a narrower focus on the ethical implications of the multiple uses of computer simulation, motivated both by the increasing interest in aiding decision-making and the progressive popularisation of empirically calibrated models in social simulation. While there is a common motivation, the discussions address a variety of interrelated topics, e.g., potential intentional misuses of simulation models and results (Sobkowicz 2019), ethical challenges arising from decision-making in contexts where negative social consequences are unavoidable, e.g., in the distribution of scarce resources (Bak 2022), the imperative for practitioners to purposefully seek the betterment of society through their labour (Wildman 2019), the ethical responsibility of modelling morally sensitive topics (or models with morally sensitive implications) (Shults & Wildman 2019), and how any given implementation amounts to taking a moral stance (David 2021).

3.4
A meta-ethical framework has also been put forward (Shults & Wildman 2019) and exemplified with models (Shults et al. 2018; Tolk et al. 2021). This framework invites practitioners to reflect on the ethics of modelling on three levels: philosophical meta-ethics (i.e., considerations about what is 'good' and 'right' in different modelling and simulation activities), scientific meta-ethics (i.e., considerations about how a model captures and justifies salient moral features that are inherent to social dynamics), and the practical import of meta-ethics (i.e., considerations about the criteria used to justify ethical judgements). Even though this framework brings the ethical aspects of the modelling process to the forefront, it is not meant for the identification and resolution of everyday ethical challenges during the simulation lifecycle. As the authors explain, "[r]ather than a guide for resolving specific ethical dilemmas, that framework is meant to provide a way of thinking about the ethics of simulation" (Shults et al. 2018, p. 4069, emphasis in the original).

3.5
Still lacking, then, is a systematic exploration of the many ethical challenges that practitioners face in everyday instances of modelling. Table 1 lists a series of questions that could help identify potential ethical issues during the simulation workflow and, simultaneously, kick-start a more detailed and more transversal exploration of the source and nature of ethical challenges in the practice of agent-based social simulation. These questions are organised following a common separation of the modelling process into distinct stages or subprocesses: conceptualisation, implementation, execution, analysis, and dissemination of the computational model (Galán et al. 2016; Gilbert 2008; Railsback & Grimm 2012; Squazzoni 2012; Wilensky & Rand 2015).

3.6
There are some elements worth mentioning about the challenges listed: (i) while the agent-based social simulation literature has so far centred on ethical concerns associated with how models are used, every stage of the simulation lifecycle is worthy of ethical consideration, (ii) ethical concerns that emerge during the simulation workflow could be one-off, repeated, or transversal, (iii) some major ethical challenges do not depend entirely on the model itself, and (iv) the options that different questions give room for do not necessarily all have the same moral standing. For instance, the question about ethical data collection implies a separation between ethical and unethical alternatives. Other questions, such as the one about the outputs chosen for analysis, address, instead, issues of value conflicts and trade-offs.

3.7
The acknowledgement that not all ethical challenges have the same moral standing raises some interesting questions from the wider perspective of scientific integrity. Addressing value conflicts and trade-offs, for example, rather than avoiding misconduct, has to do with critically selecting the alternative that is subjectively believed to best fit the modelling goals and resources, being mindful of these choices and their implications throughout the entire modelling process, and being transparent about these choices in the reporting. It has been shown, however, that some researchers and institutions have a narrow understanding of integrity as the avoidance of misconduct (Anderson 2018). Collectively agreeing on the ethical challenges emerging during the simulation workflow in agent-based social simulation seems, then, to call for an institutional reflection on the meta-ethical elements that Shults & Wildman (2019) include in their framework. It is important, however, to determine the dimension and scope of diverse ethical judgements about the modelling process. There is an interesting discussion in the literature on computer simulation ethics about whether the most relevant criteria for any decision in which values and subjectivity are involved are by default ethical. Depending on the values involved and how subjectivity intervenes, some (e.g., Cassini 2022; Morrison 2014) suggest that the best criteria might be methodological or epistemological, rather than ethical.

4.1
Reflecting on the appropriate scope and dimension of ethical reflection in social simulation also raises some issues of interpretation for the questions listed in Table 1. For instance, the first question, about topic selection, can be reinterpreted in a more general way as a question about whether the full spectrum of relevant possible topics is currently being covered by the agent-based social simulation literature. While, for example, heterogeneity among individuals has inspired models addressing the dynamics of inequality, discrimination, and segregation, among others, there is a knowledge gap about the instantiation of these dynamics in particular populations. For example, given the contemporary interest in advancing the promotion and recognition of LGBTQIA+ rights, it could be argued that agent-based social simulation, as a community or area of study, has a moral responsibility to try deliberately to engage with these understudied topics, especially when they figure prominently in the public sphere.

4.2
It would not be reasonable to ask practitioners to consider this additional interpretation of the first question in every particular instance of modelling. It is, however, an important ethical question that should be addressed. It becomes evident, then, that some ethical challenges should be tackled collectively by the community, for they pertain more widely to the organisation of agent-based social simulation as an area of research. A few of the most relevant organisational ethical challenges may be identified by comparing with other well-established disciplinary areas, for they are shared (Iverson et al. 2003). There are, however, some ethical issues that arise from the distinctive organisational features of agent-based social simulation that practitioners should be mindful of. This section will centre on two dimensions of the organisation of social simulation, its interdisciplinary and technology-dependent nature, to exemplify the type of ethical challenges that are common at the organisational level and show how they differ from challenges to the practice of social simulation.

4.3
Following Galison (1996), computer simulation is often referred to as a 'trading zone', i.e., "an arena in which radically different activities could be locally, but not globally, coordinated" (p. 119). The idea is that, contrary to typical disciplinary work where all collaborating researchers share a paradigm, in computer simulation there is collaboration among multiple expert communities that locally contribute to the activity, without merging or renouncing the paradigms with which they are affiliated. Because of the different kinds of expertise involved, computer simulation displays high levels of opaque epistemic dependence (Wagenknecht 2016). This dependence implies, on the one hand, that there is asymmetrical intellectual authority over a domain or expertise among members and, on the other hand, that the fields of expertise do not necessarily overlap, so there are difficulties in judging the other members' expertise. For example, often a social scientist will not only defer to a computer scientist when it comes to technical decisions about implementing the computer model, but will also not have sufficient expertise to judge whether it is the most efficient and effective implementation of the model.

4.4
This context of opaque epistemic dependence and increasing scientific collaboration is relevant from an ethical point of view because of its effects on aspects such as accountability and epistemic trust. In traditional research and disciplinary collaborations, trust is partially built upon certification (Wagenknecht 2016). Yet, most practitioners of agent-based social simulation are not trained or certified as such. Thus, alternative mechanisms for trust-building need to be employed, not only to judge the competence of others, but also one's own. Accountability also becomes more difficult because there might not be overlapping expertise, researchers might be working in dissimilar institutional and normative contexts, and research practices are increasingly becoming cognitively, financially, and socially decentralised (Winsberg et al. 2014). It is not clear, for example, to what extent agent-based social simulation is affected when research is privately funded, or whether cognitive asymmetries become more relevant when dealing with stakeholders in domains such as policy-making.

4.5
Ethical concerns arising from the interdisciplinary nature of agent-based social simulation are not limited to epistemic dependence. Even in instances of overlapping expertise, there might be ethical issues that need to be reviewed, either because they are not uniformly covered by the disciplinary traditions or because there are conflicting ethical principles and commitments. Social scientists, for example, have a range of different attitudes to deception in empirical research. In economics, deception is often proscribed, to the point where some journals will refuse to publish manuscripts based on research in which participants have been deceived. Conversely, in sociology and psychology, deception is often not only acceptable, but considered an important methodological resource (Barrera & Simpson 2012; Krasnow et al. 2020). While the reasons to permit or proscribe deception might not necessarily be ethical in nature (Barrera & Simpson 2012), the potential professional and personal consequences for researchers who include deception in their methodological designs do raise some ethical concerns. Given the increasing popularity of empirical research, and particularly of experiments, in agent-based social simulation, the community would be expected to address the topic in a discussion about the ethics of its research.

4.6
Differences between types of computer simulation also have interesting ethical implications. In comparison with the discrete-event simulation community, for example, the agent-based simulation community tends to work predominantly in academia, rely more often on the use of theory, and use models for experimentation and explanation, rather than prediction (Padilla et al. 2018). While the object of study and technical skills required in each case may not significantly differ, the methodological particularities of each method and the physical, social, and cognitive organisation underlying practices of computer simulation lead to disciplinary dynamics with distinct ethical ramifications. For example, there are noticeable differences in the way the agent-based social simulation community and the larger simulation community perceived their role in the COVID-19 outbreak (compare the editorial articles published in JASSS (Squazzoni et al. 2020) and the Journal of Simulation (Currie et al. 2020)), which, in part, can be attributed to disciplinary idiosyncrasies, such as the somewhat ambiguous relationship that the agent-based social simulation community has historically had with prediction (see, e.g., the 'Prediction' (https://rofasss.org/tag/prediction-thread/) and 'JASSS-COVID' (https://rofasss.org/tag/jasss-covid19-thread/) threads in the Review of Artificial Societies and Social Simulation).
Technology dependence

4.7
There are some distinctive ethical concerns that arise from agent-based social simulation's strong reliance on computer technology. These concerns can be classified into four groups. The first pertains to the moral standing of computational social science. There are distinctive dynamics in technology-intensive social research for which traditional social science research ethics are insufficient. How should, among other things, the morality of 'virtual' experiments or 'artificial' agents be approached? Dignum et al. (2018), for example, argue that the practice of simulation might become more ethical by making artificial agents capable of reflecting morally about their actions and decisions.

4.8
The second group comprises considerations about the ethics of computational modelling. These considerations may be technical or conceptual, and pertain to computer simulation in general or just agent-based social simulation. Any computer simulation, for example, faces problems of trustworthiness linked to epistemic opacity of the computation (Durán & Formanek 2018). At the same time, practitioners of agent-based social simulation face heightened risks with the intelligibility, transparency, and commensurability of representation, given the unformalised and multiparadigmatic nature of social theory and the syntactic and semantic flexibility of computer languages (Anzola 2021b;Poile & Safayeni 2016).

4.9
The next group of ethical considerations includes a large and diverse set of issues related to the development and governance of the general body of knowledge in social simulation. In comparison to other forms of disciplinary knowledge, and because of the specificity of programming languages, everyday practices could be disrupted if, for instance, the current software stops being supported or developed (even though some popular software is free and open source). Knowledge from models that are not migrated might be lost and some technical skills depreciated. This unique risk puts pressure on practitioners to develop adequate practices of knowledge curation, e.g., model documentation, updating, and preservation (Calder et al. 2018). In turn, as with most information technologies, agent-based social simulation has adopted a distributed structure of knowledge governance that facilitates current practices but, at the same time, creates additional risks and ethical concerns. The possibility of third-party independent use of models, algorithms, data, and frameworks leads to questions about whether computer simulation should be classified as dual-use research (i.e., research with the potential for both benevolent and malevolent applications) (Sobkowicz 2019). It also might require the development of a more elaborate account of authorship. Similarly, this decentralised governance might hinder collaboration when dealing with stakeholders in other domains, where individuals might feel more comfortable with, or are required to adopt, centralised models of knowledge governance.

4.10
Finally, in the last group, there are ethical concerns associated with the social organisation of agent-based social simulation. As with any other technology, agent-based social simulation must deal with the fact that technological infrastructure is unevenly distributed. If modelling intricate socio-ecological or socio-technical systems requires access to high-performance computing (HPC), it is likely that these phenomena will mostly be modelled by researchers in developed countries or that the opinion of these researchers will dominate the discussion. In the same way, the general domain of modelling and simulation seems to reproduce disparities that are common in other STEM areas, e.g., gender and ethnicity (Padilla et al. 2018), reinforcing conditions of underrepresentation. There is an ethical challenge for the field to guarantee that access to technology does not become a source of 'epistemic injustice' (Fricker 2007), where experiences and knowledge from those social groups with uneven access to and command of technological and technical resources, both inside and outside academia, carry less value. Practitioners, particularly those with privileged access to more advanced technological infrastructure, should also be mindful of the potential risk of a technology-based form of scientific imperialism or determinism. That is certainly a potential source of tension and conflict when engaging in socially asymmetric relationships, particularly those enabled by digital technologies (Origgi & Ciranna 2017; Wyatt 2008).

4.11
The social organisation of agent-based social simulation is equally important when discussing the ethical relationship of the field with science as an institution and society in general. In recent years, the traditional model of science has been challenged to become more open. This challenge manifests in actions such as the coordination of several scientists refusing to submit, review or serve as editors for a closed-access journal (Statement on Nature Machine Intelligence), the resignation of an entire editorial board of a prestigious journal to create an open-access alternative (Singh 2019) or the request to publish publicly funded research in open-access journals (Enserink 2018). In this push for open science, technology-intensive areas of research have played a major role, for they tend to display approaches to work organisation and overall politics (e.g., Free Libre Open Source Software (FLOSS)) that are more compatible with a collaborative and decentralised governance and also more critical of the traditional model of science (Coleman 2009;von Hippel & von Krogh 2003). These new forms of scientific work organisation, however, raise additional ethical concerns that are worth addressing, such as the ethical implications of crowdsourcing data (Gleibs 2017), which move past typical issues of privacy, recruitment, and consent in online research.

4.12
Overall, the two examples analysed offer a nuanced and multifaceted picture of scientific integrity. When the focus is on practice, the discussion tends to centre on individual compliance, i.e., how researchers, working alone or as part of a team, on the one hand, avoid misconduct and, on the other hand, consciously consider aspects such as a model's potential uses and implications in their work. Alternatively, when the focus is on organisation, ensuring scientific integrity depends on a variety of contextual determinants. In the context of interdisciplinarity, for example, it requires bringing to the foreground issues of accountability, epistemic trust, and expertise. In the context of technology, conversely, considerations about integrity can range from issues about the morality of artificial agents to issues about technology governance. Ultimately, an organisational approach to scientific integrity means inquiring into how agent-based social simulation can institutionally moderate the multiple determinants and dimensions of integrity, acknowledging the diversity of agents and systems, and the interactions between the two, that influence the everyday practice of science.

How Ethical Behaviour in Agent-Based Modelling Can Be Enabled
Developing standards

5.1
Because of the multiplicity of determinants and dimensions of scientific integrity and the frequent differences in individual perceptions, attitudes, and behaviours about the matter (Ana et al. 2013; Davies 2019), conscious and deliberate reflection about ethics in a research area or disciplinary field often results in standardisation through different mechanisms of normalisation: training, shared methods and procedures, social norms, and codification (e.g., principles, guidelines, conventions, and laws) (Frankel 2000; Freidson 2007; Israel 2020). The outputs of these mechanisms of normalisation contribute to scientific integrity by creating consensus and making explicit norms for conduct that can guide researchers in the moral assessment of their behaviour and that of their peers. Standards also foster the establishment of a system of mutual regulation of expectations and accountability that, in the most formalised instances, includes mechanisms and procedures for sanctioning and exclusion from the social group. From the perspective of external groups, standards allow for public recognition, as well as the external evaluation and accountability of scientific practices (Fox & Braxton 1994; Resnick 2013).

5.2
Even though the potential benefits of ethically regulating agent-based social simulation are hard to deny, one might question whether there is a need for the community to engage in the development of its own ethical standards. Practitioners could, alternatively, voluntarily seek to achieve ethical self-regulation, commit to abide by the regulations of the institutions in which they work, or adopt any of the already existing codes of ethics. Although these options could each contribute to guaranteeing the ethical integrity of social simulation, a deliberate effort to account for the idiosyncratic elements involved in the practice and organisation of agent-based social simulation is likely to be needed.

5.3
Normative self-regulation, interestingly, has been a popular topic in the agent-based social simulation literature (Conte et al. 2013; Elsenbroich & Gilbert 2014; Hollander & Wu 2011; Morris-Martin et al. 2019; Neumann 2008). Previous research has shown that both norm emergence and compliance are possible bottom-up outcomes of adaptive uncoordinated interaction at the micro-level. In the case of scientific integrity, there is empirical evidence that supports these results. Throughout history, self-regulation has proven effective in generating several widespread standards that contribute to the ethical integrity of science. Standardisation pertains in some cases to the emergence and diffusion of guiding principles or ideals, such as 'the scientific method', and, in others, to the application of specific mechanisms of accountability and self-correction, e.g., the peer review process or paper retraction.
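The bottom-up norm emergence mentioned above can be illustrated with a minimal sketch. This toy model is our own illustration, not an implementation of any of the cited models, and the function name and parameters are hypothetical: agents repeatedly observe a random sample of peers and adopt the locally dominant behaviour, with occasional random deviation.

```python
import random

def simulate_norm_emergence(n_agents=100, steps=2000, sample_size=10,
                            noise=0.01, seed=42):
    """Toy model of bottom-up norm emergence: at each step, a randomly
    chosen agent observes a random sample of peers and adopts the
    behaviour that is dominant in that sample, occasionally deviating
    at random (noise). Returns the final population share of the most
    common behaviour."""
    rng = random.Random(seed)
    # Two competing behavioural conventions, initially mixed at random.
    behaviours = [rng.choice([0, 1]) for _ in range(n_agents)]
    for _ in range(steps):
        agent = rng.randrange(n_agents)
        sample = rng.sample(range(n_agents), sample_size)
        # Adopt whichever behaviour is more common in the observed sample.
        majority = 1 if sum(behaviours[i] for i in sample) * 2 > sample_size else 0
        if rng.random() < noise:
            majority = 1 - majority  # occasional random deviation
        behaviours[agent] = majority
    dominant = max(behaviours.count(0), behaviours.count(1))
    return dominant / n_agents

if __name__ == "__main__":
    share = simulate_norm_emergence()
    print(f"Share of population following the dominant norm: {share:.2f}")
```

Despite the absence of any central coordination, repeated local imitation drives the population towards a shared convention, which is the basic intuition behind the norm emergence results cited above.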

5.4
While there is theoretical and empirical evidence of its effectiveness, self-regulation cannot, by itself, fully ensure scientific integrity. First, the current institutional setting in science not only creates incentives that work against ethical behaviour, e.g., the pressure to publish more and faster (Alberts et al. 2015; Davies 2019), but also allows for the emergence of normative and institutional setups that favour misconduct, e.g., the predatory publishing system. In turn, the effective application of self-correction may be limited by variations in individual perceptions of and attitudes towards research and scientific integrity, ethics, and misconduct (Horbach & Halffman 2017; Davies 2019) and by the moral status of self-correcting mechanisms themselves (Koepsell 2010). Finally, self-correction, while effective in the long run, might be too slow (Ioannidis 2012). Some of the most pressing ethical challenges for contemporary science have been acknowledged for decades. Yet, in most cases, no satisfactory progress has been made2.

5.5
Given the increasing demands for holding science accountable to society (motivated, in part, by well-publicised cases of research misconduct), science may not be able to afford the time needed for self-correction. This standardisation alternative, then, might need to be coupled with an institutional effort that provides means and mechanisms, initially, for the collective moderation of individual values, beliefs, and expectations (Fox & Braxton 1994; Freidson 2007; Iverson et al. 2003) and, later, for oversight and accountability (Salloch 2018; Short & Toffel 2010; Taylor 2009). Since the second half of the twentieth century, several countries have advanced in the articulation of institutional ethics procedures that oversee and regulate behaviour based on the derivation of sets of rules from a series of universal principles, e.g., justice or beneficence, that any good research practice would be expected to comply with (Israel 2020). Currently, there are institutional regulations that work at international (e.g., the European Framework Programmes), national (e.g., the United States' Common Rule), and local (e.g., universities' research ethics committees) levels. In most cases, these procedures depend on processes of collective (peer) review and deliberation over a research proposal's ethical compliance. If deemed appropriate, the research might be funded or allowed to continue.

5.6
Institutional ethics procedures can, indeed, promote consensus-forming around basic ethical principles and help prevent questionable research from being funded or carried out. Yet, their adequacy as mechanisms of ethical regulation has been extensively questioned in the literature. In general, the problem is that compliance at the level of individuals and institutions might be established following different guidelines and principles and have distinct, and sometimes conflicting, conditions of fulfilment. There are significant organisational and national asymmetries in the way institutional ethical frameworks are developed (Ana et al. 2013). In addition, it has been shown that not all research committees follow the same procedures or enforce ethical standards in the same way (Hoecht 2011) and that some institutional standards are too limited to guarantee individual ethical behaviour (Elliott 2008). Finally, and most importantly, institutional ethics procedures might not necessarily be conducive to better research practices when the interests of individuals and institutions conflict. University research committees, for example, have sometimes been used to protect the reputation of the host institution or as a mechanism for internal discipline, rather than as a guarantor of ethical behaviour (Hedgecoe 2016)3.

5.7
Codes of ethics offer an interesting middle ground between self-regulation and institutional ethics procedures. They are usually autonomously developed by a community, increasing the likelihood of accounting for practical and organisational needs. They are often institutionalised, making it easier to use them to moderate the behaviour of a social group. If the agent-based social simulation community were to adopt an already existing code, it would avoid spending resources on the drafting and socialisation of its own code. This resource saving, however, risks coming at the expense of effectiveness. Previous literature on codes of ethics offers insights into standardisation and regulation that are highly relevant for the present discussion. It shows, first, that diverging beliefs and perceptions about the object and subject of ethical regulation might lead to entirely different processes of standardisation and, second, that a social group willing to autonomously regulate its ethical behaviour must pay attention both to the means and mechanisms of regulation and to the disciplinary dynamics targeted by the standardisation4.

5.8
Concerning their means and mechanisms of regulation, codes might not be drafted with ethical motivations or might include provisions that are ethically questionable or exacerbate ethical conflict (Farrell et al. 2002;Jamal & Bowie 1995;Schwartz 2002), as we have seen is the case for institutional ethics procedures. When considering already existing codes, it might not be easy to identify their limitations or estimate the extent to which they will adequately regulate ethical behaviour in a new context. In addition, it has been shown that the existence of a code alone is not enough to guarantee ethical behaviour (Freidson 2007;Iverson et al. 2003;Singh 2011). A diverse array of supporting activities and structures targeted primarily at its promotion, administration and enforcement is necessary (Lere & Gaumnitz 2007;Rosenberg 1998;Schwartz 2004;Webley & Werner 2008).

5.9
The supporting activities that lead to the articulation of the code have likewise proved to affect its effectiveness. Adherence to a code of ethics, for instance, is more likely when it is found contextually relevant by those governed by it (Hardy 2016;Kaptein & Schwartz 2007). There might also be a moral reason to include practitioners in the discussion about ethical regulations. Since the standardisation of ethical practices in a code of ethics, or any other type of formal procedure, creates new obligations for individuals and is used to judge their behaviour, it is morally appropriate that they have the opportunity to be involved in the ethical standardisation (Schwartz 2002).

5.10
From the perspective of the target of standardisation, it is clear that previous codes of ethics do not sufficiently address all the ethical issues that are involved in the practice of agent-based social simulation. Although some ethical principles or ideals, e.g., respect, integrity, and fair treatment, are usually included, codes of ethics are often developed to cover specific disciplinary or professional practices that warrant ethical regulation. For instance, psychology and sociology are two closely connected disciplinary areas. Yet, the codes of the American associations for sociology and psychology differ notably in content (see Table 2), since ethical concerns are not completely shared. The former, for example, does not include standards pertaining to assessment and therapy, for sociologists do not have that professional competence.

5.11
In turn, while the output of the field of agent-based social simulation could be categorised within the general social sciences, its digitalised nature, as mentioned in the previous sections, raises some ethical issues related to the intensive use of digital technologies, and to the implementation, processing, analysis, and dissemination of computational models, that render codes of ethics in social science insufficient. At the same time, codes in more technical areas of research, such as those developed by the ACM and IEEE (see Table 3), do not acknowledge ethical issues that are typical of the social sciences, e.g., 'Informed Consent' or 'Record Keeping and Fees' (included in the codes for psychology and sociology (see Table 2)), for they tend to emphasise ethical concerns surrounding efficiency and proficiency in the use of technology.

To uphold the highest standards of integrity, responsible behavior, and ethical conduct in professional activities
-To hold paramount the safety, health, and welfare of the public, to strive to comply with ethical design and sustainable development practices, to protect the privacy of others, and to disclose promptly factors that might endanger the public or the environment
-To improve the understanding by individuals and society of the capabilities and societal implications of conventional and emerging technologies, including intelligent systems
-To avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist
-To avoid unlawful conduct in professional activities, and to reject bribery in all its forms
-To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, to be honest and realistic in stating claims or estimates based on available data, and to credit properly the contributions of others
-To maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations

To treat all persons fairly and with respect, to not engage in harassment or discrimination, and to avoid injuring others
-To treat all persons fairly and with respect, and to not engage in discrimination based on characteristics such as race, religion, gender, disability, age, national origin, sexual orientation, gender identity, or gender expression
-To not engage in harassment of any kind, including sexual harassment or bullying behavior
-To avoid injuring others, their property, reputation, or employment by false or malicious actions, rumors or any other verbal or physical abuses

To strive to ensure this code is upheld by colleagues and co-workers
-To support colleagues and co-workers in following this code of ethics, to strive to ensure the code is upheld, and to not retaliate against individuals reporting a violation

-Public: Software engineers shall act consistently with the public interest.
-Client and employer: Software engineers shall act in a manner that is in the best interests of their client and employer consistent with the public interest.
-Product: Software engineers shall ensure that their products and related modifications meet the highest professional standards possible.
-Judgment: Software engineers shall maintain integrity and independence in their professional judgment.
-Management: Software engineering managers and leaders shall subscribe to and promote an ethical approach to the management of software development and maintenance.
-Profession: Software engineers shall advance the integrity and reputation of the profession consistent with the public interest.
-Colleagues: Software engineers shall be fair to and supportive of their colleagues.
-Self: Software engineers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.

Sources: ACM (2018), IEEE (2020) and IEEE Computer Society (1999).

5.12
Most codes centred on technology also fail to sufficiently cover ethical issues arising from disciplinary dynamics that are important for agent-based social simulation from an organisational point of view. Simulation is referred to in the literature both as a discipline and a profession (Anzola 2021a;Diallo et al. 2015;Padilla et al. 2018;Silvert 2001;Tolk & Ören 2017). While the difference is not clear-cut, professions are generally associated with the generation of a jurisdiction in the labour market, and the eventual exclusive control of that jurisdiction, based on recognised expertise over some specialised knowledge (Freidson 2007;Young & Muller 2014). Conversely, disciplines are about the structuration and consolidation of bodies of knowledge through the delimitation of aspects such as a specialised object of research, a foundational narrative, a particular research agenda and specialised theoretical-methodological frameworks (Becher & Trowler 2001;Krishnan 2009). When it comes to ethics, it seems that most technical codes, including the 'Code of Professional Ethics for Simulationists' (Table 4), approach computer modelling more as a profession than as a discipline. They neglect aspects that are often found in disciplinary codes of ethics, such as those associated with training, research organisation, and publication processes, because the emphasis is on occupational elements of the everyday practice of simulation.

5.13
This narrow understanding of the object of ethical regulation in the professional approach to computer simulation is not limited to standardisation, but is also present in some ethical compliance strategies. In recent years, 'design' approaches to ethics have become popular in computer science and engineering, especially for AI applications in social settings (Donia & Shaw 2021;European Commission 2021;IBM 2019). These methods and frameworks seek to prevent ethical issues either by purposefully intervening in the technological application itself (e.g., embedding morality in artificial agents Dignum et al. 2018) or in the production process (e.g., by clarifying the normative dimensions of the process or by explicitly incorporating ethics-oriented principles, activities or subprocesses Donia & Shaw 2021). While useful, design approaches are limited when considering the ethics of a disciplinary field or area of research, for they are product-centred. Computer simulation, however, is not all there is to agent-based social simulation. Additional key scientific outputs and activities, e.g., training, interaction with stakeholders, events, and hiring/promotion, are relevant for everyday activities within a discipline and are also worthy of ethical regulation. A disciplinary area, in comparison to a simulation, does not readily lend itself to being 'designed'.

Code of Professional Ethics for Simulationists
Personal Development and the Profession
As a simulationist I will: -Acquire and maintain professional competence and attitude -Treat fairly employees, clients, users, colleagues and employers -Encourage and support new entrants to the profession -Support fellow practitioners and members of other professions who are engaged in modelling and simulation -Assist colleagues to achieve reliable results -Promote the reliable and credible use of modelling and simulation -Promote the modelling and simulation profession; e.g., advance public knowledge and appreciation of modelling and simulation and clarify and counter false or misleading statements

Professional Competence
As a simulationist I will: -Assure product and/or service quality by the use of proper methodologies and technologies -Seek, utilize, and provide critical professional review -Recommend and stipulate proper and achievable goals for any project -Document simulation studies and/or systems comprehensibly and accurately to authorized parties -Provide full disclosure of system design assumptions and known limitations and problems to authorized parties -Be explicit and unequivocal about the conditions of applicability of specific models and associated simulation results -Caution against acceptance of modelling and simulation results when there is insufficient evidence of thorough validation and verification -Assure thorough and unbiased interpretations and evaluations of the results of modelling and simulation studies

Trustworthiness
As a simulationist I will: -Be honest about any circumstances that might lead to conflict of interest -Honor contracts, agreements, and assigned responsibilities and accountabilities -Help develop an organizational environment that is supportive of ethical behavior -Support studies which will not harm humans (current and future generations) as well as environment

Property Rights and Due Credit
As a simulationist I will: -Give full acknowledgement to the contributions of others -Give proper credit for intellectual property -Honor property rights including copyrights and patents -Honor privacy rights of individuals and organizations as well as confidentiality of the relevant data and knowledge

Compliance with the Code
As a simulationist I will: -Adhere to this code and encourage other simulationists to adhere to it -Treat violations of this code as inconsistent with being a simulationist -Seek advice from professional colleagues when faced with an ethical dilemma in modelling and simulation activities -Advise any professional society which supports this code of desirable updates

First steps
5.14 Collective ethical compliance first requires the community to become aware of the increasing importance of scientific integrity and the multiple ways in which behaviour could be ethically considered. While practitioners might already be behaving in a way that is ethically compliant, a conscious effort to analyse critically the different sources of ethical concern should yield a more complex understanding of individual situations. Practitioners with training or supervision responsibilities, for example, should be mindful of an ethical dimension that is not as relevant for those with only research responsibilities. Moreover, as mentioned above, ethical expectations are gradually changing to cover a multiplicity of elements beyond what was traditionally considered central to the practice of science e.g., issues of under-representation. There must be, then, a willingness to continuously engage in ethical reflection and moderate behaviours accordingly.

5.15
In the current absence of standards that the community can generally agree upon and collectively employ, it is important for practitioners to strive for self-regulation. In some instances, the overlap with general social norms, customs, and conventions may help to identify the requirements for ethically compliant behaviour.
Similarly, institutional ethics procedures should provide guidance about research-specific issues. For aspects related to simulation proper, multiple resources, including the questions listed in Table 1 and some of the resources mentioned in Section 2, especially Shults & Wildman's (2019) meta-ethical framework, should prove useful.

5.16
Ultimately, however, it may be more reasonable for the community to address the ethics of agent-based social simulation explicitly and collectively, in the form of a code of ethics and a corresponding set of supporting activities. This approach could help agent-based social simulation deal with: (i) the increasing complexity of the issues associated with scientific integrity, (ii) the limitations of self-regulation and institutional ethics procedures, (iii) the potentially higher costs of managing ethical compliance as individual researchers, (iv) the observed differences in individual and social perceptions of and attitudes towards scientific integrity, (v) the need to raise the profile of the field and improve the relationship with external stakeholders, and (vi) the interest in further consolidating the practice of agent-based social simulation.

Some General Recommendations Moving Forward
6.1 In order to advance a standardisation process that brings scientific integrity to the fore in agent-based social simulation, an institutional setup that accommodates and provides resources for a range of standardisation activities is needed. Below, we briefly mention some key actions and decisions that might guide the articulation of such a setup.

Actions pertaining to ethical standardisation (and the potential development of a code)
• Raise awareness: differences in ethical compliance are associated not only with perceptions of and attitudes towards integrity and misconduct, but also with what standardisation entails and its potential effectiveness (Davies 2019;Fleischmann et al. 2010). It is therefore necessary to carry out activities (e.g., special events, tracks or workshops in conferences, special issues, dedicated workshops and training) that raise awareness about scientific integrity and its potential standardisation and foster an initial moderation of perceptions, attitudes and knowledge about the ethics of social simulation (Frankel & Bird 2003).
• Participation and recruitment: for ethical standardisation to be successful, a reasonable proportion of members should willingly engage with the ethicalisation process, i.e., participate in different roles in the several activities carried out (Frankel & Bird 2003;Freidson 2007;Romani & Szkudlarek 2014). Should the community proceed with the development of a code, as mentioned below, recruitment and participation will be crucial for three decisions pertaining to its drafting: who will draft it, whether the drafting process will be open to any member of the community and at any time, and who is responsible for approving the code (Messikomer & Cirka 2010;Webley & Werner 2008).
• Establishing a governance structure: Standardisation usually requires some sort of institutionalisation (Becher & Trowler 2001;Frankel & Bird 2003;Freidson 2007). Thus, the articulation of a governance structure with detailed roles and functions will greatly aid in the process. Depending on the scope of the standardisation process, some positions might be necessary or be given more relevance. For example, if an institutional space for conflict resolution is desired, there should be an ombudsperson that operates separately from those in charge of administrative issues (e.g., an ethics committee). Similarly, if a code is developed, a dedicated structure for drafting, implementation and management of the code might be required (Rosenberg 1998;Mcdonald 2009;Messikomer & Cirka 2010).
• Training: formal training is one of the most powerful standardisation mechanisms in contemporary societies. Ideally, options should be available to account for differences in expertise in simulation and knowledge of ethics. Training programmes and scenarios should also be designed to cover different goals.
There is a significant difference, for example, in training that is meant to be simply informative and training designed to foster the development of the skills needed to recognise ethical challenges in practice (Fleischmann 2010;Frankel & Bird 2003;Guillemin & Gillam 2004;Israel & Hay 2006). If the community moves forward with the development of a code, training will be fundamental, for practitioners must be trained in the provisions included in the code, if they are to be expected to comply with it (Mcdonald 2009;Schwartz 2004;Webley & Werner 2008).
• Reporting: for ethics to become part of everyday practice, a reporting structure should be developed, probably linked to the governance structure and the training infrastructure, that informs the community about scientific integrity and advances in standardisation (including the administration and application of the code, if needed), among others (Frankel & Bird 2003;Singh 2011;Webley & Werner 2008).

6.2
This list is not exhaustive, but it shows the several fronts on which the discussion about ethical standardisation in agent-based social simulation can be advanced. We hope that other practitioners feel motivated by this article to share their experiences and contribute to activities seeking to position ethics and scientific integrity centrally within everyday practice.

6.3
Should the agent-based social simulation community decide to move forward with the development of its own code of ethics, there are a few key elements about the design, implementation and management of a code that must be considered. Actions carried out regarding participation and governance would be fundamental to assign responsibility for these decisions.

Decisions pertaining to the code
• What type of code?: the literature usually distinguishes two types of ethical code: aspirational and prescriptive (Farrell et al. 2002;Mcdonald 2009;Schwartz 2004). The former, as the name suggests, centres on moral ideals that are believed worthy of being professionally pursued by the community; the latter, in comparison, provides a more elaborate description of expected behaviour in certain situations. Each type fosters different approaches to behaviour regulation. Prescriptive codes, for example, because of their narrower scope, tend to emphasise proscribed rather than virtuous behaviour.
• Who is it for?: the code might be intended exclusively for the community or deliberately involve external stakeholders. Explicitly involving additional stakeholders might help legitimise the code, which could be useful, considering the increasing popularity of stakeholder engagement. It, however, poses additional challenges to the conceptualisation of the code, for the roles, expectations, and interests of additional stakeholders will need to be deliberately accounted for (Messikomer & Cirka 2010;Singh 2011;Webley & Werner 2008). In turn, the 'who' is particularly important in agent-based social simulation, given that the sense of belonging to the community is rarely defined through training or affiliation but through practice.
• Who should draft it?: the drafting process could be open or closed. In the former case, anyone can participate in the drafting at any point; in the latter, the drafting is carried out entirely by a predefined group (e.g., an ad-hoc drafting committee or an already appointed ethics committee). There are, naturally, possible combinations, for example, open for the conceptualisation stage, but closed for the drafting itself. The possible options differ in the type and amount of resources employed (e.g., open processes typically require more resources for the drafting, but less for dissemination), as well as in the decisions and activities required for approval, implementation, dissemination, and enforcement. As mentioned, opening the drafting process can increase compliance and is also morally responsible, given the obligations a code creates. It may, in addition, strengthen the ethical culture and identity of the community beyond the specific efforts of standardisation and code provision (Becher & Trowler 2001;Romani & Szkudlarek 2014).
• Who will enforce it?: there is no single way to enforce the code. In most cases, implementation, oversight, and enforcement are included in the functions of the governance structure mentioned above. Whichever option is chosen, it should be made clear and explicit to members of the community (Singh 2011;Mcdonald 2009;Schwartz 2004). Enforcement should not only be understood as dealing with misconduct. Integrity could be promoted, for example, by publishing regular reports on topics that directly pertain to ethical practices and organisation.

6.4
These decisions are specific to the code. Yet, as mentioned above, their success ultimately depends on how they are integrated with a diverse set of supporting activities, derived, in part, from the actions just listed. In addition, decisions about code enforcement should be made while remaining mindful of the need to guarantee the continuous ethical relevance of the code. Whoever is responsible for ethical oversight must ensure that the code remains contextually adequate (e.g., through updates), that it contributes to scientific integrity (e.g., through ethically compliant decision-making) and that it keeps the different stakeholders engaged (e.g., through constant, open, and transparent communication).

7.1
This article sought to raise awareness of the need for practitioners of social simulation to engage in a collective discussion about ethics. We have argued that now is a good time to start this discussion, first, because, despite the increasing popularity of scientific integrity, agent-based social simulation lacks widespread and consensual standards on ethical compliance and, second, because developing models with real-life implications requires being especially mindful of the interests and needs of different stakeholders.

7.2
We analysed the two main sources of ethical issues in social simulation from a disciplinary point of view. The first is the modelling workflow. Ethical challenges arising during the modelling process were presented and linked to the different stages. The second is the organisation of agent-based social simulation. Two examples were used to show how ethics is differently linked to the practice and organisation of agent-based social simulation. We suggested that the dissimilar and uneven disciplinary expertise and the possibility of conflicting disciplinary moral commitments in social simulation are potential sources of ethical tension. We also claimed that the combination of social science with technology puts agent-based social simulation in an ethical context that differs from that of both traditional social disciplines and other technological domains, such as engineering and computer science. Overall, we argued, ethical challenges linked to the organisation of social simulation bring to the foreground the need for differentiated ethical standardisation.

7.3
We then addressed the question of how to enable collective ethical behaviour. We claimed that there are three major options for ethical regulation and standardisation, each with its own advantages and disadvantages. We suggested that, given the distinctive organisational features of social simulation, a code of ethics might be the best long-term strategy.

7.4
The article closes with a brief discussion about key actions and decisions pertaining to the standardisation of ethics in agent-based social simulation. A first version of the code, incorporating some of the major conclusions of this article as provisions, is presented in the Appendix. This draft is intended as a contribution to the discussion among stakeholders that needs to happen around the ethics of agent-based social simulation. Subsequent reflections, additions and criticism are encouraged and welcome. It is our intention to use a variety of institutional spaces, such as social simulation conferences, to encourage the advance of this discussion.

7.5
Discussing the ethics of simulation, we believe, could also encourage further conversation and cross-fertilisation with other types of digitalised scientific research. Several ethical challenges are not exclusive to agent-based models, but pertain more generally to the operation of diverse information technologies. In the domain of artificial intelligence, for instance, the concern with epistemic opacity mentioned above has led to the popularisation of explainable artificial intelligence (XAI), a set of methods that seek to make artificial intelligence models, particularly machine learning, easier to understand for a human subject (Adadi & Berrada 2018). Similarly, while the conditions are not the same, practitioners of agent-based social simulation might also learn from past experiences of researchers in other fields. A decade ago, for example, research on the ethics of algorithms (Kraemer et al. 2011;Mittelstadt et al. 2016) tackled the problem of the value-ladenness of model implementation, an issue that is worth discussing further from a moral standpoint in agent-based social simulation (David 2021).

The Code
Institutional considerations 1. Professional competence and training: Have in place plans for training for new and experienced modellers, not only on technical aspects of modelling, but also on model use and interpretation, and interdisciplinary working.
2. Interdisciplinary working: Have in place plans and common understandings of how the challenges of interdisciplinary working will be managed (e.g., different assumptions about the aims and value of methods and projects, knowledge asymmetries, different ethical standards).

Individual project considerations
1. Project management, transparency, and quality: Plan and maintain project management processes to ensure the documentation, quality, and reproducibility of all model stages.

2. Narrative and positions: Be honest and open about your underlying beliefs about the system you are modelling. Are there contested understandings, results, or interventions that you expect your model to support or contradict?
3. Model use: Develop a plan for how a model will be used and for unplanned use of your model or its results.

4. Model inputs: Be conscious of bias in the data and theory you use to inform a model.
12. Public communication: Consider preparing documentation which presents the model, results, and their interpretation for lay or non-modeller audiences.
13. Maintenance: Make appropriate arrangements for long-term model and documentation maintenance.

Notes

1 While a conceptual agreement about research integrity is lacking and some distinctions may be worth making between 'research' and 'scientific' integrity (Shaw 2019), this text uses 'scientific integrity' to highlight, first, the role of ethics beyond preventing misconduct and, second, the additional dimensions of integrity in scientific practices and organisation beyond research.

2 In social simulation, for example, Axtell et al. (1996) called in their now seminal article for an institutional setup that rewards practices such as replication and docking, two increasingly popular concerns in contemporary science with interesting ethical implications. Even though this call was made almost 30 years ago, these activities are still peripheral and very resource demanding, both from the perspective of framing the research and validating the results (Anzola 2021b).

3 The ethical status of these procedures has also been questioned. Some authors (e.g., Haggerty 2004;Hammersley 2009) argue, first, that principles to regulate behaviour are themselves unethical, for they impinge on research autonomy and, second, that they might be conducive to worse research outputs and overall research ethics, especially in social science, for they do not acknowledge key idiosyncratic features of social research.

4 Findings reported in this literature should be approached critically, however. A significant portion of the literature on codes of ethics centres on organisations, and some key discussions, such as those on code drafting and code effectiveness, rely significantly on the analysis of organisational codes of ethics. There are, however, some key differences between organisations and entire disciplinary or professional areas that are worth being mindful of.
For example, unlike professional and disciplinary codes, organisational codes centre more often on proscribing behaviour and protecting the company from the employees, in part because hierarchies and power asymmetries are more pronounced (Khaled & Gond 2020;Komić et al. 2015;Valkenburg et al. 2021;Mcdonald 2009). Similarly, because disciplines are larger and more intricate social systems, ethical standardisation is less uniform (e.g., it may vary according to sociodemographic factors, institutional setting or disciplinary sub-specialisation; Ana et al. 2013;Freidson 2007;Israel 2020;Schwartz 2004) and might, overall, require more time for its effect to be adequately assessed (see, e.g., Baker et al. 1999 for a discussion about the long-term implications of the American Medical Association's code of ethics).