Introducing the Argumentation Framework Within Agent-Based Models to Better Simulate Agents’ Cognition in Opinion Dynamics: Application to Vegetarian Diet Diffusion

Abstract: This paper introduces a generic agent-based model simulating the exchange and the diffusion of pro and con arguments. It is applied to the case of the diffusion of vegetarian diets in the context of a potential emergence of a second nutrition transition. To this day, agent-based simulation has been extensively used to study opinion dynamics. However, the vast majority of existing models have been limited to extremely abstract and simplified representations of the diffusion process. These simplifications impair the realism of the simulations and prevent understanding of the reasons for the shift in an actor's opinion. The generic model presented here explicitly represents exchanges of arguments between actors in the context of an opinion dynamics model. In particular, the inner attitude of each agent towards an opinion is formalized as an argumentation graph and each agent can share arguments with other agents. Simulation experiments show that introducing attacks between arguments and a limitation of the number of arguments mobilized by agents has a strong impact on the evolution of the agents' opinions. We also highlight that when a new argument is introduced into the system, the quantity and the profile of the agents receiving the new argument will impact the evolution of the overall opinion. Finally, the application of this model to vegetarian diet adoption seems consistent with historical food behaviour dynamics observed during crises.


Introduction
Agent-based modelling is a classical approach to study opinion dynamics as it takes into account the heterogeneity of actors and the impact of interactions between them. Among existing approaches, the most popular one, known as the bounded confidence model, uses a numerical value to represent the opinion towards an option (Deffuant et al. ; Hegselmann & Krause ). The opinion of each agent is updated by averaging a set of agent opinions. Typically, in the classic bounded confidence model, when two agents with respective opinions x and x′ meet, they adjust their opinions conditionally upon their difference of opinions being smaller in magnitude than a threshold d (i.e. the opinions of the agents are modified if |x − x′| < d):

x ← x + µ(x′ − x)    and    x′ ← x′ + µ(x − x′)

with µ the parameter of speed of convergence.
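For illustration, here is a minimal Python sketch of one such pairwise interaction (our own illustration, not code from the paper; the opinion bounds and parameter values are assumptions):

```python
def bounded_confidence_step(x1, x2, d=0.2, mu=0.5):
    """One Deffuant-style interaction: opinions move towards each other
    only if their difference is smaller in magnitude than the threshold d."""
    if abs(x1 - x2) < d:
        x1, x2 = x1 + mu * (x2 - x1), x2 + mu * (x1 - x2)
    return x1, x2

print(bounded_confidence_step(0.40, 0.50))  # within d: both move to 0.45
print(bounded_confidence_step(0.10, 0.90))  # beyond d: opinions unchanged
```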
In the past fifteen years, many studies have proposed to enrich this generic model, for example by taking into account fixed uncertainties, by integrating multi-dimensional opinion dynamics (Lorenz ; Urbig & Malitz ), by studying the behaviour of the model when adding extremists (Mathias et al. ) or contrasting effects (Jager & Amblard ; Huet et al. ).
These models are very relevant to study social influence. However, most of them remain theoretical and only a few have been applied to real case studies using data and validated (Flache et al. ). Another drawback is the difficulty of understanding the inner motivation underlying the change in an agent's opinion. Indeed, as the opinion is usually integrated in a single numerical value, the reasons why the agent has changed his/her opinion are unknown.
To integrate the inner motivation underlying change, a relevant framework is the argumentation model (Besnard & Hunter ). Argumentation deals with situations where information contains contradictions because it originates from several sources or corresponds to several points of view that possibly have different priorities. It is a reasoning model based on the construction and evaluation of interacting arguments. It has been formalized both in philosophy and in computer science (Rescher ) and applied to various domains including non-monotonic reasoning (Dung a), decision making (Thomopoulos ) and negotiation (Kraus et al. ). The argumentation framework introduced in Dung ( a) consists of a set of arguments and binary relations expressing conflicts among arguments. An argument gives a reason for believing a claim, or for doing an action. Historically, the typical field of application of argumentation in computer science was the legal domain (Prakken & Sartor ). More recently, several studies proved its relevance in social-related concerns, medicine, food systems, supply chains, policies and controversies, especially for decision-making purposes (Thomopoulos ).
Mäs & Flache ( ) proposed a different theory to represent argument exchanges. Their model, itself inspired by the earlier Persuasive Argument Theory (PAT), is based on the Argument-Communication Theory of Bi-polarization (ACTB), which adds the communication of arguments to existing models. In this model, arguments are abstracted by a numerical value between −1 (con argument) and 1 (pro argument). The agent's opinion is the average value of the arguments that the agent considers relevant. Agents disregard pieces of information not communicated in recent interactions (and consider them as not relevant). An experiment showed that the model makes it possible to reproduce bi-polarization (i.e., the development of increasingly antagonistic groups, Esteban & Ray ) without explicitly representing negative influence. A recent extension of this model has been proposed in Banisch & Olbrich ( ) to take into account different issues on which opinions can be formed (each issue is linked to a subset of arguments).
Another work using explicit arguments is Stefanelli & Seidl ( ). To do so, they collected empirical data through questionnaires. These data play a very important role, as each agent has a set of different types of arguments and forms an opinion according to the valence of the arguments and the importance he/she gives to them.
Changes in opinion happen by interacting with other agents in their social network and through comparison of argument values. Therefore, there is no explicit exchange of arguments as such in this model: the interaction results in the adaptation (or not) of the agents' argument scales, i.e., the valence and importance that the agent gives to a type (benefit, risk or process) of arguments. The adaptation of the agent's argument scale is computed according to the position of each argument in its own continuum of social judgment.
Another approach, by Wolf et al. ( ), does not directly use arguments but the closely related concept of "need". This model about electric cars is also based on empirical data, and each agent assigns a weight to each identified need (e.g., safety, comfort, costs) for each possible action (e.g., using an electric vehicle). These weights change during the simulation through interactions with other agents.
Although these models show interesting results, we emphasize that they are silent about argumentative reasoning and do not explicitly formalize the tensions between arguments. In this context, Friedkin et al. ( ) deal with statements rather than arguments and introduce the notion of logical constraints associated with statements. This means that if an agent believes that a specific statement is true, then this belief will automatically impact his/her beliefs regarding other statements linked to the previous one.

Regarding argumentation, several studies have proposed the use of the system introduced by Dung ( a) in opinion dynamics models. For example, in Gabbriellini & Torroni ( ), all agents reason from the same set of arguments and exchange attacks between arguments. The exchange of attacks is carried out during a dialogue phase. During this phase, an agent who is about to receive an attack he/she disagrees with can either accept this new attack (if the agent formulating the attack is trustworthy, which is a stochastic process), reply with a counter-attack, or end the exchange. If the agent decides to reply, the agent proposing the first attack can in turn reply. This goes on until an agent accepts an attack or ends the dialogue. This model provides a very interesting basis for integrating Dung's argumentation system into a model of opinion diffusion. Nevertheless, it is based on the hypothesis of a common set of arguments for all agents. Unlike Mäs & Flache ( ), it is not arguments but attacks that are exchanged here, which can be questioned. Moreover, an opinion in Gabbriellini & Torroni ( )'s model is an argumentation framework. The model is thus a discrete opinion dynamics model, with no numeric update, which differs from our approach.
The link between arguments and opinions was explored in Villata et al. ( ) through an empirical study of emotions. Explanation and reasoning theories were also proposed in cognitive psychology (Williams & Lombrozo ).
A recent study worth mentioning (Butler et al. a) focuses on collective decision-making processes and proposes to combine a deliberative process using Dung's system of argumentation with a process of interpersonal influence (Deffuant et al. ). In this work, each argument is modelled by a real number between −1 and 1 representing the support of the argument for a principle (e.g., "protect the environment"). Each agent has an opinion about the principle modelled, and this opinion evolves through a group deliberative process during which agents exchange arguments and have a direct influence through pair interactions.
A last study using Dung's system of argumentation is the model proposed by Butler et al. ( b). Similarly to Butler et al. ( a), this model combines dyadic interactions (pair-wise interactions) with collective deliberation. One of its major contributions is the introduction of the notion of argumentative epistemic vigilance, i.e., the possibility for agents to reject an argument in case of "message-source" discrepancy. Indeed, when an agent receives an argument from another agent, he/she can invalidate it either by asserting the existence of an argument that attacks the first argument, or by pointing out that the argument he/she has received is not consistent with the opinion of the sender.

Our proposal is in line with the studies of Butler et al. ( a,b), except that we simply focus on the "daily" exchange of arguments using a general process of evolution of opinion close to Mäs & Flache ( ), and we do not investigate group deliberation. One originality of our paper is to go further by eliciting the content of arguments. To do so, we use the system introduced by Dung ( a) as a basis and enrich it with detailed descriptions of the arguments. Finally, as in Stefanelli & Seidl ( ), the collection of empirical data plays an important role in the design of the model.

Section presents the empirical approach we start from. Section describes the generic model that we propose. Section explores the model behaviour relating to the impact of the attacks in the argumentation graph and to the impact of the number of arguments known by the agents. Section presents the application of this model to study the evolution of the vegetarian diet. Finally, Section concludes and presents some perspectives of this work.

Results tend to demonstrate that behavioural changes possibly follow a series of stages. However, in the case of food choices, changes in dietary patterns are quite specific and their stages are less well identified (Povey et al. ). Food choices are complex, dynamic, and change over the course of a person's life. They are determined by a wide variety of factors. In their review, Vabo & Hansen ( ) try to address these factors. Three fundamental groups stand out almost systematically: factors related to the characteristics of the food (organoleptic characteristics, nutritional content, function, etc.), factors related to the consumer (physiological and psychological aspects) and environmental factors (economic, cultural, social context, etc.). These factors have a double influence on food choices, by building the food preferences of individuals, particularly during their childhood, and by influencing choices (through price, health, practicality, sensory appeal, mood, etc.).

Data acquisition
In order to get insights about the role of arguments in following a vegetarian diet, we conducted a survey and built an argument database. These are published in Salliou et al. ( ) and described hereafter.

First, French citizens were surveyed and asked about their actual diet, as well as about their agreement, expressed on a five-degree Likert scale, with key arguments about animal product consumption. These arguments were extracted from the participatory online platform Kialo, which allows users to co-construct argument hierarchies about any topic. We considered these arguments as central as they are the main and first-degree arguments over a hierarchy of more than arguments expressed by over , participants about the topic "humans should stop eating meat" (Kialo ). The analysis of the survey reveals that % of respondents would ideally have a lower meat consumption than their current diet. This finding backs the assumption that conditions for dietary change towards more plant-based diets are significant, which supports the objective of the simulation.

.
Secondly, we constructed a database of arguments about vegetarian diets obtained from Google searches. An analysis is provided in Salliou & Thomopoulos ( ). Our sources of arguments are newspapers, grey literature and the top ten Google search results ("vegetarian diet"; "vegan diet"; "vegetalism argument"). The latter inquiry added to the pool popular scientific papers, webmedia articles and blog posts. We read each source thoroughly and extracted all arguments as expressed by their authors. For each argument we attributed a criterion ("Nutritional"; "Economic"; "Environmental"; "Anthropological"; "Ethical"; "Health" or "Social") and noted the stakeholder expressing the argument ("Journalist"; "Scientist"; "Philosopher"; "Blogger", etc.). We also indicated for each argument whether it was pro (+) or con (-) vegetarian diets (see Table for examples of arguments). From this argument base we built an argumentation network (Dung b) by establishing "attack" relationships between arguments (Figure ). An attack happens when an argument challenges another argument. For example, the argument "a vegan diet is healthy" is attacked by the argument "vegans have B vitamin deficiency". As arguments rarely mention explicitly which arguments they attack, the attacks were elicited by us. We did not check whether an attack is legitimate or not. Arguments, connected by these "attack" relationships, form an argumentation network.

Figure : Graphical representation of a sample of arguments and attacks about reduced meat consumption: each number corresponds to one argument. Apart from the black node, which is common to several sources, each source is represented by one node colour (extracted from Salliou & Thomopoulos ).

Generic Model

Main concepts
The idea behind this model is to explicitly represent agents' own mental deliberation process from arguments towards an opinion, through the use of the argumentation framework. We use the argumentation framework defined below.

Definition . Argument. We define an argument by a tuple a = (I; O; T; S; R; C; A; Ts), with:
• I: the identifier of the argument;
• O: the option that is concerned by the argument;
• T: the type of the argument: pro, con or neutral towards the option;
• S: the statement of the argument, i.e., its conclusion;
• R: the rationale underlying the argument, i.e., its hypothesis;
• C: the importance of each criterion (e.g., "Nutritional", "Economic", "Environmental") on which the argument relies;
• A: the agent who proposes the argument;
• Ts: the type of source the argument comes from.
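As an illustration, the tuple of the definition above maps directly onto a small data structure; the following Python sketch is ours, and the field names and types are assumptions:

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Argument:
    identifier: str           # I: the identifier of the argument
    option: str               # O: the option concerned by the argument
    arg_type: int             # T: +1 (pro), -1 (con) or 0 (neutral)
    statement: str            # S: the conclusion of the argument
    rationale: str            # R: the hypothesis underlying the argument
    criteria: Dict[str, float] = field(default_factory=dict)  # C: importance per criterion
    source_agent: str = ""    # A: the agent who proposes the argument
    source_type: str = ""     # Ts: the type of source it comes from
```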
We thus consider that each agent is characterized by a set of attributes:
• argumentation graph: a directed graph that represents a Dung's argumentation system. Each node is an argument, and each edge represents an attack from an argument to another one. The weight of an edge represents the strength of the attack for the agent. The interested reader can refer to Yun et al. ( ) for different ways to define attacks.
• criterion importance: for each criterion that arguments rely on, a score (numerical value between and ) represents the importance of this criterion for the agent. As opinions are formed from a cognitive and an affective part (Bergman ), criteria are used to evaluate the affective preference of arguments.
• opinion: a numerical value that corresponds to the opinion of the agent. A value higher than 0 means that the agent is in favour of the option, a value lower than 0 means that the agent is against the option, and if the value is 0, the agent is neutral towards the option.
• behaviour: a nominal value that corresponds to the behaviour resulting from the agent's opinion. Examples of possible values in the food diet application are omnivorous, flexitarian, vegetarian or vegan. There is thus a mapping between the opinions, defined on a numerical domain, and the set of behaviours, which label predefined consumer profiles (a minimal sketch of such a mapping is given below).
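A possible opinion-to-behaviour mapping in Python, assuming opinions in [−1, 1]; the threshold values are purely hypothetical, as the section does not specify them:

```python
def behaviour(opinion: float) -> str:
    # Hypothetical thresholds mapping the numerical opinion domain
    # to the predefined consumer profiles.
    if opinion < -0.5:
        return "omnivorous"
    if opinion < 0.0:
        return "flexitarian"
    if opinion < 0.5:
        return "vegetarian"
    return "vegan"
```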
We define the notion of strength of an argument for an agent.
Definition . Argument strength. Let us consider an agent j; the strength of an argument a is defined as follows:

strength_j(a) = Σ_{c ∈ CRIT} a_c × j_c

with CRIT the set of criteria, a_c the importance of the criterion c for the argument a (see Definition ), and j_c the importance of c for the agent j.
From the notion of strength, we define the notion of value for a set of arguments.

Definition . Value of a set of arguments. Let us consider a set of arguments A for an agent j; the value of this set of arguments is computed as follows:

value_j(A) = ( Σ_{a ∈ A} type(a) × strength_j(a) ) / ( Σ_{a ∈ A} strength_j(a) )

where type(a) is +1 if a is pro the option, −1 if a is con, and 0 if a is neutral.

Using the notion of strength, we also define the notion of simplified argumentation graph:

Definition . Simplified argumentation graph. Let us consider an agent j, (A, R) an argumentation graph and (a, a′) ∈ R. The simplified argumentation graph (A, R′) obtained from (A, R) is defined by: (a, a′) ∈ R′ if and only if (a, a′) ∈ R and, in case (a′, a) ∈ R as well, strength_j(a) ≥ strength_j(a′). This means that if an argument a attacks an argument a′ and if a′ attacks a, only the attack that has for origin the argument with the highest strength is kept in the simplified graph. If the arguments have the same strength, both attacks are kept.
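The three definitions above can be sketched in a few lines of Python. Arguments are identified by hashable keys; strengths[a] stands for strength_j(a) for the current agent j, types[a] is +1 (pro), −1 (con) or 0 (neutral), and attacks is a set of (attacker, attacked) pairs. This data layout is our assumption:

```python
def strength(arg_criteria, agent_criteria):
    # strength_j(a): sum over criteria of the argument's importance
    # for each criterion times the agent's importance for it.
    return sum(arg_criteria[c] * agent_criteria.get(c, 0.0)
               for c in arg_criteria)

def set_value(args, strengths, types):
    # Strength-weighted mean of argument types: a value in [-1, 1].
    total = sum(strengths[a] for a in args)
    if total == 0:
        return 0.0
    return sum(types[a] * strengths[a] for a in args) / total

def simplify(attacks, strengths):
    # For mutual attacks, keep only the attack originating from the
    # stronger argument; keep both when strengths are equal.
    return {(a, b) for (a, b) in attacks
            if (b, a) not in attacks or strengths[a] >= strengths[b]}
```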
Finally, we define the notion of preferred extension.
Definition . Preferred extension. Let (A, R) be an argumentation system and B ⊆ A. Then:
• B is conflict-free if and only if there are no arguments a, b ∈ B such that (a, b) ∈ R;
• B defends an argument a if and only if, for every argument b such that (b, a) ∈ R, there exists c ∈ B such that (c, b) ∈ R;
• a conflict-free set B of arguments is admissible if and only if B defends all its elements;
• a preferred extension is a maximal (with respect to set inclusion) admissible set of arguments.
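For the small per-agent graphs used here, preferred extensions can be enumerated by brute force directly from Dung's definitions (the actual implementation relies on the JArgSemSAT library, see the Implementation section); a naive sketch:

```python
from itertools import combinations

def conflict_free(S, attacks):
    return not any((a, b) in attacks for a in S for b in S)

def defends(S, a, args, attacks):
    # S defends a if every attacker of a is itself attacked by S.
    return all(any((c, b) in attacks for c in S)
               for b in args if (b, a) in attacks)

def preferred_extensions(args, attacks):
    args = list(args)
    admissible = [frozenset(S)
                  for r in range(len(args) + 1)
                  for S in combinations(args, r)
                  if conflict_free(S, attacks)
                  and all(defends(S, a, args, attacks) for a in S)]
    # Preferred extensions: admissible sets maximal w.r.t. set inclusion.
    return [S for S in admissible
            if not any(S < T for T in admissible)]
```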

Dynamics
We made the choice to use the same general model as the one proposed in Mäs & Flache ( ). A simulation step corresponds to the exchange of an argument between two agents, i.e., an agent gives one of his/her arguments to another agent. When an agent learns a new argument, the oldest argument is removed from his/her argumentation graph. This forgetting process, already defined in Mäs & Flache ( ), was introduced to take into account the limitation of human cognition and memory. We also integrated a mechanism whereby using an argument, i.e., giving it to another agent, triggers the agent to remember it: the given argument is automatically considered as the agent's most recent argument. Similarly, an agent who receives an argument he/she already has will not add it again to his/her argumentation graph, but will consider this argument as the most recent among his/her arguments. The effect of this mechanism is that some arguments may be forgotten by the entire population of agents. Thus, for example, if all agents tend to converge towards the same opinion, most of the arguments against that opinion will be forgotten.
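The forgetting mechanism amounts to a recency-ordered argument store with fixed capacity; a minimal sketch (the class name and capacity handling are our assumptions):

```python
from collections import OrderedDict

class ArgumentMemory:
    """Recency-ordered argument store with fixed capacity (illustrative)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._args = OrderedDict()  # argument ids, most recent last

    def touch(self, arg_id):
        # Receiving, re-receiving or giving an argument makes it the most
        # recent; if it is new and memory is full, the oldest is forgotten.
        if arg_id in self._args:
            self._args.move_to_end(arg_id)
        else:
            if len(self._args) >= self.capacity:
                self._args.popitem(last=False)  # forget the oldest argument
            self._args[arg_id] = True

    def arguments(self):
        return list(self._args)
```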
Concerning the choice of the agent to exchange arguments with, we used the same partner selection method as Mäs & Flache ( ). In each simulation step, an agent chosen randomly (uniform distribution) selects another agent. The probability that the second agent is chosen as an interaction partner depends on the similarity between the two agents in terms of opinion.

Let i and j be agents; the similarity between i and j is:

sim_ij = 1 − |opinion_i − opinion_j| / 2

And the probability for an agent i to select j as partner, considering N the set of all possible partners, is:

p_ij = (sim_ij)^h / Σ_{k ∈ N} (sim_ik)^h

with h the strength of homophily.
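A sketch of this homophily-based partner selection, using the similarity and probability expressions reconstructed above (note that with h = 0 all weights equal 1, i.e. a uniform draw):

```python
import random

def similarity(o_i, o_j):
    # Opinions lie in [-1, 1], so similarity lies in [0, 1].
    return 1.0 - abs(o_i - o_j) / 2.0

def pick_partner(i, opinions, h, rng=random):
    # opinions: dict mapping agent id -> opinion value.
    candidates = [j for j in opinions if j != i]
    weights = [similarity(opinions[i], opinions[j]) ** h for j in candidates]
    if sum(weights) == 0:  # all candidates maximally dissimilar
        return rng.choice(candidates)
    return rng.choices(candidates, weights=weights, k=1)[0]
```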
For the choice of the argument to be given, our hypothesis is that an agent will give an argument that seems relevant to him/her and that allowed him/her to form his/her opinion. In other words, it means picking an argument belonging to the set of arguments in the preferred extension maximizing the absolute value of opinion as defined in Equation .
The agent will choose a random argument in this set. Our proposal to randomly choose which argument to give to another agent is due to the dependence of such action on external factors not represented in the model, such as the course of the discussion between individuals, the profile of the other, etc. Other choices could have been made, such as giving the argument with the highest strength, the argument with the highest chance of convincing the other, etc. We discuss this point in the perspectives of the article.
Note that in the case of an argumentation graph without attacks, there is only one preferred extension, which is composed of all the arguments considered by the agent. We thus find ourselves in the same case as the ACTB model, where the agent chooses an argument at random among all the arguments at his/her disposal. In the other cases, the ranking between several co-existing preferred extensions is known as the "ranking semantics" problem (Yun et al. ). We made a modelling choice stating that an agent chooses the preferred extension with the highest value, which expresses the motivation to favour the most adamant view stemming from the extensions.
Once an agent receives a new argument (and at the initialization of the model), the agent deliberates using his/her argumentation graph to form his/her opinion. Contrary to Mercier & Sperber ( ), who state that individuals will be strongly critical of any new argument challenging their own opinion, we assumed no such psychological reactance (Brehm ). The deliberation is composed of the following steps:
1. simplifying the argumentation graph according to the weights of the edges (see Section );
2. computing the set of preferred extensions from the simplified argumentation graph (see Section );
3. computing the opinion from the preferred extensions: for each extension, the agent computes its value using Equation , then returns the extension with the maximal absolute value. If several extensions have the same absolute value, then the agent randomly selects one of these extensions.
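Putting the pieces together, one deliberation cycle can be sketched as follows, reusing the simplify, preferred_extensions and set_value helpers from the sketches above:

```python
import random

def deliberate(args, attacks, strengths, types, rng=random):
    # 1. Simplify the graph: drop the weaker attack of each mutual pair.
    simplified = simplify(attacks, strengths)
    # 2. Enumerate the preferred extensions of the simplified graph.
    extensions = preferred_extensions(args, simplified)
    # 3. Keep the extension(s) of maximal absolute value; break ties at random.
    best_abs = max(abs(set_value(E, strengths, types)) for E in extensions)
    best = [E for E in extensions
            if abs(set_value(E, strengths, types)) == best_abs]
    chosen = rng.choice(best)
    return set_value(chosen, strengths, types), chosen  # opinion, extension
```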
If we consider an argumentation graph with no attack and where all the criteria have the same importance for all the agents, then we are in the exact context of the ACTB model, where all relevant arguments have the same persuasiveness (i.e., all arguments are equally weighted in the calculation of the opinion). In this case, the opinion of an agent j with a set of arguments A can be directly computed by:

opinion_j = ( |A_pro| − |A_con| ) / |A|

with A_pro the pro arguments of A and A_con its con arguments. Convergence towards a steady state can only be achieved if none of the agents can change their opinion no matter what happens in terms of exchanging arguments. The definition of such a steady state depends on the strength of homophily h. Indeed, the model relies on a stochastic choice of agents to exchange arguments (see Equation ): if h = 0, all agents can exchange arguments with all other agents even if they have a very different opinion; if h > 0, all the agents can exchange arguments with all other agents unless they have a completely different opinion (i.e., if one of the agents has a −1 opinion and the other has a 1 opinion). Thus, in the first case, to be sure to obtain a stable state, all the agents must have the same opinion (−1 or 1) and arguments of a homogeneous type (all pro or all con). In the case of h > 0, the first condition can be relaxed: all agents must have arguments of a homogeneous type (all pro or all con) but their opinion can be either −1 or 1. Indeed, in this case, agents will only exchange arguments with agents who already have the same opinion as them, and the new arguments brought, in accordance with the opinion of both agents, will not have an impact on the result of the opinion calculation.

Implementation
The model was implemented with the GAMA platform (Taillandier et al. ). GAMA provides modellers with a dedicated modelling language which is easy to use and learn. It also allows them to naturally integrate GIS data and includes an extension dedicated to generating a spatialized and structured synthetic population (Chapuis et al. ), which is particularly interesting for building empirically grounded models. The main components of the model (arguments, argumentative agents, etc.) were implemented as a plugin for the GAMA platform. The interest of making a plugin is to facilitate the reuse of these elements in other models. Thus, a modeller wishing to use them will just have to import the plugin and she/he will be able to directly use all these functions. This is particularly interesting for non-trivial functions such as the calculation of preferred extensions, which is based on the JArgSemSAT Java library (Cerutti et al. ). The plugin was designed to be as modular as possible, allowing modellers to customize all the existing functions (for example, the computation of the argument strength). It was developed under the GPL licence and is available on GitHub (Github ). It can be directly downloaded and installed from the GAMA experimental plugin update site.

Model Exploration
In this section, we explore the impact of different parameters on the simulation results, respectively the number of attacks in the global argumentation graph, the strength of homophily (h), and the number of arguments per agent. The values of the parameters used for the experiments are given in the .
We simulated a population of agents, homogeneous in the sense that criteria have the same importance for all agents. Each agent is initialized with a set of arguments chosen at random among arguments ( pro and con). At the beginning of the simulation, for each agent, an oldness value between (recent) and (old) is assigned to each argument: one argument has an oldness of , another of , another of and so on up to . We also considered that attacks link the arguments. The attacks are randomly generated between two arguments with different conclusions. Indeed, we consider that a pro argument can only attack a con argument and vice versa. Since we have pro and con arguments, the maximum number of attacks is .

Concerning the strength of homophily, we set the value of h at for our experiments.
We studied the change of agents' opinions over , , simulation events. This number was chosen to maximise the chances of reaching a steady state. As the model is stochastic, we ran the simulation times per value of parameters.

In terms of outputs, we analysed the average distribution of opinions over the repetitions, the number of stable states obtained, the average number of steps to reach such a stable state, and finally the evolution of the polarization of the agents' opinions.
Concerning the polarization at time t, we use the following equation to estimate it:

P_t = (1 / |N|²) × Σ_{i ∈ N} Σ_{j ∈ N} (d_ij,t − γ_t)²

where N is the set of agents, d_ij,t the distance between the opinions of agent i and agent j at time t, computed as d_ij,t = |opinion_i,t − opinion_j,t|, and γ_t the mean opinion distance among all the agents at time t. Polarization is thus the variance of the pairwise opinion distances.
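A direct sketch of this polarization measure (variance of pairwise opinion distances, including the zero-distance i = j pairs):

```python
def polarization(opinions):
    # opinions: list of opinion values in [-1, 1].
    dists = [abs(a - b) for a in opinions for b in opinions]
    mean = sum(dists) / len(dists)   # gamma_t: mean pairwise distance
    return sum((d - mean) ** 2 for d in dists) / len(dists)
```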

Influence of argument attacks
In order to evaluate the impact of considering attacks in the model, which are not taken into account in the ACTB model, we ran the model using the same conditions as presented in the previous section (same parameter values, same number of iterations, and same number of replications). The only difference is that we vary the number of attacks in the global argumentation graph. We tested values for the number of attacks: , , . As shown, the attacks between arguments have a strong impact on the result. Indeed, in the case where no attack is taken into account, no phenomenon of bi-polarization of opinion is visible and there is no convergence towards a unique opinion, whereas it is very marked as soon as the number of attacks exceeds . This result can be explained by the fact that the larger the number of attacks, the smaller the number of arguments that the agent considers relevant, because the attacked arguments that are not defended are not retained in his/her preferred extensions (see Section ). In addition, the arguments that are relevant for the agent often support the same conclusion, leading to a polarization of the agent's opinion. It can also be observed that for attacks, the agents' opinion tends to converge towards a bi-polarization, whereas when the number of attacks increases ( or ) the agents' opinion tends to converge towards a single value. In fact, the higher the number of attacks, the greater the chance of giving an argument that attacks the other arguments, and therefore the greater the chance that the other agent changes his or her opinion for an opinion close to that of the one who gave the argument, which ultimately leads to a reinforcing effect as a consensus begins to emerge.

Influence of homophily strength
We tested the model with the following values for h: , , , , , and . A value of 0 means that the agent receiving an argument is selected randomly using a uniform distribution. As shown in Figure , when h = 0, the polarization value quickly converges towards 0, which means that all agents converge towards the same opinion. Indeed, as already mentioned in the previous experiment, a high number of attacks ( in this experiment) means that the number of arguments that the agent considers as relevant is low. This is because attacked arguments that are not defended are not in the agent's preferred extensions, and thus agents often have a rather polarized opinion. In the case where h = 0, agents can give arguments to all other agents with the same probability, even to those with a very different opinion. The higher the number of agents sharing the same opinion, the faster they are able to convince agents with a different opinion to converge towards their opinion. This creates a reinforcing phenomenon leading to a fast convergence towards a uniform opinion (polarization = 0). This phenomenon can be observed in the polarization chart. Note that in Figure , for h = 0, the agents' opinions seem to be bi-polarized (half of the agents with an opinion in the interval [−1, −0.75[ and the other half with an opinion in the interval [0.75, 1]). In reality, this is an effect due to the aggregation of the simulations: over the simulations, if in the agents converge to an opinion value of −1 and in to an opinion value of 1, on average the agents will be evenly distributed between these extreme intervals. The fact that the standard deviation is very high is a good indication of this type of phenomenon.

As the value of h increases, the agents tend to converge more and more towards higher values of polarization (and an increasingly smaller standard deviation in the distribution of opinions, Figure ), up to a certain level (above 50). Indeed, as shown by Mäs & Flache ( ), the increase in the value of h leads to the observation of a phenomenon of bi-polarization: the higher the value of h, the less agents interact with agents holding very different opinions and therefore try to convince them. Talking only to agents with similar opinions also means that the pool of arguments to which they are subjected is smaller, and agents mostly receive arguments that are consistent with their opinion, leading to a phenomenon of reinforcement of their opinion towards an extreme. From a certain level of h, the agents only exchange arguments with agents already having a very close opinion to theirs, which explains why several clusters may appear and therefore why the polarization value obtained is lower for h = 100 than for h = 50.

Influence of the number of arguments per individual
In order to assess the impact of the number of arguments per individual, we carried out an experiment using the same conditions as the previous experiment. The only difference is that we vary the number of arguments known by each agent ( in the previous experiment). We tested values for the number of arguments: , , , , , and (i.e. every agent knows all the arguments).

As shown in Figure and , in the case of a single argument, a perfect bi-polarization is observed, which is expected: each agent has a single argument and builds his/her opinion from it. As we have as many pro as con arguments, each agent has one chance out of two of having one of these two types of arguments and thus of having an opinion totally pro (opinion of 1) or totally con (opinion of −1) the option.
It is also expected that when all agents have all arguments, as all agents have the same criteria values, all agents will have the same preferred extensions and the same absolute values for them. With attacks, the chances of having unattacked arguments are quite low, so the chances of getting homogeneous extensions (only arguments with the same conclusion) are high. Therefore, very often, two extensions will appear with contradictory opinions (−1 and 1). Each agent will therefore have an equal chance of ending up with an opinion totally pro (opinion of 1) or totally con (opinion of −1) the option. Very often, but not for every simulation, the polarization value will be close to , which explains the average polarization value of .

With arguments, the global opinion tends to converge to a single value (polarization value close to , and opinions mostly between [−, −. [ and [ . , . ]). This result can be explained by the fact that with arguments, only configurations exist: three homogeneous arguments, or two arguments with the same conclusion and one with a different conclusion. The number of possible opinion values is therefore limited: −/+ ( arguments with the same conclusion in the preferred extension), −/+ . ( arguments with the same conclusion and argument with a different conclusion in the preferred extension), ( argument for and argument against in the preferred extension). In this context, the reception of an argument has a profound impact on the agent's opinion. Once the number of agents with more pro (or con) arguments becomes greater than the number of agents with more con (or pro) arguments, the overall opinion irremediably converges towards a single opinion.
In the case of arguments, the results are more contrasted, due to the greater number of possible combinations. The value of polarization is higher because the agents' opinions are more distributed in the opinion space and because a greater number of simulations converge towards a bi-polarization, which results in an increase in the value of the polarization. This phenomenon is further reinforced with arguments, where the number of simulations converging towards a bi-polarization increases.
Finally, in the case of arguments, as in the case of arguments, with attacks and many arguments, the chances of having unattacked arguments are quite low, so the chances of obtaining homogeneous extensions (only arguments with the same conclusion) are high. In this context, the simulations converge either to a single value of opinion (for most of them) or to a strong bi-polarization, which explains this low mean value of polarization.
Application to vegetarian diet diffusion

We used the results of the survey to generate the agents: we created from the survey a population of agents. For each of them, we defined the opinion option, i.e. omnivorous ( %), flexitarian ( . %), vegetarian ( . %) or vegan ( . %), from the answer to the question "Ideally, what diet would you like to have in the future?". These proportions illustrate the share of opinions in the sample and not the respondents' declared diet in practice, for which much lower proportions of individuals follow one of the vegetarian diets.
For each agent, we drew a set of initial arguments of pro and con types depending on the opinion option of the agent. The number of considered arguments per agent was set to 7. This number was chosen because research on human cognitive capabilities tends to show that humans can process and recall about 7 pieces of information (Miller ; Mäs & Flache ). We experimentally checked the coherence of this number by observing the number of arguments spontaneously provided by individuals asked to list pro and con arguments concerning the consumption of animal food products: to arguments were spontaneously given, with an average of , . The number of pro arguments per agent was defined randomly (uniform distribution) using the following intervals:
• omnivorous: [ - ]
• flexitarian: [ - ]
• vegetarian: [ - ]
• vegan: [ - ]
For each agent, the number of con arguments is 7 minus the number of pro arguments.
For the criterion importance values, we used the survey to determine the relative importance of the different criteria. More precisely, we used the answers to the question concerning the degree of agreement with arguments: as each argument is linked to a criterion, we used the answer (Likert scale, i.e. a value between and ) to give a value to the linked criterion. Figure shows the distribution of scores given for each criterion. Table shows the mean value for each criterion per category of people. We can observe that omnivores tend to give more value than others to health, anthropological and nutritional criteria, while vegetarians, and even more so vegans, tend to give more value than others to ethical and environmental criteria.

Figure shows the distribution of opinions at initialization. A first observation is that the stochasticity is low: despite the random drawing of the arguments, the standard deviation obtained for each group is very small.
Another observation is that the distribution of opinions appears to be consistent with the data. For example, . % of the respondents defined vegetarianism or veganism as the ideal diet, which is consistent with the percentage of . % for the agents with an opinion higher than . . Similarly, if we take all the agents with a negative opinion about vegetarianism (opinion below 0), we get . %, which is close to the % of people who stated that their ideal diet is omnivorous.

Figure : Initial opinions of the agents: x-axis, value of opinion; y-axis, percentage of agents per opinion value.
We used the homophily rule for partner selection with a value of for h, and we still used the same rule as in Section for the choice of the argument to be given (a random argument from one of the preferred extensions that maximise the absolute value of the opinion).

Evolution of the system without introducing new arguments
We studied the normal evolution of the system when no new arguments are introduced. As the simulation is stochastic, we performed repetitions of the simulation. During the , , simulation steps, we analysed the evolution of opinion and the polarization of opinions.
Figure shows the evolution of the agents' opinions over the , , simulation steps. The general opinion tends to converge towards a mean value of . , which shows that the vegetarian diet tends to be better accepted by the agent population. At the end of the , , simulation steps, this can be observed by comparing the distribution obtained in Figure and .

Impact of the number of agents to whom the new argument is given
The goal of this experiment is to analyse the impact of giving a new argument to different numbers of agents. We tested a general scenario where a new argument is introduced at the initialization of the simulation to a certain percentage of the agent population. The new argument is pro vegetarian diet and attacks arguments con vegetarian diet (randomly selected from arguments that concern the same criterion). The argument is given to randomly selected agents regardless of their opinion, with a random criterion concerned by the argument. As the simulation is stochastic, we ran replications for each configuration. For each configuration, we evaluated, during , simulation steps, the mean opinion, the value of polarization, and the number of agents that have the new argument in their argumentation graph.
A first observation is that for all proportion values, the polarization increases (Figure ). These values evolve in a similar way whatever the proportion value.
Another observation, which was expected, is that the introduction of the new argument has a significant impact on the initial opinion of the agents (p-value by Wilcoxon test lower than 1.0e−14 between all proportion values). Thus, the impact of the introduction of the argument is all the stronger as the number of agents concerned is higher. It can also be observed that while introducing an argument has a significant impact on the opinion results obtained after , steps of simulation (p-value with Wilcoxon test lower than 0.01 for all proportion combinations), introducing it to % of the agents or more has no significant impact. Indeed, the rate of % triggered the most significant change in the evolution of global opinion: when the rate increases, the evolution of global opinion tends to decrease. This can also be observed on the evolution curves: at the beginning of the simulation, the general opinion tends to increase in all cases, but when the argument is introduced to all the agents, the overall opinion tends to decrease rapidly (the higher the rate of introduction, the greater the drop) before increasing slowly again afterwards.
This increase at the beginning and this fall can be explained by the forgetting process: at the beginning of the simulation, the agents tend to keep the new argument, considered as recent. However, after a while, the agents who cannot mobilize this argument (mainly omnivores) tend to forget it, which impacts their opinion. This phenomenon is visible in Figure : for a proportion of 1.0, at the beginning of the simulation, all the agents have the argument that has been introduced, but after a certain number of simulation steps, the number of agents with this argument decreases rapidly until converging to a number close to agents. Also, for the case of a proportion of 0.5, the number of agents having the argument increases at the beginning: as the argument is recent, no agent forgets it, but as the agents who have this new argument in their preferred extension propagate it, the number of agents having the argument increases. Then, as in the case of the 1.0 proportion, once the argument becomes older, agents start to forget it.
This result could potentially explain part of the evolution of meat consumption in the context of the mad cow disease (or BSE, for Bovine Spongiform Encephalopathy) in the 's. Godfray et al. ( ) show how European meat consumption dramatically fell at the beginning of the 's when many consumers discovered the disease through considerable media coverage (Ashworth & Mainland ). The precise percentage of European consumers who received information conveying arguments against beef consumption due to health risks is unknown. However, we think it is a fair assumption to consider that this percentage was very high. By the mid 's, the crisis ended and meat consumption started to grow steadily again, but at a lower growth rate than before (Godfray et al. ).
In that sense, Figure shows that when all agents received the argument, the general opinion partly recovers after the initial introduction of the new argument. Under the assumption that opinion partly translates into behaviour (Bleda & Shackley ), our model seems able to reproduce some observed behaviour in diet change under a wide introduction of an argument.

This phenomenon also exists when the argument is introduced to a smaller number of agents, but in that case the argument is quickly disseminated by the agents whose opinion is favourable to it to other agents who are also potentially favourable to it (due to the homophily rule).
Impact of the profile of the agents to whom the new argument is given
The goal of this experiment is to analyse whether introducing the argument to certain agent profiles (vegan, vegetarian, etc.) impacts the general opinion of agents more or less. We tested a general scenario with a new argument introduced at the initialization of the simulation to % of the agent population ( agents). The new argument is pro vegetarian diet and attacks arguments con vegetarian diet (randomly selected from arguments that concern the same criterion). This scenario was tested with profiles of agents receiving the argument: no specific profile, i.e. the argument is given to randomly selected agents regardless of their opinion; agents with the lowest opinion values (i.e. omnivorous); agents with the most neutral values, i.e. opinion closest to 0 (flexitarian); and agents with the highest opinion values (i.e. vegetarian or vegan agents). We also tested the criteria concerned by the argument that were the most represented in the argumentation network: nutritional, health and ethical. As the simulation is stochastic, we ran replications for each configuration. For each configuration, we evaluated, during , simulation steps, the mean opinion, the value of polarization, and the number of agents that have the new argument in their argumentation graph.

Table : For each profile of agents receiving the argument and for each criterion concerned by the argument, mean value of opinion at step and , and number of agents still having the argument in their argumentation network after , simulation steps. Mean value over the replications (with the standard deviation).

An initial observation is that regardless of the audience that receives the argument and the criterion to which it relates, the introduction of the argument always allows a significant increase in the value of the initial and final opinions (p-value with Wilcoxon's test lower than 0.001 for all combinations of profiles and criteria). An interesting point is that for some of the combinations, the introduction of the new argument allowed a more rapid evolution of the general opinion than without it. This means that not only did the introduction of an argument have an immediate effect (higher value of opinion in the initial state), but having this new argument made it possible to convince other agents more quickly.
Another phenomenon is that whatever the profile of the agent receiving the argument and the criterion concerned, the number of agents having it in their argumentation graph evolves in a similar way: as observed in Section . , initially, the argument spreads and therefore the number of agents having it in their possession increases. Then, the agents who do not use the argument start to forget the argument and the number of agents having this argument decreases until a level lower than the initial number of agents having the argument.
Finally, a general observation is that the introduction of the new argument, whatever the profile of the agents or the criterion concerned, has no real impact on polarization. After , simulation steps, all combinations of parameters converge to a polarization value of 0.48.

Concerning the impact of the criterion concerned and of the profile of agents receiving the argument, results show that depending on the profile concerned, the criterion does not have the same impact. Indeed, the impact (in terms of opinion value) of an argument diffused to con agents is slightly greater when it concerns the nutritional criterion, whereas the impact is significantly greater for pro agents when it concerns the ethical criterion. Among the three agent profiles (neutral, pro, con), it is when the argument is given to neutral agents that it has the most impact. This public is hesitant about its attitude towards vegetarian diets and can therefore more easily be convinced to switch to a stricter vegetarian diet.
To interpret these observations, it is important to stress that the population was initialized based on the actual survey results. In the case of omnivores, for instance, the value of the ethical criterion is low for most of the agents, while it is very high for most vegetarian/vegan agents. In reality, omnivores are probably protected from ethical attacks as they somehow own a "normative argument": their behaviour is in line with the social norm. Such an argument is not present in the database of arguments for the very reason that the benefit of social norms for individuals is to avoid losing time and energy wondering about the legitimacy of a behaviour and stating its existence (Epstein ). In a similar fashion, many nutritional arguments are against vegan diets, claiming the nutritional balance is questionable. Yet the nutritional criterion has among the highest importance values for omnivorous agents, which leads nutritional arguments to destabilize the argumentation system for them.

Important differences can also be observed for the evolution of the number of agents having the argument in their argumentation graph. It is when the argument is not given to any particular agent profile (no profile) that the argument propagates best at the beginning of the simulation: indeed, the homophily rule means that agents mostly exchange arguments with like-minded agents. Therefore, when the argument is given to agents with a similar profile (i.e. close values of opinion), the argument circulates less among other types of agents. The fact that the argument circulates less when it is given to pro agents can be explained by the fact that the argument attacks con arguments but is not attacked itself: it therefore often ends up in the preferred extensions of the agents having it. But this extension is often composed of a larger number of arguments in the case of pro agents, as they often have a set of pro arguments that are not attacked. This is not the case for con agents, who often see their con arguments attacked by this new argument and therefore have fewer arguments in their preferred extension. This situation gives a better chance of passing on this argument, which explains its better diffusion.

Conclusion and Perspectives
The paper presented a generic opinion dynamics model, implemented with the GAMA platform, based on the use of formal argumentation. The use of the model was illustrated through an application about the diffusion of a favourable opinion of vegetarian diets. The experiments carried out show the possibilities offered by the framework. We plan to go further in the analysis of the model, in particular in terms of analysis of the steady states. To that end, we intend to draw on the work of Camargo ( ), which proposes methods for analysing the steady state of the model proposed by Banisch & Olbrich ( ).
Like the studies of Gabbriellini & Torroni ( ) and Butler et al. ( a), this study contributes to bridging formal argumentation and agent-based models of opinion dynamics. Our goal for the future is to continue to strengthen this bridge. Thus, we plan to enrich the way arguments are evaluated. In the current version, the evaluation of arguments depends on the criteria concerned by the argument and on the importance of these criteria for the agent. Other factors can impact the perception of an argument, among them the source of the argument (Pornpitakpan ). As an example, the profusion of fake news from dubious sources can impact people differently. Our model should soon be able to take this difference of perception into account. We also plan to use an approach similar to Banisch & Olbrich ( ) to take into account different attitudes on different issues. For example, for the dissemination of the vegetarian diet, instead of considering a single issue on vegetarian diets, we could imagine having four different issues related to different types of diets (omnivorous, flexitarian, vegetarian, vegan) with a specific subset of arguments for each. Indeed, some arguments may only concern one specific diet and not the others.
Another enrichment that we plan to add is a mechanism to enable the evolution of the criterion importance values for the agents. Indeed, these values are not fixed for life but can evolve after a particular event and under the influence of others. We also plan to enrich the argument exchange protocol. In that sense, we could adapt the type of arguments exchanged depending on the opinions held by both parties. For example, a flexitarian would give a pro-vegetarian argument to an omnivore, but would give a pro-omnivore argument to a vegetarian. In this context, it would also be interesting to go further by taking inspiration from the studies of Gabbriellini & Torroni ( ) and Butler et al. ( a) that integrate real exchanges of arguments, where an agent does not simply give an argument to another agent without the latter being able to respond. In this sense, the notion of trust introduced by Gabbriellini & Torroni ( ) could be worth integrating.

A last perspective concerns the link between the model and the BEN (Behaviour with Emotions and Norms) agent architecture (Taillandier et al. ; Bourgais et al. , ). Indeed, in addition to the BDI (Belief Desire Intention) reasoning engine, the BEN architecture introduces numerous concepts that could be interesting for our work, such as the personality of agents based on the classic OCEAN model (McCrae & John ) and the social relations between agents evaluated according to five dimensions (liking, dominance, solidarity, familiarity and trust).

For the application case of the diffusion of the vegetarian diet, we plan to take advantage of the data collected to deepen the realism of the generated population of agents (criterion importance, social networks, initial arguments, etc.), in particular by using a population generation tool such as GEN* that is already integrated into GAMA (Chapuis et al. , ). This step will allow further testing and validation of the model in a wide range of scenarios. We also plan to take advantage of existing data to better characterize the temporal aspect of the model. Indeed, for the time being, we have chosen to use an abstract simulation step: a simulation step corresponds to an exchange of arguments and is not related to real time. It could be interesting to use the collected data to establish a link between the occurrence of an argument exchange and real time. First, such a link could allow a better characterization of the temporal evolution of the general opinion on vegetarian diets; second, it could allow validating the model by comparing the simulation results with the real data available on this subject.

Model Documentation
All the source codes of the model and experiments presented (with the data and parameters used for the experiments) are available on OpenABM (Taillandier et al. ).