Introduction

Historically, societies moving towards higher income per capita have increased their consumption of animal products, a pattern known as the nutrition transition model (Popkin 1993). This trend holds worldwide and shows little variation between countries and cultures (Sans & Combris 2015). Currently, billions of people are moving towards higher animal product consumption, especially as the economies of China and India grow. However, in many developed countries, a potential new trend is emerging towards a second diet transition, with the stabilization and/or decline of animal product consumption (Vranken et al. 2014). Many factors may influence this process, such as animal welfare, the negative environmental impacts of livestock production, and health concerns (Godfray et al. 2018; Poore & Nemecek 2018). One hypothesis to explain such a trend is the diffusion and wider adoption of vegetarian diets, ranging from flexitarianism (or semi-vegetarianism) to veganism (Beardsworth & Keil 1991). Choosing a vegetarian diet usually stems from ethical, environmental and/or health concerns (Ruby 2012). By reducing the share of animal products consumed per capita, such a transition towards more plant-based diets would reduce environmental harm, health risks and animal suffering. This transition towards more sustainable diets is therefore desirable, but many hindrances exist at the individual level (Stoll-Kleemann & Schmidt 2017), such as lack of awareness about these diets (Macdiarmid et al. 2016) and health concerns (Herzog 2011).

We focus on the role of arguments in changing people’s opinions on diets by raising awareness of animal product consumption. While a recent study (Scalco et al. 2019) modelled the influence of colleagues and household members on meat consumption, the relation between argument acquisition at the individual level and opinion diffusion has never been explored for vegetarian diets. Our assumption is that the diffusion of ethical, health and environmental arguments in favour of such diets probably fuels the vegetarian diet adoption process.

Agent-based modelling is a classical approach to studying opinion dynamics, as it takes into account the heterogeneity of actors and the impact of interactions between them. Among existing approaches, the most popular one, known as the bounded confidence model, uses a numerical value to represent the opinion towards an option (Deffuant et al. 2000; Hegselmann & Krause 2002). The opinion of each agent is updated by averaging a set of agent opinions. Typically, in the classic bounded confidence model, when two agents with respective opinions \(x\) and \(x'\) meet, they adjust their opinions on the condition that their difference of opinions is smaller in magnitude than a threshold \(d\) (i.e., the opinions of the agents are modified if \(|x - x'| < d\)):

\[\begin{split} x = x + \mu \times (x' - x)\\ x' = x' + \mu \times (x - x') \end{split}\] \[(1)\]
with \(\mu\) the parameter of speed of convergence.
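
As an illustration, the following minimal Python sketch implements this update rule (the values of \(d\) and \(\mu\) are arbitrary examples):

```python
def bounded_confidence_step(x, x_prime, d=0.3, mu=0.5):
    """One pairwise interaction of the bounded confidence model
    (Equation 1): opinions move towards each other only when they
    differ by less than the confidence threshold d."""
    if abs(x - x_prime) < d:
        # both updates use the opinions held before the interaction
        x, x_prime = x + mu * (x_prime - x), x_prime + mu * (x - x_prime)
    return x, x_prime

print(bounded_confidence_step(0.2, 0.4))   # -> (0.3, 0.3): convergence
print(bounded_confidence_step(-0.5, 0.5))  # -> (-0.5, 0.5): no interaction
```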

In the past fifteen years, many studies have proposed to enrich this generic model, for example by taking into account fixed uncertainties, by integrating multi-dimensional opinion dynamics (Lorenz 2003; Urbig & Malitz 2005), by studying the behaviour of the model when adding extremists (Mathias et al. 2016) or contrasting effects (Huet et al. 2008; Jager & Amblard 2005).

These models are very relevant to the study of social influence. However, most of them remain theoretical, and only a few have been applied to real case studies and validated against data (Flache et al. 2017). Another drawback is the difficulty of understanding the inner motivation underlying the change in an agent’s opinion. Indeed, as the opinion is usually condensed into a single numerical value, the reasons why an agent has changed his/her opinion remain unknown.

To integrate the inner motivation underlying change, a relevant framework is the argumentation model (Besnard & Hunter 2008). Argumentation deals with situations where information contains contradictions because it originates from several sources or corresponds to several points of view that possibly have different priorities. It is a reasoning model based on the construction and evaluation of interacting arguments. It has been formalized both in philosophy and in computer science (Rescher 1997) and applied to various domains including non-monotonic reasoning (Dung 1995a), decision making (Thomopoulos 2018) and negotiation (Kraus et al. 1998). The argumentation framework introduced in Dung (1995a) consists of a set of arguments and binary relations expressing conflicts among arguments. An argument gives a reason for believing a claim, or for doing an action. Historically, the typical field of application of argumentation in computer science was the legal domain (Prakken & Sartor 2015). More recently, several studies proved its relevance in social-related concerns, medicine, food systems, supply chains, policies and controversies, especially for decision-making purposes (Thomopoulos 2018).

Mäs & Flache (2013) proposed a different theory to represent argument exchanges. Their model, inspired by the earlier Persuasive Argument Theory (PAT), is based on the Argument-Communication Theory of Bi-polarization (ACTB), which adds the communication of arguments to existing models. In this model, arguments are abstracted as a numerical value between -1 (con argument) and 1 (pro argument). The agent’s opinion is the average value of the arguments that the agent considers relevant. Agents disregard pieces of information not communicated in recent interactions (and consider them as no longer relevant). An experiment showed that the model can reproduce bi-polarization (i.e., the development of increasingly antagonistic groups; Esteban & Ray 1994) without explicitly representing negative influence. A recent extension of this model was proposed in Banisch & Olbrich (2021) to take into account different issues on which opinions can be formed (each issue is linked to a subset of arguments).

Another work using explicit arguments is Stefanelli & Seidl (2017). They collected empirical data through questionnaires. These data play a very important role, as each agent has a set of different types of arguments and forms an opinion according to the valence of the arguments and the importance given to them. Changes in opinion happen through interaction with other agents in the social network and through comparison of argument values. There is, therefore, no explicit exchange of arguments as such in this model: the interaction results in the adaptation (or not) of the agents’ argument scales, i.e., the valence and importance that the agent gives to a type (benefit, risk or process) of arguments. The adaptation of the agent’s argument scale is computed according to the position of each argument in its own continuum of social judgment.

Another approach, by Wolf et al. (2015), does not directly use arguments but the closely related concept of "need". This model about electric cars is also based on empirical data: each agent assigns a weight to each identified need (e.g., safety, comfort, costs) for each possible action (e.g., using an electric vehicle). These weights change during the simulation through interactions with other agents.

Although these models show interesting results, they are silent about argumentative reasoning and do not explicitly formalize the tensions between arguments. In this context, Friedkin et al. (2016) deal with statements rather than arguments and introduce the notion of logical constraints associated with statements. This means that if an agent believes a specific statement to be true, this belief automatically impacts his/her beliefs regarding other statements linked to it.

Regarding argumentation, several studies have proposed to use the system introduced by Dung (1995a) in opinion dynamics models. For example, in Gabbriellini & Torroni (2014), all agents reason from the same set of arguments and exchange attacks between arguments. The exchange of attacks is carried out during a dialogue phase. During this phase, an agent who is about to receive an attack he/she disagrees with can either accept this new attack (if the agent formulating the attack is trustworthy, which is a stochastic process), reply with a counter-attack, or end the exchange. If the agent decides to reply, the agent who proposed the first attack can in turn reply, and so on until an agent accepts an attack or ends the dialogue. This model provides a very interesting basis for integrating Dung’s argumentation system into a model of opinion diffusion. Nevertheless, it is based on the hypothesis of a common set of arguments shared by all agents. Moreover, unlike Mäs & Flache (2013), what is exchanged here are attacks rather than arguments, which can be questioned. Finally, an opinion in Gabbriellini & Torroni (2014)’s model is an argumentation framework: the model is thus a discrete opinion dynamics model, with no numeric update, which differs from our approach.

The link between arguments and opinions was explored in Villata et al. (2017) through an empirical study of emotions. Explanation and reasoning theories have also been proposed in cognitive psychology (Williams & Lombrozo 2010).

A recent study worth mentioning (Butler et al. 2019b) focuses on collective decision-making processes and proposes to combine a deliberative process using Dung’s system of argumentation with a process of interpersonal influence (Deffuant et al. 2000). In this work, each argument is modelled by a real number between \(-1\) and \(1\) representing the support of the argument for a principle (e.g., "protect the environment"). Each agent has an opinion about the modelled principle, and this opinion evolves through a group deliberative process during which agents exchange arguments and influence each other directly through pairwise interactions.

A last study using Dung’s system of argumentation is the model proposed by Butler et al. (2019a). Similarly to Butler et al. (2019b), this model combines dyadic (pair-wise) interactions with collective deliberation. One of its major contributions is the introduction of the notion of argumentative epistemic vigilance, i.e., the possibility for agents to reject an argument in case of "message-source" discrepancy. Indeed, when an agent receives an argument from another agent, he/she can invalidate it either by asserting the existence of an argument that attacks it, or by pointing out that the received argument is not consistent with the opinion of the sender.

Our proposal is in line with the studies of Butler et al. (2019a, 2019b), except that we focus on the "daily" exchange of arguments, using a general process of opinion evolution close to that of Mäs & Flache (2013), and we do not investigate group deliberation. One originality of our paper is to go further by eliciting the content of arguments. To do so, we use the system introduced by Dung (1995a) as a basis and enrich it with detailed descriptions of the arguments. Finally, as in Stefanelli & Seidl (2017), the collection of empirical data plays an important role in the design of the model.

Section 2 presents the empirical approach we start from. Section 3 describes the generic model that we propose. Section 4 explores the model behaviour relating to the impact of the attacks in the argumentation graph and to the impact of the number of arguments known by the agents. Section 5 presents the application of this model to study the evolution of the vegetarian diet. Finally, Section 6 concludes and presents some perspectives of this work.

Empirical Approach

In this section, the first steps to build well-informed and populated models are instantiated for the case of dietary change towards vegetarian diets. They are guided by two questions: What knowledge does the literature provide about the case? What data should be collected in order to fuel the model scenarios?

Literature overview

Changes in individual behaviours have been extensively studied in the case of addictions (Prochaska et al. 1992). Results tend to demonstrate that behavioural changes possibly follow a series of stages. However, in the case of food choices, changes in dietary patterns are quite specific and their stages are less well identified (Povey et al. 1999). Food choices are complex, dynamic, and change over the course of a person’s life. They are determined by a wide variety of factors, which Vabo & Hansen (2014) address in their review. Three fundamental groups stand out almost systematically: factors related to the characteristics of the food (organoleptic characteristics, nutritional content, function, etc.), factors related to the consumer (physiological and psychological aspects) and environmental factors (economic, cultural and social context, etc.). These factors have a double influence on food choices: they build the food preferences of individuals, particularly during childhood, and they influence each choice (through price, health, practicality, sensory appeal, mood, etc.).

Data acquisition

In order to gain insight into the role of arguments in following a vegetarian diet, we conducted a survey and built an argument database. Both are published in Salliou et al. (2019) and described hereafter.

First, 1714 French citizens were surveyed about their actual diet and asked to express, on a five-point Likert scale, their agreement with 16 key arguments about animal product consumption. These 16 arguments were extracted from the participatory online platform Kialo, which allows users to co-construct argument hierarchies on any topic. We considered these arguments central, as they are the main, first-degree arguments of a hierarchy of more than 2,000 arguments expressed by over 1,400 participants on the topic "humans should stop eating meat" (Kialo 2021). The analysis of the survey reveals that 40% of respondents would ideally have a lower meat consumption than their current diet. This finding backs the assumption that the conditions for a dietary change towards more plant-based diets are in place, which supports the objective of the simulation.

Secondly, we constructed a database of 145 arguments about vegetarian diets obtained from Google searches. An analysis is provided in Salliou & Thomopoulos (2018). Our sources of arguments are newspapers, grey literature and the top ten Google results for three queries ("vegetarian diet"; "vegan diet"; "vegetalism argument"). The latter added popular science papers, web-media articles and blog posts to the pool. We read each source thoroughly and extracted all arguments as expressed by their authors. For each argument we attributed a criterion ("Nutritional"; "Economic"; "Environmental"; "Anthropological"; "Ethical"; "Health" or "Social") and noted the stakeholder expressing it ("Journalist"; "Scientist"; "Philosopher"; "Blogger", etc.). We also indicated for each argument whether it was pro (+) or con (-) vegetarian diets (see Table 1 for examples of arguments). From this argument base we built an argumentation network (Dung 1995b) by establishing "attack" relationships between arguments (Figure 1). An attack happens when an argument challenges another argument. For example, the argument "a vegan diet is healthy" is attacked by the argument "vegans have B12 vitamin deficiency". As arguments rarely state explicitly which arguments they attack, the attacks were elicited by us. We did not check whether each attack is legitimate. Arguments, connected by these "attack" relationships, form an argumentation network.

Table 1: Example of arguments collected towards vegetarian option (extracted from Salliou & Thomopoulos 2018)
Id Type Statement Rationale Criterion Actor Source type
1 - Vegan diet is deficient in B12 vitamin Vegetal proteins do not contain B12 vitamin Nutritional Journalist Newspaper
15 - Plant proteins trigger allergies Plant-based foods are more often allergenic Nutritional Innovation cluster Powerpoint
23 + Vegetarian diet is good for health Diabetes, cancer and coronary risks are reduced Health Scientists Scientific paper
56 + Stop eating animals does not mean animal extinction Deforestation for the cultivation of animal feed provokes species extinctions Environmental Blogger pro-vegan Blog post
59 + Animals suffer when eaten, not plants A nervous system is needed to suffer, which plants do not have Ethical Blogger pro-vegan Blog post

Generic Model

Main concepts

The idea behind this model is to explicitly represent agents’ own mental deliberation process from arguments to an opinion, through the use of the argumentation framework. We use the argumentation framework of Dung (1995a) (Definition 1), complemented with a structured description of arguments extending those introduced in Bourguet et al. (2013) and Thomopoulos et al. (2018) (Definition 2).

Definition 1. Dung’s argumentation graph. An argumentation graph is a pair \((\mathcal{A},\mathcal{R})\) where \(\mathcal{A}\) is a set of arguments and \(\mathcal{R}\subseteq \mathcal{A}\times \mathcal{A}\) is an attack relation. An argument \(a\) attacks an argument \(a'\) if and only if \((a,a') \in \mathcal{R}\).
Definition 2. Argument. We define an argument by a tuple \(a = (I;O;T;S;R;C;A;Ts)\), with:
  • I: the identifier of the argument;
  • O: the option that is concerned by the argument;
  • T: the type of the argument: pro, con or neutral towards the option;
  • S: the statement of the argument, i.e., its conclusion;
  • R: the rationale underlying the argument, i.e., its hypothesis;
  • C: the importance of each criterion (e.g., "Nutritional", "Economic", "Environmental") on which the argument relies;
  • A: the agent who proposes the argument;
  • Ts: the type of source the argument comes from.
Example 1. An example of argument for the vegetarian diet context is ("1", "adoption of the vegetarian diet", "con", "Vegan diet is deficient in B12 vitamin", "Vegetable proteins do not contain B12 vitamin", "criterion ‘Nutritional’ with an importance of 1.0", "journalist of ‘Canard Enchainé’", "Newspaper").

We thus consider that each agent is characterized by a set of attributes:

  • argumentation graph: a directed graph that represents a Dung’s argumentation system. Each node is an argument, and each edge represents an attack from one argument to another. The weight of an edge represents the strength of the attack for the agent. The interested reader can refer to Yun et al. (2018) for different ways to define attacks.
  • criterion importance: for each criterion that arguments rely on, a score (numerical value between 0 and 1) represents the importance of this criterion for the agent. As opinions are formed from a cognitive and an affective part (Bergman 1998), criteria are used to evaluate the affective preference of arguments.
  • opinion: a numerical value that corresponds to the opinion of the agent. A value higher than 0 means that the agent is in favour of the option, a value lower than 0 means that the agent is against the option, and if the value is 0, the agent is neutral towards the option.
  • behaviour: a nominal value that corresponds to the behaviour resulting from the agent’s opinion. Examples of possible values in the food diet application are omnivorous, flexitarian, vegetarian or vegan. There is thus a mapping between the opinions, defined on a numerical domain, and the set of behaviours, which label predefined consumer profiles.

We define the notion of strength of an argument for an agent.

Definition 3. Argument strength. Let us consider an agent \(j\), the strength of an argument \(a\) is defined as follows:
\[ strength(j, a) = \sum_{c \in CRIT} j_c \times a_c\] \[(2)\]
with \(CRIT\), the set of criteria, \(a_c\) the importance of the criterion \(c\) for the argument \(a\) (see Definition 2), and \(j_c\) the importance of \(c\) for the agent \(j\).

From the notion of strength, we define the notion of value for a set of arguments.

Definition 4. Value of a set of arguments.
Let us consider a set of arguments \(A\) for an agent \(j\), the value of this set of arguments is computed as follows:
\[value(j, A) = \frac{\sum\limits_{a \in A} strength(j, a) \times type(a)}{\sum\limits_{a \in A} strength(j, a)} \] \[(3)\]
with: \(type(a) = \left\{ \begin{array}{ll} -1 & \mbox{if type of a = con} \\ 0 & \mbox{if type of a = neutral} \\ 1 & \mbox{if type of a = pro} \\ \end{array} \right.\)
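
The following sketch illustrates Equations 2 and 3 in Python; the criterion names and weights are illustrative and not taken from the survey data:

```python
def strength(agent_crit, arg_crit):
    """Equation 2: dot product of agent and argument criterion importances."""
    return sum(agent_crit.get(c, 0.0) * w for c, w in arg_crit.items())

def value(agent_crit, arguments):
    """Equation 3: strength-weighted mean of argument types in [-1, 1]."""
    total = sum(strength(agent_crit, a["criteria"]) for a in arguments)
    if total == 0:
        return 0.0
    return sum(strength(agent_crit, a["criteria"]) * a["type"]
               for a in arguments) / total

agent = {"Nutritional": 0.8, "Ethical": 0.4}     # illustrative weights
args = [
    {"type": -1, "criteria": {"Nutritional": 1.0}},  # con argument
    {"type": 1,  "criteria": {"Ethical": 1.0}},      # pro argument
]
print(value(agent, args))  # (0.8*(-1) + 0.4*1) / (0.8 + 0.4) = -0.33...
```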

Using the notion of strength, we also define the notion of simplified argument graph:

Definition 5. Simplified argumentation graph. Let us consider an agent \(j\), \((\mathcal{A},\mathcal{R})\) an argumentation graph and \((a,a') \in \mathcal{R}\). The simplified argumentation graph \((\mathcal{A},\mathcal{R}')\) obtained from \((\mathcal{A},\mathcal{R})\) is defined by: \((a,a') \in \mathcal{R}'\) if and only if:
  • \((a,a') \in \mathcal{R}\) and
  • if \((a',a) \in \mathcal{R}\) then \(strength(j, a) \geq strength(j, a')\).

This means that if an argument \(a\) attacks an argument \(a'\) and if \(a'\) attacks \(a\), only the attack that has for origin the argument with the highest strength is kept in the simplified graph. If the arguments have the same strength, both attacks are kept.
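
A possible implementation of this simplification, assuming attacks are stored as (attacker, target) pairs and strengths have been computed with Equation 2:

```python
def simplify(attacks, strength_of):
    """Definition 5: for mutual attacks, keep only the attack coming
    from the stronger argument (both are kept on a tie); one-way
    attacks are kept unchanged.

    attacks: set of (attacker, target) pairs
    strength_of: dict mapping argument id -> strength for this agent
    """
    return {(a, b) for (a, b) in attacks
            if (b, a) not in attacks or strength_of[a] >= strength_of[b]}

attacks = {("a1", "a2"), ("a2", "a1"), ("a3", "a1")}
strengths = {"a1": 0.9, "a2": 0.5, "a3": 0.2}
print(simplify(attacks, strengths))  # {('a1', 'a2'), ('a3', 'a1')}
```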

Finally, we define the notion of preferred extension.

Definition 6. Preferred extension. Let an argumentation system \((\mathcal{A},\mathcal{R})\) and \(B \subseteq \mathcal{A}\). Then:
  • \(B\) is conflict-free if and only if \(\not \exists a_i, a_j \in B\) such that \((a_i, a_j) \in \mathcal{R}\);
  • \(B\) defends an argument \(a_i \in B\) if and only if for each argument \(a_j \in \mathcal{A}\), if \((a_j, a_i) \in \mathcal{R}\), then \(\exists a_k \in B\) such that \((a_k, a_j) \in \mathcal{R}\);
  • a conflict-free set \(B\) of arguments is admissible if and only if \(B\) defends all its elements.
A preferred extension is a maximal (with respect to set inclusion) admissible set of arguments.
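
For the small argumentation graphs held by individual agents, preferred extensions can be enumerated naively, as in the sketch below; note that the actual implementation (see the Implementation section) delegates this computation to the JArgSemSAT library:

```python
from itertools import combinations

def preferred_extensions(arguments, attacks):
    """Naive enumeration of the preferred extensions (Definition 6).
    Exponential in the number of arguments: only suitable for the
    small per-agent graphs used here."""
    def conflict_free(s):
        return not any((a, b) in attacks for a in s for b in s)

    def defends(s, a):
        # every attacker of a must be attacked by some member of s
        return all(any((c, b) in attacks for c in s)
                   for b in arguments if (b, a) in attacks)

    admissible = [set(s)
                  for r in range(len(arguments) + 1)
                  for s in combinations(arguments, r)
                  if conflict_free(s) and all(defends(set(s), a) for a in s)]
    # preferred extensions = maximal admissible sets w.r.t. set inclusion
    return [s for s in admissible if not any(s < t for t in admissible)]

args = ["a1", "a2", "a3"]
attacks = {("a1", "a2"), ("a2", "a1"), ("a2", "a3")}
print(preferred_extensions(args, attacks))  # [{'a2'}, {'a1', 'a3'}] (up to ordering)
```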

Dynamics

We chose to use the same general model as the one proposed in Mäs & Flache (2013). A simulation step corresponds to the exchange of an argument between two agents, i.e., an agent gives one of his/her arguments to another agent. When an agent learns a new argument, the oldest argument is removed from his/her argumentation graph. This forgetting process, already present in Mäs & Flache (2013), was introduced to take into account the limitations of human cognition and memory. We also considered that using an argument, i.e., giving it to another agent, leads the agent to remember it: the given argument is then considered as the agent’s most recent argument. Similarly, an agent who receives an argument he/she already has will not add it again to his/her argumentation graph, but will consider this argument as the most recent among his/her arguments. The effect of this mechanism is that some arguments may be forgotten by the entire population of agents. Thus, for example, if all agents tend to converge towards the same opinion, most of the arguments against that opinion will be forgotten.
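
A minimal sketch of this recency-based memory, assuming arguments are stored in a list ordered from oldest to most recent (the class and method names are illustrative):

```python
class ArgumentMemory:
    """Recency-ordered argument store with the forgetting rule of
    Mäs & Flache (2013): learning a new argument evicts the oldest
    one, and giving or re-receiving an argument refreshes it."""
    def __init__(self, arguments, capacity=10):
        self.capacity = capacity
        self.arguments = list(arguments)  # index 0 = oldest

    def receive(self, arg):
        if arg in self.arguments:
            self.refresh(arg)          # already known: just refresh recency
        else:
            self.arguments.append(arg)
            if len(self.arguments) > self.capacity:
                self.arguments.pop(0)  # forget the oldest argument

    def refresh(self, arg):
        # used when the agent gives (or re-receives) an argument
        self.arguments.remove(arg)
        self.arguments.append(arg)

mem = ArgumentMemory(["a1", "a2", "a3"], capacity=3)
mem.receive("a4")     # "a1" is forgotten
mem.refresh("a2")     # "a2" becomes the most recent
print(mem.arguments)  # ['a3', 'a4', 'a2']
```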

Concerning the choice of the agent with whom to exchange arguments, we used the same partner selection method as Mäs & Flache (2013). In each simulation step, an agent chosen randomly (uniform distribution) selects another agent. The probability that this second agent is chosen as the interaction partner depends on the similarity of opinion between the two agents.

Let \(i\) and \(j\) be two agents; the similarity between \(i\) and \(j\) is:

\[Similarity(i,j) = \frac{1}{2} (2 - |i.opinion - j.opinion|) \] \[(4)\]

The probability for an agent \(i\) to select \(j\) as a partner, with \(N\) the set of all possible partners, is:

\[Proba_{i}(j) = \frac{(Similarity(i,j))^h}{\sum\limits_{k \in N}{(Similarity(i,k))^h}} \] \[(5)\]
with \(h\), the strength of homophily.
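
A sketch of this partner selection rule (Equations 4 and 5), assuming opinions in \([-1, 1]\):

```python
import random

def similarity(op_i, op_j):
    """Equation 4: 1 for identical opinions, 0 for maximally distant ones."""
    return 0.5 * (2 - abs(op_i - op_j))

def pick_partner(i, others, opinions, h=10):
    """Equation 5: sample an interaction partner with probability
    proportional to similarity**h (h = strength of homophily;
    h = 0 reduces to a uniform choice)."""
    weights = [similarity(opinions[i], opinions[j]) ** h for j in others]
    return random.choices(others, weights=weights, k=1)[0]

opinions = {"i": 0.8, "j": 0.7, "k": -0.9}
print(pick_partner("i", ["j", "k"], opinions))  # "j" almost surely for h = 10
```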

For the choice of the argument to be given, our hypothesis is that an agent will give an argument that seems relevant to him/her and that allowed him/her to form his/her opinion. In other words, it means picking an argument belonging to the preferred extension maximizing the absolute value of the opinion as defined in Equation 3. The agent chooses a random argument from this set. Our proposal to randomly choose which argument to give to another agent is due to the dependence of this action on external factors not represented in the model, such as the course of the discussion between individuals, the profile of the interlocutor, etc. Other choices could have been made, such as giving the argument with the highest strength or the argument with the highest chance of convincing the other; we discuss this point in the perspectives of the article.

Note that in the case of an argumentation graph without attacks, there is only one preferred extension, composed of all the arguments considered by the agent. We are thus in the same case as the ACTB model, where the agent chooses an argument at random among all the arguments at his/her disposal. In the other cases, ranking several co-existing preferred extensions is known as the “ranking semantics” problem (Yun et al. 2020). We made the modelling choice that an agent chooses the preferred extension with the highest absolute value, which expresses the motivation to favour the most adamant view stemming from the extensions.

Once an agent receives a new argument (and at the initialization of the model), the agent deliberates using his/her argumentation graph to form his/her opinion. Contrary to Mercier & Sperber (2011), who state that individuals are strongly critical of any new argument challenging their own opinion, we assumed no such psychological reactance (Brehm 1966). The deliberation is composed of 3 steps (a sketch combining the previous code fragments follows the list):

  1. simplifying the argumentation graph according to the weights of the edges (see Section 3.5);
  2. computing the set of preferred extensions from the simplified argumentation graph (see Section 3.7);
  3. computing the opinion from the preferred extensions: for each extension, the agent computes its value using Equation 3, then returns the extension with the maximal absolute value. If several extensions have the same absolute value, then the agent randomly selects one of these extensions.
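
Combining the sketches given above (`strength`, `value`, `simplify` and `preferred_extensions`), the deliberation step could read as follows; this is a sketch under the data structures assumed in the previous fragments, not the actual GAMA implementation:

```python
import random

def deliberate(agent_crit, arguments, attacks):
    """The three deliberation steps, reusing the functions from the
    sketches above.

    arguments: dict id -> {"type": -1/0/1, "criteria": {...}}
    attacks:   set of (attacker id, target id) pairs
    """
    strengths = {i: strength(agent_crit, a["criteria"])
                 for i, a in arguments.items()}
    simplified = simplify(attacks, strengths)                       # step 1
    extensions = preferred_extensions(list(arguments), simplified)  # step 2
    # step 3: evaluate each extension (Equation 3) and keep the one
    # whose value is largest in absolute terms (ties broken at random)
    vals = [(value(agent_crit, [arguments[i] for i in e]), e)
            for e in extensions]
    best = max(abs(v) for v, _ in vals)
    return random.choice([(v, e) for v, e in vals if abs(v) == best])
```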

If we consider an argumentation graph with no attacks and assume that all criteria have the same importance for all agents, then we are in the exact context of the ACTB model, where all relevant arguments have the same persuasiveness (i.e., all arguments are equally weighted in the calculation of the opinion). In this case, the opinion of an agent \(j\) with a set of arguments \(A\) can be directly computed by:

\[opinion(j) = \frac{\sum\limits_{a \in A} type(a)}{Card(A)}\] \[(6)\]
with: \(type(a) = \left\{ \begin{array}{ll} -1 & \mbox{if type of a = con} \\ 0 & \mbox{if type of a = neutral} \\ 1 & \mbox{if type of a = pro} \\ \end{array} \right.\)

Convergence towards a steady state can only be achieved when no agent can change his/her opinion, no matter which arguments are exchanged. The definition of such a steady state depends on the strength of homophily \(h\). Indeed, the model relies on a stochastic choice of partners (see Equation 5): if \(h = 0\), all agents can exchange arguments with all other agents, even those with a very different opinion; if \(h > 0\), all agents can exchange arguments with all other agents unless their opinions are completely opposed (i.e., one agent has a \(-1\) opinion and the other a \(1\) opinion). Thus, in the first case, to be sure to obtain a stable state, all agents must have the same opinion (\(-1\) or \(1\)) and arguments of a homogeneous type (all pro or all con). In the case of \(h > 0\), the first condition can be relaxed: all agents must have arguments of a homogeneous type (all pro or all con), but their opinion can be either \(-1\) or \(1\). Indeed, in this case, agents only exchange arguments with agents who already have the same opinion as them, and the new arguments brought, in accordance with the opinion of both agents, have no impact on the result of the opinion calculation.

Implementation

The model was implemented with the GAMA platform (Taillandier et al. 2019). GAMA provides modellers with a dedicated modelling language that is easy to use and learn. It also allows them to naturally integrate GIS data and includes an extension dedicated to generating a spatialized and structured synthetic population (Chapuis et al. 2018), which is particularly interesting for building empirically grounded models. The main components of the model (arguments, argumentative agents, etc.) were implemented as a plugin for the GAMA platform. The interest of making a plugin is to facilitate the reuse of these elements in other models: a modeller wishing to use them just has to import the plugin and can then directly use all these functions. This is particularly interesting for non-trivial functions such as the computation of preferred extensions, which is based on the JArgSemSAT Java library (Cerutti et al. 2017). The plugin was designed to be as modular as possible, allowing modellers to customize all the existing functions (for example, the computation of the argument strength). It was developed under the GPL-3 licence and is available on GitHub (Github 2021). It can be directly downloaded and installed from GAMA 1.8.1 via the GAMA experimental p2 update site.

Model Exploration

In this section, we explore the impact of different parameters on the simulation results: the number of attacks in the global argumentation graph, the strength of homophily (\(h\)), and the number of arguments per agent. The parameter values used for the 3 experiments are given in Table 2.

Table 2: Parameter values used for the experiments
Experiment number of attacks h number of arguments per individual
Experiment 1: influence of argument attacks 0, 100, 200, 300, 500, 1000 10 10
Experiment 2: influence of homophily strength 300 0, 1, 5, 10, 50, 100 10
Experiment 3: influence of the number of arguments 300 10 1, 3, 7, 10, 30, 60

For this exploration, we carried out experiments using conditions close to those used in Mäs & Flache (2013) to study the dynamics of bi-polarization.

We simulated a population of 100 agents, homogeneous in the sense that criteria have the same importance for all agents. Each agent is initialized with a set of 10 arguments chosen at random among 60 arguments (30 pro and 30 con). At the beginning of the simulation, for each agent, an oldness value between 1 (recent) and 10 (old) is assigned to each argument: one argument has an oldness of 1, another of 2, another of 3, and so on up to 10. We also considered that 300 attacks link the arguments. The attacks are randomly generated between two arguments with different conclusions: we consider that a pro argument can only attack a con argument and vice versa. Since we have 30 pro and 30 con arguments, the maximum number of attacks is 1800.
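
A sketch of this initialization of the attack graph (argument identifiers are illustrative):

```python
import random

def random_attacks(n_attacks=300, n_pro=30, n_con=30):
    """Draw attacks uniformly among the directed pro->con and con->pro
    pairs (a pro argument can only attack a con one, and vice versa),
    out of at most 2 * 30 * 30 = 1800 possible attacks."""
    pros = [f"pro_{i}" for i in range(n_pro)]
    cons = [f"con_{i}" for i in range(n_con)]
    candidates = ([(p, c) for p in pros for c in cons] +
                  [(c, p) for p in pros for c in cons])
    return set(random.sample(candidates, n_attacks))

attacks = random_attacks()
print(len(attacks))  # 300 distinct directed attacks
```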

Concerning the strength of homophily, we set the value of \(h\) at 10 for our experiments.

We studied the change of agents’ opinions over 1,000,000 simulation steps. This number was chosen to maximize the chances of reaching a steady state. As the model is stochastic, we ran the simulation 100 times per parameter value.

In terms of outputs, we analysed the average distribution of opinions over the 100 repetitions, the number of stable states obtained, the average number of steps needed to reach such a stable state, and finally the evolution of the polarization of the agents’ opinions.

Concerning the polarization at time \(t\), we use the following equation to estimate it:

\[P_t = \frac{1}{|N|(|N|-1)}\sum_{\substack{i, j \in N \\ i \ne j}}{(d_{ij,t}- \gamma_t)^2} \] \[(7)\]
where \(N\) is the set of agents, \(d_{ij,t}\) the distance between the opinions of agent \(i\) and agent \(j\) at time \(t\) computed as \(d_{ij,t}= |opinion_{i,t} - opinion_{j,t}|\) and \(\gamma_t\) the mean opinion distance among all the agents at time \(t\).
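
This measure is the variance of the pairwise opinion distances; a sketch:

```python
def polarization(opinions):
    """Equation 7: variance of pairwise opinion distances.
    0 when all pairwise distances are equal (e.g., full consensus);
    maximal for two equally sized, maximally distant clusters."""
    n = len(opinions)
    dists = [abs(opinions[i] - opinions[j])
             for i in range(n) for j in range(n) if i != j]
    gamma = sum(dists) / len(dists)  # mean pairwise distance
    return sum((d - gamma) ** 2 for d in dists) / (n * (n - 1))

print(polarization([1.0, 1.0, 1.0, 1.0]))    # 0.0 (consensus)
print(polarization([-1.0, -1.0, 1.0, 1.0]))  # ~0.89 (bi-polarization)
```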

Influence of argument attacks

In order to evaluate the impact of considering attacks in the model, which are not taken into account in the ACTB model, we ran the model using the same conditions as presented in the previous section (same parameter values, same number of iterations, and same number of replications). The only difference is that we varied the number of attacks in the global argumentation graph, testing 6 values: 0, 100, 200, 300, 500, and 1000.

Figure 2 and Figure 3 show the results obtained for different numbers of attacks. As shown, the attacks between arguments have a strong impact on the result. Indeed, when no attacks are taken into account, no bi-polarization of opinion is visible and there is no convergence towards a unique opinion, whereas both are very marked as soon as the number of attacks reaches 300. This result can be explained by the fact that the larger the number of attacks, the smaller the number of arguments that the agent considers relevant, because attacked arguments that are not defended are not retained in his/her preferred extensions (see Section 3.7). In addition, the arguments that are relevant for the agent often support the same conclusion, leading to a polarization of the agent’s opinion. It can also be observed that for 300 attacks, the agents’ opinions tend to converge towards a bi-polarization, whereas when the number of attacks increases (500 or 1000), the agents’ opinions tend to converge towards a single value. In fact, the higher the number of attacks, the greater the chance of giving an argument that attacks the other agent’s arguments, and therefore the greater the chance that the other agent shifts towards an opinion close to that of the agent who gave the argument, which ultimately leads to a reinforcing effect as a consensus begins to emerge.

Influence of homophily strength

We tested the model with the following values for \(h\): 0, 1, 5, 10, 50, and 100. A value of 0 means that the agent receiving an argument is selected randomly using a uniform distribution.

As shown in Figure 4, when \(h = 0\), the polarization value quickly converges towards \(0\), which means that all agents converge towards the same opinion. We can see in Figure 5 that the opinions of agents converge towards the two extreme categories (\([-1, -0.75[\) and \([0.75, 1]\)), i.e., at the end of the simulation, either all agents have an opinion in \([-1, -0.75[\), or all agents have an opinion in \([0.75, 1]\). Indeed, as already mentioned in the previous experiment, a high number of attacks (300 in this experiment) means that the number of arguments that an agent considers relevant is low (attacked arguments that are not defended are not in the agent’s preferred extensions), so agents often have a rather polarized opinion. Since, when \(h=0\), agents give arguments to all other agents with the same probability, even to those with a very different opinion, the higher the number of agents sharing the same opinion, the faster they convince agents with a different opinion to converge towards theirs. This creates a reinforcing phenomenon leading to a fast convergence towards a uniform opinion (polarization = 0), which can be observed in the polarization chart. Note that in Figure 5, for \(h = 0\), the agents’ opinions seem to be bi-polarized (half of the agents with an opinion in the interval \([-1, -0.75[\) and the other half with an opinion in the interval \([0.75, 1]\)). In reality, this is an effect of aggregating the 100 simulations: if in 50 of them the agents converge to an opinion value of \(-1\) and in the other 50 to an opinion value of \(1\), on average the agents will be evenly distributed between these extreme intervals. The very high standard deviation is a good indication of this type of phenomenon.

As the value of \(h\) increases, the agents tend to converge towards higher values of polarization (and an increasingly smaller standard deviation in the distribution of opinions, Figure 5), up to a certain level (above \(50\)). Indeed, as shown by Mäs & Flache (2013), increasing the value of \(h\) leads to a phenomenon of bi-polarization: the higher the value of \(h\), the less agents interact with agents holding very different opinions and therefore try to convince them. Talking only to agents with similar opinions also means that the pool of arguments to which they are exposed is smaller, and agents mostly receive arguments consistent with their own opinion, leading to a reinforcement of their opinion towards an extreme. Above a certain level of \(h\), agents only exchange arguments with agents whose opinion is already very close to theirs, which explains why several clusters may appear and why the polarization value obtained is lower for \(h = 100\) than for \(h = 50\).

Influence of the number of arguments per individual

In order to assess the impact of the number of arguments per individual, we carried out an experiment using the same conditions as the previous one. The only difference is that we varied the number of arguments known by each agent (10 in the previous experiment). We tested 6 values for the number of arguments: 1, 3, 7, 10, 30, and 60 (i.e., every agent knows all the arguments).

Figure 6 and Figure 7 show that the number of arguments known by each agent has a strong impact on the result.

In the case of a single argument, a perfect bi-polarization is observed, which is expected: each agent has a single argument and builds his/her opinion from it. As there are as many pro as con arguments, each agent has a 1 in 2 chance of holding one of the two types of arguments and thus of having an opinion totally pro (opinion of 1) or totally con (opinion of -1) the option.

It is also expected that when all agents know all 60 arguments, since all agents have the same criteria values, they all have the same preferred extensions and the same absolute values for them. With 300 attacks, the chances of having unattacked arguments are quite low, so the chances of getting homogeneous extensions (only arguments with the same conclusion) are high. Therefore, very often, two extensions will appear with contradictory opinions (-1 and 1). Each agent will thus have an equal chance of ending up with a totally pro (opinion of 1) or totally con (opinion of -1) opinion of the option. Very often, but not in every simulation, the polarization value will be close to 1, which explains the average polarization value of 0.82.

With 3 arguments, the global opinion tends to converge to a single value (polarization value close to 0, and opinions mostly in \([-1,-0.75[\) and \([0.75, 1]\)). This result can be explained by the fact that with 3 arguments, only 2 configurations exist: three homogeneous arguments, or two arguments with the same conclusion and one with a different conclusion. The number of possible opinion values is therefore limited: -/+1 (3 arguments with the same conclusion in the preferred extension), -/+0.33 (2 arguments with the same conclusion and 1 argument with a different conclusion in the preferred extension), or 0 (1 argument pro and 1 argument con in the preferred extension). In this context, receiving an argument has a profound impact on the agent’s opinion. Once the number of agents with more pro (or con) arguments exceeds the number of agents with more con (or pro) arguments, the overall opinion irremediably converges towards a single opinion.

In the case of 7 arguments, the results are more contrasted, due to the greater number of possible combinations. The polarization value is higher because the agents’ opinions are more spread out in the opinion space and because a greater number of simulations converge towards a bi-polarization. This phenomenon is further reinforced with 10 arguments, where the number of simulations converging towards a bi-polarization increases.

Finally, the cases of 30 and 60 arguments are similar: with 300 attacks and many arguments, the chances of having unattacked arguments are quite low, so the chances of obtaining homogeneous extensions (only arguments with the same conclusion) are high. In this context, the simulations converge either to a single opinion value (most of them) or to a strong bi-polarization, which explains the low mean value of polarization.

Application to Vegetarian Diet Diffusion

The model presented above was instantiated in the context of vegetarian diet diffusion. Although the survey shows an important difference between opinion (answer to the question "Ideally, what diet would you like to have in the future?") and practice (answer to the question "What is your current diet?"), in this series of experiments we chose to focus on the evolution of the agents’ opinions and not on their practice. More particularly, we wanted to show to what extent the introduction of a new argument impacts the general opinion of agents.

For this series of experiments, we used the 145 arguments collected in Salliou et al. (2019).

We used the results of the survey to generate the agents: we created from the survey a population of 1714 agents. For each of them, we defined the opinion option, omnivorous (40%), flexitarian (49.1%), vegetarian (8.3%) or vegan (2.6%), from the answer to the question "Ideally, what diet would you like to have in the future?". These proportions reflect the share of opinions in the sample and not the declared diets in practice, for which the proportions of individuals following one of the vegetarian diets are much lower.

For each agent, we drew a set of initial arguments of each type (pro and con) depending on the opinion option of the agent. The number of arguments considered per agent was set to 7. This number was chosen because research on human cognitive capabilities tends to show that humans can process and recall about 7 pieces of information (Mäs & Flache 2013; Miller 1956). We experimentally checked the coherence of this number by observing the number of arguments spontaneously provided by 16 individuals asked to list pro and con arguments concerning the consumption of animal food products: 3 to 13 arguments were spontaneously given, with an average of 6.2. The number of con arguments per agent was defined randomly (uniform distribution) using the following intervals (a sketch of this allocation follows the list):

  • omnivorous: [5-7]
  • flexitarian: [3-4]
  • vegetarian: [1-2]
  • vegan: [0-1]

For each agent, the number of pro arguments is \(7\) minus the number of con arguments.
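
A sketch of this initial argument allocation; the intervals are those listed above, while the sizes of the pro and con pools are illustrative (the pro/con split of the 145-argument base is not detailed here):

```python
import random

# Intervals for the number of *con* arguments per profile (see above).
CON_INTERVALS = {"omnivorous": (5, 7), "flexitarian": (3, 4),
                 "vegetarian": (1, 2), "vegan": (0, 1)}

def initial_arguments(profile, pro_pool, con_pool, total=7):
    """Draw an agent's 7 initial arguments: the number of con arguments
    is uniform in the profile's interval, the rest are pro arguments."""
    lo, hi = CON_INTERVALS[profile]
    n_con = random.randint(lo, hi)
    return (random.sample(con_pool, n_con) +
            random.sample(pro_pool, total - n_con))

# Illustrative pools; the real base holds 145 arguments (Salliou et al. 2019).
pros = [f"pro_{i}" for i in range(70)]
cons = [f"con_{i}" for i in range(75)]
print(initial_arguments("omnivorous", pros, cons))  # mostly con arguments
```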

For the criterion importance values, we used the survey to determine the relative importance of the different criteria. More precisely, we used the answers to the questions concerning the degree of agreement with the arguments: as each argument is linked to a criterion, we used the answer (Likert scale, i.e., a value between 1 and 5) to give a value to the linked criterion. Figure 8 shows the distribution of scores given for each criterion. Table 3 shows the mean value of each criterion per category of people. We can observe that omnivores tend to give more value than others to the health, anthropological and nutritional criteria, while vegetarians, and even more so vegans, tend to give more value than others to the ethical and environmental criteria.

Table 3: Mean value for each criterion per category (Omnivorous, Flexitarian, Vegetarian, Vegan)
Criterion Omnivorous Flexitarian Vegetarian Vegan
Health 3.56 2.95 2.36 1.71
Ethical 2.64 3.36 4.03 4.54
Anthropological 3.92 3.47 2.68 1.59
Environmental 3.03 3.68 4.11 4.57
Nutritional 3.93 3.48 2.61 2.13

Figure 9 shows the distribution of opinions at initialization. A first observation is that the stochasticity is low: despite the random drawing of the arguments, the standard deviation obtained for each group is very small.

Another observation is that the distribution of opinions appears consistent with the data. For example, 10.9% of the respondents defined vegetarianism or veganism as their ideal diet, which is consistent with the 10.5% of agents having an opinion higher than 0.5. Similarly, if we take all the agents with a negative opinion of vegetarianism (opinion below 0), we get 44.8%, which is close to the 40% of people who stated that their ideal diet is omnivorous.

We used the homophily rule for partner selection with a value of 10 for \(h\), and we kept the same rule as in Section 4.8 for the choice of the argument to be given (a random argument from the preferred extension that maximises the absolute value of the opinion).

Evolution of the system without introducing new arguments

We studied the evolution of the system when no new arguments are introduced. As the simulation is stochastic, we performed 30 repetitions of the simulation. Over 2,000,000 simulation steps, we analysed the evolution of opinions and of their polarization.

Figure 11 shows the evolution of the agents’ opinions over the 2,000,000 simulation steps. The general opinion tends to converge towards a mean value of 0.1, which shows that the vegetarian diet tends to become better accepted by the agent population. Comparing the distributions in Figure 9 and Figure 10 at the end of the 2,000,000 simulation steps, the main evolution is the increase in the number of agents with an extremist opinion, especially pro-extremist opinions (\(opinion \geq 0.75\)); this can also be observed in Figure 12 with an increase of the polarization from \(0.2\) to \(0.45\). The increase in the number of extreme opinions could already be observed in the experiment presented in Section 4.10: when \(h = 10\), there is indeed a tendency towards polarization and the extremization of opinions.

Impact of the number of agents to whom the new argument is given

The goal of this experiment is to analyse the impact of giving a new argument to different numbers of agents. We tested a general scenario where a new argument is introduced, at the initialization of the simulation, to a certain percentage of the agent population. The new argument is pro vegetarian diet and attacks 5 con vegetarian diet arguments (randomly selected among the arguments that concern the same criterion). The argument is given to randomly selected agents regardless of their opinion, and the criterion concerned by the argument is chosen at random. As the simulation is stochastic, we ran 30 replications for each configuration. For each configuration, we evaluated, over 200,000 simulation steps, the mean opinion, the polarization value, and the number of agents having the new argument in their argumentation graph.

Table 4: For different percentages of agents receiving the argument at the simulation initialization: mean opinion at steps 0 and 200,000, final polarization, and percentage of agents still having the argument in their argumentation network after 200,000 simulation steps. Mean values over the 30 replications (standard deviation in parentheses).
% of informed agents init opinion final opinion Polarization final % of informed agents
0% -0.204 (0.005) 0.046 (0.035) 0.203 (0.004) 0.0% (0.0%)
10% -0.183 (0.006) 0.072 (0.028) 0.2 (0.004) 9.28% (3.1%)
20% -0.162 (0.008) 0.081 (0.035) 0.198 (0.004) 12.4% (2.5%)
50% -0.097 (0.017) 0.084 (0.041) 0.186 (0.005) 12.1% (3.4%)
100% 0.011 (0.033) 0.085 (0.038) 0.154 (0.005) 13.5% (2.3%)

Table 4 shows the results obtained after 200,000 simulation steps for different percentages of agents receiving the argument at the initialization of the simulation, Figure 13 the evolution of the general opinion for these values, Figure 14 the evolution of the polarization, and Figure 15 the evolution of the number of agents who have the new argument in their argumentation graph.

A first observation is that the polarization increases for all proportion values (Figure 14), and that it evolves in a similar way whatever the proportion.

Another observation, which was expected, is that the introduction of the new argument has a significant impact on the initial opinion of the agents (p-value by Wilcoxon test lower than \(1.0e{-}14\) between all proportion values): the impact of the introduction of the argument is all the stronger as the number of agents concerned increases. It can also be observed that while introducing the argument has a significant impact on the opinions obtained after 200,000 simulation steps (p-value by Wilcoxon test lower than \(0.01\) for all proportion combinations), increasing the proportion beyond 10% of the agents brings no significant additional impact. Indeed, the rate of 10% triggered the most significant change in the evolution of the global opinion: when the rate increases further, the evolution of the global opinion tends to decrease. This can also be observed on the evolution curves: at the beginning of the simulation, the general opinion tends to increase in all cases, but when the argument is introduced to all the agents, the overall opinion drops rapidly (the higher the rate of introduction, the greater the drop) before slowly increasing again afterwards.

This initial increase followed by a fall can be explained by the forgetting process: at the beginning of the simulation, the agents tend to keep the new argument, which is considered recent. However, after a while, the agents who could mobilize this argument (mainly omnivorous agents) tend to forget it, which impacts their opinion. This phenomenon is visible in Figure 15: for a proportion of \(1.0\), at the beginning of the simulation, all the agents have the introduced argument, but after a certain number of simulation steps, the number of agents with this argument decreases rapidly until converging to a number close to 230 agents. For a proportion of \(0.5\), the number of agents having the argument increases at the beginning: as the argument is recent, no agent forgets it, and as the agents who have this new argument in their preferred extension propagate it, the number of agents having the argument increases. Then, as in the case of the \(1.0\) proportion, once the argument becomes older, agents start to forget it.

This result could potentially explain part of the evolution of meat consumption in the context of mad cow disease (BSE, bovine spongiform encephalopathy) in the 1990s. Godfray et al. (2018) show how European meat consumption fell dramatically at the beginning of the 1990s, when many consumers discovered the disease through considerable media coverage (Ashworth & Mainland 1995). The precise percentage of European consumers who received information conveying arguments against beef consumption due to health risks is unknown, but we consider it a fair assumption that this percentage was very high. By the mid-1990s, the crisis had ended and meat consumption started to grow steadily again, but at a lower rate than before (Godfray et al. 2018). In that sense, Figure 13 shows that when all agents receive the argument, the general opinion partly recovers after the initial introduction of the new argument. Under the assumption that opinion partly translates into behaviour (Bleda & Shackley 2012), our model seems able to reproduce some observed behaviour in diet change following the wide introduction of an argument.

This phenomenon also exists when the argument is introduced to a smaller number of agents, but in that case the argument is quickly disseminated by the agents whose opinion is favourable to it to other agents who are also potentially favourable to it (due to the homophily rule).

Impact of the profile of the agents to whom the new argument is given

The goal of this experiment is to analyse whether introducing the argument to certain agent profiles (vegan, vegetarian, etc.) has a greater or lesser impact on the general opinion of agents. We tested a general scenario with a new argument introduced at the initialization of the simulation to 20% of the agent population (344 agents). The new argument is pro vegetarian diet and attacks 5 con vegetarian diet arguments (randomly selected among the arguments that concern the same criterion). This scenario was tested with 4 profiles of agents receiving the argument: no specific profile, i.e., the argument is given to 344 randomly selected agents regardless of their opinion; the 344 agents with the lowest opinion values (i.e., omnivorous); the 344 agents with the most neutral values, i.e., opinion closest to 0.0 (i.e., flexitarian); and the 344 agents with the highest opinion values (i.e., vegetarian or vegan agents). We also tested the 3 criteria concerned by the argument that were the most represented in the argumentation network: nutritional, health and ethical. As the simulation is stochastic, we ran 30 replications for each configuration. For each configuration, we evaluated, over 200,000 simulation steps, the mean opinion, the polarization value, and the number of agents having the new argument in their argumentation graph.

Table 5: For each profile of agents receiving the argument and for each criterion concerned by the argument: mean opinion at steps 0 and 200,000, final polarization, and percentage of agents still having the argument in their argumentation network after 200,000 simulation steps. Mean values over the 30 replications (standard deviation in parentheses).
Profile Criterion init opinion final opinion Polarization final % of informed agents
no argument - -0.202(0.01) 0.05(0.03) 0.48 (0.017) 0% (0%)
no profile Nutritional -0.16 (0.01) 0.097(0.03) 0.47 (0.013) 13.83% (3.1%)
Health -0.162 (0.01) 0.094(0.02) 0.47 (0.013) 14.1% (3.2%)
Ethical -0.159 (0.01) 0.098(0.02) 0.47 (0.014) 14.83% (3.6%)
Con Nutritional -0.135 (0.01) 0.091(0.03) 0.47 (0.015) 11.97% (2.5%)
Health -0.143 (0.01) 0.088(0.04) 0.47 (0.016) 13.17% (2.8%)
Ethical -0.158 (0.01) 0.087(0.03) 0.47 (0.017) 14.44% (3.2%)
Neutral Nutritional -0.157 (0.01) 0.091(0.03) 0.47 (0.014) 11.6% (3.1%)
Health -0.166 (0.01) 0.097(0.03) 0.47 (0.013) 13.83% (3.1%)
Ethical -0.157 (0.01) 0.099(0.03) 0.47 (0.015) 12.9% (3.2%)
Pro Nutritional -0.181 (0.01) 0.08(0.03) 0.48 (0.013) 9.7% (3%)
Health -0.185 (0.01) 0.079(0.03) 0.47 (0.016) 9.2% (3.5%)
Ethical -0.178 (0.01) 0.095(0.03) 0.47 (0.016) 11.01% (3.9%)

Table 5 shows the results obtained after 200,000 simulation steps for the different profiles of agents receiving the argument and the different criteria concerned by the argument, Figure 16 the evolution of the general opinion for these values, Figure 17 the evolution of the polarization, and Figure 18 the evolution of the number of agents having the new argument in their argumentation graph. An initial observation is that regardless of the audience that receives the argument and the criterion to which it relates, the introduction of the argument always leads to a significant increase in the value of the initial and final opinions (p-value by Wilcoxon test lower than \(0.001\) for all combinations of profile and criterion). An interesting point is that for some combinations, the introduction of the new argument allowed a more rapid evolution of the general opinion than without it. This means that not only did the introduction of the argument have an immediate effect (higher opinion value in the initial state), but having this new argument also made it possible to convince other agents more quickly.

Another phenomenon is that whatever the profile of the agents receiving the argument and the criterion concerned, the number of agents having the argument in their argumentation graph evolves in a similar way: as observed in Section 5.12, the argument initially spreads, so the number of agents holding it increases. Then, the agents who do not use the argument start to forget it, and the number of agents holding it decreases to a level lower than the initial number.

Finally, a general observation is that the introduction of the new argument, whatever the profile of the agents or the criterion concerned, has no real impact on polarization: after 200,000 simulation steps, all combinations of parameters converge to a polarization value close to \(0.48\).

Concerning the impact of the criterion concerned and of the profile of agents receiving the argument, results show that the criterion does not have the same impact for every profile. Indeed, the impact (in terms of opinion value) of an argument diffused to con agents is slightly greater when it concerns the nutritional criterion, whereas the impact is significantly greater for pro agents when it concerns the ethical criterion. Among the three agent profiles (neutral, pro, con), the argument has the most impact when it is given to neutral agents: this public is hesitant about its attitude towards vegetarian diets and can therefore more easily be convinced to switch to a stricter vegetarian diet.

To interpret these observations, it is important to stress that the population was initialized from the actual survey results. For omnivores, for instance, the value of the ethical criterion is low for most agents, while it is very high for most vegetarian/vegan agents. In reality, omnivores are probably protected from ethical attacks because they somehow own a "normative argument": their behaviour is in line with the social norm. Such an argument is not present in the database of arguments, for the very reason that the benefit of social norms for individuals is to avoid wasting time and energy questioning the legitimacy of a behaviour, or even stating its existence (Epstein 2006). In a similar fashion, many nutritional arguments are directed against vegan diets, claiming that their nutritional balance is questionable. Yet the nutritional criterion has among the highest importance values for omnivorous agents, which leads nutritional arguments to destabilize their argumentation systems.

Important differences can also be observed in the evolution of the number of agents having the argument in their argumentation graph. The argument propagates best at the beginning of the simulation when it is not given to any particular agent profile (no profile): because of the homophily rule, agents tend to exchange arguments with like-minded agents, so when the argument is given to agents with a similar profile (i.e., close opinion values), it circulates less among the other types of agents. The fact that the argument circulates less when given to pro agents can be explained as follows. The argument attacks con arguments but is not itself attacked, so it often ends up in the preferred extensions of the agents holding it; for pro agents, however, this extension is often composed of a larger number of arguments, since pro agents typically hold a set of unattacked pro arguments. This is not the case for con agents, who often see their con arguments attacked by the new argument and therefore have fewer arguments in their preferred extension, which gives the new argument a better chance of being passed on and explains its better diffusion.
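
This reasoning about attacks and preferred extensions can be made concrete on a toy Dung (1995a) framework. The brute-force sketch below enumerates admissible sets and keeps the maximal ones; the four arguments and two attacks are invented for illustration, and exhaustive enumeration is only practical for small graphs.

```python
# Brute-force computation of preferred extensions (maximal admissible sets)
# of a Dung-style abstract argumentation framework. The toy arguments and
# attacks below are invented for illustration.
from itertools import combinations

args = {"pro1", "pro2", "con1", "new"}         # hypothetical arguments
attacks = {("con1", "pro1"), ("new", "con1")}  # (attacker, target) pairs

def conflict_free(S):
    return all((a, b) not in attacks for a in S for b in S)

def defends(S, a):
    # every attacker of a must itself be attacked by some member of S
    return all(any((d, b) in attacks for d in S)
               for (b, t) in attacks if t == a)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

subsets = [set(c) for r in range(len(args) + 1)
           for c in combinations(sorted(args), r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]
print(preferred)  # [{'new', 'pro1', 'pro2'}]: 'new' defends pro1 against con1
```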

Conclusion and Perspectives

This paper presented a generic opinion dynamics model based on formal argumentation and implemented on the GAMA platform (Taillandier et al. 2019). The use of the model was illustrated through an application to the diffusion of a favourable opinion of vegetarian diets. The experiments carried out show the possibilities offered by the framework. We plan to go further in the analysis of the model, in particular in terms of steady-state analysis. To that end, we intend to draw on the work of Camargo (2020), which proposes methods for analysing the steady states of the model proposed by Banisch & Olbrich (2021).

Like the studies of Gabbriellini & Torroni (2014) and Butler et al. (2019b), this study contributes to bridging formal argumentation and agent-based models of opinion dynamics. Our goal for the future is to continue strengthening this bridge. First, we plan to enrich the way arguments are evaluated. In the current version, the evaluation of an argument depends on the criteria it concerns and on the importance of these criteria for the agent. Other factors can affect the perception of an argument, among them its source (Pornpitakpan 2004): for instance, the profusion of fake news from dubious sources can affect people differently. Our model should soon be able to take this difference of perception into account (see the sketch below). We also plan to use an approach similar to Banisch & Olbrich (2021) to take different attitudes on different issues into account. For example, for the diffusion of the vegetarian diet, instead of considering a single issue on vegetarian diets, we could imagine four different issues related to different types of diets (omnivorous, flexitarian, vegetarian, vegan), each with a specific subset of arguments. Indeed, some arguments may only concern one specific diet and not the others.
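
As a sketch of the envisaged source-credibility extension, the criteria-based value of an argument could simply be scaled by the perceived credibility of its source. The function below is hypothetical: the names, ranges and multiplicative scheme are our assumptions, not the model's implementation.

```python
# Hypothetical sketch of the envisaged extension: weighting an argument's
# criteria-based value by the credibility of its source (Pornpitakpan 2004).
# Names and the multiplicative scheme are assumptions, not the model's code.

def argument_value(arg_criteria, agent_importance, source_credibility=1.0):
    """Value of an argument for an agent, scaled by source credibility.

    arg_criteria: criteria concerned by the argument, e.g. {"ethical"}
    agent_importance: importance of each criterion for the agent, in [0, 1]
    source_credibility: in [0, 1]; low for e.g. a dubious fake-news source
    """
    base = sum(agent_importance.get(c, 0.0) for c in arg_criteria)
    return base * source_credibility

importance = {"ethical": 0.8, "health": 0.5, "nutritional": 0.3}
print(argument_value({"ethical"}, importance, source_credibility=0.9))  # trusted
print(argument_value({"ethical"}, importance, source_credibility=0.2))  # dubious
```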

Another enrichment we plan to add is a mechanism enabling the criteria importance values of the agents to evolve. Indeed, these values are not fixed for life but can change after a particular event or under the influence of others. We also plan to enrich the argument exchange protocol. In that sense, we could adapt the type of arguments exchanged depending on the opinions held by both parties: for example, a flexitarian would give a pro-vegetarian argument to an omnivore, but a pro-omnivore argument to a vegetarian (a minimal sketch of such a rule follows this paragraph). In this context, it would also be interesting to go further by taking inspiration from the studies of Gabbriellini & Torroni (2014) and Butler et al. (2019b), which integrate genuine exchanges of arguments in which an agent does not simply give an argument to another agent without the latter being able to respond. In this sense, the notion of trust introduced by Gabbriellini & Torroni (2014) is interesting, as well as the notions of deliberation tables (Butler et al. 2019b) and epistemic vigilance (Butler et al. 2019a).
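
A minimal sketch of such an opinion-dependent selection rule, on a purely illustrative opinion scale:

```python
# Hypothetical sketch of an opinion-dependent argument-selection rule: the
# sender picks the type of argument according to the receiver's position
# relative to its own. The scale and labels are illustrative assumptions.

def choose_argument_type(sender_opinion: float, receiver_opinion: float) -> str:
    """Opinions in [-1, 1]: -1 = strongly omnivore, +1 = strongly vegan."""
    if receiver_opinion < sender_opinion:
        return "pro-vegetarian"  # nudge a more omnivorous receiver upwards
    return "pro-omnivore"        # temper a receiver stricter than oneself

# A flexitarian (0.0) facing an omnivore (-0.8) and a vegetarian (+0.7):
print(choose_argument_type(0.0, -0.8))  # pro-vegetarian
print(choose_argument_type(0.0, 0.7))   # pro-omnivore
```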

A last perspective concerns the link between the model and the BEN (Behaviour with Emotions and Norms) agent architecture (Bourgais et al. 2017, 2020; Taillandier et al. 2016). In addition to a BDI (Belief Desire Intention) reasoning engine, the BEN architecture introduces numerous concepts that could be of interest for our work, such as agent personality based on the classic OCEAN model (McCrae & John 1992) and social relations between agents evaluated along five dimensions (liking, dominance, solidarity, familiarity and trust).

For the application case of the diffusion of the vegetarian diet, we plan to take advantage of the data collected to deepen the realism of the generated agent population (criteria importance, social networks, initial arguments, etc.), in particular by using a population generation tool such as GEN*, which is already integrated into GAMA (Chapuis et al. 2018, 2019). This step will allow further testing and validation of the model in a wide range of scenarios. We also plan to take advantage of existing data to better characterize the temporal aspect of the model. For the time being, we have chosen to use an abstract simulation step: a simulation step corresponds to an exchange of arguments and is not related to real time. It could be interesting to use the collected data to establish a link between the occurrence of an argument exchange and real time. Such a link would, first, allow a better characterization of the temporal evolution of the general opinion on vegetarian diets and, second, make it possible to validate the model by comparing simulation results with the real data available on this subject.
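
As a back-of-the-envelope illustration of this envisaged calibration, one pairwise exchange per step can be converted into calendar time from an estimated exchange frequency; the frequency used below is a made-up placeholder, not a value from our data.

```python
# Back-of-the-envelope sketch linking simulation steps to real time, under
# the assumption that one step = one pairwise argument exchange. The
# per-person exchange frequency is a made-up placeholder, not survey data.

n_agents = 1000
exchanges_per_person_per_month = 2.0               # hypothetical estimate
exchanges_per_month = n_agents * exchanges_per_person_per_month / 2  # pairs

months_per_step = 1.0 / exchanges_per_month
steps = 200_000
print(f"{steps} steps ~ {steps * months_per_step / 12:.1f} years")  # ~16.7
```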


Model Documentation

The full source code of the model and of the experiments presented, together with the data and parameters used for the experiments, is available on OpenABM (Taillandier et al. 2021).

Acknowledgments

This work is part of the VITAMIN ("VegetarIan Transition Argument ModellINg") project funded by INRAE.

References

ASHWORTH, S. W., & Mainland, D. D. (1995). The economic impact of BSE on the UK beef industry. Outlook on Agriculture, 24(3), 151–154. [doi:10.1177/003072709502400304]

BANISCH, S., & Olbrich, E. (2021). An argument communication model of polarization and ideological alignment. Journal of Artificial Societies and Social Simulation, 24(1), 1: https://www.jasss.org/24/1/1.html. [doi:10.18564/jasss.4434]

BEARDSWORTH, A. D., & Keil, E. T. (1991). Vegetarianism, veganism, and meat avoidance: Recent trends and findings. British Food Journal, 93(4), 19–24. [doi:10.1108/00070709110135231]

BERGMAN, M. M. (1998). A theoretical note on the differences between attitudes, opinions, and values. Swiss Political Science Review, 4(2), 81–93. [doi:10.1002/j.1662-6370.1998.tb00239.x]

BESNARD, P., & Hunter, A. (2008). Elements of Argumentation. Cambridge, MA: The MIT Press.

BLEDA, M., & Shackley, S. (2012). Simulation modelling as a theory building tool: The formation of risk perceptions. Journal of Artificial Societies and Social Simulation, 15(2), 2: https://www.jasss.org/15/2/2.html. [doi:10.18564/jasss.1857]

BOURGAIS, M., Taillandier, P., & Vercouter, L. (2017). 'Enhancing the behavior of agents in social simulations with emotions and social relations.' In G. P. Dimuro & L. Antunes (Eds.), Multi-Agent Based Simulation XVIII (pp. 89–104). Berlin Heidelberg: Springer. [doi:10.1007/978-3-319-91587-6_7]

BOURGAIS, M., Taillandier, P., & Vercouter, L. (2020). BEN: An architecture for the behavior of social agents. Journal of Artificial Societies and Social Simulation, 23(4), 12: https://www.jasss.org/23/4/12.html. [doi:10.18564/jasss.4437]

BOURGUET, J. R., Thomopoulos, R., Mugnier, M. L., & Abécassis, J. (2013). An artificial intelligence-based approach to deal with argumentation applied to food quality in a public health policy. Expert Systems with Applications, 40(11), 4539–4546. [doi:10.1016/j.eswa.2013.01.059]

BREHM, J. W. (1966). A Theory of Psychological Reactance. Cambridge, MA: Academic Press.

BUTLER, G., Pigozzi, G., & Rouchier, J. (2019a). 'An opinion diffusion model with vigilant agents and deliberation.' In M. Paolucci, J. S. Sichman, & H. Verhagen (Eds.), Multi-Agent-Based Simulation XX (pp. 81–99). Berlin Heidelberg: Springer. [doi:10.1007/978-3-030-60843-9_7]

BUTLER, G., Pigozzi, G., & Rouchier, J. (2019b). Mixing dyadic and deliberative opinion dynamics in an agent-based model of group decision-making. Complexity, 2019. [doi:10.1155/2019/3758159]

CAMARGO, C. Q. (2020). New methods for the steady-state analysis of complex agent-based models. Frontiers in Physics, 8, 103.

CERUTTI, F., Vallati, M., & Giacomin, M. (2017). An efficient Java-based solver for abstract argumentation frameworks: jArgSemSAT. International Journal on Artificial Intelligence Tools, 26(02), 1750002. [doi:10.1142/s0218213017500026]

CHAPUIS, K., Taillandier, P., Amblard, F., Gaudou, B., & Thiriot, S. (2019). Gen*: An integrated tool for realistic agent population synthesis. SSC2019, Social Simulation Conference.

CHAPUIS, K., Taillandier, P., Renaud, M., & Drogoul, A. (2018). Gen*: A generic toolkit to generate spatially explicit synthetic populations. International Journal of Geographical Information Science, 32(6), 1194–1210. [doi:10.1080/13658816.2018.1440563]

DEFFUANT, G., Neau, D., Amblard, F., & Weisbuch, G. (2000). Mixing beliefs among interacting agents. Advances in Complex Systems, 3(01n04), 87–98. [doi:10.1142/s0219525900000078]

DUNG, P. M. (1995a). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77, 321–357. [doi:10.1016/0004-3702(94)00041-x]

DUNG, P. M. (1995b). An argumentation-theoretic foundation for logic programming. The Journal of Logic Programming, 22(2), 151–177. [doi:10.1016/0743-1066(95)94697-x]

EPSTEIN, J. M. (2006). Generative Social Science: Studies in Agent-Based Computational Modeling. Princeton, NJ: Princeton University Press. [doi:10.23943/princeton/9780691158884.003.0003]

ESTEBAN, J. M., & Ray, D. (1994). On the measurement of polarization. Econometrica: Journal of the Econometric Society, 62(4), 819–851.

FLACHE, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S., & Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20(4), 2: https://www.jasss.org/20/4/2.html. [doi:10.18564/jasss.3521]

FRIEDKIN, N. E., Proskurnikov, A. V., Tempo, R., & Parsegov, S. E. (2016). Network science on belief system dynamics under logic constraints. Science, 354(6310), 321–326. [doi:10.1126/science.aag2624]

GABBRIELLINI, S., & Torroni, P. (2014). 'A new framework for ABMs based on argumentative reasoning.' In B. Kaminski & G. Koloch (Eds.), Advances in Social Simulation (pp. 25–36). Berlin Heidelberg: Springer. [doi:10.1007/978-3-642-39829-2_3]

GITHUB. (2021). Github of the argumentation plugin.

GODFRAY, H. C. J., Aveyard, P., Garnett, T., Hall, J. W., Key, T. J., Lorimer, J., Pierrehumbert, R. T., Scarborough, P., Springmann, M., & Jebb, S. A. (2018). Meat consumption, health, and the environment. Science, 361(6399), eaam5324. [doi:10.1126/science.aam5324]

HEGSELMANN, R., & Krause, U. (2002). Opinion dynamics and bounded confidence: Models, analysis, and simulation. Journal of Artificial Societies and Social Simulation, 5(3), 2: https://www.jasss.org/5/3/2.html.

HERZOG, H. (2011). Why do most vegetarians go back to eating meat?

HUET, S., Deffuant, G., & Jager, W. (2008). A rejection mechanism in 2D bounded confidence provides more conformity. Advances in Complex Systems, 11(4), 529–549. [doi:10.1142/s0219525908001799]

JAGER, W., & Amblard, F. (2005). Uniformity, bipolarization and pluriformity captured as generic stylized behavior with an agent-based simulation model of attitude change. Computational & Mathematical Organization Theory, 10(4), 295–303. [doi:10.1007/s10588-005-6282-2]

KIALO. (2021). The ethics of eating animals: Is eating meat wrong?. Available at: https://www.kialo.com/the-ethics-of-eating-animals-is-eating-meat-wrong-1229?path=1229.0~1229.1.

KRAUS, S., Sycara, K. P., & Evenchik, A. (1998). Reaching agreements through argumentation: A logical model and implementation. Artificial Intelligence, 104(1–2), 1–69. [doi:10.1016/s0004-3702(98)00078-2]

LORENZ, J. (2003). Multidimensional opinion dynamics when confidence changes.

MACDIARMID, J. I., Douglas, F., & Campbell, J. (2016). Eating like there’s no tomorrow: Public awareness of the environmental impact of food and reluctance to eat less meat as part of a sustainable diet. Appetite, 96, 487–493. [doi:10.1016/j.appet.2015.10.011]

MATHIAS, J. D., Huet, S., & Deffuant, G. (2016). Bounded confidence model with fixed uncertainties and extremists: The opinions can keep fluctuating indefinitely. Journal of Artificial Societies and Social Simulation, 19(1), 6: https://www.jasss.org/19/1/6.html. [doi:10.18564/jasss.2967]

MÄS, M., & Flache, A. (2013). Differentiation without distancing: Explaining bi-polarization of opinions without negative influence. PLoS ONE, 8(11), e74516.

MCCRAE, R. R., & John, O. P. (1992). An introduction to the five-factor model and its applications. Journal of Personality, 60(2), 175–215. [doi:10.1111/j.1467-6494.1992.tb00970.x]

MERCIER, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74. [doi:10.1017/s0140525x10000968]

MILLER, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2), 81. [doi:10.1037/h0043158]

POORE, J., & Nemecek, T. (2018). Reducing food’s environmental impacts through producers and consumers. Science, 360(6392), 987–992. [doi:10.1126/science.aaq0216]

POPKIN, B. M. (1993). Nutritional patterns and transitions. Population and Development Review, 19(1), 138–157.

PORNPITAKPAN, C. (2004). The persuasiveness of source credibility: A critical review of five decades’ evidence. Journal of Applied Social Psychology, 34(2), 243–281. [doi:10.1111/j.1559-1816.2004.tb02547.x]

POVEY, R., Conner, M., Sparks, P., James, R., & Shepherd, R. (1999). A critical examination of the application of the transtheoretical model’s stages of change to dietary behaviours. Health Education Research, 14(5), 641–651. [doi:10.1093/her/14.5.641]

PRAKKEN, H., & Sartor, G. (2015). Law and logic: A review from an argumentation perspective. Artificial Intelligence, 227, 214–245. [doi:10.1016/j.artint.2015.06.005]

PROCHASKA, J., Diclemente, C., & Norcross, J. (1992). In search of how people change: Applications to addictive behaviors. American Psychologist, 47(9), 1102–1114. [doi:10.1037/0003-066x.47.9.1102]

RESCHER, N. (1997). The role of rhetoric in rational argumentation. Argumentation, 12(2), 315–323.

RUBY, M. B. (2012). Vegetarianism. A blossoming field of study. Appetite, 58(1), 141–150. [doi:10.1016/j.appet.2011.09.019]

SALLIOU, N., Taillandier, P., & Thomopoulos, R. (2019). VITAMIN project (VegetarIan Transition Argument ModellINg). Dataset available at: https://doi.org/10.15454/HOBUZH.

SALLIOU, N., & Thomopoulos, R. (2018). Modelling multicriteria argument networks about reduced meat consumption. FOODSIM’2018.

SANS, P., & Combris, P. (2015). World meat consumption patterns: An overview of the last fifty years (1961-2011). Meat Science, 109, 106–111. [doi:10.1016/j.meatsci.2015.05.012]

SCALCO, A., Macdiarmid, J. I., Craig, T., Whybrow, S., & Horgan, G. W. (2019). An agent-based model to simulate meat consumption behaviour of consumers in Britain. Journal of Artificial Societies and Social Simulation, 22(4), 8: https://www.jasss.org/22/4/8.html. [doi:10.18564/jasss.4134]

STEFANELLI, A., & Seidl, R. (2014). Moderate and polarized opinions. Using empirical data for an agent-based simulation.

STEFANELLI, A., & Seidl, R. (2017). Opinion communication on contested topics: How empirics and arguments can improve social simulation. Journal of Artificial Societies and Social Simulation, 20(4), 3: https://www.jasss.org/20/4/3.html. [doi:10.18564/jasss.3492]

STOLL-KLEEMANN, S., & Schmidt, U. J. (2017). Reducing meat consumption in developed and transition countries to counter climate change and biodiversity loss: A review of influence factors. Regional Environmental Change, 17(5), 1261–1277. [doi:10.1007/s10113-016-1057-5]

TAILLANDIER, P., Bourgais, M., Caillou, P., Adam, C., & Gaudou, B. (2016). 'A BDI agent architecture for the gama modeling and simulation platform.' In L. G. Nardin & L. Antunes (Eds.), Multi-Agent Based Simulation XVII (pp. 3–23). Berlin Heidelberg: Springer. [doi:10.1007/978-3-319-67477-3_1]

TAILLANDIER, P., Gaudou, B., Grignard, A., Huynh, Q. N., Marilleau, N., Caillou, P., Philippon, D., & Drogoul, A. (2019). Building, composing and experimenting complex spatial models with the GAMA platform. GeoInformatica, 23(2), 299–322. [doi:10.1007/s10707-018-00339-6]

TAILLANDIER, P., Salliou, N., & Thomopoulos, R. (2021). A model of opinion dynamics based on formal argumentation: Application to the diffusion of the vegetarian diet. CoMSES Computational Model Library. Available at: https://www.comses.net/codebases/f787d173-cc1f-48ff-a961-aaa31881fe6a/releases/1.0.0/.

THOMOPOULOS, R. (2018). A practical application approach to argumentation for multicriteria analysis and decision support. EURO Journal on Decision Processes, 6(3), 237–255. [doi:10.1007/s40070-018-0087-2]

THOMOPOULOS, R., Moulin, B., & Bedoussac, L. (2018). Supporting decision for environment-friendly practices in the agri-food sector: When argumentation and system dynamics simulation complete each other. International Journal of Agricultural and Environmental Information Systems, 9(3), 1–21. [doi:10.4018/ijaeis.2018070101]

URBIG, D., & Malitz, R. (2005). Dynamics of structured attitudes and opinions.

VABO, M., & Hansen, H. (2014). The relationship between food preferences and food choice: A theoretical discussion. International Journal of Business and Social Science, 5(7), 145–157.

VILLATA, S., Cabrio, E., Jraidi, I., Benlamine, S., Chaouachi, M., Frasson, C., & Gandon, F. (2017). Emotions and personality traits in argumentation: An empirical evaluation. Argument and Computation, 8(1), 61–87. [doi:10.3233/aac-170015]

VRANKEN, L., Avermaete, T., Petalios, D., & Mathijs, E. (2014). Curbing global meat consumption: Emerging evidence of a second nutrition transition. Environmental Science & Policy, 39, 95–106. [doi:10.1016/j.envsci.2014.02.009]

WILLIAMS, J. J., & Lombrozo, T. (2010). The role of explanation in discovery and generalization: Evidence from category learning. Cognitive Science, 34(5), 776–806. [doi:10.1111/j.1551-6709.2010.01113.x]

WOLF, I., Schröder, T., Neumann, J., & Haan, G. de. (2015). Changing minds about electric cars: An empirically grounded agent-based modeling approach. Technological Forecasting and Social Change, 94, 269–285. [doi:10.1016/j.techfore.2014.10.010]

YUN, B., Thomopoulos, R., Bisquert, P., & Croitoru, M. (2018). 'Defining argumentation attacks in practice: An experiment in food packaging consumer expectations.' In P. Chapman, D. Endres, & N. Pernelle (Eds.), Graph-Based Representation and Reasoning (pp. 73–87). Cham: Springer International Publishing. [doi:10.1007/978-3-319-91379-7_6]

YUN, B., Vesic, S., & Croitoru, M. (2020). Ranking-based semantics for sets of attacking arguments. Proceedings of the 34th AAAI Conference on Artificial Intelligence, 34(3), 3033–3040. [doi:10.1609/aaai.v34i03.5697]