The COVID-19 Global Challenge
The news on 24 March 2020 announced that in a few weeks, the SARS-CoV-2 virus had already infected almost 390,000 persons and the associated COVID-19 disease had killed more than 16,000 people worldwide, with early peaks in China, Italy, Iran, Spain, and France. One of the most shocking pictures of the outbreak in Italy was taken during the night of 18 March 2020 in Bergamo, a rich city around 30 miles northeast of Milan: it shows a long queue of army trucks transporting dozens of coffins out of town as 24-hour crematoriums in Bergamo were overwhelmed. By the time you read this article, the situation will be even worse and the virus will be expanding into new places and communities, perhaps hitting the crowded places of less developed countries with devastating impact.
In order to try to contain the contagion and avoid the collapse of their health care systems, governments are taking draconian measures that, only a few weeks ago, might have caused a revolution. Social distancing, intensive quarantine, lockdown, cancellation of mass gathering events, and strict traffic restrictions are enforced, sometimes even to the extent of using a pervasive system of police and drones. In many countries, industries, companies, small businesses, and shops have been shut if they are not essential. This will have long-term economic consequences, such as the failure of many small businesses and a decline in private investment – consequences related to the severity of policy measures which, at the moment, do not have a defined expiry date. Politicians are consulting epidemiologists, virologists, and public health experts in order to try and make informed decisions, adapting their responses to contingencies and sometimes reconsidering decisions announced only a few days before. Experts are increasingly featured in the public media and are then under pressure to predict when this disease will end, so as to reduce panic. Massive public spending has been enacted, at levels that will substantially increase deficits – levels which, previously, might have caused a breakdown of inter-state relationships within the EU. Some observers have started to blame liberal democracies for their inefficiency, while praising the capacity of some authoritarian states (which they considered oppressive only a few weeks before) to respond effectively.
The current outbreak of COVID-19 is not only causing a dramatic loss of lives worldwide, challenging the sustainability of our health care systems, precipitating an economic meltdown, and putting pressure upon the mental health of individuals under quarantine and lock-down measures. This outbreak is also challenging the research community, pushing scientists beyond their ‘comfort zone’ for two sorts of reason, which we now elaborate upon.
Firstly, the need for a rapid response is largely incompatible with the ‘normal’ path of scientific progress, which is based on complex and delicate practices of peer review, testing, and replicability. These practices have been built over time to ensure the validity of scientific claims and research findings. The systemic nature of the COVID-19 outbreak requires wide-ranging political decisions about prevention, testing, and anti-contagion measures. These decisions cannot be solely based on epidemiological knowledge, because the efficacy of implementation depends on people’s reactions, pre-existing social norms, and structural societal constraints. For instance, the same lock-down policy aimed at reducing the number of infected elderly might have different effects when implemented in a country where several generations live together and in a country where the elderly live alone but are still very active in their communities, e.g. in religious or neighbourhood associations.
Tackling the challenge of responding with rigorous research to a complex global problem is a difficult endeavour even in normal times. In a crisis, the ‘default’ response is to convert/adapt existing models to the new context, ideally by fitting them to newly available data. While this could reasonably be seen as the best way forward given the speed with which the virus is spreading, the dependency on the quality of available data and the underestimation of the theoretical premises and original intentions of re-used models can make this of questionable rigour (Edmonds et al. 2019).
Responding with rigorous research to a complex global problem is even more problematic in times of crisis. This is due to: public pressure for immediate responses, misplaced expectations about the role of science, misunderstanding about the certainty of scientific knowledge, and confusion concerning public responsibility. The same political leaders who endorsed, without any modesty, public statements such as “we have had enough of experts” (a statement by Michael Gove, who was Minister for the Cabinet Office in the United Kingdom – a statement he later qualified), are now turning to scientists for advice or recommendations on decisions that are politically controversial. Public discourse makes it difficult for politicians to alter course (‘U-turn’) in the light of new scientific evidence, even if it is to their credit that they do so. Indeed, public perceptions of science itself are not helped by disagreements among scientists concerning the reliability of findings. Moreover, although politicians are (or ought to be) aware that responsibility for decisions eventually lies solely with them as their society’s elected representatives, they can seek to dodge this – blaming scientists if informed decisions turn out to be wrong or glorifying themselves in case decisions turn out to be good.
Secondly, it is rare for any crisis to lie comfortably within the domain of a single discipline. Even if we squarely consider COVID-19 an epidemiological problem, our responses to it have environmental, ecological, political, socio-psychological, and economic aspects, and their systemic cascading effects can be fully understood only if multi- and interdisciplinary perspectives are considered. Integrating knowledge from all of these disciplinary perspectives is sufficiently difficult (e.g., Voinov and Shugart 2013) that integration itself should be a recognized specialism (Bammer 2017). We must remember that the dangers of excessive specialisation are well-documented even in less problematic conditions (Thompson Klein 1990).
In this context, it is not surprising that agent-based modelling is under the spotlight. When policy decisions and people’s reactions depend on perceptions of the future, and scenarios are probabilistic and largely unpredictable, computer simulation models are seen as a viable method to project future states of a system from past ones in a non-trivial manner. What we see today in many media are predictions of the exponential growth of the number of infected persons based on equations that capture stylized populations and the distributions of their different states. However, any social or behavioural scholar can spot that these projections do not consider relevant factors of social complexity, which are intrinsically crucial to the modelled dynamics rather than a negligible exogenous force. Not recognizing social complexity can undermine the credibility of findings, and thus we call for urgent initiatives to: (1) improve the transparency and rigour of models to understand their theoretical premises and details and (2) promote data access to help contextualize and validate models across various levels of analysis (i.e., micro, meso, and macro). This call is even more urgent when simulation findings can rapidly affect public policy decisions (e.g. on possible consequences of certain policy scenarios) and/or motivate individual actions (e.g. impact upon decisions to stay at home to “flatten the curve”).
To improve the quality, impact, and appropriate use of computer simulation models in this delicate situation, we will in this paper, first, briefly review recent agent-based models of COVID-19 to bring out their potential and emphasize any existing explanatory gaps. While the number of publications, preprints and simulation tools on immediate responses to the COVID-19 pandemic is rapidly increasing due to the attention of scholars and public pressure, it is important to discuss some key challenges involved and suggest counter-measures to avoid collective mistakes. Secondly, we will reflect on the problematic interface between modelling and policy in order to better understand problems related to excessive expectations about scientific knowledge arising from a misunderstanding of the nature of science. Finally, we will suggest measures to reduce these gaps and improve the relationship between science and public policy via a call for extensive collaboration between public stakeholders and academic scholars in terms of model and data sharing. The credibility of science has recently been under attack from various communities, including anti-vaxxers, climate change deniers, creationists, flat-earthers, fake news propagators, and conspiracy theorists, but also from some philosophers and critical sociologists. While it is desirable that academic experts have greater public visibility and take a lead in public debate by explicating the evidence, the unfolding of this pandemic carries the risk of undermining science if we do not take the necessary precautions – for example, clarifying the boundaries and limits of presented conclusions/recommendations. Science could be the scapegoat if the public is seeking someone to blame.
Potential of and Gaps in COVID-19 Agent-Based Models
Modelling in epidemiology has a venerable tradition since the 1920s, when differential equations were used to model the population distribution of disease spread, including susceptible, infected, and recovered/dead pools. While this approach has helped to understand the threshold nature of epidemics and herd immunity, such models could not examine important social and behavioural factors, such as the behavioural responses of individuals to policy measures, and the effect of heterogeneous social contacts on diffusion patterns (Epstein 2009). Progress has been made since the 1990s in modelling epidemiological diseases, especially through agent-based simulations that include some important sources of population heterogeneity and explore the structure and dynamics of transmission networks (e.g., Stroud et al. 2007; Yoneyama, Das and Krishnamoorthy 2012; Zhang et al. 2016; Hunter, Mac Namee and Kelleher 2017). However, whenever an outbreak suddenly occurs, such as the one into which we have all been thrust in the last month, several modelling problems emerge that require careful attention. These include: (1) predicting complex outcomes when crucial data is unreliable/unavailable and theories are underdetermined; (2) repurposing models outside their original purposes by confusing original illustrative/explanatory purposes with prediction (see Edmonds et al. 2019); (3) ignoring good practices of model transparency and rigour either due to the race for public/academic relevance or because of political pressures for immediate responses.
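The compartmental tradition referred to above is exemplified by the classic Kermack–McKendrick SIR model, whose equations divide a population of size N into susceptible (S), infected (I) and recovered/removed (R) pools:

```latex
\frac{dS}{dt} = -\beta \frac{S I}{N}, \qquad
\frac{dI}{dt} = \beta \frac{S I}{N} - \gamma I, \qquad
\frac{dR}{dt} = \gamma I
```

Here β is the transmission rate and γ the recovery rate; the number of infected grows only while S/N > γ/β = 1/R₀, which is exactly the threshold and herd-immunity property such models capture. Their homogeneous-mixing assumption, however, is also what hides the social and behavioural factors discussed in this section.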
The case of the Imperial College COVID-19 model, which has contributed to reshaping the political agenda in many countries, illustrates these challenges. Based on an adaptation of an individual-based simulation model of H5N1 (Ferguson et al. 2005) and influenza (Ferguson et al. 2006), in mid-March, a team at Imperial College published a report in which they predicted a huge number of people would die in Britain unless severe policy measures were taken. Their results also helped to assess the efficacy of isolation, household quarantine, and the closing of schools and universities. These results were quickly endorsed by the UK government (after some initial hesitation), influenced the US administration and alerted the French administration in their attempts to minimize the mortality rate in their countries due to the transmission of the global pandemic.
The core of the Imperial College model consists of households, schools and workplaces that are geographically distributed to represent the country under study, travel distance patterns (within the country and international), workplace, school and household size, and other demographic data. However, the exact internals of the model are not described in any detail and no one has yet accessed the model code. Maybe this is because the model was written more than 13 years ago by Neil Ferguson, the Imperial College team leader, and includes thousands of lines of undocumented code, as admitted in a recent tweet.
Considering its impact and relevance, the Imperial College model has been criticized for various reasons: (a) it does not enable the consideration of other policy options, (b) it does not use sufficient data across different contexts, while claiming general findings, (c) it does not help to understand social conditions and consequences of measures. For instance, Shen et al. (2020) focused on (a) and explored the efficacy of strict contact tracing, pre-emptive testing on a large scale and super-spreader events. They also focused on the model’s inability to study the consequences of local dynamics at the micro level rather than the aggregated level of the data available, and the lack of attention to compliance dynamics, which depend on behavioural and social factors. They concluded that the Imperial College model was “several degrees of abstraction away from what is warranted by the situation”. Considering the policy and predictive purposes of the Imperial College model, Colbourn (2020) called for more context-specific models of the social and economic effects of lockdown and other interventions and knock-on effects on health, including mental health and interpersonal violence, via careful empirical evaluation.
Unfortunately, even the available examples of more specific and empirically calibrated microsimulation models neglect important behavioural and social factors. For instance, Chang et al. (2020) proposed a microsimulation model calibrated on empirical data to inform pandemic response planning in Australia (see the origins of AceMod-Australian Census-based Epidemic Model in Cliff et al. 2018). The model captures average characteristics of the Australian population by simulating over 24 million individuals with heterogeneous attributes (e.g., age, occupation, individual health characteristics) and their social context (different contact rates and contexts such as households, neighbourhoods, schools, or workplaces), whose distributions are taken from the 2016 Australian census. In a similar vein, IndiaSIM from the Center for Disease Dynamics, Economics & Policy (see: https://cddep.org/wp-content/uploads/2020/03/covid19.indiasim.March23-2-eK.pdf) was calibrated with data from the 2018-2019 census of the Indian population and available data from China and Italy to estimate the force of infection, age- and gender-specific infection rates, severe infection, and case-fatality rates. However, although better calibrated than previous models, these do not capture network effects nor people’s reactive responses, as the population states simply change via stochastic (randomized) processes determined by parameters (although the parameters derive from the data).
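To make concrete what network effects and reactive responses would add, consider a deliberately minimal agent-based sketch. This is purely our own illustration: all names, parameter values and the random contact structure are assumptions, not features of AceMod, IndiaSIM or any published model. Heterogeneous agents transmit infection over a fixed contact network, and each individually complies, or not, with a distancing policy:

```python
import random

# Purely illustrative agent-based SIR sketch: agents carry heterogeneous
# attributes and interact over a fixed contact network. All names and
# parameter values below are hypothetical, not taken from any published model.

random.seed(42)

N = 500                 # population size
P_TRANSMIT = 0.05       # per-contact transmission probability (assumed)
RECOVERY_TIME = 10      # days until recovery (assumed)
CONTACTS_PER_AGENT = 8  # degree of the contact network (assumed)

# Each agent: state ('S', 'I' or 'R'), days spent infected, and a compliance
# trait that scales down its contact rate under a distancing policy.
agents = [{"state": "S", "days": 0, "compliance": random.random()}
          for _ in range(N)]
agents[0]["state"] = "I"  # seed one infection

# Static random contact lists (a stand-in for household/school/workplace layers).
network = {i: random.sample([j for j in range(N) if j != i], CONTACTS_PER_AGENT)
           for i in range(N)}

def step(policy_on=False):
    """Advance the epidemic by one day."""
    newly_infected = []
    for i, a in enumerate(agents):
        if a["state"] != "I":
            continue
        for j in network[i]:
            b = agents[j]
            # Compliant agents skip contacts while a distancing policy is active.
            if policy_on and random.random() < b["compliance"]:
                continue
            if b["state"] == "S" and random.random() < P_TRANSMIT:
                newly_infected.append(j)
        a["days"] += 1
        if a["days"] >= RECOVERY_TIME:
            a["state"] = "R"
    for j in newly_infected:
        agents[j]["state"] = "I"

for day in range(120):
    step(policy_on=day >= 30)  # policy switched on after day 30

final_recovered = sum(a["state"] == "R" for a in agents)
print(final_recovered)  # attack size depends on network and compliance heterogeneity
```

Even a toy like this exhibits the point made above: the final attack size depends jointly on the contact topology and on the distribution of compliance, neither of which is visible to models in which population states change via purely stochastic compartment transitions.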
Independently of their specific characteristics, all of these models are either weak in terms of empirical calibration or theoretically underdetermined or both. Indeed, none of them are based on explicit, empirical/theoretical assumptions about individual behaviour, social transmission mechanisms and social structure constraints. Not only does this imply a very abstract conceptualisation of populations and behaviours: it also misses the chance to understand people’s response to policy measures due to pre-existing behavioural attitudes, network effects and social norms. While the lack of appropriate data on a country or region’s socio-economic structure, e.g., household structures or geographical clustering of population, can make a model’s parameter calibration problematic, this should not be an excuse to: (a) repurpose models that are purely illustrative or intended to provide a theoretical exposition for predicting complex social outcomes, or (b) neglect behavioural and social factors, which are critical to estimating the efficacy of advocated measures, simply because data are not available.
This suggests that any model must be considered depending on its purposes and has associated values and risks for public use. Edmonds et al. (2019) listed seven modelling purposes: prediction, explanation, description, theoretical exposition, illustration, social learning, and analogy. In the Appendix, we tried to consider each of these purposes with a view to evaluating the role the corresponding models might play in a crisis context, the potential usefulness they might have to decision-makers, and the risks associated with their use (many of which are general to all models, not just agent-based models). For instance, in response to the lack of behavioural realism in many of the models currently used in the public debate, there has been a proliferation of examples of open source agent-based implementations, though their authors admitted that they are probably simply illustrations. While this says more about the academic effervescence and selective attention that typically characterise emergencies and outbreaks, the competitive advantage of the Imperial College model and similar models (which at the present stage cannot be seriously tested by the community) makes these efforts of uncertain value for influencing the immediate response, although they could be relevant for understanding long-term socio-economic consequences of policy measures.
In short, the modelling practices that we developed in ‘normal’ times need to be reconsidered during a global outbreak as this global event poses key modelling challenges. The first one is a COVID-19 prediction challenge. Prediction of complex systems displaying all sorts of non-linearities, heterogeneities and sensitivities is very challenging independently of the scientific method used to tackle the problem, agent-based modelling included (Edmonds 2019). We all are experiencing in real life the concepts and analogies that have made the fortunes of complex systems theorists, such as the famous ‘butterfly effect’ in chaos theory (Mitchell 2019). Complex systems researchers are aware that small, unnoticed events, possibly identified only in retrospect, may generate unforeseen, large-scale consequences (Vespignani 2009). Awareness of significant limitations, such as the nature of complexity, but also lack of data, ontological diversity, or the variety of approaches to simulating human behaviour algorithmically, can make prediction difficult, if not impossible and often even undesirable (Polhill 2018; Edmonds, Polhill and Hales 2019).
Therefore, modelling experts strive to minimize the limitations – make their model assumptions valid with respect to the new SARS-CoV-2 virus, and their calibration rooted in the most accurate available data (e.g., Wu et al. 2020). However, during the current COVID-19 pandemic, accurate data suitable for complexity-friendly, agent-based models are not yet available, and this inhibits agent-based modellers’ ability to produce a much-needed, rapid response. At the same time, other scientific communities make bolder claims, even though the same or similar limitations apply to their methods. In late November/early December 2019, when the SARS-CoV-2 virus probably emerged (Andersen et al. 2020), it was impossible to precisely estimate the scale and global consequences of the COVID-19 disease. At the time of writing, four months later, even though we are aware of the rate of loss of human life minimally attributable to the pandemic, precise estimation of the death toll at the end of the crisis is still out of scientific reach. So is the estimation of its direct and indirect consequences worldwide. Nonetheless, developing probabilistic scenarios that can reliably inform policy decisions is an important goal.
This calls for a second important challenge: The COVID-19 modelling human behaviour challenge. The complex social dynamics related to transmission, response and compliance (mentioned above) arise from the behaviours of individuals. Research in psychology and decision making has recognized for years that humans do not follow predictable, optimal decision-making even in a laboratory and without deep uncertainty. In times of crisis, findings suggest that cognition is impaired, and that traumatic experiences can cause psychological distress and cognitive distortions (Agaibi and Wilson 2005; Liu et al. 2012). A review recently published in The Lancet reports severe negative psychological effects of quarantine, including post-traumatic stress symptoms, confusion, and anger (Brooks et al. 2020) – all factors that have long-lasting consequences on decision making and behaviour, including compliance. Modelling the complexity of human behaviour, social interaction and the diffusion of collective behaviours or opinions has been at the core of much agent-based modelling (Squazzoni, Jager and Edmonds 2014; Flache et al. 2017). Although the incorporation of agents’ heterogeneity in terms of cognitions and behaviours in epidemiological models is a difficult task and would require cross-disciplinary collaboration (Squazzoni 2010), there are examples of models using socio-economic data to estimate behavioural heterogeneity in epidemiological diseases (e.g., Hunter, Mac Namee and Kelleher 2017, 2018). Although constructing more complex models takes time and effort, requiring cross-disciplinary teams and maybe lowering the rapidity of response to public emergencies, it is nonetheless necessary to build better models. Indeed, when trying to estimate the consequences of policy measures that depend on heterogeneous responses, it is often the case that population outcomes are contingent on specific sets of circumstances that arise from social interaction and its non-linear effects.
Models that cannot examine the social dynamics of COVID-19 contagion are missing a crucial aspect that has serious implications for any possible estimation, scenario or prediction. We need to use better informed assumptions of the way in which individuals’ and communities’ behaviours will change as an effect of the epidemic. Agents are not simply virus carriers and their preferences and actions have implications at multiple levels.
This calls for a third important challenge: The COVID-19 data calibration and validation challenge. Model validation is very challenging when the model has a predictive purpose. This is because sometimes data are unavailable and/or there are no parallel situations that can be drawn on to independently test predictive ability and hence build confidence in model findings. During emergencies, experimental samples or tests might be impossible or unethical (Hunter, Mac Namee and Kelleher 2017). Without appropriate data, validation can be improved by empirically-grounded theoretical knowledge and domain competence, which can be the bases for a more adequate representation of the complexity of the system. This motivates our plea on the importance of cross-disciplinary collaboration when simulating epidemiological diseases, which intertwine behavioural, social and economic dimensions (An, Grimm and Turner II 2020). While availability of data is crucial for valid model assumptions, retrospective validation of a predictive model is also possible during the unfolding of an event. For instance, Ziff and Ziff (2020), using WHO data (WHO 2020), analysed the number of deaths due to COVID-19 and challenged the assumption of a fixed reproduction rate of the virus, which determines the temporal exponential growth of the number of infected and deceased. However, fine-tuning parameters of a predictive model via empirical validation tests during an event, when model predictions inform public decisions on the same event, can generate confusion.
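The fixed-reproduction-rate assumption that Ziff and Ziff (2020) challenged can be illustrated with a toy discrete-generation calculation. This is our own sketch, not their analysis; the starting value of 2.5 and the 20% per-generation decay are arbitrary assumptions chosen for illustration:

```python
# Toy discrete-generation growth model (illustrative only, not a fit to data):
# new cases in generation t+1 = R_t * new cases in generation t.

def project(r_of_t, generations=12, seed_cases=100):
    """Project new cases per generation given a reproduction-rate function."""
    cases = [seed_cases]
    for t in range(generations):
        cases.append(cases[-1] * r_of_t(t))
    return cases

fixed = project(lambda t: 2.5)                 # constant R: pure exponential growth
declining = project(lambda t: 2.5 * 0.8 ** t)  # R decaying 20% per generation (assumed)

print(round(fixed[-1]))      # keeps exploding exponentially
print(round(declining[-1]))  # growth stalls once R_t falls below 1
```

With a constant factor the projection is purely exponential; letting the factor decay – as contact restrictions and behavioural change take hold – first bends and eventually reverses the curve, which is why validating this single parameter against unfolding data matters so much.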
The problem is that the kind of data with the fine-grained quality needed to calibrate and validate COVID-19 models are, where they exist at all, dispersed, fragmented and rarely available in a comparable time window, scale and format. All data are subject to biases, and COVID-19-related data are no exception. Even the simple task of estimating the number of COVID-19 cases requires that scientists, decision makers and the public refer to the number of persons who tested positive for the presence of the SARS-CoV-2 virus with sophisticated tests, such as RT-PCR (WHO, Laboratory testing for COVID-19 in suspected human cases). However, these numbers are dependent not only on the actual number of infected individuals, but also, for instance, on: the testing capabilities in a given geographical region, the sensitivity/specificity of a given test, the very definition of ‘cases’, and the willingness of the testing authority to make undistorted data available (Lai et al. 2020). This is exemplified by the case of Italy, where the fatality rate is high and unequally distributed because regions have followed different testing approaches, while the health authorities have tested only persons who already had two or three symptoms, and have never performed random tests. In short, building predictive models based on the number of cases without considering how cases were defined and data collected can lead to biased estimations. Such obstacles raise questions about the comparability of available official statistics between countries, which are used to estimate the number of potentially infected in each country. Even if collected and processed with utmost scrutiny, publicly available data reporting the number of confirmed cases almost certainly greatly underestimates the number of infected and, consequently, the number of recovered individuals.
This fundamental deficiency can have dramatic calibration/validation consequences if a model aims to predict contagion trends and estimate the efficacy of policies at various time scales.
The Policy-Modelling Interface
The previous section discussed how the quality of a model depends on its purpose, its theoretical assumptions, the level of abstraction, and the quality of data. A further issue is that good COVID-19 pandemic models are not always good policy advice models. For example, a perfect strategy to prevent new infections (e.g. a total shut-down) might mean long-term harm to important societal system functions and survival mechanisms. A health-focussed model concentrating on the process of infection transmission will not automatically provide insights concerning long-term economic consequences or implications for social well-being, however difficult it may be to trade off such ramifications against the immediate threat of mortality. In other words, a particular modelling focus can limit the arena for debate (Aodha and Edmonds 2017).
Scientific policy advice to inform political debates and decisions about the pandemic should be based on the empirical monitoring and assessment of social contexts, including ex ante evaluation and appraisal of potential futures, policy options, and scenarios, as well as on epidemiological models (Weingart and Lentsch 2009; Wrasai and Swank 2007; Jasanoff 2004; Weaver, Stares and Kokusai 2001). However, the complex characteristics of the social world generate many possibilities and options. The complexity of social reality refutes a “blueprint for social engineering on the grand scale” (Popper 1972, 267), although the social sciences can teach us much about empirical regularities in social actions and social systems.
Computational simulation models generating macro phenomena from micro dynamics have the potential to provide some expert advice for public policy making (Gilbert et al. 2018; Ahrweiler et al. 2015), especially in areas where empirical data is scarce or of bad quality, such as in the current outbreak. However, the validity of scientific policy advice in this domain needs to be handled with care, honesty and responsibility. The limitations of models and the policy recommendations derived from them have to be openly communicated and transparently addressed. This applies not only to recognising missing, insufficient, or bad-quality data for calibrating and validating models, but also to admitting the fundamental complexities and contingencies of social systems, which require a holistic approach to capture effects of policy measures across the boundaries of sub-systems.
Under pressure to respond immediately and the social expectations of expert judgement, the temptation is to turn to simple models with few variables, high predictive claims, and clear messages to policymakers. But, especially in cases of so-called X-events (i.e., human-caused rare, unexpected events that cause a system to shift abruptly from one state to another; see Casti 2012), such as pandemic outbreaks, the need for complexity-appropriate and empirically validated models is higher than ever. Even then, merely creating a good model is no guarantee that the conclusions that modellers draw from it will be translated into policy.
One of the problems that arises when translating the conclusions of modelling into policy is managing the potentially fraught relationship between scientific expertise and democratic decision making. There are different functional logics for producing legitimacy in science and in policy, the former internally by peer review and the latter externally by elections. Trying to bring these two closer together often ends with a loss of legitimacy for both: by “politicising science” and by the “scientification of politics” (Weingart 1999). This is complicated by the “expert dilemma” (Collins and Evans 2007): it is usually rather easy for every political position to find an expert willing to provide scientific evidence to support it, leading to competing sets of expertise. This can lead to scientific advice being treated as merely symbolic or rhetorical, as can be observed in the current COVID-19 public and media discourse, where experts seem more often to be asked to legitimise political decisions.
Modelling and simulation can remedy some of these issues by providing support for “evidence-based policy” (a term from the mid-nineties in the wake of the evidence-based medicine approach) (Pawson 2006). As outlined in Gilbert et al. (2018), for policy evaluation we need data about the actual situation with the policy implemented to compare with data about the situation if the policy were not implemented. To obtain the latter, randomized controlled trials (RCTs) have been seen as the “gold standard”, as they are in evidence-based medicine. However, RCTs in policy contexts are often difficult and expensive to organise, and are sometimes not feasible or even ethical, for example when the policy is available to all and so there is no possible control group. Simulation models can be used to create virtual worlds, one with and one without the implementation of the policy, to obtain an evidence base to inform “knowledge-based policy” (Gilbert et al. 2018) – they can explicitly represent (and hence make available for critique) a “theory of change” that can tell us when the results of an RCT can be applied to a different context (Cartwright and Hardie 2012).
A second area of difficulty in managing the modelling-policy interface is a still prevalent conceptualisation of the relationship between science and policy that marginalises the expertise of professionals and other stakeholders. There are many theories about the role of scientific policy advice: decisionistic, technocratic, pragmatic, or recursive (Lentsch and Weingart 2011). However, all of these assume that scientific elites interact directly only with political elites.
The current situation around COVID-19 shows how many diverse stakeholder interests are involved in shaping the implementation of knowledge-based policy agendas. Problem solutions require behaviour change on a global scale, changes to societal routines and practices, and new approaches to economic organisation and social well-being. Thus, policy agendas are big societal projects that, to be effective, have to be supported by all members of society building on their knowledge, experience and expertise. Agendas, and the underlying models supporting these agendas, must be informed not only by the elites, but by all relevant stakeholders and practitioners if they are to be successful and sustainable.
In our “experimental societies” (Gross and Krohn 2005), where experimentation, manipulation and policies are ubiquitous and generate reflexivity and performativity of outcomes, it is difficult to learn how to coordinate transformation processes around global challenges such as COVID-19 in a participative way. However, in complexity-based realistic modelling and simulation there is already an emerging awareness of how to integrate stakeholder interests and expertise. Involving stakeholders in policy and management modelling activities has been extensively applied in socio-ecological management (Jones et al. 2009; Mendoza and Prabhu 2006; Robles-Morua et al. 2014); one example is the “Companion Modelling” framework (Barreteau et al. 2003; Etienne 2013).
Although policy modelling for policy advice shares the complexity of its target (Geyer and Cairney 2015), it can also be seen as a straightforward service activity, with policymakers and policy analysts as clients who contract modellers under demanding budget and time restrictions (Ahrweiler et al. 2019). High immediate utility of results and short deadlines are usually mandatory requirements of any policy advice project using computational models (e.g., Aodha and Edmonds 2017). This requires a lean and systematic process between modellers and policymakers: a set of resources that helps both sides negotiate the relationship, so that stakeholders and policy actors can engage with and benefit from those with modelling and assessment expertise (e.g., Jager and Edmonds 2015).
An important part of this effort will be to improve the interface between modellers and policymakers, recognising the requirements of each. From the perspective of a policymaker, advice needs to be specific, concise, relevant to their immediate concerns, and accompanied by a plausible narrative. From the point of view of a modeller, advice needs to recognize the inherent uncertainty of conclusions drawn from the model, and avoid oversimplification. Both modellers and policymakers need to accept that ‘evidence’, no matter how strong, is just one ingredient in political decision-making, to be mixed in with others such as political expediency, public opinion, practicality, and so on.
Involving policymakers at the very earliest stages of modelling (that is, ‘co-design’) can help, by giving modellers a tacit understanding of the policymaking context, and policymakers a feel for the uncertainties and assumptions that are unavoidable in policy modelling. An alternative is to encourage the use of ‘knowledge brokers’, that is, people who can bridge the gap between modelling and policy making, preferably drawing on experience of both. Independent ‘think tanks’ and analytic divisions within government can often perform such a role.
In addition, modellers can make their models more useful for informing policy, for example, by ensuring that updating a model with new data can be done easily and quickly (as is done in weather forecasting, for example, see Kalnay 2003). For COVID-19 models, this would mean that their calibration would be updated, perhaps daily, to assimilate the latest infection and mortality data. If appropriate, models can also be adapted to become the basis for ‘serious games’ (Cannon-Bowers 2010; Zhou 2014), in which stakeholders can interact with the model and explore the implications of policy options.
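As a hypothetical illustration of such rolling recalibration (our own minimal sketch, not any model actually used for COVID-19; the daily case counts are invented), a single growth-rate parameter can be re-estimated each day from the most recent window of reported cases:

```python
import math

def growth_rate(cases, window=7):
    """Re-estimate the current exponential growth rate r (per day)
    by a log-linear least-squares fit over the most recent `window`
    daily case counts. Calling this each day as new data arrive is
    a minimal stand-in for updating a model's calibration."""
    recent = cases[-window:]
    xs = range(len(recent))
    ys = [math.log(c) for c in recent]
    n = len(recent)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# invented daily counts, growing roughly 20% per day
cases = [100, 121, 148, 180, 220, 268, 327, 400]
r = growth_rate(cases)
print(f"growth rate ~ {r:.3f}/day, doubling time ~ {math.log(2)/r:.1f} days")
```

Real data-assimilation schemes (as in weather forecasting) re-estimate many parameters jointly and propagate their uncertainty; the point here is only that the update step can be made routine and fast.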
Modellers also need to be prepared for their advice to fall on deaf ears if the policy issues are not salient at the time – policy advice is most influential during ‘windows of opportunity’ (Kingdon 1984), moments when policy interest, possible solutions and implementation opportunities coincide. They also need to understand that policy making is itself a political process, in which advocacy coalitions compete for influence and may set the ‘framing’ for an issue. Such framing can heavily influence the search for relevant evidence; for example, it can shape the design of a computational model and the choice of policy options to be examined. As noted above, modelling requires making decisions on matters such as the boundaries of the domain to be modelled, the degree of abstraction, the theory of change implied by the model, the assumptions about unmeasurable parameters, and so on. These decisions are often made implicitly, based on the designer’s or policymaker’s framing of the policy issue. A danger is that a dominant coalition with an accepted model can use it to support its policy for long periods, simply updating the model over time while leaving its basic assumptions unchanged (Kolkman et al. 2016).
A Call to Action
Our community has made considerable progress in improving the methodological rigour and transparency of agent-based modelling, with special attention to model documentation and sharing for replicability. The establishment of CoMSES (an online clearinghouse for model code and documentation), initiatives such as the Open Modelling Foundation, and journal policies that enforce the adoption of open science principles have provided the field with infrastructures that have improved assessment and replicability. Defending these principles and practices is necessary in normal times; it is even more vital in periods of public pressure and political uncertainty. Firstly, a lack of rigorous cross-checking of models by experts could have serious consequences if findings that inform public decisions turn out to rest on brittle assumptions or simply contain mistakes; this could damage the reputation of the whole community. Secondly, when multiple academic teams build their models from scratch, rather than incrementally developing previously assessed models, there is a risk of research waste and misallocation of resources.
Given that exceptional times require exceptional responses, we call here for: (A) the whole community of agent-based modellers and computational social scientists working on COVID-19 models to collaborate in maintaining high standards of model building and to commit to best practices of transparency and rigour for replicability; and (B) institutional agencies that hold data which could help calibrate and inform COVID-19 models at national, regional and local levels of granularity to engage the most trusted scientific associations in setting up data infrastructures, so as to make data available to the academic community while protecting stakeholders’ interests.
As regards (A), although competition and the timely publication of results are essential to the advancement of science, the need for immediate responses in exceptional times and the availability of online platforms for the rapid sharing of results (e.g., preprint archives, social media) must not compromise the rigorous methodological standards that are essential for the long-term sustainability of the scientific enterprise.
There is widespread consensus in the community of agent-based modellers on three best practices: (1) using open source software and tools (e.g., NetLogo, MASON) to build models, so as to minimize obstacles to replication and access costs for reviewers and potential re-users; (2) adopting standard protocols to document models, which make it easier for reviewers and re-users to assess model properties and building blocks while prompting model builders to reflect on the adequacy of their model’s structure and features (Edmonds et al. 2019; Grimm et al. 2020); and (3) using permanent online repositories, such as CoMSES, to archive fully documented models before submission of a paper to a journal, to speed up assessment and replication. We believe that these three practices are all the more important during exceptional times: what the community loses in immediate rapidity of response to public expectations by complying with these practices, it gains in long-term credibility.
As regards (B), although the immediate sharing of institutional data to help scholars develop more empirically contextualized and customized models is a good idea, the kind of data needed to calibrate model parameters and estimate outcomes may require appropriate infrastructures to ensure that sensitive information is properly handled. While substantial benefits can be expected when institutions outsource research to the community at scale via data sharing, there are ways to make this effort manageable and responsible, so that the advantages of transparency and open data are balanced against the priority of protection and security. Benchmarks exist for building customized data-sharing protocols that could be adapted to sharing institutional data for agent-based computational research on COVID-19 (e.g., Squazzoni et al. 2020). However, this process needs a clear organisation and representative authorities capable of ensuring transparent rules of access and of enforcing the public interest in data use.
In this regard, the European Social Simulation Association (ESSA), the largest association for the advancement of social simulation research worldwide, and the Journal of Artificial Societies and Social Simulation (JASSS), the flagship journal of the community of agent-based modellers, established in 1998, have agreed to offer their expertise and facilities to manage this process for the benefit of institutional stakeholders and the community.
ESSA commits itself to setting up a protocol for data sharing, to be developed jointly with any institutional agency interested in sharing data for research. While the Association has membership fees and priorities related to strengthening the European research area, its international dimension and public mandate will help to ensure that any interested team of scholars, independently of origin and status, will have the opportunity of data access through public calls. Public calls will target only interdisciplinary teams comprising at least epidemiologists, computer scientists, and social and behavioural scientists. The Association also commits itself to launching a campaign to raise funds and donations to support this effort as soon as a first institutional agency agrees to collaborate. JASSS commits itself to enforcing its policy on transparency and model documentation by collaborating with CoMSES to streamline the peer review of model code for any manuscript on COVID-19 submitted to the journal. This will ensure that code reviewers and manuscript reviewers are mutually informed, so that competences and resources are optimized.
Exceptional times require exceptional decisions, and these may benefit from collective creativity. While attention is now focussed on immediate epidemiological challenges, the measures many governments have taken to contain the pandemic are already having unpredictable consequences for social behaviour, social relationships, economic processes, political agendas, and the mental health of millions of individuals. Research will also be needed to understand these long-term consequences, which could turn out to be dramatic well beyond the immediate public health sphere. Our call to action is an attempt to organise a sustainable, collaborative answer to these long-term socio-economic challenges. We applaud current initiatives by prestigious institutions, such as the Royal Society and some funding agencies, to stimulate and support modelling research addressing these challenges. ESSA and JASSS are here to help.
- For an account see The Washington Post by William Booth on March 17, 2020: https://www.washingtonpost.com/world/europe/a-chilling-scientific-paper-helped-upend-us-and-uk-coronavirus-strategies/2020/03/17/aaa84116-6851-11ea-b199-3a9799c54512_story.html.
- At the time of publication, the team had announced that they would release the code, but the final date and form were not yet clear.
- This is the tweet by Neil Ferguson on 22 March, 2020, 10:13PM: “I’m conscious that lots of people would like to see and run the pandemic simulation code we are using to model control measures against COVID-19. To explain the background – I wrote the code (thousands of lines of undocumented C) 13+ years ago to model flu pandemics…”.
- For instance, see Smaldino’s social distancing model (http://smaldino.com/wp/covid-19-modeling-the-flattening-of-the-curve/), a NetLogo model which illustrates how social distancing flattens the infection curve. It is interesting to note that in mid-March, a simulation model of a non-existent disease, “simulitis”, in an imaginary population was published as an interactive online graphic by the Washington Post (see: https://www.washingtonpost.com/graphics/2020/world/corona-simulator/). The model sparked a broad societal debate on social distancing; it was timely and openly accessible to the public. Its purpose was to illustrate the possible consequences of an (unsuccessful) lockdown and of social distancing on flattening the curve of population contagion. The exact ‘decision rules’ were unclear in the article, but the build-up of the storyline, from individuals getting infected to limiting the movement of individuals, was extremely clear. In this model, people were susceptible, sick or recovered; there was no explicit implementation of mortality, due to an editorial decision by the newspaper, which deemed death too cruel for its readers.
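A model with similar rules can be written in a few dozen lines. The sketch below is our own reconstruction under assumed rules (the newspaper’s exact decision rules were not published): agents move randomly in a box, infection spreads by proximity, agents recover after a fixed time, and a fraction of ‘distancing’ agents stay put. All parameter values are invented.

```python
import random

def simulate(stationary_frac, n=200, steps=400, size=50.0,
             infect_radius=2.0, recover_after=60, seed=1):
    """Toy 'simulitis'-style agent model (an assumed reconstruction,
    not the Washington Post's actual code): agents bounce around a
    box, infection spreads by proximity, and a fraction of agents
    stay put to represent social distancing. Returns the peak
    number simultaneously sick."""
    rng = random.Random(seed)
    x = [rng.uniform(0, size) for _ in range(n)]
    y = [rng.uniform(0, size) for _ in range(n)]
    moving = [rng.random() >= stationary_frac for _ in range(n)]
    sick_since = [-1] * n        # -1 = susceptible, >=0 = step infected
    recovered = [False] * n
    sick_since[0] = 0            # patient zero
    peak = 1
    for t in range(1, steps):
        for a in range(n):       # random walk for mobile agents
            if moving[a]:
                x[a] = min(size, max(0.0, x[a] + rng.uniform(-1, 1)))
                y[a] = min(size, max(0.0, y[a] + rng.uniform(-1, 1)))
        for a in range(n):       # recovery after a fixed sick period
            if sick_since[a] >= 0 and t - sick_since[a] > recover_after:
                sick_since[a] = -1
                recovered[a] = True
        infected_now = [a for a in range(n) if sick_since[a] >= 0]
        for a in infected_now:   # transmission by proximity
            for b in range(n):
                if sick_since[b] < 0 and not recovered[b]:
                    if (x[a]-x[b])**2 + (y[a]-y[b])**2 <= infect_radius**2:
                        sick_since[b] = t
        peak = max(peak, sum(1 for s in sick_since if s >= 0))
    return peak

print("peak sick, everyone moving:  ", simulate(stationary_frac=0.0))
print("peak sick, 75% staying put:  ", simulate(stationary_frac=0.75))
```

Comparing the two runs conveys the same qualitative ‘flattening’ message: keeping most agents stationary slows transmission and lowers the number sick at any one time.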
Edmonds et al. (2019) listed seven modelling purposes: prediction, explanation, description, theoretical exposition, illustration, social learning, and analogy. Prediction, not to be confused with ‘what my model outputs,’ entails reliable anticipation of unknown data or knowledge. Explanation is the attempt to provide a possible causal chain from initialization to output that shows why or how a phenomenon of interest could occur. Description is a partial representation of a specific case study of interest, with no claim to generality beyond that. Theoretical exposition is when simulations are used to explore general theories or ideas, without any necessary connection to the real world. Illustration involves using a simulation to communicate an idea. Analogy is a more informal kind of use of a simulation than theoretical exposition used to help think through ideas in a playful and creative way. Social learning is the use of simulation to represent the shared knowledge of a group. Here, in Table 1, we tried to consider each of these purposes with a view to evaluating the role the corresponding models might play in a crisis context, the potential usefulness they might have to decision-makers, and risks associated with their use (many of which are general to all models, not just agent-based models).
|Purpose||Value in a crisis context||Useful to decision-makers||Risks|
|Prediction||Ability to anticipate and compare intervention scenarios (including the consequences of doing nothing). Assessment of uncertainties, and development of ‘robust’ policies that minimize maximum regret. Baseline numbers to use in planning.||If answers to questions can be derived quickly enough, and interventions formalized accurately, it could make a valuable contribution to discussion over interventions.||Over-reliance on the model as an ‘oracle’, inappropriate political exposure of developers, inability of an effectiveness-focused model to forecast policy utility. Often, the quality of data to calibrate important model parameters is questionable, especially during an event. It is also not necessarily the case that decision-makers will adopt the policy the model recommends.|
|Explanation||Explanation could address questions such as how we (might have) arrived at a particular outcome, but does not guarantee that the particular causal chain that really led us there is the one simulated.||More likely of use in ‘lessons learned’ exercises, especially if in conjunction with several other models with a similar purpose.||Enacting measures that address possible causes rather than the actual causes risks unintended consequences in future.|
|Description||A descriptive model could be used to explore scenarios in a heavily constrained context.||Unlikely to be of value at the national scale, in part because the generalizations needed to model at that scale would be inconsistent with this modelling purpose. Could be used for local levels, however.||Elements not simulated might prove later to be relevant. Overgeneralization from a model fitted to specific circumstances is also a risk. Possible confusion with prediction.|
|Social Learning||Potentially valuable in resolving conflict. The main value is the process by which the model is constructed, rather than the model itself.||Resolving arguments, encouraging people to see others’ points of view, observing the logical consequences of beliefs.||Modelling what people in a group believe does not guarantee relevance beyond the group, or to the empirical world. Usual risks of group-work (e.g. groupthink, dominant voices) need to be carefully managed by facilitators.|
|Illustration||Useful for communication and education of ideas to the general public.||Provides a means of communicating reasoning behind policies for dealing with the crisis that may be unpopular.||Under certain conditions, the model may not behave consistently with the communicated ideas.|
|Theoretical Exposition||Unlikely to be of value.||In a crisis context, decision-makers will have little time for comparing or exploring theories.||No (necessary) connection with the real world in this purpose risks over-interpretation if an attempt is made to use it.|
|Analogy||Of little value other than distracting the modellers themselves from the psychological consequences of the crisis.||Not useful.||Over-interpretation of findings that merit more rigorous study using, say, theoretical exposition or explanation purposes.|
AODHA, L. and Edmonds, B. (2017). ‘Some pitfalls to beware when applying models to issues of policy relevance.’ In Edmonds, B. & Meyer, R. (Eds.), Simulating Social Complexity - A Handbook, 2nd edition. Berlin/Heidelberg: Springer, 801-822. [doi:10.1007/978-3-319-66948-9_29]
AGAIBI, C. E., & Wilson, J. P. (2005). Trauma, PTSD, and resilience: A review of the literature. Trauma, Violence, & Abuse, 6(3), 195-216. [doi:10.1177/1524838005277438]
AHRWEILER, P., Frank, D. and Gilbert, N. (2019). Co-designing social simulation models for policy advice: Lessons learned from the INFSO-SKIN study. In 2019 Spring Simulation Conference (SpringSim) (pp. 1-12). Tucson, AZ, USA: IEEE. [doi:10.23919/springsim.2019.8732901]
AHRWEILER, P., Schilperoord, M., Pyka, A. and Gilbert, N. (2015). Modelling research policy - Ex-ante evaluation of complex policy instruments. Journal of Artificial Societies and Social Simulation, 18(4), 5: https://www.jasss.org/18/4/5.html. [doi:10.18564/jasss.2927]
AN, L., Grimm, V. and Turner II, B. L. (2020). Editorial: Meeting grand challenges in agent-based models. Journal of Artificial Societies and Social Simulation, 23 (1) 13: https://www.jasss.org/23/1/13.html. [doi:10.18564/jasss.4012]
ANDERSEN, K.G., Rambaut, A., Lipkin, W. I., Holmes, E. and Garry, R. F. (2020). The proximal origin of SARS-CoV-2. Nature Medicine. In press. [doi:10.1038/s41591-020-0820-9]
BAMMER, G. (2017). Should we discipline interdisciplinarity? Palgrave Communications, 3, 30. [doi:10.1057/s41599-017-0039-7]
BARRETEAU, O. et al. (2003). Our companion modelling approach. Journal of Artificial Societies and Social Simulation, 6(2), 1: https://www.jasss.org/6/2/1.html.
BROOKS, S. K., Webster, R. K., Smith, L. E., Woodland, L., Wessely, S., Greenberg, N. and Rubin, G. J. (2020). The psychological impact of quarantine and how to reduce it: rapid review of the evidence. The Lancet, 395(10227), 912-920. [doi:10.1016/s0140-6736(20)30460-8]
CANNON-BOWERS, J. (Ed.). (2010). Serious Game Design and Development: Technologies for Training and Learning. Hershey, PA: IGI global.
CARTWRIGHT, N., & Hardie, J. (2012). Evidence-Based Policy: A Practical Guide to Doing It Better. New York, NY: Oxford University Press. [doi:10.1093/acprof:osobl/9780199841608.001.0001]
CASTI, J. (2012). X-Events: The Collapse of Everything. New York, NY: HarperCollins.
CHANG, S. L., Harding, N., Zachreson, C., Cliff, O. M., & Prokopenko, M. (2020). Modelling transmission and control of the COVID-19 pandemic in Australia. arXiv preprint arXiv:2003.10218.
CLIFF, O. M., Harding, N., Piraveenan, M., Erten, E. Y., Gambhir, M. and Prokopenko, M. (2018). Investigating spatiotemporal dynamics and synchrony of influenza epidemics in Australia: an agent-based modelling approach. Simulation Modelling Practice and Theory, 87, 412–431. [doi:10.1016/j.simpat.2018.07.005]
COLLINS, H. and Evans, R. (2007). Rethinking Expertise. Chicago, IL: The University of Chicago Press.
COLBOURN, T. (2020). COVID-19: extending or relaxing distancing control measures. Lancet Public Health, In press. [doi:10.1016/s2468-2667(20)30072-4]
EDMONDS, B., Le Page, C., Bithell, M., Chattoe-Brown, E., Grimm, V., Meyer, R., Montañola-Sales, C., Ormerod, P., Root, H. and Squazzoni, F. (2019). Different Modelling Purposes. Journal of Artificial Societies and Social Simulation, 22(3) 6: https://www.jasss.org/22/3/6.html. [doi:10.18564/jasss.3993]
EDMONDS, B., Polhill, G. and Hales D. (2019). Predicting social systems – A challenge. Review of Artificial Societies and Social Simulation. 4th June 2019.
EPSTEIN, J. M. (2009). Modelling to contain epidemics. Nature, 460(687), 687. [doi:10.1038/460687a]
ETIENNE, M. (ed.) (2013). Companion Modelling. A Participatory Approach to Support Sustainable Development. Heidelberg and New York: Springer.
FERGUSON, N., Cummings, D., Cauchemez, S., Fraser, C., Riley, S., Meeyai, A., Iamsirithaworn, S. and Burke, D. S. (2005). Strategies for containing an emerging influenza pandemic in Southeast Asia. Nature, 437, 209–214. [doi:10.1038/nature04017]
FERGUSON, N., Cummings, D., Fraser, C., Cajka, J. X., Cooley, P. C. and Burke, D. S. (2006). Strategies for mitigating an influenza pandemic. Nature 442, 448–452 [doi:10.1038/nature04795]
FLACHE, A., Mäs, M., Feliciani, T., Chattoe-Brown, E., Deffuant, G., Huet, S. and Lorenz, J. (2017). Models of social influence: Towards the next frontiers. Journal of Artificial Societies and Social Simulation, 20 (4) 2: https://www.jasss.org/20/4/2.html. [doi:10.18564/jasss.3521]
GEYER, R. and P. Cairney (eds.) (2015). Handbook on Complexity and Public Policy. Cheltenham: Edward Elgar.
GILBERT, N., Ahrweiler, P., Barbrook-Johnson, P., Narasimhan, K. P. and Wilkinson, H. (2018). Computational modelling of public policy: Reflections on practice. Journal of Artificial Societies and Social Simulation, 21 (1) 14: https://www.jasss.org/21/1/14.html. [doi:10.18564/jasss.3669]
GRIMM, V., Railsback, S. F., Vincenot, C. E., Berger, U., Gallagher, C., DeAngelis, D. L., Edmonds, B., Ge, J., Giske, J., Groeneveld, J., Johnston, A. S. A., Milles, A., Nabe-Nielsen, J., Polhill, J. G., Radchuk, V., Rohwäder, M.-S., Stillman, R. A., Thiele, J. C. and Ayllón, D. (2020). The ODD protocol for describing agent-based and other simulation models: A second update to improve clarity, replication, and structural realism. Journal of Artificial Societies and Social Simulation, 23 (2) 7: https://www.jasss.org/23/2/7.html. [doi:10.18564/jasss.4259]
GROSS, M. and Krohn, W. (2005). Society as experiment: sociological foundations for a self-experimental society. History of the Human Sciences, 18(2), 63-86. [doi:10.1177/0952695105054182]
HUNTER, E., Mac Namee, B. and Kelleher, J. D. (2017). A taxonomy for agent-based models in human infectious disease epidemiology. Journal of Artificial Societies and Social Simulation, 20 (3) 2: https://www.jasss.org/20/3/2.html. [doi:10.18564/jasss.3414]
HUNTER, E., Mac Namee, B. and Kelleher, J. D. (2018). Using a socioeconomic segregation burn-in model to initialise an agent-based model for infectious diseases. Journal of Artificial Societies and Social Simulation, 21 (4) 9: https://www.jasss.org/21/4/9.html. [doi:10.18564/jasss.3870]
JAGER, W. & Edmonds, B. (2015). 'Policy making and modelling in a complex world.’ In Janssen, M., Wimmer, M. and Deljoo, A. (eds.), Policy Practice and Digital Science. Berlin Heidelberg, Springer, pp 57-74.
JASANOFF, S. (Ed.) (2004). States of Knowledge: The Co-Production of Science and Social Order. London: Routledge.
JONES, N. A., P. Perez, T.G. Measham, G. J. Kelly, P. d'Aquino, K. A. Daniell, A. Dray, and N. Ferrand. (2009). Evaluating Participatory Modeling: Developing a Framework for Cross-Case Analysis. Environmental Management, 44(6), 1180-1195. [doi:10.1007/s00267-009-9391-8]
KALNAY, E. (2003). Atmospheric Modeling, Data Assimilation and Predictability. New York, NY: Cambridge University Press.
KINGDON, J. W. (1984). Agendas, Alternatives and Public Policies. Boston: Little, Brown.
KOLKMAN, D. A., Campo, P., Balke-Visser, T. and Gilbert, N. (2016). How to build models for government: criteria driving model acceptance in policymaking. Policy Sciences, 49(4), 489-504. [doi:10.1007/s11077-016-9250-4]
LAI, C.-C., Liu, Y. H., Wang, C. H., Wang, Y. H., Hsueh, S. C., Yen, M. Y., Ko, W. C., Hsueh, P. R. (2020). Asymptomatic carrier state, acute respiratory disease, and pneumonia due to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2): Facts and myths. Journal of Microbiology, Immunology and Infection, In Press. [doi:10.1016/j.jmii.2020.02.012]
LENTSCH, J. and Weingart, P. (Eds.) (2011). The Politics of Scientific Advice. Institutional Design for Quality Assurance. New York, NY: Cambridge University Press.
LIU, X., Kakade, M., Fuller, C.J., Fan, B., Fang, Y., Kong, J., Guan, Z., Wu, P. (2012). Depression after exposure to stressful events: lessons learned from the severe acute respiratory syndrome epidemic. Comprehensive Psychiatry, 53(1), 15-23. [doi:10.1016/j.comppsych.2011.02.003]
MENDOZA, G. A. and Prabhu, R. (2006). Participatory modeling and analysis for sustainable forest management: Overview of soft system dynamics models and applications. Forest Policy and Economics, 9(2), 179-196. [doi:10.1016/j.forpol.2005.06.006]
MITCHELL, M. (2009). Complexity. A Guided Tour. New York, NY: Oxford University Press.
PAWSON, R. (2006). Evidence-Based Policy: A Realist Perspective. London: Sage.
POLHILL, G. (2018). Why the social simulation community should tackle prediction. Review of Artificial Societies and Social Simulation. 6th August 2018.
POPPER, K. (1972). ‘The open society and its enemies.’ In J. Katz, A. M. Capron & E. Swift Glass (Eds.), Experimentation with Human Beings. The Authority of the Investigator, Subject, Professions, and State in the Human Experimentation Process. New York, NY: Russel Sage Foundation, pp. 266-268.
ROBLES-MORUA, A., Halvorsen, K., Mayer, A. S. and Vivoni, E. R. (2014). Exploring the application of participatory modeling approaches in the Sonora River Basin, Mexico. Environmental Modelling & Software, 52, 273–282. [doi:10.1016/j.envsoft.2013.10.006]
SHEN, S., Taleb, N. N. and Bar-Yam, Y. (2020). Review of Ferguson et al., “Impact of non-pharmaceutical interventions…”. New England Complex Systems Institute.
SQUAZZONI, F. (2010). The impact of agent-based models in the social sciences after 15 years of incursions. History of Economic Ideas, 18(2), 197-233.
SQUAZZONI, F., Ahrweiler, P., Barros, T., Bianchi, F., Birukou, A., Blom, H. J. J., Bravo, G., Cowley, S., Dignum, V., Dondio, P., Grimaldo, F., Haire, L., Hoyt, J., Hurst, P., Lammey, R., MacCallum, C., Marušić, A., Mehmani, B., Murray, H., Nicholas, D., Pedrazzi, G., Puebla, I., Rodgers, P., Ross-Hellauer, T., Seeber, M., Shankar, K., Van Rossum, J. and Willis, M. (2020). Unlock ways to share peer review data. Nature, 578(7796), 512-514. [doi:10.1038/d41586-020-00500-y]
SQUAZZONI, F., Jager, W. and Edmonds, B. (2014). Social simulation in the social sciences. A brief overview. Social Science Computer Review, 32(3), 279-294. [doi:10.1177/0894439313512975]
STROUD, P., Del Valle, S., Sydoriak, S., Riese, J. and Mniszewski, S. (2007). Spatial Dynamics of Pandemic Influenza in a Massive Artificial Society. Journal of Artificial Societies and Social Simulation, 10(4) 9: https://www.jasss.org/10/4/9.html.
THOMPSON KLEIN, J. (1990). Interdisciplinarity. History, Theory, & Practice. Detroit: Wayne State University Press.
VESPIGNANI, A. (2009). Predicting the behavior of techno-social systems. Science, 325, 425-428. [doi:10.1126/science.1171990]
VOINOV, A. and Shugart, H. H. (2013). ‘Integronsters’, integral and integrated modeling. Environmental Modelling & Software, 39, 149-153. [doi:10.1016/j.envsoft.2012.05.014]
WEAVER, K., Stares, P. and Kokusai Koryu Senta, N. (Eds.) (2001). Guidance for Governance: Comparing Alternative Sources of Public Policy Advice. Tokyo: Japan Center for International Exchange.
WEINGART, P. (1999). Scientific expertise and political accountability: paradoxes of science in politics. Science and Public Policy, 26(3), 151-161. [doi:10.3152/147154399781782437]
WEINGART, P. and Lentsch, J. (Eds.) (2009). Scientific Advice to Policy-Making: International Comparison. Leverkusen Opladen, Germany & Farmington Hills. MI, USA: Barbara Budrich.
WHO (World Health Organization) (2020). Novel Coronavirus (2019-nCoV) situation reports. Technical Report 1-24, WHO, January 2020. URL: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/situation-reports.
WRASAI, P. T. and Swank, O. H. (2007). Policy Makers, Advisers, and Reputation. Journal of Economic Behavior and Organization, 62(4), 579-590. [doi:10.1016/j.jebo.2004.11.015]
WU, J. T., Leung, K. and Leung, G. M. (2020). Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: A modelling study. The Lancet, In Press. [doi:10.1016/s0140-6736(20)30260-9]
YONEYAMA, T., Das, S. and Krishnamoorthy, M. (2012). A Hybrid Model for Disease Spread and an Application to the SARS Pandemic. Journal of Artificial Societies and Social Simulation, 15 (1) 5: https://www.jasss.org/15/1/5.html. [doi:10.18564/jasss.1782]
ZHANG, M., Verbraeck, A., Meng, R., Chen, B. and Qiu, X. (2016). Modeling Spatial Contacts for Epidemic Prediction in a Large-Scale Artificial City. Journal of Artificial Societies and Social Simulation, 19 (4) 3: https://www.jasss.org/19/4/3.html. [doi:10.18564/jasss.3148]
ZHOU, Q. (2014). The Princess in the Castle: Challenging Serious Game Play for Integrated Policy Analysis and Planning. Next Generation Infrastructures Foundation.
ZIFF, A. L. and Ziff, R. M. (2020). Fractal kinetics of COVID-19 pandemic (with update 3/1/2020). MedRxiv Preprint. [doi:10.1101/2020.02.16.20023820]