Social Simulation for a Crisis: Results and Lessons from Simulating the COVID-19 Crisis

Dignum, Frank (Ed.)
Springer-Verlag: Berlin, 2021
ISBN 978-3-030-76397-8 (hb)

Reviewed by Edmund Chattoe-Brown
School of Media, Communication and Sociology, University of Leicester, UK

“I’m going to have to science the s**t out of this.” (From the film The Martian, https://www.youtube.com/watch?v=BABM3EUo990)

By any standards this book is an impressive intellectual effort. To produce such a complex ABM, generate results (gaining policy-maker attention) and publish something so substantial during a pandemic is admirable. In many respects, it does exactly what it says on the tin. It motivates crisis modelling, describes an ABM, analyses a range of policy interventions and reflects on practical challenges (from choosing programming languages to managing projects).

The book also contributes in terms of scholarship. A behaviour model involving values and affordances (if you cannot meet in your house, you meet in the park) not only allows the ABM to cope with a genuinely new pandemic (and new policies) but also elegantly justifies friendship networks enshrining shared values. The explorations of the cross-national effects of values, of exit strategy outcomes and of connected economic and epidemiological challenges all deserve attention from JASSS readers.

Nonetheless, this book makes me uneasy because of what it implies about ABM’s policy contribution.

My first concern involves sheer size. In a large book based on a huge ABM, obscurity in important matters is almost inevitable. Firstly, there is extensive discussion of the NetLogo ABM, but (in a few sentences) it becomes clear that the implementation described has now been abandoned for one in Repast (p. 89). Does this make the book an epitaph? Is there now any point in evaluating or extending this ABM? Secondly, many graphs compare simulated epidemics under different conditions, but the reader has to concentrate hard (p. 55) to notice that contagiousness has been given an unrealistic value because otherwise epidemics do not occur. How worried should policy makers be about that kind of adjustment? Thirdly, when trying to assess specific results, the volume of information presented is sometimes a barrier. For example, the ABM’s ability to record contacts at diverse sites is elegant, and being able to demonstrate that interventions have unexpected effects because they shift contacts between sites is valuable. But one has to be reading really carefully to wonder why workplace infections (Figure 5.8, p. 133) are already so limited without intervention. There is a very brief (and unevidenced) comment about workplaces being relatively socially distanced, and some discussion of density parameters (parks are more “airy” than homes), but it is really hard for the reader to judge whether this specific value is a bug or a feature (and whether one should therefore trust the perhaps surprising result that home working alone shows virtually no transmission reduction).
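To make the contagiousness point concrete: in even the simplest compartmental model, an epidemic only takes off when the basic reproduction number exceeds one, so a modeller who needs visible epidemics in a small synthetic population faces an obvious temptation to inflate transmission. A minimal sketch (plain Python, a textbook SIR model with illustrative values, not the book’s own model or parameters):

```python
# Minimal SIR sweep: epidemics only take off when R0 = beta/gamma > 1,
# which is why a model run on a small population may need an inflated
# contagiousness value to show any epidemic at all.
# Illustrative values only -- not parameters from the book's ABM.

def sir_final_size(beta, gamma=0.1, n=1000, i0=5, days=365):
    """Euler-stepped SIR; returns the fraction of the population ever infected."""
    s, i, r = n - i0, float(i0), 0.0
    for _ in range(days):
        new_inf = beta * s * i / n
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
    return r / n

for beta in (0.05, 0.08, 0.1, 0.12, 0.2, 0.3):
    print(f"beta={beta:.2f}  R0={beta / 0.1:.1f}  "
          f"final attack rate={sir_final_size(beta):.2%}")
```

Below the threshold the simulated epidemic simply fizzles, which is presumably what the adjusted value is avoiding; the open question is what that adjustment does to every comparison downstream of it.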

My second concern is about whether the ABM crisis response needs to be self-consciously institutional rather than individual. The authors are certainly correct that policy makers will expect trustworthy working models from the ABM community. But, given the inevitable scarcity of resources, do we need to find ways to take the best (and justify that it is the best) from existing ABMs? Or is there a risk that an arbitrary ABM will get all the attention regardless of its flaws? How has this ABM related to other COVID modelling efforts, and how should those efforts best relate to it now?

My third concern is about science. When we say an ABM is a decision-support tool, what follows? Are we absolved from asking whether it is empirically congruent or whether, when policy makers heed it, we are contributing to good or to harm? This concern strikes me repeatedly. What is the positive evidence for a values-based architecture, and how might the ABM behave differently with another architecture? (ABM recognises parameter effects through sensitivity analysis but seems to take the absence of architecture effects for granted – Chattoe-Brown 2021.) One cannot claim it is wrong to call comparing models (pp. 331-252) validation, but the question then remains whether this ABM is actually disciplined by data. (Is it a problem, for example, that the modellers find 55 daily contacts realistic – p. 46 – when empirical research reports average values like 8 – Leung et al. 2017 – or 13 – Mossong et al. 2008?) I worry that, to be credible in the long term, the ABM community will have to be much tougher with itself about what it has actually demonstrated rather than what it has merely encouraged readers to believe.
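Such discipline need not be elaborate. As a sketch of the most basic congruence check, using only the figures just cited (in practice the simulated mean would be computed from the ABM’s own contact logs rather than typed in):

```python
# Minimal empirical-congruence check: how far is the model's contact
# rate from published survey benchmarks? Figures are those cited in
# this review; in practice the simulated mean would be computed from
# the ABM's own contact logs.

benchmarks = {
    "Leung et al. 2017": 8.0,     # mean daily contacts, Hong Kong survey
    "Mossong et al. 2008": 13.0,  # mean daily contacts, POLYMOD survey
}
sim_mean = 55.0  # daily contacts the modellers treat as realistic (p. 46)

for source, empirical in benchmarks.items():
    print(f"{source}: model assumes x{sim_mean / empirical:.1f} the empirical mean")
```

A discrepancy of four to seven times on a quantity that drives transmission directly seems, at minimum, to need explicit justification.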

My final concern involves empirical scale. For practical reasons, this ABM represents quite a small number of agents but also considers locations like a university. The UK statistical unit in which I live (a so-called Middle Layer Super Output Area, or MSOA) has a population of about 12,000. This is about ten times the number of agents represented in the ABM. This being so, it is firstly possible that a small fraction of an MSOA contains very few of the locations the ABM represents (one could virtually enumerate them by walking the streets) and, relatedly, that many of the people found in such an area may actually be passing through (visiting shops where they do not live, or parking and going elsewhere to work). I am thus not sure, even in an abstract sense, whether this ABM has a coherent notion of scale or boundaries for its population.

But these sources of unease have (I hope) a constructive side. Can the ABM community devise tools to automate the sensitivity analysis of models with 90-odd parameters? If we cannot agree on terminology, can we at least agree, for credibility’s sake, that some of our ABM evaluations need to involve real data? Can we establish ways, perhaps using experiments or gaming, to achieve intermediate validation of model elements like cognitive architectures? Can we clarify the logic of geographical areas and their populations to ensure coherent interpretation and scaling?
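On the first question, at least the screening stage is already mechanisable: designs such as Morris’s elementary effects rank parameters by influence at a cost of a few hundred runs rather than a combinatorial sweep. A sketch using the SALib library, where run_abm is a hypothetical stand-in for executing the model once and returning a scalar output of interest (say, peak infections):

```python
# Sketch: Morris elementary-effects screening over a 90-parameter
# model using SALib. run_abm() is a hypothetical placeholder for one
# simulation run; the toy response keeps the sketch runnable.
import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

N_PARAMS = 90
problem = {
    "num_vars": N_PARAMS,
    "names": [f"p{i}" for i in range(N_PARAMS)],
    "bounds": [[0.0, 1.0]] * N_PARAMS,  # normalised ranges, for illustration
}

def run_abm(params: np.ndarray) -> float:
    # Placeholder: call the real simulation here. The toy response
    # below makes p0 and p1 deliberately influential.
    return float(2.0 * params[0] + params[1] ** 2 + 0.01 * params[2:].sum())

# 10 trajectories -> 10 * (90 + 1) = 910 model runs, versus 2**90
# corner points for a full factorial design.
X = morris_sample.sample(problem, N=10, num_levels=4)
Y = np.array([run_abm(row) for row in X])

result = morris_analyze.analyze(problem, X, Y, num_levels=4)
ranked = sorted(zip(problem["names"], result["mu_star"]), key=lambda t: -t[1])
for name, mu_star in ranked[:5]:  # the five most influential parameters
    print(f"{name}: mu* = {mu_star:.3f}")
```

None of this replaces judgement about which outputs matter, but it would at least make “we checked sensitivity” a reproducible claim.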

However, whether or not my concerns are echoed (or added to) by others, this book is solid enough to agree (or disagree) with, and that makes it a valuable contribution to scientific progress in modelling crises.


References

Chattoe-Brown, E. (2021). Why Questions Like ‘Do Networks Matter?’ Matter to Methodology: How Agent-Based Modelling Makes it Possible to Answer Them. International Journal of Social Research Methodology, 24(4), pp. 429-442.

Leung, K., et al. (2017). Social Contact Patterns Relevant to the Spread of Respiratory Infectious Diseases in Hong Kong. Scientific Reports, 7, article 7974, 11 August.

Mossong, J., et al. (2008). Social Contacts and Mixing Patterns Relevant to the Spread of Infectious Diseases. PLoS Medicine, 5(3), article e74, pp. 0381-0391.