Figure - uploaded by Erika Palmer
Figure e: Checking the predictability of an elementary CA. Time runs vertically from the top of the picture; the purple cells show the simulation of the 'real' CA under its transition rule, using shade to represent a cell in state 0 or 1. The light blue (cyan) cells show the data, also using shade to represent a cell in state 0 or 1. Below the data cells, colour is used to show predictability (green: invariably predictable; orange: asymmetrically unpredictable; red: symmetrically unpredictable), and shade to indicate the L cells forming the subset of interest (lightest shade) and the cells affected by the edge of the simulation (darkest shade), allowing predictability to be evaluated in both the finite and the infinite CA case, the latter meaning that the darkest-shaded cells should be ignored. In this particular run, all but four of the 256 rules have been eliminated because they do not fit the data in the cyan cells.

Source publication
Article
This paper uses two thought experiments to argue that the complexity of the systems to which agent-based models (ABMs) are often applied is not the central source of difficulties ABMs have with prediction. We define various levels of predictability, and argue that insofar as path-dependency is a necessary attribute of a complex system, ruling out s...

Contexts in source publication

Context 1
... emphasis is important because there is still potential utility in modelling wicked systems in providing the scope for detecting or estimating when the current system (A, K and associated transition functions) is shifting away from its current basin of attraction. Figure : The problem of predicting in wicked systems with endogenous ontological novelty. The system for which data are available comprises TM states K and tape symbols A. Under certain conditions, a transition function f can pull a TM into a space where it can have states in K* and write symbols in A*, with one or more other functions g leading to a future state/symbol space defined by subsets of K* and A*. ...
Context 2
... distribution of outcomes of asymmetric unpredictability is not a probability distribution in the case of deterministic models, except in a Bayesian sense. Figure e shows this with an example set of predictions for a hypothetical CA with four possible states 1, 2, 3, 4 for each of four cells. Here, only one of the matching transition functions F_1 ... F_8 is the 'actual' transition function (i.e. the original data generator from which data were collected and then used to find the matching transition functions), and CAs are deterministic: the bottom-left cell doesn't have a 5/8 probability of being in a given state at time T, except that (in the absence of any other information) this is a reasonable 'degree of belief' that the cell is in that state at T, given that five of the eight matching transition functions make that prediction. ...
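The 'degree of belief' reading above can be made concrete with a small sketch (not the paper's code; the function name and state labels are illustrative): given the prediction each surviving transition function makes for a cell, the Bayesian degree of belief in a state is simply the fraction of those functions predicting it.

```python
from collections import Counter
from fractions import Fraction

def degree_of_belief(predictions):
    """Map each predicted state to the fraction of matching transition
    functions that predict it -- a 'degree of belief', not a physical
    probability, since the underlying CA is deterministic."""
    counts = Counter(predictions)
    return {state: Fraction(n, len(predictions)) for state, n in counts.items()}

# Five of eight matching transition functions predict state 1 for a cell:
beliefs = degree_of_belief([1, 1, 1, 1, 1, 2, 2, 3])
print(beliefs[1])  # 5/8
```

Exactly one of the eight functions is the 'actual' generator, so the cell's state at T is already determined; the 5/8 only summarises our ignorance about which function that is.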
Context 3
... demonstrate the principle and provide an explicit counter-example to the claim that complex systems are unpredictable, Figure e shows the output from a simulation that runs an elementary CA to get some data, and then tries to 'find' the transition rule used by exhaustively exploring all 256 transition rules, eliminating those rules that are not consistent with the data, and then plotting the predictability of cells as invariably predictable (green), asymmetrically unpredictable (orange) and symmetrically unpredictable (red). (Individual cell omissive predictability is not an option when there are only two states.) ...
Context 4
... cells shown in light blue in Figure e are 'data' cells, and there are two consecutive timesteps in which data are recorded. A minimum of two snapshots is needed to determine the transition rule using a method that relies on eliminating rules that do not reproduce later snapshots given earlier ones. ...
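The elimination procedure described above can be sketched in a few lines (a minimal illustration, not the authors' simulation: it assumes a finite CA with fixed zero boundaries, whereas the figure also treats edge effects explicitly). A candidate Wolfram rule survives if it maps the earlier data snapshot onto the later one.

```python
def eca_step(cells, rule):
    """One timestep of an elementary CA under a Wolfram rule (0-255),
    treating cells beyond the edge as 0."""
    n = len(cells)
    out = []
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighbourhood = (left << 2) | (cells[i] << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)  # that neighbourhood's rule bit
    return out

def consistent_rules(snapshot_t, snapshot_t1):
    """All of the 256 elementary rules that reproduce the second data
    snapshot from the first -- i.e. the rules NOT eliminated by the data."""
    return [r for r in range(256)
            if eca_step(snapshot_t, r) == snapshot_t1]
```

With only two consecutive snapshots, several rules typically survive; the 'real' rule is always among them, and cells on which all survivors agree are the invariably predictable ones.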
Context 5
... first snapshot has many more cells in it than the second. The width of the second snapshot is controlled by a parameter, max-data. ...
Context 6
... the light-shaded region of the image (depicting the L cells) there are nevertheless cells coloured green throughout the run, illustrating that in this run at least, some cells are invariably predictable, even several time steps after the data. ... In Figure e, the value of max-data is increased over a range of settings, with a number of replications of each setting, to show the relationship it has with the number of rules eliminated (n-eliminated). Of particular interest is the case where 255 rules are eliminated, as this shows when the 'real' rule has been found, and all cells are invariably predictable. ...
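The relationship between data width and n-eliminated can be illustrated with a self-contained sketch (hypothetical parameter values; the 'real' rule is set to 110 purely for illustration): wider snapshots expose more distinct neighbourhoods, so on average more of the 256 rules are ruled out.

```python
import random

def eca_step(cells, rule):
    """One elementary-CA timestep with fixed zero boundaries."""
    n = len(cells)
    return [(rule >> (((cells[i - 1] if i > 0 else 0) << 2)
                      | (cells[i] << 1)
                      | (cells[i + 1] if i < n - 1 else 0))) & 1
            for i in range(n)]

def mean_n_eliminated(width, real_rule=110, replications=30, seed=0):
    """Average number of the 256 rules eliminated by one pair of
    consecutive snapshots of the given width, over random initial rows."""
    rng = random.Random(seed)
    total = 0
    for _ in range(replications):
        first = [rng.randint(0, 1) for _ in range(width)]
        second = eca_step(first, real_rule)
        survivors = sum(1 for r in range(256)
                        if eca_step(first, r) == second)
        total += 256 - survivors
    return total / replications
```

Because the 'real' rule always survives, at most 255 rules can ever be eliminated; increasing the snapshot width moves the average towards that ceiling, mirroring the max-data / n-eliminated relationship described above.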
Context 7
... & Buvel 1984; Schönfisch & de Roos 1999), with the argument that many of the 'emergent' effects of synchronous CAs are artefacts of synchrony. Even so, there are special classes of asynchronous CA that have been proven capable of universal computation in a similar manner to that of the rule 110 ECA (Yamashita et al.). ... It is not difficult to write an ABM that behaves in an analogous way. ...
Context 8
... A* and K* thus represent the endogenous ontological uncertainty, but we have no data about them in the N snapshots used to determine which models will generate our predictions. Hence, even though we might narrow down the set of transition functions and orderings of TM activities that reproduce the N snapshots on the tape, all options for transition functions involving states in K* and reading or writing symbols on the tape in A* are open (see Figure e). Assuming the emergent new system is expected only to involve states and symbols that are proper subsets of K* and A*, then even though, strictly speaking, with respect to A the whole system is omissively predictable, with respect to the subset of A* that would usefully give us information about the expected new state of the system, the tape is symmetrically unpredictable. ...

Citations

... Agent-based simulation (ABS) is a computational method of capturing (spatio-)temporal dynamics occurring for multiple agents [54,55]. An ABS more often than not contains two main components: an environment and a population of agents, which can be homogeneous or heterogeneous [56,57]. ...
Preprint
Recent pandemics have highlighted vulnerabilities in our global economic systems, especially supply chains. The possibility of future pandemics raises a dilemma for business owners between short-term profitability and long-term supply chain resilience planning. In this study, we propose a novel agent-based simulation model integrating an extended Susceptible-Infected-Recovered (SIR) epidemiological model and a supply-and-demand economic model to evaluate supply chain resilience strategies during pandemics. Using this model, we explore a range of supply chain resilience strategies under pandemic scenarios using in silico experiments. We find that a balanced approach to supply chain resilience performs better in both pandemic and non-pandemic times compared to extreme strategies, highlighting the importance of preparedness in the form of better supply chain resilience. However, our analysis shows that the exact supply chain resilience strategy is hard to obtain for each firm and is relatively sensitive to the exact profile of the pandemic and the economic state at the beginning of the pandemic. As such, we use a machine learning model that draws on the agent-based simulation to estimate a near-optimal supply chain resilience strategy for a firm. The proposed model offers insights for policymakers and businesses to enhance supply chain resilience in the face of future pandemics, contributing to understanding the trade-offs between short-term gains and long-term sustainability in supply chain management before and during pandemics.
... As Solomon et al. (2020) note, "models that do the most to advance research and management focus on elucidating concepts and key mechanisms rather than explicitly predicting or explaining observations." Indeed, other recent work on modeling wicked problems, including SES, recognizes the problems with prediction and recommends that modelers be cautious (Polhill et al., 2021; Edmonds, 2023). In short, as Erica Thompson, author of Escape from Model Land, has noted, "models are primarily useful as metaphors and aids to thinking, rather than prediction engines" (qtd in Herndon, 2023). ...
Article
Key lessons about and limits to social-ecological systems (SES) modeling are widely available and frustratingly consistent over time. Prominent challenges include outdated perspectives about systems and models along with persistent disciplinary hegemony. The inherent complexity in SES means that an emphasis on discrete prediction is misplaced and has potentially reduced model efficacy for decision-making. Although computer models are definitely the tool to use to identify the complex relationships within SES, humans are messy and hence the ‘social’ in SES is often ignored, glossed over, or reduced to simplistic economic or demographic variables. This combination of factors has perpetuated biases in what is worth pursuing and/or publishing. In (re)visiting issues in SES modeling, including debates about model capabilities, data selection, and challenges in working across disciplinary lines, this reflection explores how the author’s experience aligns with extant literature as well as raises issues about what is absent from that body of work. The available lessons suggest that scholars and practitioners need to re-think how, why, and when to employ SES modeling in regulatory or other decision-making contexts.
... In recent years, prediction has progressively been brought to the foreground of the methodological literature in agent-based social simulation. Practitioners have discussed, among other things, the connection between prediction and explanation (Anzola, 2021a; Epstein, 2008; Thompson & Derr, 2009; Troitzsch, 2009), the conceptual and practical constraints on prediction (Axtell & Shaheen, 2021; De Matos Fernandes & Keijzer, 2020; Hassan et al., 2013; Polhill et al., 2021), the role of prediction in decision-making (Steinmann et al., 2020) and even what it means to successfully use agent-based models to predict social phenomena (Edmonds, Le Page et al., 2019). ...
... Much of the literature on prediction in agent-based modelling centres on clarifying the nature and role of prediction in social simulation (e.g. Edmonds, Le Page et al., 2019; Epstein, 2008; Hassan et al., 2013; Polhill et al., 2021; Thompson & Derr, 2009; Troitzsch, 2009). The concept of prediction itself has received significant attention, in part, because of the historic lack of consensus about whether social disciplines should concern themselves with prediction and, if so, whether it should follow the same procedures and be given the same status it has in natural disciplines (cf., discussions in political science (e.g. ...
... Data accommodation needs during prediction have been accounted for differently by agent-based social simulation practitioners. Polhill et al. (2021), for example, discuss how data accommodation increases tractability, for their focus is on the computational aspect of prediction. Alternatively, Steinmann et al. (2020), reflecting upon decision-making in the context of COVID-19, address the effect of paucity of data on parameterisation. ...
Article
Researchers have become increasingly interested in the potential use of agent-based modelling for the prediction of social phenomena, motivated by the desire, first, to further cement the method’s scientific status and, second, to participate in other scenarios, particularly in the aid of decision-making. This article contributes to the current discussion on prediction from the perspective of the disciplinary organisation of agent-based social simulation. It addresses conceptual and practical challenges pertaining to the community of practitioners, rather than individual instances of modelling. As such, it provides recommendations that invite both collective critical discussion and cooperation. The first two sections review conceptual challenges associated with the concept of prediction and its instantiation in the computational modelling of complex social phenomena. They identify methodological gaps and disagreements that warrant further analysis. The second two sections consider practical challenges related to the lack of a prediction framework that, on one hand, gives meaning and accommodates everyday prediction practices and, on the other hand, establishes more clearly the connection between prediction and other epistemic goals. This coordination at the practical level, it is claimed, might help to better position prediction with agent-based modelling within the larger social science’s methodological landscape.
... The input data required for ABMs to achieve the continuity with spatial LUC patterns and historical event sequencing required for causal attribution would be substantial. ABMs that have achieved such empirical validity are typically applied for prediction or scenario analysis rather than historical counterfactual analysis (Polhill et al., 2021). ...
Article
Land change models are important tools for land systems analysis, but their potential for causal inference using a counterfactual approach remains underdeveloped. This paper reviews the state of counterfactual land change modeling with the intent of causal inference. All of the reviewed studies promoted the value of creating 'counterfactual worlds' via simulation modeling to untangle complex causation in land systems in order to assess the effects of specific interventions and/or historical events. Several models used counterfactual analysis to challenge prevailing assumptions motivating past policy interventions, while others isolated the spatial heterogeneity of policy effects. The review also highlights methodological limitations and proposes best practices specific to counterfactual land change modeling for causal inference, such as ensemble approaches and multiple calibration-validation iterations. Counterfactual modeling is still underdeveloped in land system science, but it holds promise for advancing causal inference for some of the most challenging-to-study land change phenomena.
... This scenario that policymakers face makes simulation in general and agent-based modeling in particular suitable tools. Even more so when considering the growing availability of data, computational power and the need for interdisciplinary teams (Polhill et al. 2021), because simulation provides teams with a platform for communication, among other things: a common language that functions as a repository for concepts, understandings and behaviors that necessarily will have to function together to produce results. Moreover, in solving explicitly designed problems systemically, iteratively, over trial and error, it helps foster communication among participants. ...
... This understanding that policymaking may involve complex systems implies that they are hard to predict (Mitchell 2011; Polhill et al. 2021). Furthermore, full understanding of a single mechanism does not guarantee the comprehension of the whole: "the whole becomes not only more but very different from the sum of its parts" (Anderson 1972, 395). ...
Chapter
This chapter describes, justifies, presents the pros and cons of and illustrates the use of simulation modeling as a handy, cost-effective and agile tool for policymakers. Simulation modeling is flexible enough to accommodate different levels of detail, precision and time frameworks. It also serves the purpose of a concrete communication platform that facilitates scenario analysis, what-if alternatives and forward looking. We specifically define agent-based modeling within the larger simulation domain, provide a brief overview of other computation modeling methodologies and discuss the concepts of multiple models, verification, validation and calibration. The conceptual framework section closes with a discussion of advantages and disadvantages of using simulation modeling for policy at various stages of implementation. Finally, we present a panorama of actual applications of simulation modeling in policy, with an emphasis on economic analysis.
... One of the open issues of agent-based modeling is the systematic analysis of sensitivity to initial conditions in ABMs [3]. This affects model reliability measures regarding predictive power and accuracy [28]. The challenge of finding alternative means of investigating initial-condition sensitivity was raised by [7]. ...
Chapter
Agent-based models are powerful tools for understanding complex systems. Their accuracy and capacity for prediction, however, are dependent on the initial conditions of the model during simulation. Current agent-based modeling endeavors show a lack of systematic analysis of sensitivity to initial conditions, and renewed interest is given to this issue. In this paper, we hypothesize that we can analyze the effect of initial conditions in agent-based models through the positive and negative feedback behaviors of individual agents. For this, we present a systems theory interpretation of local agent behaviors based on closed loops. Our approach illustrates how the initial conditions (of the whole model or of individual agents) determine the presence of positive or negative feedback agents in the agent-based model, and that their numbers influence the steady state of the model. We perform a proof-of-concept analysis on a two-species butterfly-effect agent-based model. Keywords: Agent-based model; Initial conditions; Sensitivity analysis
... Predicting how individuals engage in protective behaviours in social simulations is mostly associated with high error. Indeed, point prediction with a simple rule in complex social systems cannot be inherently accurate (Polhill et al., 2021; Hofman et al., 2017). In our case, predictive power is limited by the individual heterogeneity that our frugal rule does not capture. ...
Article
The COVID-19 epidemic highlighted the necessity to integrate dynamic human behaviour change into infectious disease transmission models. The adoption of health protective behaviour, such as handwashing or staying at home, depends on both epidemiological and personal variables. However, only a few models have been proposed in the recent literature to account for behavioural change in response to the health threat over time. This study aims to estimate the relevance of TELL ME, a simple and frugal agent-based model developed following the 2009 H1N1 outbreak to explain individual engagement in health protective behaviours in epidemic times and how communication can influence this. Basically, TELL ME includes a behavioural rule to simulate individual decisions to adopt health protective behaviours. To test this rule, we used behavioural data from a series of 12 cross-sectional surveys in France over a 6-month period (May to November 2020). Samples were representative of the French population (N=24,003). We found the TELL ME behavioural rule to be associated with a moderate to high error rate in representing the adoption of behaviours, indicating that parameter values are not constant over time and that other key variables influence individual decisions. These results highlight the crucial need for longitudinal behavioural data to better calibrate epidemiological models accounting for public responses to infectious disease threats.
... In this sense, unpredictability is not a result of emergence, but part of its cause. However, there is also the possibility of fundamentally new elements appearing in what one is trying to simulate: elements that go beyond what is in any simulation representing it, as Polhill et al. (2021) point out. ...
Article
This paper looks at the tension between the desire to claim predictive ability for Agent-Based Models (ABMs) and its extreme difficulty for social and ecological systems, suggesting that this is the main cause for the continuance of a rhetoric of prediction that is at odds with what is achievable. Following others, it recommends that it is better to avoid giving the impression of predictive ability until this has been iteratively and independently verified, due to the danger of suggesting more than is empirically warranted, especially in non-modellers. It notes that there is a restricted and technical context where prediction is useful, that of meta-modelling – when we are trying to explain and understand our own simulation models. If one is going to claim prediction, then a lot more care needs to be taken, implying minimal standards in practice and transparent honesty about the empirical track record – the over-enthusiastic claiming of prediction in casual ways needs to cease.
... mask mandates, lockdowns) on defined population health outcomes [6,[10][11][12]. Whilst the desire from the public and policy-makers is often that models be predictive, realistically, prediction at fine-scale within complex, dynamic systems is a difficult if not impossible challenge [13]. Models are therefore perhaps more usefully embraced as tools for forecasting the likely pattern of outcomes that might emerge under various conditions over time [6]. ...
Article
The COVID-19 pandemic has brought the combined disciplines of public health, infectious disease and policy modelling squarely into the spotlight. Never before have decisions regarding public health measures and their impacts been such a topic of international deliberation, from the level of individuals and communities through to global leaders. Nor have models—developed at rapid pace and often in the absence of complete information—ever been so central to the decision-making process. However, after nearly 3 years of experience with modelling, policy-makers need to be more confident about which models will be most helpful to support them when taking public health decisions, and modellers need to better understand the factors that will lead to successful model adoption and utilization. We present a three-stage framework for achieving these ends.
... Indeed, policymakers continue to face the difficulties and intricacies of tackling complex societal issues [2]. The magnitude, heterogeneity, and idiosyncrasies of societal problems make modelling purposeful questions a hard, wicked task [3,4]. Moreover, translating quantitative and computational methodologies, along with their uncertainties and embedded assumptions, into simple (at times one-page) narratives for policymakers' consumption may prove to be challenging [5]. ...