Centre National de la Recherche Scientifique, Université Toulouse, France
Universidad de Cuenca, Ecuador
Chapter 1 presents the method of maximum likelihood estimation, which offers a unified approach to estimating the parameter values sought. The basic concept is to build a likelihood function as the joint probability density function of the observable random variables. The method has attractive properties: sufficiency (providing as much information as possible about the true values of the unknown parameters), consistency with the unknown values, asymptotic efficiency and parameter invariance. Some difficulties arise when it is applied to complex systems. The method provides two major approaches to building confidence intervals: the asymptotic method and the likelihood-ratio method.
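As a minimal illustration of the idea (not taken from the book), a likelihood function for an assumed exponential model can be maximised numerically; the data and the grid search below are invented for the sketch:

```python
import math

def neg_log_likelihood(lam, data):
    # Exponential model: f(x; lam) = lam * exp(-lam * x), so
    # log L(lam) = sum(log lam - lam * x) over the observations
    return -sum(math.log(lam) - lam * x for x in data)

data = [0.8, 1.3, 0.4, 2.1, 0.9]          # invented observations
grid = [0.01 * k for k in range(1, 500)]  # candidate rates 0.01 .. 4.99
lam_hat = min(grid, key=lambda lam: neg_log_likelihood(lam, data))
# For the exponential model the closed-form MLE is n / sum(data)
mle_closed_form = len(data) / sum(data)
```

A proper optimiser would replace the grid in practice; the point is only that the value maximising the likelihood coincides with the analytical estimate.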
Chapter 2: The purpose of this chapter is to formulate generic models for a category of objects by quantifying the similarities and differences between their different types. These models are useful when numerical data are available with which parameters can be evaluated in a formal framework. To do this, appropriate moments are computed from the available data and the uncertain parameters are adjusted until solutions within tolerable ranges are reached. The Method of Simulated Moments (MSM) is useful when the errors do not follow well-established distributions and the models include stochastic processes, and it can be applied to various types of data.
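A sketch of the simulated-moments idea under toy assumptions (a one-parameter stochastic model, matching only the first moment; all names and values are hypothetical):

```python
import random
import statistics

def simulate(theta, n, seed):
    # Toy stochastic model, assumed for illustration: X ~ Normal(theta, 1)
    rng = random.Random(seed)
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def moment_distance(theta, empirical_mean, n=2000, seed=42):
    # Distance between the simulated first moment and the empirical one
    sim = simulate(theta, n, seed)
    return (statistics.mean(sim) - empirical_mean) ** 2

empirical_mean = 3.0                       # pretend this moment came from data
grid = [0.1 * k for k in range(0, 61)]     # candidate parameters 0.0 .. 6.0
theta_hat = min(grid, key=lambda t: moment_distance(t, empirical_mean))
```

Real applications match several moments at once and weight them; the fixed seed here is the usual trick of holding the random draws constant while the parameter varies.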
Chapter 3 deals with modelling with structural equations. The approach is based on an analysis of the covariance of the data, which facilitates the inclusion of directly observed variables and serves to evaluate parameters and measurement errors simultaneously by controlling the correlations between variables and errors. Its purpose is to feed back into, and thereby strengthen, dynamic modelling. The approach allows testing causal relationships that include intangible or latent variables; its basis lies in structural equations that specify causal relationships between variables. In general, Structural Equation Modelling allows a flow of innovation and extension that includes multi-level modelling. However, conceptual difficulties remain in formulating causal hypotheses that operate under specific forms of a covariance matrix.
Chapter 4 starts from an idea of Forrester and focuses on the presence of noise in modelling results, which undermines a model's predictive capacity. Some systems replicate prototype behaviour only under particular conditions or by coincidence; others generate behaviour that can be replicated, but with some error. What, then, is the role of replication in the model validation process? Time-series data alone may be of little help in answering this question. Whether to apply filtering or other forms of state resetting depends on the nature of the system's behaviour, and the techniques chosen may be conditioned by the interpretation of parameter estimates.
Chapter 5: Decision-making processes cannot always provide all the desired data. Model calibration may then serve as a tool to estimate model parameters, despite the constraints this imposes on building a useful overview that explains the variability of model results or supports political negotiations. The main point is the use of emerging evidence that combines the benefits of traditional calibration and of sensitivity analysis, benefits that strong distributional assumptions would otherwise miss. Modelling allows empirical data to be sampled, and the resulting distribution across the parameter space can then be adequately assessed.
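The sampling idea can be sketched as calibration by filtering, here on an assumed toy compounding model (all names and values are illustrative, not from the chapter):

```python
import random

def model(growth_rate, steps=10, x0=1.0):
    # Toy model: a single stock compounding at a fixed growth rate
    x = x0
    for _ in range(steps):
        x += growth_rate * x
    return x

observed = model(0.05)  # stand-in for an empirically observed outcome
rng = random.Random(0)
# Sample candidate parameters and keep those whose output matches the
# observation within a tolerance; the survivors approximate the
# calibrated parameter distribution.
accepted = [g for g in (rng.uniform(0.0, 0.1) for _ in range(1000))
            if abs(model(g) - observed) < 0.1]
```

The spread of `accepted` around the true value then carries the sensitivity information directly, without assuming a distributional form for the errors.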
Chapter 6: An important tool for analysing system behaviour is presented here: automatic pattern recognition. The chapter develops its basic principles with respect to the nature and behaviour of the system, allowing point-to-point or global applications. Illustrative examples serve as a guide for the reader, complemented by notes on points where care must be taken in interpretation. The chapter highlights the value of automatic interpretation over the visualisation currently used as a qualitative evaluation method, a subject rarely treated in the classical literature.
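A toy illustration of the principle (not the chapter's algorithm): classifying a simulated behaviour mode from the sign structure of its first differences:

```python
def classify(series):
    # Label a time series as "growth", "decline" or "oscillation"
    # from the signs of its successive differences (toy rule).
    diffs = [b - a for a, b in zip(series, series[1:])]
    sign_changes = sum(1 for a, b in zip(diffs, diffs[1:]) if a * b < 0)
    if sign_changes >= 2:
        return "oscillation"
    return "growth" if series[-1] > series[0] else "decline"

print(classify([1, 2, 4, 8, 16]))    # growth
print(classify([5, 3, 6, 2, 7, 1]))  # oscillation
```

Real pattern-recognition schemes use richer features (curvature, inflection points, equilibration), but the principle of replacing visual judgement with a reproducible rule is the same.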
Chapter 7 presents the usefulness of two non-equivalent methods for analysing the eigenvalues of mathematically defined linear, and even non-linear, models: Eigenvalue Elasticity Analysis (EEA) and Dynamic Decomposition Weight Analysis (DDWA), both designed to identify the most prominent dynamics of a system. However, a synthesis of the insights gained from using such methods would have been useful: rather than describing the loop effects of the example, explaining what a newcomer should look for, and providing the loop descriptions without the reader having to consult the electronic model and the related textbook sections for the examples shown. In Figure 7.5 we suspect an error: the Y graph ought to be the K graph (which would then agree with Figure 7.13b). The conclusion is a fine summary of the advantages, limitations and perspectives of the methods, of which ease of analysis, the linearity requirement and adaptability, respectively, are the most important.
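For a newcomer, the core EEA computation can be sketched on a hypothetical 2x2 linearised system (not one of the chapter's examples): perturb a link gain, recompute the eigenvalues, and form the relative sensitivity:

```python
import math

def eigenvalues_2x2(a, b, c, d):
    # Eigenvalues of [[a, b], [c, d]] via trace and determinant
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det
    s = math.sqrt(disc)  # real eigenvalues assumed for this toy system
    return (tr + s) / 2, (tr - s) / 2

def jacobian(g):
    # Hypothetical linearised system dx/dt = A x, with gain g on one causal link
    return (-1.0, g, -0.5, -0.2)

g0, h = 0.1, 1e-6
lam0 = eigenvalues_2x2(*jacobian(g0))[0]
lam1 = eigenvalues_2x2(*jacobian(g0 + h))[0]
# Elasticity: relative eigenvalue change per relative change in the gain
elasticity = (lam1 - lam0) / h * (g0 / lam0)
```

Link gains with large elasticities mark the loops that dominate the mode of behaviour associated with that eigenvalue, which is what EEA uses to rank loop influence.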
Chapter 8 deals with optimisation in policy space, its stakes and its practice, offering the (to a lay reader) counterintuitive idea that optimisation can be used to evaluate non-optimal policies, errors and uncertainties. The chapter is firmly grounded in the assumption that perfect optimisation is impossible, and lays out the various postulates one has to integrate into a model optimisation and analysis approach. The SOPS method (Stochastic Optimisation in Policy Space) is presented and applied: through examples, notably a fishery, the chapter moves from closed deterministic systems to nonlinear stochastic ones, taking in along the way the impacts of errors in measured information and in the tested policy itself, including the impact of uncertainty on policy effects.
Chapter 9 focuses on incremental, adaptive decision processes facing uncertainty. The authors use decision trees to lay out the exponentially growing number of scenarios across decision and event steps and, by backward induction, eliminate major portions of these trees to simplify the analysis. The hybrid framework, which couples a decision-and-event scenario interface with a simulation model, is a way to explore the various possible paths based on the memory of past events and to establish which decision-and-event path is the most relevant, thereby providing clues about decision efficiency, at least as long as no unknown events occur.
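The backward-induction step can be sketched on a hypothetical two-stage tree (payoffs and probabilities invented for illustration): decision nodes take the best branch, chance nodes take the probability-weighted average, evaluated from the leaves back to the root:

```python
def value(node):
    # Backward induction: leaves return payoffs, decision nodes maximise,
    # chance nodes take expected values over their branches.
    kind = node["kind"]
    if kind == "leaf":
        return node["payoff"]
    if kind == "decision":
        return max(value(child) for child in node["branches"])
    return sum(p * value(child) for p, child in node["branches"])  # chance

tree = {
    "kind": "decision",
    "branches": [
        {"kind": "chance", "branches": [
            (0.6, {"kind": "leaf", "payoff": 100}),
            (0.4, {"kind": "leaf", "payoff": -20}),
        ]},
        {"kind": "leaf", "payoff": 40},
    ],
}
best = value(tree)  # expected value of the optimal first decision
```

Here the risky branch is worth 0.6·100 + 0.4·(−20) = 52, which beats the safe payoff of 40; pruning the dominated branches is what keeps the exponential tree tractable.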
Chapter 10 builds on the preceding chapter by presenting two procedures. The first is a seven-step sequential procedure called the decision tree approach, combining deterministic system dynamics modelling, uncertainty model building, Monte Carlo explorations, approximation estimates and, finally, backward induction. Once the real managerial choices are determined, this procedure explores the various sequences of decisions and model outputs, including uncertainties, until it yields the most efficient path according to the single criterion chosen. The second method is a diffusion approximation approach that can integrate fluctuating risks in a more "continuous" way along the simulation steps, but is restricted to continuous and less complex scenario options; it is referenced but unfortunately not presented in this chapter.
Chapter 11 deals again with optimisation through prescriptive models, both linear and non-linear (p. 338; note that pure optimisation does exist in engineering, and that modelling is always an approximation of reality, irrespective of the approach). After an example of a non-linear but deterministic model, the chapter presents the equivalent optimisation process on non-linear stochastic models, with or without perfect data.
Finally, chapter 12 is dedicated to multiple-stakeholder problems governed by non-cooperative dynamic games. It focuses, however, on games in which all actors are fully rational and know everything about the system, including the fact that the other actors are rational too, leading ultimately to a Nash equilibrium. Again, such differential games are solved with optimisation tools that reach at least one of the equilibrium points, each being a balance between competing actors. One may certainly question the rationality assumption, and even more so the actors' full knowledge.
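A minimal sketch of the equilibrium idea, using iterated best responses on an invented 2x2 game rather than the chapter's differential games:

```python
# Hypothetical 2x2 game: each entry is (payoff_row, payoff_col).
# The payoffs are prisoner's-dilemma-like, so a pure Nash equilibrium exists.
payoffs = [[(3, 3), (0, 5)],
           [(5, 0), (1, 1)]]

def best_response_row(col):
    # Row player's best action given the column player's action
    return max(range(2), key=lambda r: payoffs[r][col][0])

def best_response_col(row):
    # Column player's best action given the row player's action
    return max(range(2), key=lambda c: payoffs[row][c][1])

row, col = 0, 0
for _ in range(10):  # iterate until the responses stabilise
    row, col = best_response_row(col), best_response_col(row)
nash = (row, col)    # a fixed point: each action is a best reply to the other
```

The fixed point (both defect, payoff 1 each) illustrates the chapter's point that the equilibrium reached under full rationality need not be collectively desirable.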
The great benefit of the book is that it covers a wide range of useful analytical tools. Stating that such tools have great potential because of their real-world applications illustrates the fact that modelling real-world problems needs to address information uncertainty rather than optimisation uncertainty: roughly, the main issue is getting access to the very model this book sets out to analyse and optimise. The otherwise useful presentation of the various cases with which dynamic system analysis can be assessed would, however, have been more convincing without the promotion of commercial modelling software: the Vensim tool is apparently used systematically in all chapters. The book brings together, in an organised and systematic way, available and proven techniques for the modelling of dynamic systems. Reading it requires prior knowledge of various topics, which does not prevent it from serving as a basis for training students interested in modelling, especially of techno-social systems.
In conclusion, given the consistency of its content and its specific orientation towards students interested in analytical modelling, not everyone will want to read this book from beginning to end, but many will profit from consulting individual chapters.
© Copyright JASSS, 2017