Calculated Surprises: A Philosophy of Computer Simulation — A Review

Johannes Lenhard, Calculated Surprises: A Philosophy of Computer Simulation, Oxford University Press, 2019, 256pp., £47.99 (hbk), ISBN 9780190873288.

Reviewed by Kevin B Korb, Monash University

First published in the Notre Dame Philosophical Reviews.

In the early days of electronic computers there was considerable doubt about their value to society, including a debate about whether they contributed to economic productivity at all (Brynjolfsson, 1993). A common view was that they made computations faster, but that they were not going to contribute anything fundamentally new to society. They were glorified punchcard machines. Such was the thinking behind such infamous predictions as that attributed to the president of IBM in 1943 that there might be a world market for five computers. Of course, by now such views seem quaintly anachronistic. Quantum computers offer the potential for exponential increases in computing power – and “nothing more” – yet they are the only way hard encryption is ever likely to be broken. Computers and the internet are all the evidence needed that some qualitative differences are breached by sufficiently many quantitative steps.

While these general questions have been resolved, the debate still echoes elsewhere, including in the philosophy of simulation. Some insist that the role of scientific simulation demands a radically new epistemology, whereas others assert that simulation, while providing new techniques, changes nothing fundamental. This is the debate Johannes Lenhard engages in Calculated Surprises.

Lenhard lands on the side of a new epistemology for simulation, while not landing very far from the divide. Rather than claiming, as some have, that there is some one special feature of simulation that demands this new epistemology, he berates those who focus primarily on this or that specific feature that appears special; for Lenhard, the significant features are only special together. Those features are: the ability to experiment with complex chaotic systems, the ability to visualize simulations and interact with them in real time, the plasticity of computer simulations (the ability to reconfigure them structurally and parametrically), and their opacity, that is, our difficulty in comprehending them. It is the unique combination of all these new features which forces us onto new epistemological terrain.

More exactly, Lenhard’s central thesis is that this combination means simulation is a new, transformative kind of mathematical modeling. To see what the unique combination produces, one needs to consider the full range of features, and therefore also the full range of kinds of computer simulation. Focusing only on a single type of simulation is as limiting as focusing on a single feature, per Lenhard. For example, much existing work exclusively considers models using difference equation approximations of dynamic systems, such as climate models. But conclusions reached on that basis are likely to overlook the rich diversity of modeling characterized by such methods as Cellular Automata (CA), discrete event simulation, Agent-Based Modeling (ABM), neural networks, Bayesian networks, etc.

Striking the right level of generality in treating simulation is important. Clearly, one can be either too specific or too general. In this moderate stance, Lenhard is surely right.

Plausibly, the class of simulations is bound together by family resemblance, rather than by some clean set of necessary and sufficient conditions. It is a pity, then, that Lenhard rejects out of hand any consideration of stochasticity as an important feature of simulation. He says, reasonably, that some sacrifices have to be made (“even Odysseus had to sacrifice six of his crew”). And it’s true that some simulations are strictly deterministic, not even using pseudo-indeterminism (i.e., pseudo-randomness), such as many CA. But it’s also true that stochastic methods are key to most of the important simulations in science. Furthermore, they have opened up genuinely new varieties of investigation, including all the varieties of Monte Carlo estimation, and are essential for meaningful Artificial Life and ABMs. This is a major and unhappy omission in Lenhard’s study.

One of the aspects of simulation Lenhard definitely gets right is the iterative and exploratory nature of much of it, emphasizing the process of simulation modeling. The ease of performing simulation experiments, compared to the expense and difficulty of experiments in real life, doesn’t just allow for millions of experiments to be run per setup (routinely driving confidence intervals of estimated values to negligible sizes, assuming we’re talking about stochastic simulations), but also allows early simulation runs to inform the redesign or reconfiguration of later simulations, in an exploratory interaction of experimenter and experiment. Instead of simply relying on the outcomes of a few experimental setups to provide clear evidence for or against some theory driving the experiment, simulation allows for an iterative development of the model, with early experiments correcting the trajectory of the overall program. This underwrites much of the “autonomy” of simulation from theory. If a theory behind a simulation is incomplete, or simply in part mistaken, simulation experiments may nevertheless direct the research program, with feedback from real-world observations, expert opinion, or subsequent efforts to repair the theory. As Lenhard writes, in simulation “scientific ways of proceeding draw close to engineering” (p. 214).
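To make the point about cheap replication concrete, here is a minimal sketch (my illustration, not Lenhard’s) of a stochastic simulation estimated by Monte Carlo: as the number of runs per setup grows, the confidence interval on the estimated quantity shrinks toward negligible width. The “experiment” itself is an arbitrary stand-in.

```python
import math
import random

def run_experiment() -> bool:
    """A stand-in stochastic simulation: did an arbitrary 'event' occur on this run?"""
    return sum(random.random() for _ in range(10)) > 6.0

def estimate(n_runs: int, seed: int = 0) -> tuple:
    """Monte Carlo estimate of the event's probability, with a ~95% confidence half-width."""
    random.seed(seed)
    successes = sum(run_experiment() for _ in range(n_runs))
    p_hat = successes / n_runs
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_runs)
    return p_hat, half_width

for n in (100, 10_000, 1_000_000):
    p_hat, half_width = estimate(n)
    print(f"{n:>9} runs: {p_hat:.4f} +/- {half_width:.4f}")
```

Going from 100 runs to a million shrinks the interval by a factor of 100, which is the kind of cheap precision Lenhard has in mind.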

Indeed, Lenhard points out that simulation science requires an iterative development of models. In many cases, the theory implemented in a simulation is very far from being sufficient even to provide a qualitative prediction of a simulation’s behavior. In one example given, Landman’s simulation of the development of a gold nanowire contradicted the underlying theory; only after the simulation produced it was a physical experiment run which confirmed the phenomenon (Landman, 2001). The underlying physical theory inspired the simulation, but the simulation itself forced further theoretical development. This aspect of simulation science explodes the traditional strict distinction in the philosophy of science between contexts of discovery and justification. This distinction may be of analytic value, for example when identifying Bayesian priors and posteriors in an inductive inference, but in simulation practice the contexts themselves of discovery and justification are one and the same. To be sure, Lakatos’s concept of scientific research programs throwing up anomalies (Lakatos, 1982) and overcoming them already weakens the distinction, but in simulation science the necessity of combined discovery and justification is ever present.

In connection with iterative development, scientific simulation has converged even more closely with engineering, widely adopting the “Spiral Model” for agile software development, which is precisely an iterative development process set in opposition to one-shot, severe tests of theoretical (program) correctness, i.e., in opposition to monolithic software QA testing. The Spiral loops through: entertaining a new (small) requirement, designing and coding to fulfill the requirement, testing the hoped-for fulfillment, and then looping back for a new requirement. This equivalence of process makes good sense given that simulations are software programs. To better understand simulation methods as scientific processes, a deeper exploration of this equivalence than Lenhard provides would be useful.
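A schematic sketch of such a loop, in Python; this is my own illustration rather than anything from the book, and the “simulation”, its parameters, and the misfit measure are placeholders. Each cycle entertains one small change, runs the model, tests the result, and keeps the change only if it helps.

```python
import random

def simulate(params: dict) -> float:
    """Placeholder simulation: map parameters to a predicted quantity."""
    return 2.0 * params["a"] + params["b"]

def misfit(params: dict, observed: float) -> float:
    """How far the simulation's prediction is from an observed target value."""
    return abs(simulate(params) - observed)

def spiral_develop(observed: float, cycles: int = 100) -> dict:
    """Iterative ('spiral') development: tweak, run, test, keep improvements."""
    random.seed(1)
    params = {"a": 0.0, "b": 0.0}
    best = misfit(params, observed)
    for _ in range(cycles):
        trial = dict(params)
        key = random.choice(list(trial))           # entertain one small requirement
        trial[key] += random.uniform(-0.5, 0.5)    # design/code the change
        error = misfit(trial, observed)            # test the hoped-for fulfilment
        if error < best:                           # loop back, keeping what works
            params, best = trial, error
    return params

print(spiral_develop(observed=7.3))   # parameters drift toward a configuration fitting the data
```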

The epistemic opacity of simulation models is one of their notable features Lenhard highlights. It is very common that human insights into how a simulation works are limited, a fact which elevates the importance of visualizations of the intermediate and final results of a simulation and of interacting with them. Lenhard points out that this raises issues for our understanding of “scientific understanding”. Understanding is traditionally construed as a kind of epistemic state achieved within the confines of a brain. Talk of an “extended mind” brings home the important point that books, pens, computers and the cloud significantly enhance the range of our understanding, allowing us to “download” information we haven’t bothered to memorize, for example. But there still needs to be a central agent who is the focal point of understanding, at least in common parlance. Lenhard promotes a more radical reconception: that it is something like the system-as-a-whole that does the understanding. The human-cum-simulation can perform experiments, make predictions, advance science, even while the human acting, or examined, solo has no internal comprehension of what the hell the simulation is actually doing. Since successful predictions, engineering feats, etc. are standard criteria of human understanding, we should happily attribute understanding to the humans in the simulation system satisfying these criteria. This seems to be much of the basis for Lenhard’s claim that simulation epistemology is a radical departure from existing scientific epistemologies, since it radically extends our understanding of scientific understanding. I’m afraid I fail to see the radical shift, however. Anything described as understanding attributed to humans within a successful simulation system can as easily be described as a successful simulation system lacking full human understanding of a theory behind it. Lenhard fails to elucidate any clear benefits from a shift in language here. On the other hand, there is at least one clear benefit to conservatism, namely that we maintain a clear contact with existing language usage. We are all interested in advancing both our understanding of nature and our ability to engineer within and with it; it’s not obviously helpful to conflate the two.

Epistemic opacity also has epistemological consequences that Lenhard does not fully explore. While he emphasizes, even in his title, that simulation experiments often surprise, he does not point out that where the surprises are independently confirmed, as with the Landman case above, this provides significant confirmatory support for the correctness of the simulation, on clear Bayesian grounds. For those interested in this kind of issue, Volker Grimm et al. (2005) provide a clear explanation, from the point of view of Agent-Based Models (ABMs) in ecology.
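The Bayesian point can be made explicit with a toy calculation (mine, with made-up numbers): a prediction that is very improbable unless the simulation is roughly right yields, when independently confirmed, a large likelihood ratio in the simulation’s favour.

```python
def posterior(prior: float, p_e_if_sound: float, p_e_if_unsound: float) -> float:
    """Bayes' theorem for the hypothesis that the simulation is (approximately) sound."""
    joint_sound = prior * p_e_if_sound
    joint_unsound = (1 - prior) * p_e_if_unsound
    return joint_sound / (joint_sound + joint_unsound)

# A surprising, independently confirmed prediction (cf. the nanowire case):
# likely if the simulation is sound, very unlikely otherwise.
print(posterior(prior=0.5, p_e_if_sound=0.8, p_e_if_unsound=0.01))   # ~0.99
```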

Another unexplored topic is supervenience theory. This is more general than computer simulation theory, to be sure, but it is connected to the opacity of simulations and to complexity theory, and it is raised especially acutely in the context of Artificial Life and Agent-Based Modeling, which provide not just an excuse but a pointed tool for considering supervenience. The very short form is: ABMs give rise to unexpected, difficult-to-explain high-level phenomena from possibly very simple low-level elements and their rules of operation (perhaps most famously in “boids” simulating bird flocks; Reynolds, 1987). This is known by a variety of names, such as emergence, supervenience, implementation and multiple realization. It is not inevitable that a philosophy of simulation should encompass a theory of supervenience, but it is probably desirable.
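To give a feel for how little machinery is needed, here is a minimal boids-like sketch in Python/NumPy, a simplification of Reynolds’ three steering rules (cohesion, separation, alignment) with arbitrarily chosen weights and radii of my own. No rule mentions flocking, yet the agents’ headings tend to become correlated over time, which is the emergent, flock-level behaviour.

```python
import numpy as np

def step(pos: np.ndarray, vel: np.ndarray, dt: float = 0.1):
    """One synchronous update of a toy boids model."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        offsets = pos - pos[i]
        dists = np.linalg.norm(offsets, axis=1)
        near = (dists > 0) & (dists < 2.0)          # neighbours within sight
        close = (dists > 0) & (dists < 0.5)         # neighbours that are too close
        if not near.any():
            continue
        cohesion = offsets[near].mean(axis=0)              # steer toward the local centre
        separation = -offsets[close].sum(axis=0)           # steer away from crowding
        alignment = vel[near].mean(axis=0) - vel[i]        # match neighbours' heading
        new_vel[i] += 0.05 * cohesion + 0.10 * separation + 0.05 * alignment
    return pos + dt * new_vel, new_vel

def order(vel: np.ndarray) -> float:
    """Crude order parameter: values near 1 mean the agents share a heading."""
    return float(np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean())

rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, size=(50, 2))
vel = rng.normal(0, 1, size=(50, 2))
initial_order = order(vel)
for _ in range(500):
    pos, vel = step(pos, vel)
print(initial_order, order(vel))   # alignment typically rises: an emergent, flock-level property
```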

It seems to me that in some respects an even more radical discussion of computer simulation than that pursued by Lenhard is in order. Simulations are literally ubiquitous across the sciences. That is, I’m unaware of any scientific discipline which does not use them to advance knowledge. Simulation is in wide use in astronomy, biology, chemistry, physics, climate science, mathematics, data science, social science, economics – and in many cases it is a primary and essential experimental method. Lenhard, oddly, at least appears to disagree, since he states that their common use has only reached “amazingly” many sciences, rather than simply all of them. I’d be interested to know which sciences remain immune to their advantages.

Lenhard’s Calculated Surprises introduces many of the issues that have been central to the debates within the philosophy of simulation and adopts sensible positions on most. He, for example, points out that model validation grounds simulations in the real world, offering a methodological antidote to extremist epistemologies’ flights of fancy. Lenhard’s is a book that patient beginners to the philosophy of simulation can profit from and that specialists should certainly look at. My main complaint, aside from its fairly turgid style (its German origin is clear enough), is the many important and interesting sides to simulation science that are simply ignored. A lack of examination of the scope and limits of simulation is one of those.

The ubiquity of simulation now extends well beyond the domains of science themselves. It has recently found interesting and potentially important applications in history (e.g., University of York, 2020). Brian Skyrms has famously applied simulations to the study of philosophically interesting game theory (e.g., Skyrms, 2004). Social epistemology has employed simulation for some time already to answer questions about how collective beliefs and decisions may be arrived at (Douven, 2009; Salerno et al., 2017). I have applied simulation to the evolution of ethics and utility (Mascaro et al., 2011; Korb et al., 2016) and to studies in the philosophy of evolution (Woodberry et al., 2009; Korb & Dorin, 2011). I am presently attempting to build a computational tool for illustrating and testing various philosophical theories of causation. There is every reason to bring simulation into the heart of philosophical questions and especially into the philosophy of science. It is even plausible to me that instruction in simulation programming may become as necessary to graduate philosophical training as it already is in many of the sciences.

Paul Thagard formulated the key idea first: if you have a methodological idea of any merit, you should be able to turn it into a working algorithm (Thagard, 1993). Since a great deal of philosophy is about method, a great deal of philosophy not only can be, but needs to be, algorithmized. Simulation provides not just a test of the methodological ideas, and not just a demonstration of their potential, but also a test of the clarity of and relations between the underlying concepts, a test of the philosophizing itself. Who cannot simulate, cannot understand.

References

  • Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77.
  • Douven, I. (2009). Introduction: Computer simulations in social epistemology. Episteme, 6(2), 107-109.
  • Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H., Weiner, J., Wiegand, T. & DeAngelis, D. L. (2005). Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science, 310(5750), 987-991.
  • Korb, K. B., Brumley, L., & Kopp, C. (2016, July). An empirical study of the co-evolution of utility and predictive ability. In 2016 IEEE Congress on Evolutionary Computation (CEC) (pp. 703-710). IEEE.
  • Korb, K. B., & Dorin, A. (2011). Evolution unbound: Releasing the arrow of complexity. Biology & Philosophy, 26(3), 317-338.
  • Lakatos, I. (1982). Philosophical Papers. Volume I: The Methodology of Scientific Research Programmes (edited by Worrall, J., & Currie, G). Cambridge University Press.
  • Mascaro, S., Korb, K., Nicholson, A., & Woodberry, O. (2011). Evolving ethics: The new science of good and evil. Imprint Academic, UK.
  • Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th annual conference on Computer graphics and interactive techniques (pp. 25-34). ACM.
  • Skyrms, B. (2004). The stag hunt and the evolution of social structure. Cambridge University Press.
  • Salerno, J. M., Bottoms, B. L., & Peter-Hagene, L. C. (2017). Individual versus group decision making: Jurors’ reliance on central and peripheral information to evaluate expert testimony. PLoS ONE, 12(9).
  • Thagard, P. (1993). Computational philosophy of science. MIT Press.
  • Woodberry, O. G., Korb, K. B., & Nicholson, A. E. (2009). Testing punctuated equilibrium theory using evolutionary activity statistics. In Australian Conference on Artificial Life (pp. 86-95). Springer, Berlin, Heidelberg.

How Extreme Weather Events Are Attributed to Anthropogenic Global Warming

— Kevin B Korb, 27 Jan, 2020

(modified 1 Feb, 2020)

Many politicians and media personalities continue to cast doubt on the idea that anthropogenic global warming (AGW) – the primary driver of current global climate change – could possibly be behind the growing frequency and severity of extreme weather events – the droughts, heatwaves, flooding, etc. that are every year breaking 100-year or greater historical records. This takes the form not just of a straightforward denial of climate change, but also of a more plausible denial of a connection between climate change and individual extreme events. Until five or ten years ago, many climate scientists themselves would have agreed with rejecting such a connection, and some journalists and politicians followed them then and continue to follow them now, even though the scientists have stopped leading anyone in that direction (see box below). Climate scientists have stopped agreeing because in the meantime a new subdiscipline has been developed specifically for attributing extreme weather events to AGW or to natural variation, depending upon the specifics of the case. While it may suit the political preferences of some commentators to ignore this development, it is not in the general interest. Here I present a brief and simple introduction to the main ideas in current work on attributing individual events to global warming. (An even simpler introduction to attribution science, emphasizing legal liability, can be found in Colman, 2019.)

Climate versus Weather

It has become a commonplace to point out that weather is not climate: climate refers to a long-term pattern of weather, not individual events. Usually the point meant is that some hot, or cold, weather is not evidence for, or against, anthropogenic global warming or significant climate change. That, however, is not true. Long-term patterns influence short-term events, whether or not the short-term events are classified as “extreme”. As one of the original researchers on weather attribution put it:

In practice, all we can ever observe directly is weather, meaning the actual trajectory of the system over the climate attractor during a limited period of time. Hence we can never be sure, with finite observations and imperfect models, of what the climate is or how it is changing. (Allen, 2003)

This actually describes the relation between theories (or models, or simulations) and evidence in science quite generally. Claims about the state of the climate are theoretical, rather than observational. Theoretical claims cannot be directly observed to be true or false; but they do give rise to predictions whose probabilities can be calculated and whose outcomes can be observed. The probabilities of those outcomes provide support for and against our theories. There is always some uncertainty, but that pertaining to the earth’s rotation around the sun, the disvalue of bleeding sick humans and the reality of AGW has been driven to near zero.

Certainly, larger and more frequent storms are one of the consequences that the climate models and climate scientists predict from global warming but you cannot attribute any particular storm to global warming, so let’s be quite clear about that. And the same scientists would agree with that. – Australian PM Malcolm Turnbull, 2016

It is problematic to directly attribute individual weather events, such as the current heatwave, to climate change because extreme weather events do occur as a part of natural climate variability. – Climate Change Minister Greg Combet, 2013

Scientists and the Bureau of Meteorology have repeatedly warned that individual events, be they the record cold temperatures and snow of late 2012 or heatwaves should not be attributed to any particular source. – Opposition spokesman for environment, Greg Hunt, 2013

I don’t think you can at all, at this stage, link individual events to [climate change]. – Australian Minister for Resources, Matt Canavan, 2019

You can’t blame individual weather events, such as the Queensland floods, on climate change. – Norelle Towie, journalist, 2011

Individual weather events may be too isolated to link directly to climate change. – Larry West, educational writer, 2017

The only special difficulty in understanding the relation between climate and weather lies in the high degree of variability in the weather; discerning the signal buried within the stochastic noise is non-trivial (aka “the detection problem”), which is one reason why climate science and data analysis should be relied upon instead of lay persons’ “gut feels”. Denialists often want to play this distinction both ways: when the weather is excessively hot, variability means there is no evidence of AGW; when the weather is excessively cold, that means AGW is not real.

What matters is what the overall trends are, and the overall trends include increasing numbers of new high temperatures being set and decreasing numbers of new low temperatures being set at like locations and seasons, worldwide. For example, that ratio is 2:1 in the US from 2000-2010 (Climate Nexus, 2019). Or more generally, we see this in the continuing phenomenon of the latest ten years including nine of the 10 hottest years globally on record (NOAA “Global Climate Report 2018”).

The analogy with the arguments about tobacco and cancer is a strong one. For decades, tobacco companies claimed that since the connection between smoking and cancers is stochastic (probabilistic, uncertain), individual cases of cancer could never be attributed to smoking, so liability in individual cases could not be proven (aka “the attribution problem”). The tobacco companies lost that argument: specific means of causal attribution have been developed for smoking (e.g., “relative risk”, which is closely related to the methods discussed below for weather attribution; O’Keefe et al., 2018). Likewise, there are now accepted methods of attributing weather events to global warming, which I will describe below.

Rejecting the connection between weather and climate, aside from often being an act of hypocrisy, implies a rejection of the connection between evidence and theory: ultimately, it leads to a rejection of science and scientific method.

Weather Severity Is Increasing

Logically before attributing extreme weather to human activity (“attribution”) comes finding that extreme weather is occurring more frequently than is natural (“detection”). Denialism regarding AGW of course extends to denialism of such increasing frequency of weather extremes. There are two main kinds of evidence of the worsening of weather worldwide.

Direct evidence includes straightforward measurements of weather. For example, measurements of the worldwide average temperature anomalies (departures from the mean temperature over some range of years) themselves have the extreme feature of showing ever hotter years, as noted above (NOAA “Global Climate Report 2018”). Simple statistics will report many of these kinds of measurements as exceedingly unlikely on the “null hypothesis” that the climate isn’t changing. More dramatic evidence comes in the form of increased frequency and intensity of flooding, droughts, etc. (IPCC AR5 WG2 Technical Summary 2014, Section A-1). There is considerable natural variability in such extremes, meaning there is some uncertainty about some types of extreme weather. The NOAA, for example, refuses to commit to there being any increased frequency or intensity of tropical storms; however, many other cases of extreme weather are clear and undisputed by scientists, as we shall see.

Indirect evidence includes claims and costs associated with insuring businesses, private properties and lives around the world. While the population size and the size of economies around the world have been increasing along with CO2 in the atmosphere – resulting in increased insurance exposure – the actual costs of natural disasters have increased at a rate greater than the simple economic increase would explain (see Figure 1). In consequence, for example, “many insurers are throwing out decades of outdated weather actuarial data and hiring teams of in-house climatologists, computer scientists and statisticians to redesign their risk models” (Hoffman, 2018). The excess increase in costs, i.e., that beyond the underlying increase in the value of infrastructure and goods, can be attributed to climate change, as can the excess increase (beyond inflation) in the rates charged by insurers.



Figure 1. World-wide economic losses in billions of 2014 US$ due to natural disasters, insured and uninsured, with their five-year moving average (Holzheu, 2015). (Note that world GDP during this period has grown around 3% per year, which is much lower than the trend line above; World Bank, 2020.)

Another category of indirect argument for the increasing severity of weather comes from the theory of anthropogenic global warming itself. AGW implies a long-term shift in weather as the world heats, which in turn implies a succession of “new normals” – more extreme weather becoming normal until even more extreme weather replaces that norm – and hence a greater frequency of extreme weather events from the point of view of the old normal. In other words, everything that supports AGW, from validated general circulation models (GCMs) to observations, supports a general case that a variety of weather extremes is growing in frequency, intensity or both.

Is Anthropogenic Global Warming Real?

So, AGW implies an increase in many kinds of extreme weather; hence evidence for AGW also amounts to evidence that increases in extreme weather are real. That raises the question of AGW and the evidence for it. This article isn’t the best place to address this issue, so I’d simply like to remind people of a few basic points, in case, for example, you’re talking with someone rational:

  • Skepticism and denialism are not the same. Skeptics test claims to knowledge; denialists deny them. No (living) philosophical skeptic, for example, would refuse to look around before attempting to cross a busy road.
  • Science lives and breathes by skeptical challenges to received opinions. That’s not the same as holding all scientific propositions in equal contempt. Our technical civilization – almost everything about it – was generated by applying established science. It is not activists who are hypocrites for using trains, the internet and cars to spread their message; the hypocrites are those who use the same technology, but deny the science behind that technology.
  • Denialism requires adopting the belief that thousands of scientists from around the world are conspiring together to perpetrate a lie upon the public. David Grimes has an interesting probabilistic analysis of the longevity of unrevealed conspiracies (in which insiders have not blabbed about it), estimating that a climate conspiracy of this kind would require about 400,000 participants and its probability of enduring beyond a year or two is essentially zero [Grimes, 2016]. The lack of an insider revealing such a conspiracy is compelling evidence that there is no such conspiracy, in other words.

The Detection of Extreme Weather

The first issue to consider here is what to count as extreme weather – effectively a “Detection Problem” of distinguishing the “signal” of climate change from the “noise” of natural variation. The usual answer is to identify some probability threshold such that a kind of event having that probability on the assumption of a “null hypothesis” of natural variation would count as extreme. Different researchers will identify different thresholds. We might take, for example, a 1% chance of occurrence in a time interval under “natural” conditions as a threshold (which is not quite the same as a 1-in-100 interval event, by the way). “Natural” here needs to mean the conditions which would prevail were AGW not happening; ordinarily the average pre-industrial climate is taken as describing those conditions, since the few hundred years since then is too short a time period for natural processes to have changed earth’s climate much, going on historical observations (chapter 4, Houghton, 2009). The cycle of ice ages works, for example, on periods of tens of thousands of years.
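The parenthetical point about “1-in-100” is easy to check numerically (a sketch with illustrative numbers only): a 1% chance per interval gives only about a 63% chance of at least one occurrence in 100 intervals, and the waiting time between occurrences is highly variable.

```python
import random

p = 0.01   # chance of the extreme event in any one interval, under natural conditions

# Chance of at least one occurrence in 100 independent intervals: ~0.63, not certainty.
print(1 - (1 - p) ** 100)

# Waiting times until the first occurrence: 100 intervals on average, but very spread out.
random.seed(0)
waits = []
for _ in range(100_000):
    t = 1
    while random.random() >= p:
        t += 1
    waits.append(t)
print(sum(waits) / len(waits))           # mean waiting time ~100
print(sorted(waits)[len(waits) // 2])    # median ~69: more often than not it comes earlier
```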

Of course, a one percent event will happen eventually. But the additional idea here, which I elaborate upon below, is to compare the probability of an event happening under the assumption of natural variation to its probability assuming anthropogenic global warming. The latter probability I will write P(E|AGW) – the probability of event E assuming that AGW is known to be true; the former I will write P(E|¬AGW) – the probability of E assuming that AGW is known to be false. These kinds of probabilities (of events given some hypothesis) are called likelihoods in statistics. The likelihood ratio of interest is P(E|¬AGW)/P(E|AGW); the extent to which this ratio falls short of 1 (assuming it does) is the extent to which the occurrence of the extreme event supports the anthropogenic global warming hypothesis versus the alternative no warming (natural variation only) hypothesis. (The inverse ratio is also known as “relative risk” in, e.g., epidemiology, where analogous attribution studies are done.) A single such event may not make much of a difference to our opinion about global warming, but a glut of them, which is what we have seen over the decades, leaves adherence to a non-warming world hypothesis simply a manifestation of irrationality. As scientists are not, for the most part, irrational, that is exactly why the scientific consensus on global warming is so strong.
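As a toy illustration of that “glut” effect (my numbers are invented, and independence between events is assumed for simplicity): each event whose likelihood ratio P(E|¬AGW)/P(E|AGW) falls below 1 multiplies down the odds of the no-warming hypothesis.

```python
def update_odds(prior_odds: float, likelihood_ratios) -> float:
    """Posterior odds of ¬AGW against AGW after a series of (assumed independent) events.
    Each ratio is P(E|¬AGW)/P(E|AGW); values below 1 favour AGW."""
    odds = prior_odds
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds

# Hypothetical ratios for a handful of extreme events, each only modestly favouring AGW.
ratios = [0.5, 0.3, 0.8, 0.2, 0.6, 0.4]
odds = update_odds(prior_odds=1.0, likelihood_ratios=ratios)
print(odds)              # ~0.006: the no-warming hypothesis is heavily disfavoured
print(1 / (1 + odds))    # ~0.994: posterior probability of AGW, starting from even odds
```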

Varieties of Extreme Weather

There is a large variety of types of extreme weather which appear likely to have been the result of global warming. A recent IPCC study found the following changes at the global scale likely to very likely to have been caused by AGW: increases in the length and frequency of heat waves, increases in surface temperature extremes (both high and low), and increased frequency of floods. They express low confidence in observed increases in the intensity of tropical cyclones – which does not mean that they don’t believe it, but that the evidence, while supporting the claim, is not sufficiently compelling. On the other hand, there is no evidence for increased frequency of cyclones (Seneviratne et al., 2012). They don’t address other extremes, but increases in the frequency (i.e., shortened return periods) and intensity of droughts, in extreme ocean temperatures, and in mean land and ocean temperatures have elsewhere been attributed to AGW (some references below).

In addition to measurements of extreme events, there is some theoretical basis for predicting their greater occurrence. For example, changes to ocean temperatures, and especially ice melt changing the density of water in the Arctic, are known to affect ocean currents, which, depending upon the degree of change, will have likely effects on weather patterns (e.g., NOAA, 2019). Again, warmer air is well known to hold more water vapor, leading to larger precipitation events and so to more floods (Coumou and Rahmstorf, 2012). Warmer water feeds cyclonic storms, likely increasing their intensity, if not their frequency (e.g., Zielinski, 2015).

Causal Attribution Theory

If we can agree that detection has occurred – that is, that weather extremes are increasing beyond what background variability would explain – then we need to move on to attribution, explaining that increase. There will always be some claiming that individual events that are “merely” probabilistically related to causes can never be explained in terms of those causes. For example, insurers and manufacturers and their spokespersons can often be heard to say such things as that, while asbestos (smoking, etc.) causes cancer – raising its probability – this individual case of cancer could never be safely attributed to the proposed cause. This stance is contradicted by both the theory and practice of causal attribution.

What is Causation?

The traditional philosophy of causation, going back arguably to Aristotle and certainly to David Hume, was a deterministic theory that attempted to find necessary and sufficient conditions for one event to be a cause of another. That analytic approach to philosophy was itself exemplified in Plato’s Socratic dialogues, which, ironically, were mostly dialogues showing the futility of trying to capture concepts in a tight set of necessary and sufficient conditions. Nevertheless, determinism dominated both philosophy and society at large for many centuries. It took the rise of probabilistic theories within science, and especially of quantum theory, before a deterministic understanding of causality began to lose its grip – first to the wholly philosophical movement of “probabilistic causality” and subsequently to the development of probabilistic artificial intelligence – Bayesian network technology – which subsumed probabilistic causal theories and applied computational modeling approaches to the philosophical theory of causality. Formal probabilistic theories of causal attribution have flowed out of this research. Defences of inaction, or refusals to pay out insurance, that rely upon deterministic causality are at least a century out of date.

Describing the interventionist theory of causality based upon Bayesian network models is beyond my scope here. (If you are interested, see Judea Pearl’s Causality, James Woodward’s Making Things Happen, or my own [Handfield, Twardy, Korb and Oppy’s] “The Metaphysics of Causal Models,” Erkenntnis.)

Instead I will describe an accepted theory of causal attribution in climate science, which provides a clear criterion for ascribing extreme weather events to AGW.

Attribution Theory

The most widely used attribution method for extreme weather is the Fraction of Attributable Risk (FAR) for ascribing a portion of the responsibility of an event to AGW (Stott et al., 2004). It has a clear interpretation and justification, and it has the advantage of presenting attribution as a percentage of responsibility, similar to percentages of explained variation in statistics (as Sewall Wright, 1934, pioneered). That is, it can apportion, e.g., 80% of the responsibility of a flooding event to AGW and 20% to natural variation (¬AGW) in some particular case, which makes intuitive sense. So, I will primarily discuss FAR in reference to attributing specific events to AGW. It should be borne in mind, however, that there are alternative attribution methods with good claims to validity (including my own, currently in development, based upon Korb et al., 2011), as well as some criticism of FAR in the scientific literature. The methodological science of causal attribution is not as settled as the science of global warming more generally, but is clear enough to support the claims of climate scientists that extreme weather is increasing due to climate change and in many individual cases can be directly attributed to that climate change.

FAR compares the probability of an extreme event E under AGW – i.e., P(E|AGW) – and under a “null hypothesis” of no global warming (the negation of AGW, i.e., ¬AGW), by taking their ratio:

FAR = 1 – P(E|¬AGW)/P(E|AGW)

As is common in statistics, E is taken as the set of events of a certain extremity or greater. For example, if there is a day in some region, say Sydney, Australia, with a high temperature of 48.9°C, then E would be the set of days with highs ≥ 48.9°C.

Assuming there are no “acts of god”, any event can be 100% attributed to prior causes; that is, the maximum proportion of risk that could possibly be explained is 1. FAR splits that attribution into two parts, that reflecting AGW and that reflecting everything else, i.e., natural variation in a pre-industrial climate (e.g., Schaller et al., 2016); it does so by subtracting from the maximum 1 that proportion that can fairly be allocated to the null hypothesis. To take a simple example (see Figure 2), suppose we are talking about an event with a 1% chance, assuming no AGW; i.e., P(E|¬AGW) = 0.01. Suppose that in fact AGW has raised the chances ten-fold; that is, P(E|AGW) = 0.1. Then the proportion FAR attributes to the null hypothesis is 0.01/0.1 = 0.1, and the fraction FAR attributes to AGW is the remainder, namely 0.9. Since AGW has raised the probability of events of this particular extremity – of E’s kind – 10 fold, it indeed seems fair to attribute 10% of the causation to natural variation and 90% to unnatural variation.

Figure 2. The region (event) E on the right shows where the natural distribution of weather (e.g., temperature) has no more than a 1% chance of producing that extreme an event, as determined by the left distribution. If AGW shifts the distribution to the right as in the figure, this extreme an event is 10 times more likely; that is, the area under the distribution to the right of the line is 10%, rather than 1%.
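The arithmetic of this worked example is easy to check (a sketch of mine, not drawn from any of the attribution studies cited below):

```python
def far(p_event_given_natural: float, p_event_given_agw: float) -> float:
    """Fraction of Attributable Risk: FAR = 1 - P(E|¬AGW) / P(E|AGW)."""
    return 1 - p_event_given_natural / p_event_given_agw

# The example from the text and Figure 2: a 1% event made ten times as likely by AGW.
print(far(0.01, 0.10))   # ≈ 0.9, i.e. 90% of the risk attributed to AGW
```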

In order to compute FAR, we first need these probabilities of the extreme event. It’s natural to wonder where they come from, since we are talking about extreme events, and thus unlikely events that we wouldn’t have had the time and opportunity to measure. (To be sure, if good statistics have been collected historically, they may be used, especially for estimating P(E|¬AGW); some studies cited below have done that.) In fact, however, these likelihoods are derivable from the theories themselves, or simulations that represent such theories. GCMs are used to model anthropogenic global warming scenarios with different assumptions about the extent to which human economic behavior changes in the future, or fails to change. If we are interested in current extreme events, we can use such a model without any of the future scenarios: sampling the GCM model for the present will tell us how likely events of type E will be under current circumstances, with AGW. But we can also use the model to estimate P(E|¬AGW) by running it without the past human history of climate forcing, to see how likely E would be without humanity’s contributions. Since the GCMs are well validated, this is a perfectly good way to obtain the necessary likelihoods. (However, some caveats are raised below.)
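A toy version of that procedure, with Gaussian “climates” standing in for the factual and counterfactual model ensembles (real studies sample large ensembles of GCM and regional-model runs; every number here is invented): sample both worlds, take the tail frequencies above the observed threshold, and plug them into FAR.

```python
import random

def exceedance_freq(mean: float, sd: float, threshold: float, n: int = 200_000) -> float:
    """Monte Carlo tail frequency: fraction of simulated values exceeding the threshold."""
    return sum(random.gauss(mean, sd) > threshold for _ in range(n)) / n

random.seed(42)
threshold = 45.0                                                     # an observed extreme, in °C
p_natural = exceedance_freq(mean=38.0, sd=3.0, threshold=threshold)  # counterfactual ('no AGW') ensemble
p_forced = exceedance_freq(mean=40.0, sd=3.0, threshold=threshold)   # factual ensemble, with forcing

print(p_natural, p_forced, 1 - p_natural / p_forced)   # FAR ≈ 0.8 with these made-up settings
```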

Since individual weather events occur in specific locations, or at least specific regions, in order to best estimate the probabilities of such events, GCMs are typically used in combination with regional weather models, which can achieve greater resolutions than GCMs alone. (GCMs can also be modified to have finer resolutions over a particular region.) Regional models have been improving more rapidly than GCMs in recent years, which is one reason that FAR attributions are becoming both more accurate and more common (e.g., Black et al., 2016).

Attribution of Individual Weather Events

Thus, there is a growing body of work attributing specific extreme weather events to anthropogenic global warming using FAR, which represents the “fraction” of responsibility that an event of the given extremity, or greater, can be attributed to anthropogenic global warming versus natural variation in a pre-industrial climate. Much of this work is being coordinated and publicized by the World Weather Attribution organization, which is a consortium of research organizations around the world.

I note some recent examples of FAR attributions (with confidence intervals for the estimates when reported up front). I do not intend to explain these specific attributions here; you can follow the links, which lead to summary reports explaining them. Those summaries cite the formal academic publications, which detail the methods and simulations used and the relevant statistics concerning the results.

  • Flooding from tropical storm Imelda in September, 2019: FAR of 0.505 (± 0.12) (World Weather Attribution, 2019). [Note: This was not reported as a FAR but in terms of likelihoods; conversion to FAR is straightforward – see the sketch after this list. Links are to specific reports, which themselves link to academic publications.]
  • Heatwave in Germany and the UK, July, 2019: FAR between 0.67 and 0.9. The FAR for other parts of Europe were higher (but not specified in their summary) (World Weather Attribution, 2019).
  • Heatwave in France, June, 2019: FAR about 0.9 (World Weather Attribution, 2019).
  • Extreme rainfall from UK storm Desmond, December, 2017: FAR of about 0.375 (World Weather Attribution, 2019).
  • Drought in the Western Cape of South Africa from 2015-2017, leading to a potential “Day Zero” for Cape Town, when the water would run out (averted by rainfall in June, 2018). This extreme drought had an estimated FAR of about 0.67 (World Weather Attribution, 2019).
  • Extreme rainfall events in New Zealand from 2007-2017: FARs ranging from 0.10 to 0.40 (± 0.20 in each case). These fractions accounted for NZ$140.5M in insured costs, computed by multiplying the FARs by the actual recorded costs (Noy, 2019). [NB: uninsured and non-dollar costs are ignored.] The application of FARs by economists to compute responsibility for insurance costs is a new initiative.
  • The 2016 marine heatwave that caused severe bleaching of the Great Barrier Reef was estimated to have a FAR of about 0.95 for maximum temperature and about 0.99 for duration of the heatwave by Oliver et al. (2018). Their report is part of an (approximately) annual report in the Bulletin of the American Meteorological Society that reports on a prior year’s extreme weather events attributable to human factors, the latest of which is Herring et al. (2018), a collection of thirty reports on events of 2016.
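As flagged in the first item above, some studies report a risk (or likelihood) ratio rather than a FAR; the conversion between the two is elementary. A sketch with made-up figures, including the kind of attributable-cost multiplication Noy performs:

```python
def far_from_risk_ratio(risk_ratio: float) -> float:
    """Convert a risk ratio RR = P(E|AGW)/P(E|¬AGW) into a FAR."""
    return 1 - 1 / risk_ratio

def risk_ratio_from_far(far: float) -> float:
    """The inverse conversion."""
    return 1 / (1 - far)

print(far_from_risk_ratio(2.0))     # an event made twice as likely: FAR = 0.5
print(risk_ratio_from_far(0.9))     # a FAR of 0.9 corresponds to a ten-fold risk ratio

# Attributable cost, Noy-style: multiply the FAR by the recorded loss (figures invented).
print(0.25 * 140.0)                 # FAR 0.25 applied to a $140M insured loss: $35M attributed
```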

A recent review – re-examining FAR calculations via new simulations – of three dozen studies of droughts, heat waves, cold waves and precipitation events found numerous substantial FARs, ranging up to 0.99 in many cases, as well as a few with inverted FARs, indicating some events made less likely by anthropogenic global warming (Angélil et al., 2017).

The recent fires in Australia are being given a FAR analysis as I write this (see https://www.worldweatherattribution.org/bushfires-in-australia-2019-2020/). There is widespread agreement that the intensity of wildfires is increasing, and that the fire seasons in which they take place are lengthening. Fire simulation models capable of incorporating the observed consequences of climate change (droughts, heatwaves, etc.) are in use and can be applied to this kind of estimation, although that is not yet being done. The forthcoming analysis is limited to the precursors of the fires, drought and heat, but also includes the Forest Fire Weather Index (from a personal communication).

Despite the apparent precision of some of these FAR estimates, they all come with confidence intervals, i.e., ranges within which we would expect to find the true value. They are not all recorded above, but those who wish to find them can go to the original sources.

Another kind of uncertainty applies to these estimates, concerning the variations in the distributions used to estimate FARs such as those of Figure 2. Some suggest that AGW itself brings a greater variation in the weather, fattening the tails of any probability distribution over weather events, and so making extremes on both sides more likely. So, for example, Figure 2 might more properly show a flatter (fatter) distribution associated with AGW, in addition to being shifted to the right of the distribution for ¬AGW. This, however, would not affect the appropriateness of a FAR estimation: whether the likelihood ratio for E is determined by a shift in mean, a change in the tails, or both, that ratio nevertheless correctly reports the probabilities of the observed weather event relative to each alternative.
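That claim is easy to verify with Gaussian stand-ins (illustrative numbers only): however the ¬AGW distribution is transformed into the AGW one, the FAR depends only on the two tail probabilities above the observed threshold.

```python
from math import erf, sqrt

def tail_prob(mean: float, sd: float, threshold: float) -> float:
    """P(X > threshold) for a normal distribution with the given mean and sd."""
    return 0.5 * (1 - erf((threshold - mean) / (sd * sqrt(2))))

threshold = 45.0
p_natural = tail_prob(38.0, 3.0, threshold)          # stand-in for the pre-industrial climate

for label, mean, sd in [("shifted", 40.0, 3.0),
                        ("fattened", 38.0, 4.0),
                        ("shifted and fattened", 40.0, 4.0)]:
    p_agw = tail_prob(mean, sd, threshold)
    print(label, round(1 - p_natural / p_agw, 3))    # the FAR formula applies unchanged in each case
```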

A potentially more pointed criticism is that GCMs may be more variable than the real weather (e.g., Osborn, 2004). Higher variability implies reaching extremes more often (on both ends of the scale). This is exacerbated if using multiple GCMs in an ensemble prediction. Such increased variance may apply more to simulations of AGW than to ¬AGW, although that’s unclear. In any case, this is a fair criticism and suggests somewhat greater uncertainty in FAR attributions than may have been reported. It would be best addressed by improved validation of GCMs, whether individually or in ensemble. The science of weather attribution is relatively new and not entirely settled; nevertheless, the methods and results in qualitative terms are well tested and clear. Many individual extreme weather events can be attributed largely to human-induced climate change.

The Future of Extreme Weather

The future of extreme weather appears to be spectacular. Given the overwhelming scientific evidence for the existence and continued development of anthropogenic global warming, and the clear evidence of tepid commitment or positive opposition to action from political leaders around the world, climate change is not just baked in for the next few decades, but is likely to be accelerating during that time. The baking period will be the few hundred years thereafter. Extreme pessimism, however, should be discouraged. It really does matter just when, and how, national, regional and global activities to reduce or reverse greenhouse gas emissions are undertaken. Our choices could well determine whether we face only severe difficulties, or instead global chaos, or perhaps civilizational collapse, or even human extinction. It is certain that earth’s biosphere will recover to some equilibrium eventually; it’s not so certain whether that equilibrium will include us.

For the short term, at least, climate science will continue to make progress, including improved understanding of weather attribution. Our current understanding is already good enough to give strong support to the case for action, as put in a recent excellent review of the state of the art in weather attribution circa 2015 or so:

Event attribution studies …  have shown clear evidence for human influence having increased the probability of many extremely warm seasonal temperatures and reduced the probability of extremely cold seasonal temperatures in many parts of the world. The evidence for human influence on the probability of extreme precipitation events, droughts, and storms is more mixed. (Stott et al., 2016)

As I’ve shown above, since that review, attribution research has been extended to show considerable human influence on many cases of extreme rainfall, droughts and storms. While uncertainties remain, as regional and dynamic circulation models continue to improve, it seems certain that extreme weather attributions to anthropogenic causes will become both more pervasive and more definite in the near future. These improvements will enable us to better target our efforts at adaptation, as well as better understand the moral and legal responsibility for the damage done by unabated emissions.

Despite well-funded and entrenched opposition, we must push ahead with parallel projects to reduce, reverse and adapt to the drivers of climate change, in order to minimize the damage to our heirs, as well as to our future selves.

Acknowledgements

I would like to acknowledge the helpful comments of Steven Mascaro, Erik P Nyberg, Bruce Marcot, Lloyd Allison and anonymous reviewers to earlier versions of this article.

References

Allen, M. (2003). Liability for climate change. Nature, 421(6926), 891.

Angélil, O., Stone, D., Wehner, M., Paciorek, C. J., Krishnan, H. and Collins, W. (2017). An independent assessment of anthropogenic attribution statements for recent extreme temperature and rainfall events. Journal of Climate, 30, 5–16, doi:10.1175/JCLI-D-16-0077.1.

Bindoff, N.L., Stott, P.A., AchutaRao, K.M., Allen, M.R., Gillett, N.G., Gutzler, D., Hansingo, K., Hegerl, G., Hu, Y., Jain, S., Mokhov, I.I., Overland, J., Perlwitz, J., Sebbari, R., & Zhang, X. (2013). Detection and attribution of climate change: from global to regional climate. In Climate Change 2013: The Physical Science Basis. Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (T. Stocker, D. Qin, G.-K. Plattner et al., Eds.). Cambridge, UK: Cambridge University Press, 867-952.

Black, M. T., Karoly, D. J., Rosier, S. M., Dean, S. M., King, A. D., Massey, N. R., … & Otto, F. E. (2016). The weather@home regional climate modelling project for Australia and New Zealand. Geoscientific Model Development, 9(9).

Climate Central (2019). The 10 Hottest Global Years on Record, 6 Feb, 2019. https://www.climatecentral.org/gallery/graphics/the-10-hottest-global-years-on-record

Climate Nexus (2019). Record High Temps vs. Record Low Temps. https://www.climatesignals.org/data/record-high-temps-vs-record-low-temps, accessed 6 Dec, 2019.

Colman, Z (2019). The new science fossil fuel companies fear. Politico, 22 Oct 2019. https://www.politico.com/agenda/story/2019/10/22/attribution-science-fossil-fuels-climate-change-001290

Coumou, D., & Rahmstorf, S. (2012). A decade of weather extremes. Nature Climate Change, 2(7), 491.

Faust, E. & Steuer, M. (2019). Climate Change Increases Wildfire Risk in California, Munich Re. url: https://www.munichre.com/topics-online/en/climate-change-and-natural-disasters/climate-change/climate-change-has-increased-wildfire-risk.html. Accessed 20 Nov 2019.

Grimes, D. R. (2016). On the viability of conspiratorial beliefs. PLoS ONE, 11(1), e0147905.

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Herring, S. C., Christidis, N., Hoell, A., Kossin, J. P., Schreck III, C. J., & Stott, P. A. (2018). Explaining extreme events of 2016 from a climate perspective. Bulletin of the American Meteorological Society, 99(1), S1-S157.

Hoffman, A.J. (2018). Rising insurance costs may convince Americans that climate change risks are real. The Conversation, 22 Oct, 2018. https://theconversation.com/rising-insurance-costs-may-convince-americans-that-climate-change-risks-are-real-105192

Holzheu, T (2015). Underinsurance of property risks: closing the gap. Swiss Re, Sigma No 5/2015.

Houghton, J. (2009). Global warming: the complete briefing. Cambridge University Press.

IPCC (2014). AR5 Climate Change 2014: Impacts, Adaptation, and Vulnerability.

Korb, K. B., Nyberg, E. P., & Hope, L. (2011). A new causal power theory. Illari, Russo and Williamson (Eds) Causality in the Sciences, Oxford University Press, pp. 628-652.

McAneney, J., Sandercock, B., Crompton, R., Mortlock, T., Musulin, R., Pielke Jr, R., & Gissing, A. (2019). Normalised insurance losses from Australian natural disasters: 1966–2017. Environmental Hazards, 1-20.

NOAA (2018). Global Climate Report – Annual 2018. url: https://www.ncdc.noaa.gov/sotc/global/201813. Accessed 20 Nov 2019.

NOAA (2019). How does sea ice affect global climate? National Ocean Service website, https://oceanservice.noaa.gov/facts/sea-ice-climate.html, 11/15/19.

Noy, I. (2019). The economic costs of extreme weather events caused by climate change. Australasian Bayesian Network Modelling Society Conference, Wellington, New Zealand, 13-14 November, 2019.

Oliver, E. C., Perkins-Kirkpatrick, S. E., Holbrook, N. J., & Bindoff, N. L. (2018). Anthropogenic and natural influences on record 2016 marine heat waves. Bulletin of the American Meteorological Society, 99(1), S44-S48.

O’Keeffe, L. M., Taylor, G., Huxley, R. R., Mitchell, P., Woodward, M., & Peters, S. A. (2018). Smoking as a risk factor for lung cancer in women and men: a systematic review and meta-analysis. BMJ Open, 8(10), https://bmjopen.bmj.com/content/8/10/e021611.

Osborn, T. J. (2004). Simulating the winter North Atlantic Oscillation: the roles of internal variability and greenhouse gas forcing. Climate Dynamics, 22(6-7), 605-623.

Pearl, J. (2000). Causality: Models, Reasoning and Inference. Cambridge University Press.

Schaller, N., Kay, A. L., Lamb, R., Massey, N. R., Van Oldenborgh, G. J., Otto, F. E., Sparrow, S. N., Vautard, R., Yiou, P., Ashpole, I., Bowery, A., Crooks, S. M., Haustein, K., Huntingford, C., Ingram, W. J., Jones, R. G., Legg, T., Miller, J., Skeggs, J., Wallom, D., Weisheimer, A., Wilson, S., Stott, P. A., Allen, M. R. (2016). Human influence on climate in the 2014 southern England winter floods and their impacts. Nature Climate Change, 6(6), 627.

Seneviratne, S.I., N. Nicholls, D. Easterling, C.M. Goodess, S. Kanae, J. Kossin, Y. Luo, J. Marengo, K. McInnes, M. Rahimi, M. Reichstein, A. Sorteberg, C. Vera, and X. Zhang, 2012: Changes in climate extremes and their impacts on the natural physical environment. In: Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation [Field, C.B., V. Barros, T.F. Stocker, D. Qin, D.J. Dokken, K.L. Ebi, M.D. Mastrandrea, K.J. Mach, G.-K. Plattner, S.K. Allen, M. Tignor, and P.M. Midgley (eds.)]. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press, Cambridge, UK, and New York, NY, USA, pp. 109-230.

Stott, P. A., Christidis, N., Otto, F. E., Sun, Y., Vanderlinden, J. P., Van Oldenborgh, G. J., … & Zwiers, F. W. (2016). Attribution of extreme weather and climate‐related events. Wiley Interdisciplinary Reviews: Climate Change, 7(1), 23-41.

Stott, P. A., Stone, D. A., & Allen, M. R. (2004). Human contribution to the European heatwave of 2003. Nature, 432(7017), 610.

US Global Change Research Program (2017). Climate Science Special Report: Fourth National Climate Assessment, Volume I  [Wuebbles, D.J., D.W. Fahey, K.A. Hibbard, D.J. Dokken, B.C. Stewart, and T.K. Maycock (eds.)] doi: 10.7930/J0J964J6.

Wallace, C. S. (2005). Statistical and Inductive Inference by Minimum Message Length. Springer Verlag.

Woodward, J. (2005). Making Things Happen: A Theory of Causal Explanation. Oxford University Press.

World Bank (2020). Data, GDP annual growth. https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG, accessed 18 Jan 2020.

World Weather Attribution (2019). World Weather Attribution. http://www.worldweatherattribution.org/.

Wright, S. (1934). The method of path coefficients. The Annals of Mathematical Statistics, 5(3), 161-215.

Zielinski, S. (2015). Warmer Waters Are Making Pacific Typhoons Stronger. Smithsonian Magazine.

The Green New Deal

We have known collectively the dangers posed by the combination of modern civilization and human population growth since at least the 1960s. During that decade Paul Ehrlich published The Population Bomb (1968), which carried forward Thomas Malthus’s argument from the 19th century that exponential population growth models apply as much to humans as to other life forms and that relaxing the natural limits on resources and their utilization would provide only temporary material comforts, soon overwhelmed by an expanding population. In The Limits to Growth (1972) the Club of Rome computer modelers expanded on these ideas by developing and testing a simulation of human population and economic activity incorporating natural resources and pollution. While their model was crude by recent standards, it did behave in qualitatively sensible ways. The story it told was that however you varied the inputs, e.g., extending resource limits or slowing population growth rates, if you stayed within anything like reasonable bounds, then the model showed a collapse of the population, through impossible levels of pollution, say, sometime during the 21st century. Neither of these pivotal books dealt with anthropogenic global warming explicitly, but the message was clear and still hasn’t changed: unfettered population and economic growth, at least on the models of both we have so far adopted, will be a disaster for our species and our environment.

Rep Alexandria Ocasio-Cortez and Sen Ed Markey’s Green New Deal (GND) seeks genuine change. It’s modeled on Franklin Delano Roosevelt’s New Deal in the sense that FDR’s New Deal radically changed America for generations. The name also evokes the mobilization behind the World War II effort that happened shortly thereafter. The point is that radical mobilization efforts are eminently possible when the threat to a nation is existential, and human-driven climate change certainly poses an existential threat. The GND, if passed, would be a clear, resounding dual statement of intent: first, the intent to counter the threat to civilization posed by the combination of human population growth and current economic activities; second, a more local statement of intent to achieve economic and political justice for American minorities.

The bill is strictly aspirational, calling out the urgency of the situation rather than laying out a specific pathway; its stated goals are not of a kind that could lead directly to action. The GND shares with Extinction Rebellion both a view of the urgency of the situation and the optimism that, if there is a common will to respond, we can do something worthwhile to diminish the worst outcomes of anthropogenic global warming.

Some of the Main Goals laid out in the GND are:

  • Guaranteed jobs with family-sustaining wages for all people of the US
  • Maximizing the energy efficiency of all existing buildings in the US
  • Moving to electric cars and high-speed rail and away from air transport
  • Universal health care
  • Moving to sustainable farming
  • Moving to 100% renewable energy

Of course, the introduction of the GND has provoked a vigorous response from opponents. The most prominent objection, perhaps, is that it would be too expensive to be practicable. Certainly, refurbishing every building in America to maximize energy efficiency cannot be cheap. The obvious rebuttal, however, has been voiced by Greta Thunberg and other young activists: inaction will be far more expensive than action. Indeed, the GND in its opening "whereas" clauses states that inaction will lead to $500 billion in lost annual economic output in the US by 2100. Such a sum applied now, on the other hand, would clearly make a strong start on doing something about climate change. Beyond that, no dollar estimate of harm will ever be a worst-case estimate, since severe climate change is fully capable not just of direct economic impacts but also of spurring warfare and social collapse, where the real costs entirely outstrip any speculative dollar valuation. Those on the right who harp about the expense are simply not yet prepared to think clearly about the consequences of the choices in front of us. (In my view, it is well past time that decision-making bypassed such obstruction.)

The whole point of the GND is that what is practicable depends upon the context, and what is practicable in times of war is of an entirely different scale to what is practicable in normal times. We are not in normal times. This is a time of war, and our enemy is us.

 

 

Interview on Machine Understanding


Produced by Adam Ford:

If you are interested, some relevant references are:

Post Hoc Ergo Propter Hoc, or Correlation Implies Causation


Wikipedia confidently explains this in its first sentence for this entry: “Post hoc ergo propter hoc (Latin: “after this, therefore because of this”) is a logical fallacy that states ‘Since event Y followed event X, event Y must have been caused by event X.’” This so-called fallacy is curious for a number of reasons. Taken literally it is a fallacy that is almost never committed, at least relative to the opportunities to commit it. There are (literally, perhaps) uncountably many events succeeding other events where no one does, nor would, invoke causality. Tides followed by stock market changes, cloud formation followed by earthquakes, and so on and so on. People do attribute causality to successive events of course: bumping a glass causing it to spill, slipping on a kitchen floor followed by a bump to the head. In fact, that’s how as infants we learn to get about in the world. Generally speaking, it is not merely temporal proximity that leads us to infer a causal relation. Other factors, including spatial proximity and the ability to recreate the succession under some range of circumstances, figure prominently in our causal attributions.

Of course, people also make mistakes with this kind of inference. In the early 1980s AIDS was attributed by some specifically to homosexual behavior. The two were correlated in some western countries, but the attribution was more a matter of ignorance of the earlier spread of the disease in Africa than of fallacious reasoning. Or, anti-vaxxers infer a causal relation between vaccines and autism. In that case, there is not even a correlation to be explained, but still the supposed conjunction of the two is meant to lend support to the causal claim. The mistake here is likely due to some array of cognitive problems, including confirmation bias and, more generally, conspiratorial reasoning (which I will address on another occasion). But mistakes with any type of inductive reasoning, which inference to a causal relation certainly is, are inevitable. If you simply must avoid making mistakes, become a mathematician (where, at least, you likely won't publish them!). The very idea of fallacies is misbegotten: there are (almost) no kinds of inference which are faulty because of their logical form alone (see my "Bayesian Informal Logic and Fallacy"). What makes these examples of post hoc reasoning wrong is particular to the examples themselves, not their form.

The more general complaint hereabouts is that "correlation doesn't imply causation", and it is accordingly more commonly abused than the objection to post hoc reasoning. Any number of deniers have appealed to it as a supposed fallacy in order to evade the evidence supporting gun control or the anthropogenic origins of global warming. It is well past time that methodologists put down this kind of cognitive crime.

This supposed disconnect between correlation and causation has been the orthodox statistician's mantra at least since Sir Ronald Fisher ("If we are studying a phenomenon with no prior knowledge of its causal structure, then calculations of total or partial correlations will not advance us one step" [Fisher, Statistical Methods for Research Workers, 1925]), a statement thoroughly debunked by the many decades of causal inference from observational data alone that followed. While there are more circumspect readings of the slogan than a blanket proscription of causal inference from evidence of correlation, that overly ambitious reading is quite common and does much harm. It is unsupportable by any statistical or methodological considerations.

The key to seeing through the appearance of sobriety in the mantra is Hans Reichenbach's Principle of the Common Cause (in his The Direction of Time, 1956). Reichenbach argued that any correlation between A and B must be explained in one of three ways: the correlation is spurious and will disappear upon further examination; A and B are causally related, either as direct or indirect causes one of the other or as common effects of a common cause (or ancestor); or the correlation is the result of magic. The last he ruled out as being contrary to science.

Of course, apparent associations are often spurious, the result of noise in measurement or small samples. The “crisis of replicability” widely discussed now in academic psychology is largely based upon tests of low power, i.e., small samples. If a correlation doesn’t exist, it doesn’t need to be explained.

It's also true that an enduring correlation between A and B is often the result of some connection other than A directly causing B. For example, B may directly cause A, or there may be a convoluted chain of causes between them. Or, again, they may have a common cause, direct or remote. The latter case is often called "confounding" and dismissed as showing no causal relation between A and B. But it is confounding only if the common cause cannot be located (and, for example, held constant) while what we really want to know is how much any causal chain from A to B explains B's state. Finding a common cause that explains the correlation between A and B is just as much a causal discovery as any other.
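
Reichenbach's principle is easy to see at work in a small simulation. The following is a minimal sketch in Python (illustrative only; the variables and parameters are invented): a common cause C drives both A and B, producing a correlation between them that vanishes once C is held approximately constant.

```python
# Minimal sketch: a common cause C induces a correlation between A and B
# that vanishes once C is held (approximately) constant.
import random

random.seed(0)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

C = [random.gauss(0, 1) for _ in range(20000)]
A = [c + random.gauss(0, 1) for c in C]   # C -> A
B = [c + random.gauss(0, 1) for c in C]   # C -> B (no link from A to B)

print(round(pearson(A, B), 2))            # roughly 0.5: A and B are correlated

# "Hold C constant" by restricting attention to cases where C is near zero:
sub = [(a, b) for a, b, c in zip(A, B, C) if abs(c) < 0.1]
print(round(pearson([a for a, _ in sub], [b for _, b in sub]), 2))  # roughly 0
```

Locating the common cause and holding it fixed makes the residual association disappear, which is exactly the sense in which finding a common cause is itself a causal discovery.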

I do not wish to be taken as suggesting that causal inference is simple. There are many other complications and difficulties to causal inference. For example, selection biases, including self-selection biases, can and do muck up any number of experiments, leading to incorrect conclusions. But nowhere amongst such cases will you find biases operating which are not themselves part of the causal story. Human experimenters are very complex causal stories themselves, and as much subject to bias as anyone else. So, our causal inferences often go wrong. That’s probably one reason why replicability is taken seriously by most scientists; it is no reason at all to dismiss the search for causal understanding.

There is now a science of causal discovery applying these ideas for data analysis in computer programs, one that has become a highly successful subdiscipline of machine learning, at least since Glymour, Scheines, Spirtes and Kelly’s Discovering Causal Structure (1987). (Their Part I, by the way, is a magnificent debunking of the orthodox mantra.)

The general application of “correlation doesn’t imply causation” to dismiss causal attributions is an example of a little learning being a dangerous thing – also known as the Dunning-Kruger effect.

 

The Sixth Extinction: A Review


Elizabeth Kolbert's The Sixth Extinction is a highly readable, discursive review of the state of the biosphere in the Anthropocene — i.e., now. It's aimed at a general audience and entertains as much as it informs, relating a wide variety of anecdotes, mostly derived from Kolbert's travels and investigations while writing the book. I think it a very worthwhile book, especially perhaps as a present for those in your life who are skeptical about global warming or science in general. Kolbert is not a scientist and does not pretend to be one, but the book offers an outsider's view of a fair few scientists in action, chronicling the decline of many species.

Kolbert's report is necessarily pessimistic about the general prospects for a healthy biosphere, given that the evidence of species endangerment and decline is all around and she has spent some years now documenting it. But she tries to be as optimistic as possible, pointing out a variety of successes in evading or mitigating other "tragedies of the commons", such as the banning of DDT after Rachel Carson's warning that our springs risked going silent, or the prominent case of the missing (well, smaller) ozone hole.

On matters that are contentious within science, Ms Kolbert aims for neutrality. For example, what killed off the megafauna — such as the marsupial lion in Australia, cave bears and saber-toothed cats in America, mammoths and aurochs in Europe — that was widespread prior to the presence of Homo sapiens? One school suggests that climate change, say, in the form of retreating ice sheets, was the culprit. Kolbert notes the doubts arising from the fact that the megafaunal extinctions occurred at quite different times and, in each case, shortly after the arrival of humans, which militates against climate change as a sole cause. The main alternative is, of course, that these are the first extinctions due to human activity, so that the Sixth Extinction began well before the industrial age. Kolbert points out that advocates of climatic causation accuse the anthropogenic crowd of having fallen for the post hoc ergo propter hoc fallacy. But neutrality on this point is a mistake. While correlation doesn't strictly imply direct causation, it does strictly imply direct-or-indirect causation: Hans Reichenbach in The Direction of Time made the compelling point that if there is an enduring correlation between event types (not some haphazard result of small samples and noise), then there is either a direct causal chain, a common cause, or an indirect causal chain that will explain the correlation. Everything else is magic, and science abhors magic. Given that the extinctions and the arrivals of humans fit hand in glove, it is implausible that there is no causal relationship between them. As sane Bayesians (i.e., weighers of evidence) we must at a minimum treat it as the leading hypothesis until evidence against it is discovered. Of course, the existence of one cause does not preclude another (even if it makes it less likely); that is, climatic changes may well have contributed to human-induced extinctions in some cases.

On a final point Kolbert again opts for neutrality: does the Sixth Extinction imply our own? Can we survive the removal of so many plants and animals that the Anthropocene should be counted as one of the Great Extinction events? Will humanity's seemingly boundless technological creativity find us a collective escape route?

I find the enthusiasm of some futurologists for planetary escape a bit baffling. The crunch of Global Warming will be hitting civilization pretty hard within 50 years, judging by all but the most extremely optimistic projections. The ability to deal directly with Global Warming, and with the related overutilization of earth's resources to support around 10 billion people at an advanced level of economic activity, is possibly within our grasp, but it is very much in doubt whether we will collectively grasp that option. The ability to terraform and make, say, Mars habitable in a long-term sustainable way is not within our grasp, nor in any near-term prospect. Simply escaping from our own earthly crematorium is not (yet) an option. If Elon Musk succeeds in reaching Mars, he will almost certainly soon thereafter die there.

The situation on earth isn’t so dissimilar. If Global Warming leads to massive agricultural failure, the watery entombment of half the major cities on earth, unheard of droughts, floods and typhoons, resource wars and human migrations, the strain on the instruments and processes of civilization is reasonably likely to break them. If civilization comes undone, it will be impossible to avoid massive starvation and societal collapse. The dream of some to wait it out in a bunker and emerge to a new utopia thereafter is about as likely as the descendants of Musk building a new civilization on Mars. Whether the extinction of civilization entails the final extinction of humanity is a moot point. But human life after civilization will surely be nasty, brutish and short.

The best alternative is to put a stop to Global Warming now, and use the energy and human resources that effort saves to solve the remaining problems of resource depletion, habitat destruction and human overpopulation. That requires a sense of urgency and a collective will so far absent.

The Tyranny of Metrics


In The Tyranny of Metrics, Jerry Muller presents a clear case against the current, and growing, over-reliance on KPIs and other performance metrics for assessing people’s and organizations’ work. One of the more common admonitions in AI is to be careful what you ask for, or, as Russell & Norvig put it in their incredibly popular textbook on AI, “what you ask for is what you get.” If your institutions set KPIs rewarding citations, for example, then you’re likely to end up with illicit citation rings, with pals citing each other pointlessly — pointless except for the KPI reward, of course. KPIs have a very strong tendency to replace the real objectives of an organization — education, good governance, public health, common welfare — with much lower quality ersatz objectives — student popularity, volume of memoranda, quick hospital discharges, numbers of arrests for petty crimes.

Another common problem is using absolute metrics when only relativized metrics make sense. For example, insisting that education funding reward schools whose students perform better on standardized tests can make for a good sound-bite, but it has always been understood to produce "teaching to the test" — that is, a narrow educational focus that leaves ex-students poorly prepared to deal with a complex world and its wide-ranging problems. Importantly, it also leads schools to take a low-risk approach to whom they educate. Instead of seeking out and supporting students with disabilities or from disadvantaged socioeconomic backgrounds, they will narrow their admissions to those already likely to perform well on standardized tests. That can really pay off for their budgets, but it also lets society down. The major blame should fall on the politicians who force the performance metrics on the schools in the first place. Instead of an absolute test-performance standard, a relativized standard, comparing outcome performance to initial performance, would eliminate that particular distortion of educational goals. (While doing nothing about teaching to the test, of course, which requires some other response.)

Muller reviews many of the ways in which metrics can go wrong, and he specifically considers them in some of the more socially important domains in which they do go wrong: education (secondary and tertiary), policing, medicine, finance, the military, foreign aid. His book is an excellent starting point for thinking about the general subject of work performance measurement or its particular consideration in any of these domains.

I can't give Muller five stars here, however. One area in which he goes seriously astray is the issue of transparency in our institutions. Muller quite rightly points out that many processes require confidentiality and that the demand for transparency may well have gone too far. In diplomacy, privacy or secrecy can be essential to achieving a reasonable outcome. Diplomacy requires compromise, and compromise requires giving away something that you'd prefer not to. If the thing given away is made public too soon, the diplomatic transaction may well implode before any compromise can be agreed. People have secrets for good reason. Similarly, the demand for honesty in politicians (generally an unsuccessful demand, to be sure, but commonly and loudly made in the media nonetheless) can turn into a fetish, where politicians who legitimately change their minds are excoriated for having no position, or those who bend the truth to achieve a greater good are pummelled for dishonesty. The whole point of politics is for our politicians to achieve greater goods, not lesser goods, so these kinds of admonition by Muller are well taken.

Yet Muller goes far too far in this. Wikileaks’ revelation of war crimes in Iraq was accompanied by the revelation of identities of intelligence agents around the world. Julian Assange didn’t care about the safety of those agents or the future ability of, say, the CIA to recruit other agents. That’s a failure of over-zealous transparency, for sure. But Muller gives no credit at all to the other side. For example, he fails to accept that the whistleblowing revelation of the war crimes itself was a good thing and should be legally protected. He fails to acknowledge any benefit from Freedom of Information laws. Worse still, he castigates Edward Snowden for revealing many of the secrets of the NSA. While the United States has the right to have a “no such agency”, it doesn’t have the right to have spying and anti-encryption programs of the depth and breadth of the NSA. What Snowden exposed was, and is, criminal activity endangering democracy (see, e.g., https://www.bbc.com/news/uk-45510662). I think it absurd that Muller can’t put in even one word of support for such transparency. I wish for treatments of these subjects that show better balance and judgment than Muller’s.

The New Devil’s Dictionary

Ambrose Bierce’s Devil’s Dictionary is a fine entertainment. This derivative effort is intended to be at least mildly didactic, while no doubt being a little less amusing. The abuse of language for political effect has been going on a long while, but has accelerated in recent times, with the rise to large-scale dominance of the media by right wing ideologues. Let us all do a little something about it, at least by speaking and writing properly.

Corruption

The innate behavior of liberal politicians and activists in advocating for the commonwealth, environmental sustainability or any regulation of markets.

Of course, the real meaning is the act of dishonest dealing in return for money or private gain, typically taking advantage of a position of power. The abuse of the word "corruption" has become typical of Trump and the Murdoch press. See, e.g., http://www.factcheck.org/2016/10/a-false-corruption-claim/

Political Correctness

Showing respect and common decency towards minorities or disadvantaged people, especially by leftists; a refusal to demonstrate the manly virtues of machismo, sexism, misogyny, racism and other forms of bigotry.

While the term "politically correct" was used by the left in a self-deprecating way in the 1970s, it has since been appropriated by the right wing to disparage those who supposedly go over the top in avoiding embarrassment to minorities, etc. The term is typically applied in response to someone simply showing ordinary courtesy and decency, revealing the lack thereof in the critic.

Reform

Government action to hobble its own ability to protect and promote the public interest, usually promoted on the grounds that government powers are abused and harmful. The latter is often true, especially when in pursuit of “reform”.

To reform something is to improve it, by, for example, removing obstacles to its proper function. But right wing politicians and media apply it when their intent is to undermine or defeat proper function.

Socialist

Someone who opposes the obscenely rich taking full advantage of their wealth, for example, by wanting to tax them for the welfare of the commonwealth.

In the common usage in the rightwing media, many who endorse regulated capitalist markets are routinely denounced as socialists, e.g., Barack Obama and Bernie Sanders (who, admittedly, falsely labels himself a socialist, without any implied denunciation). But socialists properly understood advocate public ownership and control of the means of production and oppose capitalist markets. Any dictionary will confirm this, but rightwing media commentators are not often found referring to dictionaries or other reliable sources of information.

Analysing Arguments Using Causal Bayesian Networks


Kevin B Korb and Erik P Nyberg

Introduction

Analysing arguments is a hard business. Throughout much of the 20th century many philosophers thought that formal logic was a key tool for understanding ordinary language arguments. They spent an enormous amount of time and energy teaching formal logic to students before a slow accumulation of evidence showed that they were wrong and, in particular, that students were little or no better at dealing with arguments after training in formal logic than before (e.g., Nisbett et al., 1987). Around 1960 a low-level rebellion began, leading to inter-related efforts in understanding and teaching critical thinking and informal logic (e.g., Toulmin, 1958).

Argument mapping has long been a part of this alternative program; indeed it predates it. The idea behind argument mapping is that while formal logic fails to capture much about ordinary argument that can help people’s understanding, another kind of syntax might: graphs. If the nodes of a graph represent the key propositions in an argument and arrows represent the main lines of support or critique, then we might take advantage of one of the really great tools of human reasoning, namely, our visual system. Perhaps the first systematic use of argument maps was due to Wigmore (1913). He presented legal arguments as trees, with premises leading to intermediate conclusions, and these to a final conclusion. This simple concept of a tree diagram representing an argument or subargument – possibly enhanced with elements for indicating confirmatory and disconfirmatory arguments and also whether lines of reasoning function as alternatives or conjunctively – has been shown to be remarkably effective in helping students to improve their argumentative skills (Alvarez, 2007).

However effective and useful argument maps have been shown to be, there is one central aspect of most arguments that they entirely ignore: degrees of support. In deductive logic there is no room for degrees of support: arguments are either valid or invalid; premises are simply true or false. While that suffices for an understanding of Aristotle's syllogisms, it doesn't provide an insightful account of, say, arguments about global warming and what we should do about it. Diagnoses of the environment, human diseases or the final trajectory of our universe are all uncertain, and arguments about them may be better or worse, raising or lowering support, but very few are simply definitive. An account of human argument which does not accommodate the idea that some of these arguments are better than others, and that all of them are better than the arguments of flat-earthers, is simply a failure. Argument mapping cannot be the whole story.

Our counterproposal begins with causal Bayesian networks (CBNs). These are a proper subset of Bayesian networks, which have proved remarkably useful for decision support, reasoning under uncertainty and data mining (Pearl, 1988; Korb & Nicholson, 2010). CBNs apply a causal semantics to Bayesian networks: whereas BNs interpret an arc as representing a direct probabilistic dependency between variables, CBNs interpret an arc as representing both a direct probabilistic and a direct causal dependency, given the available variables (Handfield, et al., 2008). When arguments concern the state of a causal system, past, present or future, the right approach to argumentation is to bring to bear the best evidence about that state to produce the best posterior probability for it. When a CBN incorporates the major pieces of evidence and their causal relation to the hypothesis in question, that may already be sufficient argument for a technologist used to working with Bayesian networks. For the rest of us, however, there is still a large gap between a persuasive CBN and a persuasive argument. So, our argumentation theory ultimately will need to incorporate also a methodology for translating CBNs into a natural language argument directed at a target audience.

Example

Consider the following simple argument:

We believe that Smith murdered his wife. A large proportion of murdered wives turn out to have been murdered by their husbands. Indeed, Smith’s wife had previously reported to police that he had assaulted her, and many murderers of their wives have such a police record. Furthermore, Smith would have fled the scene in his own blue car, and a witness has testified that the car the murderer escaped in was blue.

Unlike many informal arguments, this one is already simple and clear: the conclusion is stated upfront, the arguments are clearly differentiated, and there is no irrelevant verbiage. Like most informal arguments, however, it is a probabilistic enthymeme: it supports the conclusion probabilistically rather than deductively and relies on unstated premises. So, it’s hard to give a precise evaluation of it until we make both probabilities and premises more explicit, and combine them appropriately.

We can use this simple CBN to assess the argument:

Wife reported assault → Smith murdered wife → Car blue → Witness says car blue

The arrows indicate a direct causal influence of one variable on the probability distribution of the next variable. In this case, these are simple Boolean variables, and if one variable is true then this raises the probability that the next is true, e.g., if Smith did assault his wife, then this made it more likely that he would murder her. (It could be that spousal assault and murder are actually correlated by common causes, but this wouldn't alter the probabilistic relevance of assault to murder, so we can ignore the possibility here.)

First, we can do some research on crime statistics to find that 38% of murdered women were murdered by their intimate partners, giving us our prior probability, before considering any other evidence.

Second, we can establish that 30% of women murdered by their intimate partners had previously reported to police being assaulted by those partners (based upon Olding and Benny-Morrison, 2015). Admittedly, as O. J. Simpson’s lawyer argued, the vast majority of husbands who assault their wives do not go on to murder them. However, his lawyer was wrong to claim that Simpson’s assault record was therefore irrelevant! We just need to add some additional probabilities, which a CBN forces us to find, and combine them appropriately, which a CBN does for us automatically. Suppose that in the general population only 3% of women have made such reports to police, and this factor doesn’t alter their chance of being murdered by someone else (based on Klein, 2009). Then it turns out that the assault information raises the probability of Smith being the murderer from 38% to 86%.
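
As a quick check of that figure, here is a minimal sketch in Python of the Bayes' theorem calculation (the numbers are those quoted above; a BN tool performs the same update automatically):

```python
# Updating the prior on Smith being the murderer with the assault-report evidence.
p_murder = 0.38            # prior: a murdered woman was killed by her intimate partner
p_report_if_murder = 0.30  # assault previously reported, given the partner is the murderer
p_report_otherwise = 0.03  # assault previously reported, in the general population

posterior = (p_report_if_murder * p_murder) / (
    p_report_if_murder * p_murder + p_report_otherwise * (1 - p_murder))
print(round(posterior, 2))   # ~0.86
```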

Third, suppose we accept that if Smith did murder his wife, then the probability of him using his own blue car is 75–95%. Since this is imprecise, we can set it at 85% (say) and vary it later to see how much that affects the probability of the conclusion (in a form of sensitivity analysis).

Fourth, we can test our witness to see how accurate they are in identifying the color of the car in similar circumstances. When a blue car drives past, they successfully identify it as blue 80% of the time. Should we conclude that the probability that the car was blue is 80%? That would be an infamous example, due to Tversky and Kahneman, of the Base Rate Fallacy — i.e., ignoring prior probabilities. In fact, we also need to know how successfully the witness can identify non-blue cars as non-blue (say, 90%) and the base rate of blue cars in the population (say, 15%). Then it turns out that the witness testimony alone would raise the probability that Smith was the murderer from 38% to 69%. Combining the witness testimony with the assault information, the updated probability that Smith is the murderer rises to 96%.
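
The whole chain can be checked the same way. Below is a brute-force sketch in Python (the probabilities are the ones quoted above; the function names are ours). For convenience it factors the joint distribution with the murder node as root, using the quoted conditional probabilities; the joint implied by those numbers, and hence every posterior, is the same as for the causal ordering in the diagram. The outputs land close to the figures quoted above, with small differences due to rounding and the exact parameter choices.

```python
# Brute-force inference over the four-node chain by enumerating all joint states.
from itertools import product

P_M = 0.38
P_A_given = {True: 0.30, False: 0.03}   # wife reported assault, given murder / not
P_B_given = {True: 0.85, False: 0.15}   # murderer's car blue, given murder / not
P_W_given = {True: 0.80, False: 0.10}   # witness says blue, given car blue / not

def joint(m, a, b, w):
    p = P_M if m else 1 - P_M
    p *= P_A_given[m] if a else 1 - P_A_given[m]
    p *= P_B_given[m] if b else 1 - P_B_given[m]
    p *= P_W_given[b] if w else 1 - P_W_given[b]
    return p

def posterior_murder(**evidence):
    num = den = 0.0
    for m, a, b, w in product([True, False], repeat=4):
        state = {'a': a, 'b': b, 'w': w}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(m, a, b, w)
        den += p
        if m:
            num += p
    return num / den

print(round(posterior_murder(), 2))                 # 0.38: the prior
print(round(posterior_murder(a=True), 2))           # ~0.86: assault report alone
print(round(posterior_murder(w=True), 2))           # ~0.68: witness testimony alone
print(round(posterior_murder(a=True, w=True), 2))   # ~0.95: both items combined
```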

Even this toy example illustrates that building a CBN forces one to think about how the main factors are causally related and to investigate all the necessary probabilities. Assuming the CBN is correct for the variables considered, and is built in one of many good BN software tools, it acts as a useful calculator: it combines these probabilities appropriately to calculate the probability of our conclusion. Thus, it helps prevent much of the vagueness and fallacious reasoning that are widespread, even in important legal arguments.

Alternative Techniques for Argument Analysis

Although there are genuine difficulties in using this technique, we believe that much of the resistance to it is based on imaginary difficulties, while the rival techniques discussed below have difficulties of their own.

In our toy example, the prose version of the argument doesn’t quantify the probabilities involved, doesn’t specify the missing premises, doesn’t indicate how the various factors are related to each other, and it’s far from clear how to compute an appropriate probability for the conclusion. The fact that the probabilities and premises aren’t specified doesn’t really make the argument non-probabilistic, it just makes it vague. Prose is often the final form of presenting an argument, but it is far from ideal for the prior analysis of an argument.

Resorting to techniques from formal logic, diagrammatic or otherwise, requires even more effort than CBN analysis, while typically losing information. It is really appropriate only for the most rigorous possible examination of essentially deductive arguments.

A more recent approach with some promising empirical backing is the use of argument maps. These are typically un-parameterized non-causal tree structures in which the conclusion is the trunk and all branches represent lines of argument leading to it. (See Tim van Gelder’s ‘Critical Thinking on the Web’.) Arguably, these are equivalent to a restricted class of Bayesian network without explicit parameters (as in the qualitative probabilistic networks of Wellman, 1990). Thus, they have many of the advantages of BNs, but they don’t provide much guidance in computing probabilities, so they can be vague and subject to the kinds of fallacious reasoning that are avoided with actual BNs. Also, as they are typically not causal, they can actually encourage misunderstanding of the scenario.

Objections

There are many common objections to the use of Bayesian networks, or causal Bayesian networks, for argumentation. Here we address some of these.

1) Bayesian network tools are difficult to use.

This is true for those who are not experienced with them. "Fluency" with BN tools requires an amount of training on the order of that required to become a reasonably good argument analyst using any tool. (In our experience, some philosophers get fed up with Bayesian network tools when they fail to represent an argument effectively within the first ten minutes of use!)

There are other options besides training. For specific applications, easy-to-use GUIs have been developed. Also, Bayesian network tools can be (and should be) enhanced to support features that would make them easier for argument analysis, such as allowing nodes to be displayed with the full wording of a proposition which they represent. But that’s up to tool developers. In the meantime, serious argument analysts would profit from learning how to use the tools, not just for the sake of argumentation, but also for the wide range of other tasks they have been developed for, such as decision analysis.

2) BNs force you to put in precise numbers for priors and likelihoods; this is a kind of false precision. Argument maps are better because they are qualitative.

Certainly, numbers need to be entered to use the automated updating via Bayes' theorem. As quantities, they are precise (at least to whatever limited-precision arithmetic the tool supports). That doesn't mean the precision need be false, that is, falsely interpreted. The user can be fully aware of its limits. Indeed, all BN tools support sensitivity analysis, the ability to test the BN's behavior across a range of values. So, if the analyst is unsure of just what the probability of something is, she or he can try out a range of numbers to see what effect the variation has on other variables of interest. If the conclusion can be substantially weakened by pushing the probabilities of premises around within reasonable limits, then it's correct to infer that the argument is not compelling; otherwise, the argument may be compelling. This kind of investigation of the merits of the argument — and of the uncertainty of our beliefs — is not possible with qualitative maps alone.
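
To make this concrete on the murder example above, here is a minimal sketch in Python (reusing the figures quoted earlier; the helper function is ours) that varies the one probability we were least sure of, the chance the murderer used his own blue car, across the 0.75-0.95 range and recomputes the conclusion:

```python
# Sensitivity analysis: vary one uncertain parameter and watch the final posterior.
def posterior(p_blue_if_murder):
    p_m, p_a_m, p_a_not = 0.38, 0.30, 0.03           # prior and assault-report likelihoods
    p_w_b, p_w_notb, p_blue_base = 0.80, 0.10, 0.15  # witness reliability and blue-car base rate
    # likelihood of the combined evidence (assault report + witness says blue)
    like_m = p_a_m * (p_w_b * p_blue_if_murder + p_w_notb * (1 - p_blue_if_murder))
    like_not = p_a_not * (p_w_b * p_blue_base + p_w_notb * (1 - p_blue_base))
    return like_m * p_m / (like_m * p_m + like_not * (1 - p_m))

for p in (0.75, 0.80, 0.85, 0.90, 0.95):
    print(p, round(posterior(p), 3))
```

Across the whole range the posterior moves only from about 0.95 to 0.96, so the imprecision in that parameter does nothing to undermine the argument; had it swung the conclusion widely, that would itself have been worth reporting.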

Forcing one to obtain numbers is actually an advantage, as the example above indicated: the analyst is forced to learn enough about the domain to model it effectively.

3) Where do the numbers come from?

This is an objection any Bayesian will have encountered repeatedly. Since we are here talking about causal Bayesian networks, the ultimate basis for these probabilities must be physical dispositions of causal systems. Practically speaking, they will be sourced using the same means that Bayesian network modellers use in all the applied sciences, a combination of sample data (using data mining tools) and expert opinion (see Korb and Nicholson, 2010, Part III for an introduction to such techniques).

4) Naive Bayesian networks (NBNs) have been used effectively for argument analysis and are much simpler, e.g., by Peter Sturrock (2013) in his “AKA Shakespeare”. Why not just use them?

NBNs for argumentation simplify by requiring that pieces of evidence be independent of each other given one or another of the hypotheses at issue. If the problem really has that structure, then there's nothing wrong with expressing it in an NBN. However, distorting arguments into that structure when they don't fit causes problems rather than resolving them. In Sturrock's case, he suggested, for example, that the Stratford Shakespeare not having left behind a corpus of unpublished writing, not having written for aristocrats for pay, and not having engaged in extensive correspondence with contemporaries are all independent items of evidence, meaning that their joint likelihood is obtained by multiplying their likelihoods together (and then multiplying again by the likelihoods of all the other items of evidence he advanced). The result was that he found the probability that the writings of Shakespeare came from the eponymous guy from Stratford ranged from 10^-15 all the way down to 10^-21! As Neil Thomason pointed out to us, this means that you would be more likely to encounter the author of those works by randomly plucking any human off the planet at the time (or since!) than by arranging to meet that Will Shakespeare from Stratford! While the simplicity of NBNs is appealing, this is a case of making our models simpler than possible. Real dependencies and interrelatedness of evidence cannot be ignored.
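
The arithmetic behind that kind of collapse is easy to exhibit with invented numbers (these are not Sturrock's figures, just an illustration of the independence assumption at work):

```python
# Purely illustrative: five items of evidence, each individually five times as
# likely if the hypothesis is false as if it is true, but all near-duplicates
# of one underlying fact.
prior_odds = 4.0     # say, 4:1 in favour of the hypothesis before the evidence
lr_each = 1 / 5      # likelihood ratio P(e_i | H) / P(e_i | not H) for each item

# Naive Bayes treats the items as independent given the hypothesis, so the ratios multiply:
print(prior_odds * lr_each ** 5)   # 0.00128, i.e. roughly 780:1 against H

# If the five items really reflect a single underlying fact, an honest combined
# likelihood ratio is closer to that of one item (an assumption made purely for illustration):
print(prior_odds * lr_each)        # 0.8, i.e. only about 5:4 against H
```

Multiplying what is effectively one piece of evidence several times over is how implausibly extreme posteriors arise.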

5) Some arguments are not about causal processes, but have a structure that can only be illuminated otherwise.

Here’s a famous case:

Socrates was a human.

All humans are mortal.


Therefore, Socrates was mortal.

While Bayesian networks can certainly represent deductive arguments, they will not be causal. Furthermore, their probabilistic updating will be uninformative. A reasonable conclusion is that BNs are ill suited for analysing deductive arguments. Argument maps may or may not be helpful; at least, their lack of quantitative representation will do no harm in such cases.

This concession is not exactly painful: our advocacy of CBNs was always only about cases where causal reasoning does figure in the assessment of a thesis. Slightly more problematic are cases where the core reasoning might be claimed to be associative rather than causal. For example, yellow-stained fingers are associated with lung cancer, but staining your fingers yellow is not a leading cause of lung cancer. That implies we can make meaningful arguments from one outcome to the other without following a causal chain. (The inference of a causal chain from such associations is frequently derided as the "post hoc ergo propter hoc" fallacy.)

In such cases, however, we are still reasoning causally, and it is best to have that causal reasoning made explicit:

Yellow fingers ← Smoking → Lung Cancer

With the correct causal model, we can follow the dependencies, and we can also figure out the conditional independencies in the situation (screening off relations). Without the causal model available, we will only be using our intuitions to assess dependencies, and we will often get things wrong.
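
A minimal sketch with made-up numbers (ours, purely for illustration) shows both the association and the screening off:

```python
# Yellow fingers and lung cancer as effects of a common cause, smoking.
p_smoke = 0.2
p_yellow = {True: 0.60, False: 0.02}   # P(yellow fingers | smoker?)
p_cancer = {True: 0.15, False: 0.01}   # P(lung cancer | smoker?)

p_c = sum((p_smoke if s else 1 - p_smoke) * p_cancer[s] for s in (True, False))
p_y = sum((p_smoke if s else 1 - p_smoke) * p_yellow[s] for s in (True, False))
p_cy = sum((p_smoke if s else 1 - p_smoke) * p_yellow[s] * p_cancer[s] for s in (True, False))

print(round(p_c, 3))         # P(cancer): the baseline
print(round(p_cy / p_y, 3))  # P(cancer | yellow fingers): raised, with no direct causal link
print(p_cancer[True])        # P(cancer | smoking) = P(cancer | smoking, yellow): screened off
```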

6) There are generally very many equally valid ways of modeling a causal system. How can one choose between them?

This is certainly correct. For example, between smoking and lung cancer there are a great many low-level causal processes required to damage lung cells and produce a malignant cancer. Whether we choose to model them or not depends upon our interests (pragmatics). If we are not arguing about the low-level processes, then we shall probably not bother to model them, as they would simply be a distraction. In general, there will always be multiple correct ways of modeling a causal system, meaning that the probabilistic (and causal) dependencies between the variables used are correctly represented. Which one you use will depend in part upon your argumentative purpose and in part upon your taste.

Argument Evaluation

If we are to know that our argument methods are good, we shall need methods of assessing them, built upon justifiable methods for assessing individual arguments. Arguments may be evaluated either as probabilistic predictions (if they are quantitative) or as natural language arguments or both. Here we will address quantitative evaluation. Evaluation of arguments in terms of their intelligibility, etc. we will leave to a future discussion.

One of the leading experts on probabilistic prediction in the social sciences, Philip Tetlock, has said "it really isn't possible to measure the accuracy of probability judgment of an individual event" (Tetlock, 2015). This is not correct. To be sure, in context Tetlock points out that it is possible to measure the accuracy of probability judgments within a reference class, by accumulating the scores of individual predictions and using their average as a measure of judgment in like circumstances. But if that is true, then such a measure applies equally to individual judgments within the reference class (one cannot accumulate the scores of individual predictions if there are no such scores!), so Tetlock's point reduces to the banal observation that you can "always" defend a failed probabilistic prediction. For example, if an event fails to occur that you have predicted with probability 99.9999%, you can shrug your shoulders and say "shit happens!" But that is a defence you cannot use very often.

Tetlock suggests that the whole problem of assessing probabilistic predictions is a deep mystery. But his real problem is just the score he uses to assess predictions, namely the Brier score. It is a seriously defective measure of probabilistic predictions, which ought to be surprising, since the real work of solving how to assess predictions was done half a century ago. But communication between the various sciences is slow and painful.

In most of statistical science an even worse measure of predictive adequacy is used: predictive accuracy. Predictive accuracy is defined as the number of correct predictions divided by the number of predictions. How can you do better in measuring predictive accuracy than using predictive accuracy? Of course, that’s why we slipped in the phrase “predictive adequacy” in place of “predictive accuracy”.

The problem with predictive accuracy is that it ignores the fact that prediction is inherently uncertain and so probabilistic. We should like our predicted probabilities to match the actual frequencies of outcomes that arise in similar circumstances. If, for example, we were using a true (stochastic) model to make our predictions, such a match would be guaranteed by the Law of Large Numbers. Predictive accuracy takes a probabilistic prediction's modal value and effectively rounds it up to 1. In measuring predictive accuracy, for example, a prediction of 0.51 that a mushroom is poisonous counts the same as a prediction of 1. But that the two should not be assessed as the same is obvious! The problem is what cognitive psychologists call "calibration": if your probabilistic estimates match real frequencies on average, then you are well calibrated. Most of us are overconfident, pushing probabilities near 1 or 0 even nearer to 1 or 0. Nate Silver, for example, reports that events turning up 15% of the time are routinely said to be "impossible" (Silver, 2012). Another way of putting this is that predictive accuracy is not a strictly proper scoring rule: it will reward the true probability distribution for events maximally, but it will also reward many incorrect distributions equally. For example, if you take every modal value and revise its probability to be maximal, you will have an incorrect distribution that is rewarded identically to the correct distribution.
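
A minimal sketch with invented forecasts makes the point that accuracy cannot distinguish a hedged forecaster from a maximally overconfident one:

```python
# Predictive accuracy cannot tell a hedged forecaster from an overconfident one.
forecasts_a = [0.51, 0.51, 0.51, 0.51]   # hedged forecaster
forecasts_b = [1.00, 1.00, 1.00, 1.00]   # maximally overconfident forecaster
outcomes    = [1, 1, 1, 0]               # the last predicted event does not occur

def accuracy(forecasts, outcomes):
    # a forecast counts as "correct" when its modal value matches the outcome
    return sum((f > 0.5) == bool(o) for f, o in zip(forecasts, outcomes)) / len(outcomes)

print(accuracy(forecasts_a, outcomes))   # 0.75
print(accuracy(forecasts_b, outcomes))   # 0.75: identical, despite very different claims
```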

Tetlock's Brier score is strictly proper, but that doesn't make it strictly correct. Propriety is a kind of minimum standard: if you can beat (or match) the truth with a false distribution, then the scoring function isn't telling us what we want. The Brier score reports the average squared deviation of the actual outcomes (coded 0 or 1) from the predicted probabilities, so the goal is to minimize it (it is a form of mean squared error). If we have the true distribution in hand, we cannot be beaten (any deviation from the actual probabilities will be punished over the long run). However, while the Brier score punishes deviant distributions, it does so insufficiently in many cases. Consider the extreme case of predicting a mushroom's edibility with probability 1. When false, this is punished with a penalty of 1. While such a penalty is maximal for a single prediction, in a long run of predictions it may be washed out by other, better predictions. From a Bayesian point of view, this is highly irrational: a predicted probability of 1 corresponds to strictly infinite odds against any alternative occurring! That kind of bet is always irrational, and if it goes wrong, it should be punished by losing everything in the universe; that is, recovery should be impossible. The Brier score punishes mistakes in the range [0.9, 1] much the same, even though the shift from a prediction of 0.9 to 0.91 is qualitatively distinct from a shift from 0.99 to 1: the latter is a "step" from finite to infinite odds! Extreme probabilities need to be treated as extreme for a scoring function to correctly reward calibration and penalize miscalibration.

As we said, this problem was solved some time ago, beginning with the work of Claude Shannon (Shannon and Weaver, 1949). Shannon proposed measuring the information in a "message" by using an efficient code book to encode it and reporting the length of the encoding. An efficient code is one which allocates –log2 P(message) bits to each possible message.

It turns out that log scores based upon Shannon’s information measure have all the properties we should like for scoring predictions. I.J. Good (1952) proposed as a score the number of bits required to encode the actual outcome given a Shannon efficient code based on the predicted outcome. That is, Good’s reward for binary predictions is:

reward = 1 + log2 P(actual outcome)

That is, the reward is 1 plus the negation of the number of bits needed to report the actual outcome using the code efficient for the predictive distribution. The addition of 1 simply renormalizes the score, so that 0 reports complete ignorance, positive numbers predictive ability above chance, and negative numbers worse than chance, relative to a prior probability of 0.5 for a binomial event. Hope and Korb (2004) generalized Good's score to multinomial predictions.

Nothing will be able to beat the true distribution in encoding actual outcomes with an efficient code over the long run; indeed, nothing will match it, so the score is strictly proper. But the penalty for mistakes is straightforwardly related to the odds one would take to bet against the winning proposition. Infinite odds imply an outcome that is impossible, meaning in information-theoretic terms, an infinite message describing the outcome. No matter how long a sequence of predictions is scored, an infinite penalty added to a finite number of successes will remain an infinite penalty. So, irrationality is appropriately punished.
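
Here is a minimal sketch of the score just described (the probabilities fed to it are invented):

```python
# Good's renormalized log score: reward = 1 + log2(p), where p is the probability
# the forecaster assigned to the outcome that actually occurred.
from math import log2

def information_reward(p_actual):
    return 1 + log2(p_actual) if p_actual > 0 else float('-inf')

print(information_reward(0.5))    # 0.0: no better than ignorance, for a binary event
print(information_reward(0.9))    # ~0.85
print(information_reward(0.99))   # ~0.99
print(information_reward(0.01))   # ~ -5.6: a confident miss is punished heavily
print(information_reward(0.0))    # -inf: an "impossible" event that happens is unrecoverable
```

The penalty grows without bound as the probability given to what actually happened approaches zero, which is exactly the treatment of extreme forecasts that the Brier score lacks.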

All of this refers to the usual circumstance of scoring or assessing predictions, where we know the outcome but are uncertain of the processes which bring it about. Supposing that we actually know how the outcomes are produced is supposing that we have an omniscient, God-like perspective on reality. But in special cases we do have a God-like perspective, namely when the events we are predicting are the outcomes of a computer simulation that we know, because we built it. In such cases, we can score our models more directly than by looking at their predictions and comparing them to outcomes. We can simply compare a model, produced, say, by some argumentative method, with the simulation directly. In that case, another information-theoretic construct recommends itself: cross entropy (or its excess over the true model's own entropy, the Kullback-Leibler divergence). Cross entropy reports the expected number of bits required to encode an outcome from the true model (the simulation, above) using a code efficient for the learned model rather than for the true model. In other words, since we have both models (true and learned), we can compare their probability distributions directly, in information-theoretic terms, rather than taking a lengthy detour through their outcomes and predicted outcomes.
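
For a sketch of the calculation (the outcome labels and distributions below are invented):

```python
# Comparing a learned model to a known "true" (simulation) model directly, in bits.
from math import log2

true_dist    = {'collapse': 0.2, 'stagnation': 0.5, 'recovery': 0.3}   # hypothetical outcomes
learned_dist = {'collapse': 0.1, 'stagnation': 0.6, 'recovery': 0.3}

entropy       = -sum(p * log2(p) for p in true_dist.values())                   # optimal code length
cross_entropy = -sum(true_dist[x] * log2(learned_dist[x]) for x in true_dist)   # using the learned code
kl_divergence = cross_entropy - entropy                                         # the extra bits paid

print(round(entropy, 3), round(cross_entropy, 3), round(kl_divergence, 3))
```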

In Search of a Method

CBNs are an advantageous medium for addressing other common issues in argument analysis. Active open-mindedness suggests we can minimize confirmation bias by proactively searching out alternative points of view and arguments. This can be supported by constructing CBNs with sources of evidence and lines of causal influence additional to those which might at first satisfy us, and, in particular, which might be expected to cut against our first conclusion. In view of confirmation bias (and anchoring, etc.), it might be useful to give the task of constructing an alternative CBN to a second party.

Another benefit of using CBNs is the direct computational support for assessing the confirmatory power of different pieces of evidence relative to one another, that is, how "diagnostic" the evidence is in picking out one hypothesis amongst many. Bayes factors (the likelihood of the evidence under one hypothesis relative to another) have long been recommended for assessing confirmation; once the evidence is coded into a CBN, the diagnostic merit of each piece of evidence for the hypotheses in play is trivially computable, and computed, by the CBN itself. Hence, the merits of each line of argument can be clearly and quickly assessed, whether in isolation or in any combination.
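
For the murder example above, the Bayes factors can be read straight off the same conditional probabilities (a minimal sketch, with the figures quoted earlier):

```python
# Bayes factors for the two evidence items: murderer vs not.
p_assault = {'murderer': 0.30, 'not': 0.03}
p_blue    = {'murderer': 0.85, 'not': 0.15}
# witness-says-blue likelihoods, marginalizing over the car's actual colour:
p_witness = {h: 0.80 * p_blue[h] + 0.10 * (1 - p_blue[h]) for h in p_blue}

print(round(p_assault['murderer'] / p_assault['not'], 1))   # 10.0: the assault report is the more diagnostic item
print(round(p_witness['murderer'] / p_witness['not'], 1))   # ~3.4: the witness testimony is weaker evidence
```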

All of the above does not provide a complete theory of argumentation using CBNs. These uses of causal Bayesian networks must sit within a larger method. This must include deciding when CBNs are appropriate and effective, and when not. When they are not effective, alternative techniques will need to be applied, such as deductive logic or argument mapping. A rich theory of argumentative context and audience analysis is needed in order to understand such issues as which lines of argument can be left implicit (enthymematic) and which sources of premises are acceptable. And guidance needs to be developed in how to translate a CBN, which only represents arguments implicitly, into an explicit formulation in ordinary language.

The required techniques in which CBN-based argumentation is embedded are largely just those employed in critical thinking and argument analysis generally. It is a substantial, but achievable, research program, ranging across disciplines, to develop these to the point where trained analysts might produce similar, and similarly effective, arguments from the same starting points.


The figure of 38% is a worldwide statistic from the WHO (“Domestic Violence”, Wikipedia). If the argument were specific to a country or region, other statistics might be more appropriate. The figure we have used is a reasonable one for the argument as stated, that is, without a specific context. Uncertainty for specific numbers can be treated via sensitivity analysis, as we discuss below.

References

Alvarez, C. (2007). Does philosophy improve critical thinking skills? Master's thesis, Department of Philosophy, University of Melbourne.

Domestic violence. (n. d.). In Wikipedia, The Free Encyclopedia. Retrieved 16:49, March 29, 2016, from https://en.wikipedia.org/w/index.php?title=Domestic_violence&oldid=712521522

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society, Series B (Methodological), 14(1), 107-114.

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Hope, L. R., & Korb, K. B. (2004). A Bayesian metric for evaluating machine learning algorithms. In AI 2004: Advances in Artificial Intelligence (pp. 991-997). Springer Berlin Heidelberg.

Klein A. R. (2009). Practical Implications of Domestic Violence Research. National Institute of Justice Special Report. US Department of Justice. Retrieved from http://www.ncjrs.gov/pdffiles1/nij/225722.pdf

Korb, K. B., & Nicholson, A. E. (2010). Bayesian artificial intelligence. CRC press.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238(4827), 625-631.

Olding, R. and Benny-Morrison, A. (2015, Dec 16). The common misconception about domestic violence murders. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/nsw/the-common-misconception-about-domestic-violence-murders-20151216-glp7vm.html

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan Kaufmann.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Sturrock, P. A. (2013). AKA Shakespeare. Palo Alto, CA: Exoscience.

Tetlock, Philip (2015). Philip Tetlock on superforecasting. Interview with the Library of Economics and Liberty. http://www.econtalk.org/archives/2015/12/philip_tetlock.html

Toulmin, S. (1958). The Uses of Argument. Cambridge University Press.

Wellman, M. P. (1990). Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence, 44(3), 257-303.

Wigmore, J. H. (1913). The principles of judicial proof: as given by logic, psychology, and general experience, and illustrated in judicial trials. Little, Brown.