**Tags**

Argument Mapping, argumentation, Bayesian analysis, Bayesian reasoning, critical thinking, evaluation, Informal Logic, predictive evaluation

*Kevin B. Korb and Erik P. Nyberg*

**Introduction**

Analysing arguments is a hard business. Throughout much of the 20^{th} century many philosophers thought that formal logic was a key tool for understanding ordinary language arguments. They spent an enormous amount of time and energy teaching formal logic to students before a slow accumulation of evidence showed that they were wrong and, in particular, that students were little or no better at dealing with arguments after training in formal logic than before (e.g., Nisbett, et al., 1987). Beginning around 1960 a low-level rebellion began, leading to inter-related efforts in understanding and teaching critical thinking and informal logic (e.g., Toulmin, 1958).

Argument mapping has long been a part of this alternative program; indeed it predates it. The idea behind argument mapping is that while formal logic fails to capture much about ordinary argument that can help people’s understanding, another kind of syntax might: graphs. If the nodes of a graph represent the key propositions in an argument and arrows represent the main lines of support or critique, then we might take advantage of one of the really great tools of human reasoning, namely, our visual system. Perhaps the first systematic use of argument maps was due to Wigmore (1913). He presented legal arguments as trees, with premises leading to intermediate conclusions, and these to a final conclusion. This simple concept of a tree diagram representing an argument or subargument – possibly enhanced with elements for indicating confirmatory and disconfirmatory arguments and also whether lines of reasoning function as alternatives or conjunctively – has been shown to be remarkably effective in helping students to improve their argumentative skills (Alvarez, 2007).

However effective and useful argument maps have been shown to be, there is one central aspect of most arguments that they entirely ignore: degrees of support. In deductive logic there is no room for degrees of support: arguments are either valid or invalid; premises are simply true or false. While that suffices for an understanding of Aristotle’s syllogisms, it doesn’t provide an insightful account of, say, arguments about global warming and what we should do about it. Diagnoses of the environment, human diseases or the final trajectory of our universe are all uncertain, and arguments about them may be better or worse, raising or lowering support, but very few are simply definitive. An account of human argument which does not accommodate the idea that some of these arguments are better than others, and that all of them are better than the arguments of flat-earthers, is simply a failure. Argument mapping cannot be the whole story.

Our counterproposal begins with causal Bayesian networks (CBNs). These are a proper subset of Bayesian networks, which have proved remarkably useful for decision support, reasoning under uncertainty and data mining (Pearl, 1988; Korb & Nicholson, 2010). CBNs apply a causal semantics to Bayesian networks: whereas BNs interpret an arc as representing a direct probabilistic dependency between variables, CBNs interpret an arc as representing both a direct probabilistic and a direct causal dependency, given the available variables (Handfield, et al., 2008). When arguments concern the state of a causal system, past, present or future, the right approach to argumentation is to bring to bear the best evidence about that state to produce the best posterior probability for it. When a CBN incorporates the major pieces of evidence and their causal relation to the hypothesis in question, that may already be sufficient argument for a technologist used to working with Bayesian networks. For the rest of us, however, there is still a large gap between a persuasive CBN and a persuasive argument. So, our argumentation theory will ultimately also need to incorporate a methodology for translating CBNs into a natural language argument directed at a target audience.

**Example**

Consider the following simple argument:

We believe that Smith murdered his wife. A large proportion of murdered wives turn out to have been murdered by their husbands. Indeed, Smith’s wife had previously reported to police that he had assaulted her, and many murderers of their wives have such a police record. Furthermore, Smith would have fled the scene in his own blue car, and a witness has testified that the car the murderer escaped in was blue.

Unlike many informal arguments, this one is already simple and clear: the conclusion is stated upfront, the arguments are clearly differentiated, and there is no irrelevant verbiage. Like most informal arguments, however, it is a probabilistic enthymeme: it supports the conclusion probabilistically rather than deductively and relies on unstated premises. So, it’s hard to give a precise evaluation of it until we make both probabilities and premises more explicit, and combine them appropriately.

We can use this simple CBN to assess the argument:

*Wife reported assault* → *Smith murdered wife* → *Car blue* → *Witness says car blue*

The arrows indicate a direct causal influence of one variable on the probability distribution of the next variable. In this case, these are simple Boolean variables, and if one variable is true then this raises the probability that the next is true, e.g., if Smith did assault his wife, then this caused him to be more likely to murder his wife. (It could be that spousal assault and murder are actually correlated by common causes, but this wouldn’t alter the probabilistic relevance of assault to murder, so we can ignore the possibility here.)

First, we can do some research on crime statistics to find that 38% of murdered women were murdered by their intimate partners, and so obtain our prior probability, before considering any other evidence.^{†}

Second, we can establish that 30% of women murdered by their intimate partners had previously reported to police being assaulted by those partners (based upon Olding and Benny-Morrison, 2015). Admittedly, as O. J. Simpson’s lawyer argued, the vast majority of husbands who assault their wives do *not *go on to murder them. However, his lawyer was wrong to claim that Simpson’s assault record was therefore irrelevant! We just need to add some additional probabilities, which a CBN forces us to find, and combine them appropriately, which a CBN does for us automatically. Suppose that in the general population only 3% of women have made such reports to police, and this factor doesn’t alter their chance of being murdered by someone else (based on Klein, 2009). Then it turns out that the assault information raises the probability of Smith being the murderer from 38% to 86%.
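The update just described is a single application of Bayes’ theorem. Here is a minimal sketch in Python, using the probabilities stated above (the variable names are ours):

```python
# Prior: 38% of murdered women were murdered by their intimate partners.
p_murderer = 0.38
# Likelihoods of a prior assault report to police:
p_report_given_murderer = 0.30   # among women murdered by their partners
p_report_given_other = 0.03      # general population; assumed unchanged for other victims

# Bayes' theorem: P(M | R) = P(R | M) P(M) / P(R)
p_report = (p_report_given_murderer * p_murderer
            + p_report_given_other * (1 - p_murderer))
posterior = p_report_given_murderer * p_murderer / p_report

print(f"P(murderer | assault report) = {posterior:.0%}")  # 86%
```

The assault report multiplies the odds on the hypothesis by a factor of ten (0.30/0.03), which is what drives the jump from 38% to 86%.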

Third, suppose we accept that if Smith did murder his wife, then the probability of him using his own blue car is 75–95%. Since this is imprecise, we can set it at 85% (say) and vary it later to see how much that affects the probability of the conclusion (in a form of sensitivity analysis).

Fourth, we can test our witness to see how accurate they are in identifying the color of the car in similar circumstances. When a blue car drives past, they successfully identify it as blue 80% of the time. Should we conclude that the probability that the car was blue is 80%? This would be an infamous example, due to Tversky and Kahneman, of the Base Rate Fallacy — i.e., ignoring prior probabilities. In fact, we also need to know how successfully the witness can identify non-blue cars as non-blue (say, 90%) and the base rate of blue cars in the population (say, 15%). Then it turns out that the witness testimony alone would raise the probability that Smith was the murderer from 38% to 69%. Combining the witness testimony with the assault information, then the updated probability that Smith is the murderer rises to 96%.
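The whole chain can be evaluated by brute-force enumeration over the unobserved variables. The following hand-rolled sketch is ours (in practice one would use a BN tool); with the rounded numbers used here, the posteriors land within a percentage point of the figures quoted above:

```python
from itertools import product

p_m = 0.38                         # prior: Smith is the murderer
p_r = {True: 0.30, False: 0.03}    # P(assault report | M), P(report | not M)
p_c = {True: 0.85, False: 0.15}    # P(car blue | M), P(car blue | not M)
p_w = {True: 0.80, False: 0.10}    # P(witness says "blue" | car blue / not blue)

def posterior_m(evidence):
    """P(M | evidence) by enumeration over M and the car's color."""
    weight = {True: 0.0, False: 0.0}
    for m, c in product([True, False], repeat=2):
        p = p_m if m else 1 - p_m
        p *= p_c[m] if c else 1 - p_c[m]
        if "report" in evidence:
            p *= p_r[m] if evidence["report"] else 1 - p_r[m]
        if "witness" in evidence:
            p *= p_w[c] if evidence["witness"] else 1 - p_w[c]
        weight[m] += p
    return weight[True] / (weight[True] + weight[False])

print(f"assault report only: {posterior_m({'report': True}):.0%}")
print(f"witness only:        {posterior_m({'witness': True}):.0%}")
print(f"both combined:       {posterior_m({'report': True, 'witness': True}):.0%}")
```

This is exactly the computation a BN tool performs automatically once the conditional probability tables are entered.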

Even this toy example illustrates that building a CBN forces one to think about how the main factors are causally related and to investigate all the necessary probabilities. Assuming the CBN is correct for the variables considered, and is built in one of many good BN software tools, it acts as a useful calculator: it combines these probabilities appropriately to calculate the probability of our conclusion. Thus, it helps prevent much of the vagueness and fallacious reasoning that are widespread, even in important legal arguments.

**Alternative Techniques for Argument Analysis**

Although there are genuine difficulties in using this technique, we believe that much of the resistance to it is based on imaginary difficulties, while the rival techniques discussed below (**in bold**) have difficulties of their own.

In our toy example, **the prose version of the argument** doesn’t quantify the probabilities involved, doesn’t specify the missing premises, doesn’t indicate how the various factors are related to each other, and it’s far from clear how to compute an appropriate probability for the conclusion. The fact that the probabilities and premises aren’t specified doesn’t really make the argument non-probabilistic, it just makes it vague. Prose is often the final form of presenting an argument, but it is far from ideal for the prior analysis of an argument.

Resorting to techniques from **formal logic**, diagrammatic or otherwise, requires even more effort than CBN analysis, while typically losing information. It is really appropriate only for the most rigorous possible examination of essentially deductive arguments.

A more recent approach with some promising empirical backing is **the use of argument maps**. These are typically un-parameterized, non-causal tree structures in which the conclusion is the trunk and all branches represent lines of argument leading to it. (See Tim van Gelder’s ‘Critical Thinking on the Web’.) Arguably, these are equivalent to a restricted class of Bayesian networks without explicit parameters (as in the qualitative probabilistic networks of Wellman, 1990). Thus, they have many of the advantages of BNs, but they don’t provide much guidance in computing probabilities, so they can be vague and subject to the kinds of fallacious reasoning that are avoided with actual BNs. Also, as they are typically not causal, they can actually encourage misunderstanding of the scenario.

**Objections**

There are many common objections to the use of Bayesian networks, or causal Bayesian networks, for argumentation. Here we address some of these.

1) *Bayesian network tools are difficult to use. *

This is true for those who are not experienced with them. “Fluency” with BN tools requires training, something on the order of the amount of training required to become a reasonably good argument analyst using any tool. (In our experience, some philosophers get fed up with Bayesian network tools when they fail to represent an argument effectively within their first ten minutes of use!)

There are other options besides training. For specific applications, easy-to-use GUIs have been developed. Also, Bayesian network tools can be (and should be) enhanced to support features that would make them easier for argument analysis, such as allowing nodes to be displayed with the full wording of a proposition which they represent. But that’s up to tool developers. In the meantime, serious argument analysts would profit from learning how to use the tools, not just for the sake of argumentation, but also for the wide range of other tasks they have been developed for, such as decision analysis.

2) *BNs force you to put in precise numbers for priors and likelihoods; this is a kind of false precision. Argument maps are better because they are qualitative.*

Certainly, numbers need to be entered to use the automated updating via Bayes’ theorem. As quantities, they are precise (at least to whatever limited-precision arithmetic the tool supports). That doesn’t mean that the precision need be false, in the sense of falsely interpreted. The user can be fully aware of their limits. Indeed, all BN tools support sensitivity analysis, the ability to test the BN’s behavior across a range of values. So, if the analyst is unsure of just what the probability of something is, she or he can try out a range of numbers to see what effect the variation has on other variables of interest. If the conclusion can be substantially weakened by pushing the probabilities of premises around within reasonable limits, then it’s correct to infer that the argument is not compelling; otherwise, the argument may be compelling. This kind of investigation of the merits of the argument — and the uncertainty of our beliefs — is not possible with qualitative maps alone.
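Sensitivity analysis can be sketched directly for the toy example above: hold the other parameters fixed and sweep the uncertain probability P(car blue | murderer) across its stated range. The numbers are those of the example; the function name is ours:

```python
# Sensitivity analysis sketch: sweep the uncertain parameter P(car blue | murderer)
# over its stated 0.75-0.95 range and watch the effect on the conclusion.
p_m = 0.38                            # prior that Smith is the murderer
p_blue_base = 0.15                    # base rate of blue cars
p_w_blue, p_w_notblue = 0.80, 0.10    # witness says "blue" given blue / non-blue car

def posterior_given_witness(p_car_blue_if_m):
    # Likelihood of the witness saying "blue" under each hypothesis:
    like_m = p_car_blue_if_m * p_w_blue + (1 - p_car_blue_if_m) * p_w_notblue
    like_not_m = p_blue_base * p_w_blue + (1 - p_blue_base) * p_w_notblue
    return p_m * like_m / (p_m * like_m + (1 - p_m) * like_not_m)

for p in (0.75, 0.85, 0.95):
    print(f"P(C|M) = {p:.2f}  ->  P(M | witness) = {posterior_given_witness(p):.1%}")
```

The posterior only moves from about 65% to 70% across the whole range, so the conclusion is robust to this particular imprecision.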

Forcing one to obtain numbers is actually an advantage, as the example above indicated: the analyst is forced to learn enough about the domain to model it effectively.

3) *Where do the numbers come from?*

This is an objection any Bayesian will have encountered repeatedly. Since we are here talking about causal Bayesian networks, the ultimate basis for these probabilities must be physical dispositions of causal systems. Practically speaking, they will be sourced using the same means that Bayesian network modellers use in all the applied sciences, a combination of sample data (using data mining tools) and expert opinion (see Korb and Nicholson, 2010, Part III for an introduction to such techniques).

4) *Naive Bayesian networks (NBNs) have been used effectively for argument analysis and are much simpler, e.g., by Peter Sturrock (2013) in his “AKA Shakespeare”. Why not just use them?*

NBNs for argumentation simplify by requiring that pieces of evidence be independent of each other given one or another of the hypotheses at issue. If the problem really has that structure, then there’s nothing wrong with expressing it in an NBN. However, distorting arguments into that structure when they don’t fit causes problems, rather than resolving them. In Sturrock’s case, he suggested, for example, that the Stratford Shakespeare not having left behind a corpus of unpublished writing, not having written for aristocrats for pay, and not having engaged in extensive correspondence with contemporaries are all independent items of evidence, meaning that their joint likelihood is obtained by multiplying their likelihoods together (and then multiplied again with the likelihoods of all other items of evidence he advanced). The result was that he found that the probability that the writings of Shakespeare came from the eponymous guy from Stratford ranged from 10^{-15} all the way down to 10^{-21}! As Neil Thomason pointed out to us, this means that you would be more likely to encounter the author of those works by randomly plucking any human off the planet at the time (or since!), rather than arranging to meet *that* Will Shakespeare from Stratford! While the simplicity of NBNs is appealing, this is a case of making our models simpler than possible. Real dependencies and interrelatedness of evidence cannot be ignored.
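The overcounting problem can be sketched with hypothetical numbers (not Sturrock’s): if one underlying fact is entered as three “independent” items of evidence, a naive model multiplies its likelihood ratio three times over.

```python
# Illustrative sketch (hypothetical numbers): the cost of a false independence
# assumption. One underlying fact ("left no private papers") is entered as
# three "separate" items of evidence, each with likelihood ratio 1/5 against H.
prior_odds = 1.0   # start with even odds on hypothesis H

def to_prob(odds):
    return odds / (1 + odds)

lr = 1 / 5                           # P(item | H) / P(item | not H)
correct_odds = prior_odds * lr       # one fact, counted once
naive_odds = prior_odds * lr ** 3    # naive Bayes counts it three times over

print(f"counted once: P(H | e) = {to_prob(correct_odds):.3f}")   # 0.167
print(f"naive Bayes:  P(H | e) = {to_prob(naive_odds):.4f}")     # ~0.008
```

With a dozen such correlated items, the same multiplication drives the posterior toward the absurdly small probabilities reported above.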

5) *Some arguments are not about causal processes, but have a structure that can only be illuminated otherwise.*

Here’s a famous case:

Socrates was a human.

All humans are mortal.

Therefore, Socrates was mortal.

While Bayesian networks can certainly represent deductive arguments, they will not be causal. Furthermore, their probabilistic updating will be uninformative. A reasonable conclusion is that BNs are ill suited for analysing deductive arguments. Argument maps may or may not be helpful; at least, their lack of quantitative representation will do no harm in such cases.

This concession is not exactly painful: our advocacy of CBNs was always only about cases where causal reasoning *does* figure in the assessment of a thesis. Slightly more problematic are cases where the core reasoning might be claimed to be associative rather than causal. For example, yellow-stained fingers are associated with lung cancer, but staining your fingers yellow is not a cause of lung cancer. That implies we can make meaningful arguments from one outcome to the other without following a causal chain. (The inference of a causal chain from such associations is frequently derided as the *post hoc, ergo propter hoc* fallacy.)

In such cases, however, we are still reasoning causally, and it is best to have that causal reasoning made explicit:

*Yellow fingers* ← *Smoking* → *Lung Cancer*

With the correct causal model, we can follow the dependencies, and we can also figure out the conditional independencies in the situation (screening off relations). Without the causal model available, we will only be using our intuitions to assess dependencies, and we will often get things wrong.

6) *There are generally very many equally valid ways of modeling a causal system. How can one choose between them?*

This is certainly correct. For example, between smoking and lung cancer there are a great many low-level causal processes required to damage lung cells and produce a malignant cancer. Whether we choose to model them or not depends upon our interests (pragmatics). If we are not arguing about the low-level processes, then we shall probably not bother to model them, as they would simply be a distraction. In general, there will always be multiple correct ways of modeling a causal system, meaning that the probabilistic (and causal) dependencies between the variables used are correctly represented. Which one you use will depend in part upon your argumentative purpose and in part upon your taste.

**Argument Evaluation**

If we are to know that our argument methods are good, we shall need methods of assessing them, built upon justifiable methods for assessing individual arguments. Arguments may be evaluated either as probabilistic predictions (if they are quantitative) or as natural language arguments or both. Here we will address quantitative evaluation. Evaluation of arguments in terms of their intelligibility, etc. we will leave to a future discussion.

One of the leading experts on probabilistic prediction in the social sciences, Philip Tetlock, has said “it really isn’t possible to measure the accuracy of probability judgment of an individual event” (Tetlock, 2015). This is not correct. To be sure, in context Tetlock points out that it *is* possible to measure the accuracy of probability judgments within a reference class, by accumulating the scores of individual predictions and using their average as a measure of judgment in like circumstances. But if that is true, then such a measure applies equally to individual judgments within the reference class (one cannot accumulate the scores of individual predictions if there are no such scores!), so Tetlock’s point reduces to the banal observation that you can “always” defend a failed probabilistic prediction. For example, if an event fails to occur that you have predicted with probability 99.9999%, you can shrug your shoulders and say “shit happens!” But that is a defence you cannot use very often.

Tetlock suggests that the whole problem of assessing probabilistic predictions is a deep mystery. But his real problem is just the score he uses to assess predictions, namely the Brier score. It is a seriously defective measure of probabilistic predictions, and that ought to be surprising, since the real work in solving how to assess predictions was done half a century ago. But communication between the various sciences is slow and painful.

In most of statistical science an even worse measure of predictive adequacy is used: predictive accuracy, defined as the number of correct predictions divided by the total number of predictions. How could you do better at measuring predictive accuracy than by using predictive accuracy itself? Of course, that is why we slipped in the phrase “predictive adequacy” in place of “predictive accuracy”.

The problem with predictive accuracy is that it ignores the fact that prediction is inherently uncertain and so *probabilistic*. We should like our predicted probabilities to match the actual frequencies of outcomes that arise in similar circumstances. If, for example, we were using a true (stochastic) model to make our predictions, such a match would be guaranteed by the Law of Large Numbers. Predictive accuracy takes a probabilistic prediction’s modal value and effectively rounds it up to 1. For example, in measuring predictive accuracy, a predicted probability of 0.51 that a mushroom is poisonous counts the same as a predicted probability of 1. But obviously they should not be assessed the same! The problem is what cognitive psychologists call “calibration”: if your probabilistic estimates match real frequencies on average, then you are well calibrated. Most of us are overconfident, pushing probabilities near 1 or 0 even nearer to 1 or 0. Nate Silver, for example, reports that events turning up 15% of the time are routinely said to be “impossible” (Silver, 2012). Another way of putting this is that predictive accuracy is not a strictly proper scoring rule: it will reward the true probability distribution for events maximally, but it will also reward many incorrect distributions equally. For example, if you take every modal value and revise its probability to be maximal, you will have an incorrect distribution that is rewarded identically to the correct distribution.
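A small sketch of why predictive accuracy is not strictly proper: an honest forecast and a maximally overconfident one make the same modal prediction, so they earn the same expected accuracy, while a proper score such as Brier’s already separates them (the numbers are illustrative):

```python
# Sketch (illustrative numbers): predictive accuracy only checks the modal
# prediction, so it cannot tell the true probability from an overconfident one.
p_true = 0.7   # true chance of the event in this reference class

def expected_accuracy(p_forecast):
    # Predict "event" iff p_forecast > 0.5; accuracy is the hit rate of that call.
    return p_true if p_forecast > 0.5 else 1 - p_true

def expected_brier(p_forecast):
    # Expected squared error under the true distribution; lower is better.
    return p_true * (1 - p_forecast) ** 2 + (1 - p_true) * p_forecast ** 2

print(expected_accuracy(0.7), expected_accuracy(1.0))          # equal: 0.7 0.7
print(f"{expected_brier(0.7):.2f} {expected_brier(1.0):.2f}")  # 0.21 vs 0.30
```

Accuracy rewards the honest and the overconfident forecaster identically; a proper score rewards the true probability and punishes the revision toward certainty.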

Tetlock’s Brier score is strictly proper, but that doesn’t make it strictly correct. Propriety is a kind of minimum standard: if you can beat (or match) the truth with a false distribution, then the scoring function isn’t telling us what we want. The Brier score reports the average squared deviation of the actual outcomes from the predicted probabilities, so the goal is to minimize it (it is a form of mean squared error). If we have the true distribution in hand, we cannot be beaten (any deviation from the actual probability will be punished over the long run). However, the Brier score, while punishing deviant distributions, does so insufficiently in many cases. Consider the extreme case of predicting a mushroom’s edibility with probability 1. This will be punished when false with a penalty of 1. While such a penalty is maximal for a single prediction, in a long run of predictions it may be washed out by other, better predictions. From a Bayesian point of view, this is highly irrational: a predicted probability of 1 corresponds to strictly **infinite** odds against any alternative occurring! That kind of bet is always irrational, and if it goes wrong, it should be punished by losing **everything in the universe**; that is, recovery should be impossible. The Brier score punishes mistakes in the range [0.9, 1] much the same, even though the shift from a prediction of 0.9 to 0.91 is qualitatively massively distinct from a shift from 0.99 to 1: a “step” from finite to infinite odds! Extreme probabilities need to be treated as extreme for a scoring function to correctly reward calibration and penalize miscalibration.
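The wash-out can be sketched by averaging penalties over a run of 100 predictions in which a single dogmatic prediction of probability 1 turned out false. The Brier average stays small, while a log score registers the infinite penalty (the numbers are illustrative):

```python
import math

# Sketch (illustrative numbers): 100 predictions, 99 of probability 0.9 that
# came true, plus one dogmatic prediction of probability 1.0 that came false.
n = 100
forecasts = [(0.9, True)] * (n - 1) + [(1.0, False)]

def brier(p, outcome):
    # Squared error for a binary outcome; lower is better.
    return (p - (1 if outcome else 0)) ** 2

def log_penalty(p, outcome):
    # Bits needed to report the actual outcome; infinite for a false certainty.
    q = p if outcome else 1 - p
    return -math.log2(q) if q > 0 else math.inf

print(f"mean Brier penalty: {sum(brier(p, o) for p, o in forecasts) / n:.3f}")
print(f"mean log penalty:   {sum(log_penalty(p, o) for p, o in forecasts) / n}")
```

Under Brier, one lost bet at infinite odds costs about as much as a handful of mild misses; under the log score it can never be recovered.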

As we said, this problem was solved some time ago, beginning with the work of Claude Shannon (Shannon and Weaver, 1949). Shannon proposed measuring the information in a “message” by using an efficient code book to encode it and reporting the length of the encoding. An efficient code is one which allocates –log_{2} P(message) bits to each possible message.

It turns out that log scores based upon Shannon’s information measure have all the properties we should like for scoring predictions. I.J. Good (1952) proposed as a score the number of bits required to encode the actual outcome given a Shannon-efficient code based on the predicted distribution. That is, Good’s reward for binary predictions is:

*I = 1 + log_{2} P(e)*

where *P(e)* is the probability the prediction assigned to the outcome *e* which actually occurred.

This is the negation of the number of bits needed to report the actual outcome, using the code efficient for the predictive distribution, plus 1. The addition of 1 just renormalizes the score, so that 0 reports complete ignorance, positive numbers predictive ability above chance, and negative numbers worse than chance, relative to a prior probability of 0.5 for a binomial event. Hope and Korb (2004) generalized Good’s score to multinomial predictions.
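Good’s renormalized binary score can be implemented in a few lines (the function name is ours):

```python
import math

def good_score(p, outcome):
    """Good's (1952) renormalized binary log score: 1 + log2 P(actual outcome).
    0 means complete ignorance (p = 0.5); 1 rewards a correct certain
    prediction; negative scores are worse than chance; -inf punishes a
    false certain prediction irrecoverably."""
    q = p if outcome else 1 - p
    return 1 + math.log2(q) if q > 0 else -math.inf

print(good_score(0.5, True))    # 0.0   (ignorance scores zero)
print(good_score(1.0, True))    # 1.0   (maximal reward)
print(good_score(0.9, True))    # about 0.85
print(good_score(1.0, False))   # -inf  (infinite odds lost: no recovery)
```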

Nothing will be able to beat the true distribution in encoding actual outcomes with an efficient code over the long run; indeed, nothing will match it, so the score is strictly proper. But the penalty for mistakes is straightforwardly related to the odds one would take to bet against the winning proposition. Infinite odds imply an outcome that is impossible, meaning, in information-theoretic terms, an infinitely long message describing the outcome. No matter how long a sequence of predictions is scored, an infinite penalty added to a finite number of successes will remain an infinite penalty. So, irrationality is appropriately punished.

All of this refers to the usual circumstance of scoring or assessing predictions, where we know the outcome, but we are uncertain of the processes which bring it about. Supposing that we actually know how the outcomes are produced is supposing that we have an omniscient, God-like perspective on reality. But, in fact, in special cases we do have a God-like perspective, namely when the events we are predicting are the outcomes of a computer simulation that we know, because we built it. In such cases, we can score our models more directly than by looking at their predictions and comparing them to outcomes. We can simply compare a model, produced, say, by some argumentative method, with the simulation directly. In that case, another information-theoretic construct recommends itself: cross entropy (or its close relative, Kullback-Leibler divergence). Cross entropy reports the expected number of bits required to efficiently encode an outcome from the true model (the simulation, above) using the learned model instead of the true model. In other words, since we have both models (true and learned) we can compare their probability distributions directly, in information-theoretic terms, rather than taking a lengthy detour through their outcomes and predicted outcomes.
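The direct comparison can be sketched with two hypothetical distributions over three outcomes (the distributions here are ours, for illustration):

```python
import math

# Sketch: comparing a learned model against a known "true" model (e.g. a
# simulation we built) directly, rather than through sampled outcomes.
true_dist    = {"a": 0.5, "b": 0.3, "c": 0.2}
learned_dist = {"a": 0.6, "b": 0.3, "c": 0.1}

def cross_entropy(p, q):
    # Expected bits to encode outcomes drawn from p with a code efficient for q.
    return -sum(p[x] * math.log2(q[x]) for x in p)

def kl_divergence(p, q):
    # The *extra* bits paid for using q's code rather than p's own.
    return cross_entropy(p, q) - cross_entropy(p, p)

print(f"cross entropy: {cross_entropy(true_dist, learned_dist):.3f} bits")
print(f"KL divergence: {kl_divergence(true_dist, learned_dist):.3f} bits")
```

The divergence is zero exactly when the learned model matches the true one, and grows as the learned model wastes bits on the wrong code.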

**In Search of a Method**

CBNs are an advantageous medium for addressing other common issues in argument analysis. Active open-mindedness suggests we can minimize confirmation bias by proactively searching out alternative points of view and arguments. This can be supported by constructing CBNs with sources of evidence and lines of causal influence additional to those which might at first satisfy us, and, in particular, which might be expected to cut against our first conclusion. In view of confirmation bias (and anchoring, etc.), it might be useful to give the task of constructing an alternative CBN to a second party.

Another benefit of using CBNs is the direct computational support for assessing the confirmatory power of different pieces of evidence relative to one another, that is, how “diagnostic” evidence is in picking out one hypothesis amongst many. While Bayes factors — the likelihood of the evidence under one hypothesis relative to another — have long been recommended for assessing confirmation, once coded into a CBN the diagnostic merits of evidence for the hypotheses in play are trivially computable, and computed, by the CBN itself. Hence, the merits of each line of argument can be clearly and quickly assessed, whether in isolation or in any combination.
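For the toy murder example, the Bayes factors fall straight out of the likelihoods assumed earlier (the witness likelihoods are our marginalization over the car’s color, using the example’s numbers):

```python
# Bayes factors for the toy murder example. The assault likelihoods are those
# stated earlier; the witness likelihoods are marginalized over the car's color
# (0.85*0.8 + 0.15*0.1 = 0.695, and 0.15*0.8 + 0.85*0.1 = 0.205).
likelihoods = {
    # evidence: (P(e | Smith murdered wife), P(e | he did not))
    "assault report":      (0.30, 0.03),
    "witness says 'blue'": (0.695, 0.205),
}

for evidence, (p_h, p_not_h) in likelihoods.items():
    print(f"Bayes factor for {evidence!r}: {p_h / p_not_h:.1f}")
```

The assault report is by far the more diagnostic item (a factor of 10 versus about 3.4), which is exactly the sort of comparison a CBN computes automatically.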

All of the above does not provide a complete theory of argumentation using CBNs. These uses of causal Bayesian networks must sit within a larger method. This must include deciding when CBNs are appropriate and effective, and when not. When they are not effective, alternative techniques will need to be applied, such as deductive logic or argument mapping. A rich theory of argumentative context and audience analysis is needed in order to understand such issues as which lines of argument can be left implicit (enthymematic) and which sources of premises are acceptable. And guidance needs to be developed in how to translate a CBN, which only represents arguments implicitly, into an explicit formulation in ordinary language.

The required techniques in which CBN-based argumentation is embedded are largely just those employed in critical thinking and argument analysis generally. It is a substantial, but achievable, research program, ranging across disciplines, to develop these to the point where trained analysts might produce similar, and similarly effective, arguments from the same starting points.

^{†} The figure of 38% is a worldwide statistic from the WHO (“Domestic Violence”, Wikipedia). If the argument were specific to a country or region, other statistics might be more appropriate. The figure we have used is a reasonable one for the argument as stated, that is, without a specific context. Uncertainty for specific numbers can be treated via sensitivity analysis, as we discuss below.

**References**

Alvarez, Claudia (2007). *Does philosophy improve critical thinking skills?* Master’s thesis, Department of Philosophy, University of Melbourne.

Domestic violence. (n. d.). In *Wikipedia, The Free Encyclopedia*. Retrieved 16:49, March 29, 2016, from https://en.wikipedia.org/w/index.php?title=Domestic_violence&oldid=712521522

Good, I. J. (1952). Rational decisions. *Journal of the Royal Statistical Society. Series B (Methodological)*, *14*(1), 107-114.

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. *Erkenntnis*, *68*(2), 149-168.

Hope, L. R., & Korb, K. B. (2004). A Bayesian metric for evaluating machine learning algorithms. In *AI 2004: Advances in Artificial Intelligence* (pp. 991-997). Springer Berlin Heidelberg.

Klein A. R. (2009). Practical Implications of Domestic Violence Research. *National Institute of Justice Special Report.* US Department of Justice. Retrieved from http://www.ncjrs.gov/pdffiles1/nij/225722.pdf

Korb, K. B., & Nicholson, A. E. (2010). *Bayesian artificial intelligence*. CRC press.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. *Science*, *238*(4827), 625-631.

Olding, R. and Benny-Morrison, A. (2015, Dec 16). The common misconception about domestic violence murders. *The Sydney Morning Herald*. Retrieved from http://www.smh.com.au/nsw/the-common-misconception-about-domestic-violence-murders-20151216-glp7vm.html

Pearl, J. (1988). *Probabilistic reasoning in intelligent systems*. Palo Alto, CA: Morgan Kaufmann.

Shannon, C. E., & Weaver, W. (1949). *The mathematical theory of communication*. Urbana, IL: University of Illinois Press.

Silver, N. (2012). *The signal and the noise: the art and science of prediction*. Penguin UK.

Sturrock, P. A. (2013). *AKA Shakespeare*. Palo Alto, CA: Exoscience.

Tetlock, Philip (2015). Philip Tetlock on superforecasting. Interview with the Library of Economics and Liberty. http://www.econtalk.org/archives/2015/12/philip_tetlock.html

Toulmin, S. (1958). *The Uses of Argument*. Cambridge: Cambridge University Press.

Wellman, M. P. (1990). Fundamental concepts of qualitative probabilistic networks. *Artificial Intelligence*, *44*(3), 257-303.

Wigmore, J. H. (1913). *The principles of judicial proof: as given by logic, psychology, and general experience, and illustrated in judicial trials*. Little, Brown.