Kevin B Korb

People have a right to speak; the question is, to whom should we be listening.
– Naomi Oreskes and Erik M. Conway (2010)

There is no doubt that knowing how to sort more reliable from less reliable sources, and giving the reliable ones the largest role in forming our opinions, is a key critical thinking skill. George W Bush infamously launched a war in Iraq, killing hundreds of thousands of people, as well as inspiring the Islamic State of Iraq and Syria and decades of instability in the Mideast, after dismissing UN reports on the non-existence of weapons of mass destruction in Iraq, which turned out to be reliable, and accepting intelligence community reports to the contrary, partially based upon a since utterly discredited witness, “Curveball”. Had Bush, or those around him, been better critical thinkers, those extremely damaging effects could have been averted. In an era when unreliable sources abound, indeed are almost absurdly prolific, knowing something about these issues is more important than ever before.

Karl Popper pointed out that if the aim of science were simply to discover truths, science could stick to enumerating theorems of propositional logic, say, by adding more and more disjuncts to (P ∨ ¬ P). Of course, it doesn’t follow from the pointlessness of that exercise that we are uninterested in truths; rather, that we are interested in truths which actually help us with our problems, which can serve as premises to arguments of interest to us. The relevance of truths to problems is an important question, but one I am not addressing here. Instead I shall consider, on an assumption of relevance, how we decide whether premises are true, when we are not ourselves witnesses to their truth.

There are numerous negative indicators, as well as numerous positive indicators, of good sources of information in domains and problems foreign to us. I shall explain some that are especially useful to know.

The Problem

When you are dealing with an issue about which you have considerable hard-won expertise and understanding, it is usually not hard to decide whether a claim about it has merit. But, at least since the Renaissance, none of us has the time, or inclination, to become expert in every available subject, so we all have to rely on the expertise of others most of the time. The problem here is how to add to our pool of premises (working assumptions, points of departure) those claims which are most likely to be true, without going to the difficulty of directly testing or observing that they are true. How can we reasonably decide whether someone or some source has the relevant expertise we lack? How do we recognize reliable sources and avoid unreliable ones?

The problem is already acute in an era of internet and social media echo chambers. For pretty much any idea, crazy or not, there will be a source for it somewhere. And the problem will only get worse with AI-driven Deep Fakes muddying public discourse further. The problem of identifying and verifying reliable sources is a traditional concern of argument analysis. I will go over some of the main issues here.

A Consumer Analogy

There is a close analogue in consumer behavior to the problem of assessing information sources. Indeed, finding reliable products and services for consumption is actually a special case of finding reliable sources for the sake of critical reasoning. Reasoning to a purchase is a special case of reasoning, and to do it well you need to vet your sources. People who purchase cars, appliances, or IT services based upon how slick and impressive the advertising is are people who acquire rubbish products and shoddy services. Somewhat more cautious folk look at consumer ratings first, either instead of or supplementing their feelings about the ads. They will do somewhat better than the simply reactive kind of consumer, but the popularity of products and services is a limited guide to the truth. A more cautious consumer will find consumer guides or their equivalent, such as Consumer Reports or Choice, whose ratings are developed by testers in a lab, and see what they have to say. An exceedingly, perhaps an improbably, cautious consumer will go further, looking at the technical and engineering literature to find what Consumer Reports and others have overlooked.

These correspond to the different levels of scrutiny that are, or might be, applied to assessing the sources of information for arguments in general. The argument consumer who adopts beliefs because of a slick YouTube presentation or an engaging Instagram video, or even a popular book, is quite likely the very same person who buys a car because of the memorable jingle that went with its ad. We should all hope to do better.

False Flags

First, I will point out some things reliability is not.

Authority

Perhaps foremost among these is “authority.” Authorities are those in positions of power and responsibility. They get into those positions for many reasons, often on the basis of merit. But the merits rarely include any deep understanding or expertise in outside fields. Hollywood actors rarely have expertise outside of how to act, or perhaps how to direct others to act. Politicians sometimes acquire expertise by being handed a portfolio and digging into it; otherwise, they mostly know about campaigning, raising money, debating and negotiating political deals. Popular media pay a lot of attention to the wider opinions of actors and politicians outside their direct expertise; except when they have the power to act upon those opinions, this attention is mostly unwarranted and often harmful.

Of course, there are acknowledged authorities within any field of study. The usual way of judging who they are is to have a look at their credentials. Does a claimed climate scientist have a PhD in a climate science? Does a media personality have a law degree? Examining credentials is a shortcut: if we have the time, we should prefer to test their expertise. That could involve, for example, tracing their stated conclusions backwards to discover what evidence they have relied upon and what arguments have led to their conclusions. That is the stuff of good argument analysis. In order to do such backfilling of how they reached their conclusions, you may first have to learn and practice argument analysis, before then doing the footwork of checking their reasoning. If we lack the opportunity to do detailed investigation, as we will in most cases, we might opt for merely checking credentials, or asking about an expert’s reputation, etc. But these options are shortcuts and rationally undercut the degree of trust we should put in that expert’s claims. Too many people think credentialing is the beginning and end of checking sources for their reliability.

There are many well-known cases both of genuine experts who lack the usual credentials and of credentialed people who lack the expertise. For the former, consider Freeman Dyson, a famous physicist with no doctorate. For the latter, consider Dr Ian Plimer, credentialed in an earth science (geology, with a PhD from Macquarie University), who has a long track record of claims about global warming that are demonstrably false; Skeptical Science, for example, has a web page identifying some, with references you can use to verify their falsity.

Authorities should neither be accepted nor rejected out of hand. (It seems to have become popular to reject them out of hand by conspiracy theorists, who often claim to thereby be exercising “critical thinking” skills. The truth is the very opposite, however: critical thinking requires thinking, not an automatic response, whether that’s acceptance or rejection.) When the claim in question is of little consequence, then a quick acceptance, perhaps as a working hypothesis, may be warranted. But where the matter is more important, then, depending on the degree of that importance, more thought or investigation about the reliability of the source is warranted.

Direct Observation

What else is reliability not? It is also not sourcing all of your evidence directly. If you could not believe anything you had not directly observed, then you would have to live in a closet, since you would be unable to trust most of what seems to be going on around you. As human actors, we simply must have indirect sources of information which we trust, and so we must agree upon methods of validating those sources. A constant, unslackening demand for direct evidence is the cant of the obfuscationist.

The Demand for Proof

A similar overreach is to insist that claims, to be acceptable, first be proven, a demand often used to reject a scientific conclusion. This plays upon a well-known truth about science: its conclusions are almost always fallible and provisional, subject to retraction upon the discovery of disconfirming evidence. What goes unacknowledged is that scientific conclusions, while always subject to retraction in principle, come with different degrees of certainty and different degrees of acceptance by experts. The only place you’ll find serious debates about a flat earth, the germ theory of disease, heliocentrism, or smoking causing lung cancer is in the dark corners of the net, and not, for example, at scientific conferences (unless they are about social media misinformation!). Anthropogenic global warming almost deserves to be in that same list, but it’s fair to observe that there are still a few fringe scientists disputing it.

The idea behind demanding proof is to rely on people not understanding that science is simply not in the business of stacking proofs on top of each other. It is a rhetorical ploy for generating and spreading FUD (fear, uncertainty and doubt).

Ad Populum

Hardly anyone has an unadulterated belief in the value of popularity as a guide to truth. Belief in the existence of witches was popular in medieval Europe; however, that belief led only to ashes. But, of course, in many circumstances popularity is an important guide to the truth. The popularity of books and films is a likely guide to entertainment for those whose tastes are near the norm. No doubt, the popularity of beliefs also provides some guidance to the truth, when the popularity is measured across individuals who have a strong commitment to finding and exposing the truth. Consensus views amongst a group of experts are worth attending to. Unfortunately, this era of social media, with the most popular opinions being spread precisely because they outrage rather than inform, requires positive resistance to the pull of popularity.

It is only in combination with some independent indication of expertise, understanding and commitment to the truth that popularity or consensus becomes interesting. What we are really after is how we can find those who are committed to finding and reporting the truth and add them to our pool of preferred sources.

Self-Belief

People who think they have gotten hold of a truth that the experts largely, or perhaps universally, disagree with are almost certainly deluding themselves. As I said above, the knee-jerk rejection of expert opinion isn’t a sign of critical thinking or deep insight. It’s a sign of something gone wrong. If you really are right and the existing experts wrong, then you should be able to justify your view by making a positive contribution to the field, for example by getting a PhD and writing up your justification as a thesis. Some who think themselves in that position will object that the whole academic world is in on a conspiracy, so they will have no opportunity to do that. In that case, what’s gone wrong is that the person is mentally ill. (The chances that thousands of scientists are in on a conspiracy that is yet to be revealed are essentially zero; see Grimes, 2021.)

True Flags

Genesis

The so-called Genetic Fallacy is, per Wikipedia (as of 22 Mar 2022):

A fallacy of irrelevance that is based solely on someone's or something's history, origin, or source rather than its current meaning or context. This overlooks any difference to be found in the present situation, typically transferring the positive or negative esteem from the earlier context. In other words, a claim is ignored in favor of attacking or championing its source. The fallacy therefore fails to assess the claim on its merit.

This entry is absurd. That is, taken as advice simply to ignore the source of a claim (which is suggested, if not stated, by calling it a fallacy of irrelevance), it is an expression of an impossible idealism about assessing information independently of its source and entirely on its own merits. However, to assess claims outside of our own expertise without attending to the track records of their sources (or stand-ins for track records, such as reputations) would involve acquiring knowledge comparable to that of the relevant experts themselves. Doing this beyond a few topics of interest is beyond the mental and physical resources of any of us. Hence, relying upon expert advice is the only way we can hope to learn about the vast majority of topics of interest to us. And why should we rely upon those experts? Because they have a track record of producing relevant and accurate statements in their domains, as verified by their supervisors and teachers during their education, by experimental tests, and by their peers during reviews. In other words, when we accept the next statement by an expert or group of experts, we are doing so precisely because the Genetic Inference is not fallacious at all. Origins of statements matter.

The genetic fallacy can, of course, be real. If your interlocutor’s intent is to distract from the content of a claim and cast doubt on it because of a suspect origin, then perhaps it is a genetic fallacy. An example from one of Douglas Walton’s books on argumentation (I forget which one) cites a Canadian parliamentarian’s comments on abortion being disparaged because they came from a male. That, of course, is a pure strawman and not the kind of genetics I’m advocating here, since, so far as I know, being male is uncorrelated with truth or falsity, i.e., reliability. In practical fact, what we are prepared to call a fallacy depends upon both the content of a supposedly fallacious claim and the intent behind it, rather than being purely a matter of logical form, as many would have it. So, the genetic fallacy is fallacious not because it draws attention to the source of a claim, but because it draws attention to characteristics of the source that are not indicative of reliability.

Origins also matter, of course, when we are not dealing with experts per se, but journalists and their kin, such as science popularizers. Reputations in journalism matter, because we don’t all have the time to fly off to a war zone to check things out for ourselves. Some news sources at least try to double-source their claims, such as the New York Times, while others are happy to publish the most absurd nonsense, such as Rupert Murdoch’s Fox News. It doesn’t take a genius to figure out which of these sources is the more reliable, even though every source will have its biases. Biases, by the way, can be dealt with, for example by checking alternative (but reliable) sources with different biases. Outright lies are hard to deal with, if you’re trusting the source irresponsibly.

Origins still matter when we are dealing with relatives, acquaintances and strangers on Twitter. In fact, contrary to Wikipedia, origins always matter. For the vast majority of what we see and hear, origins are all that we are likely to take into account. The more sensible of us filter out the junk from Facebook, Youtube and Twitter.

It would be fair to call the Genetic Inference a fallacy, if the claim at issue were, say, the primary matter of a dispute and if its source were not acceptable to any reasonable participants in the dispute. For example, if we are playing a trivia game and have agreed in advance to use Wikipedia to settle disputes, then citing Wikipedia to resolve a dispute is hardly fallacious, even if the subject matter were fallacies. Or again, if the issue at hand were, say, the reality of anthropogenic global warming, in a non-professional setting, and we cite Wikipedia to determine average surface temperatures for the last ten years, that would also not be fallacious, even if one can raise doubts about it. On the other hand, if we are pursuing serious research, citing Wikipedia as our source and leaving matters at that would, indeed, be fallacious. One can, and should, dig deeper.

Indeed, one can always dig deeper, as any three-year-old (and Lewis Carroll) can tell you. Claims can always be interrogated, except when we run out of breath to ask further questions.

In short, whether the referral to the origin of a claim is fallacious or not depends upon the context. If the claim is a minor premise whose merits we do not wish to investigate, then it is probably not fallacious. If the claim is a key premise for an important argument, then it might be right to label a reliance upon the source’s reputation “the Genetic Fallacy”. But for most common argumentative purposes, relying upon a known track record of sources, or their reputations for honesty and reliability, is what we will be doing most of the time. And calling that a fallacy is fallacious.

So, when searching for premises, identifying reliable origins is the name of the game, outside of special cases where we ourselves are witnesses or otherwise providing evidence. In other words, when we are trying to find reliable sources, the Genetic “Fallacy” is all there is.

The question is how to commit this “fallacy” in a way that reliably yields the truth.

Testing Reliability

A poor indicator of reliability is apparent objectivity or neutrality. The admonition of some to write scientific reports using the plural “we” or the passive voice may give a veneer of objectivity to the reports, but does nothing to debias them. It is, in fact, a misleading affectation.

The reputation of a source is probably already a better indicator of its reliability than the cosmetic touches put on its claims, such as passive voice. Reputations are based on any number of random things, including things quite as cosmetic as voice, but at least there is some chance that a reputation derives from actual truth-telling, whereas superficial objectivity derives only from an intent to present an appearance of objectivity that may be entirely unfounded. But we can usually do better than relying on reputation.

Reliability can, and frequently should, be tested.

In some cases we can test the truth of claims ourselves. If you have relevant expertise, you may have sufficient knowledge to know whether a claim is true, or you may have ready access to sources that can be used to test it. Alternatively, you may have access to different sources whom you have good reason to believe will be reliable in the domain the claim refers to. For example, if you are given advice by a doctor and you have a personal contact who is also a doctor, you may have immediate recourse to a second opinion, or at least an opinion about seeking further opinions.

Fact checking is a slightly different activity which you might employ. Established fact checking organizations, of which there are a fair number since the rise of the internet (e.g., Snopes, RMIT FactLab; Wikipedia maintains a list of such sites), essentially do what used to be done by journalists: checking the sources of claims, attempting to confirm claims by identifying additional sources, providing the subjects of claims the opportunity to respond, and consulting with experts in relevant domains. Anybody can, and sometimes should, attempt these same activities where an important issue is at stake. First, though, you might want either to look to see whether existing fact checkers have already done this work, or even to suggest that they fact check a claim for you. In general, established fact checkers will have an easier time of it, since they have sufficient reputation, for example, to gain access to relevant experts.

In addition to these kinds of activities, you can simply look at what alternative sources say about the issue at hand. If you see a claim made in the New York Times, you might see what the Washington Post has to say, since it may say something contrary to the NYT. To be sure, both newspapers share a good deal of their worldview, and so they share biases. So, you might want to find a source with a different worldview. The Economist, for example, is a more conservative magazine than those two. Better still is to see what reputable sources outside the Anglo world have to say, such as the Süddeutsche Zeitung or Le Monde. If you turn to a Rupert Murdoch source, however, you have probably gone too far and are now in an alternative universe. In recent decades, Murdoch’s venues, both print and cable, have largely replaced news reports with right-wing rants. In any case, different sources can confirm or contradict each other, and either outcome is helpful for understanding the merits of the original claim. Ideally, your multiple sources are not just operating on different worldviews, but are actually sourcing their claims from different original sources; otherwise, they may be effectively repeating each other.
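The point about source independence can be made precise with a little elementary probability. The sketch below is a toy model of my own (the function and numbers are illustrative, not drawn from any source discussed here): it uses Bayes’ theorem to combine reports from sources of known reliability, under the assumption that the sources are conditionally independent given the truth of the claim.

```python
def posterior_true(prior, reliabilities, reports):
    """Bayes update: P(claim true | the sources' reports).

    prior         -- prior probability that the claim is true
    reliabilities -- for each source, P(reports 'true' | claim true),
                     assumed equal to P(reports 'false' | claim false)
    reports       -- each source's report (True = source affirms the claim)
    Assumes the sources are conditionally independent given the truth.
    """
    weight_true, weight_false = prior, 1.0 - prior
    for r, report in zip(reliabilities, reports):
        weight_true *= r if report else (1.0 - r)
        weight_false *= (1.0 - r) if report else r
    return weight_true / (weight_true + weight_false)

# One 80%-reliable source affirming a 50-50 claim:
print(round(posterior_true(0.5, [0.8], [True]), 3))             # 0.8
# Two *independent* 80%-reliable sources affirming it:
print(round(posterior_true(0.5, [0.8, 0.8], [True, True]), 3))  # 0.941
# A second source that merely repeats the first adds no evidence;
# modelling it as independent would overstate the case.
```

The jump from 0.8 to about 0.94 is only warranted when the second source really is independent of the first, which is why it matters that your sources trace back to different original reports.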

An aside: one of the often remarked advantages of taking a “break year” after school and traveling overseas is that you can gain considerable insight into how different the world looks away from home. US news media, for example, look highly diverse from inside the US. Ignoring the extremists (such as many published by Murdoch), however, the vast bulk of US mainstream media shares a very strong bias, strongly favoring an American view of international politics, as well as exhibiting considerable ignorance of the world beyond US borders. This becomes apparent when you immerse yourself in a completely different set of biases. Of course, this kind of distortion is not peculiar to America. Indeed, the least exceptional thing about America may be American exceptionalism. Strangely, every country and culture is the very best in the world.

In summary, you can test the reliability of specific claims by specific sources in a variety of ways: relatively rarely, by your own direct observation or experiment, or by comparison with your own prior knowledge; more easily and commonly, by examining alternative sources; or by investigating and fact checking. The benefit of any of these activities is two-fold: first, you can better gauge the truthfulness of a specific claim; but, also importantly, you gain information about the reliability of the source of the claim. Reputations (other people’s opinions) are generally unreliable guides, but an understanding of the reliability of a source, gained through repeated truth-testing of their claims, will provide you with what you need to perform the genetic non-fallacy correctly: your own informed opinion. Knowing the reliability of sources is essential to navigating our high-volume world of information and misinformation.
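The idea of repeated truth-testing can itself be given a simple quantitative gloss. The sketch below is my own illustration, with made-up counts: it tracks a source’s reliability as a Beta distribution, starting from a uniform prior and updating each time one of the source’s claims is independently checked.

```python
from dataclasses import dataclass

@dataclass
class SourceRecord:
    """Track a source's reliability as a Beta(alpha, beta) distribution."""
    alpha: float = 1.0  # 1 + claims checked and found true
    beta: float = 1.0   # 1 + claims checked and found false

    def record_check(self, claim_was_true: bool) -> None:
        # Each independently checked claim updates the distribution.
        if claim_was_true:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def reliability(self) -> float:
        # Posterior mean: estimated probability the next claim is true.
        return self.alpha / (self.alpha + self.beta)

# Hypothetical track record: nine claims checked, eight found true.
src = SourceRecord()
for outcome in [True] * 8 + [False]:
    src.record_check(outcome)
print(round(src.reliability, 2))  # 9/11, i.e. 0.82
```

A track record accumulated this way through your own checking, rather than a second-hand reputation, is what grounds a justified reliance on a source.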

A Hierarchy of Reliable Sources

There is no such thing as an infallible source. Eyewitness (first-person) testimony is, for example, notoriously unreliable. Given people’s proneness to confabulation, rationalization, confirmation bias, and generating and explaining false memories, you should also treat your own recollections and firm beliefs with some skepticism. The confidence you have in your own beliefs is a very poor guide to their reliability (see Wikipedia’s “Overconfidence effect” and “Dunning–Kruger effect”, for example). Every source, however credible, should actually be tested when there is an opportunity to do so, assuming the time and trouble are worth the likely information gained. By doing so, you will gain the knowledge to make accurate judgments of the reliability of different sources when you do not have the opportunity to check them further.

Nevertheless, there clearly is an ordering from more to less reliable sources. Here is my attempt at such, together with some reasons for the order. You can check for yourself to see whether my ordering has any reliability, of course. Each step along the list represents at least some lower reliability, as I explain; of course, these are just general categories and there will be many specific exceptions, with either higher or lower reliability.

  1. Scientific reference material; resources of international scientific organizations.
  2. Scientific journals, conferences and reviews.
    • Examples: Science, Nature, PLOS ONE.
    • Journals and conferences come with different reputations; indeed, many academics spend many hours per year trying to rank them. However, those rankings are meant to judge a kind of pecking order for prestige and the apparent importance of publishing in one journal versus another; they are not specifically ranking reliability or the probability of assertions being true. The lowest-ranked publications may indeed have trouble publishing the truth; however, the standard for most scientific journals and conferences is that accepted publications be peer reviewed by two or more experts. The claims made in their publications have been “tested” at a minimum by four eyeballs, presumably with brains attached. Articles with problematic aspects should have been reworked or rejected. Venues with lower standards will be ignored by most academics.
  3. Theses and dissertations; academic grant proposals.
    • PhD and Masters theses are often publicly available, or at least available through libraries. Recent ones can be very helpful in navigating current work in a field. They have the major advantage that, while all of them at least attempt to advance our knowledge, and so may spend their main chapters exploring a relatively narrow topic, they are written for examiners who may have a background centered elsewhere. In other words, the introductory and background chapters will be directed at a generally, but not specifically, informed audience, often serving as an introduction to a field as good as any textbook, but more up to date.
    • Grant proposals are even more targeted and usually written by more senior researchers.
  4. Educational material and resources, including textbooks.
    • Most universities require their academics to be ambidextrous, producing both original research and original educational material. This has the benefit of their educational material being infused with new ideas and information, which has also been pretested through refereed publications. Nevertheless, the authors are typically covering a much broader landscape in their educating than in their own research, resulting in them sometimes taking shortcuts, such as relying on prior textbooks without checking them, in other words, violating my main advice here to test assertions (see, e.g., Bohren, 2009).
  5. Popular science writing, videos and podcasts.
    • Here, reliability is really tied to the individual source. Some popular science writers are also scientists, such as Neil deGrasse Tyson, A.K. Dewdney and Brian Cox, and draw upon their own hard-won knowledge in doing popular science. Other science writers, while not starting with great reputations, learn their fields and do a good job of checking their information, and so end up earning a good reputation. James Gleick and Martin Gardner come to mind. And then there are a host of writers who don’t come to mind at all. So this category really parallels the entire range of source reliability from top to bottom.
  6. Newspapers and news magazines.
    • During the 20th century the best known newspapers adopted practices that separated their publications from the pamphleteers and rags of the 19th century. This included an explicit commitment to honesty and to finding multiple sources for controversial or contestable assertions. In the 21st century internet environment many of these commitments have been abandoned. Still, some news media do attempt to maintain their standards and can be found to be mostly reliable.
  7. Wikipedia.
    • Wikipedia has an internal structure that includes editing for accuracy. They at least make the attempt to be reliable. Partly because anyone can edit their articles, however, their claims need to be treated with caution. While there is such a thing as the “wisdom of the crowd”, it tends to operate as a longer-term corrective. And, in any case, where an unfounded doctrine has gained widespread support, the crowd will only reinforce rather than correct it. (See above on the “Genetic Fallacy”; for a more general discussion of the fallacies, which Wikipedia tends to accept without question, see Korb, 2004.)
  8. Blogs, instagram posts, youtube videos.
    • You tend to get what you pay for.

You will observe the repeated reference to science near the top of the list. The human science project has been and remains our best collective effort to increase our understanding of the world. While all the sciences have problems, including occasional fraud and hoaxes, they are also fairly actively policed and have a pretty good track record of catching and exposing indiscretions. It’s certainly better than the track record of catching and exposing political corruption in most countries. To get some idea, you can look at Retraction Watch, which reports on scientific publications that have been retracted. Being peer reviewed and accepted by scientific experts is no guarantee of correctness. Scientists are subject to the same cognitive biases and faults as everyone else. However, if you test the claims made in categories 1, 2 or 3, for example, and find that they are wrong, then you should publish your results, because you will be both subtracting what is wrong and adding what is right to our collective understanding of the world.

Additional Reading

There are plenty of both good and bad writings on the subjects I’ve touched upon here. I list a few that are accessible and that, I find, mostly make good suggestions for how to source information or how to check information sources.

Fact Checking Resources

Many organizations have sprung up for fact checking, some sponsored by traditional news organizations. These often also provide guides or other information for how to do your own fact checking. Here are some good examples:

  • Norddeutscher Rundfunk (NDR) provides educational materials (in German) for teachers of journalism, Medienkompetenz.
  • Many good fact checking organizations explain their procedures, which can help others learn to use them or adapt them. E.g., FactCheck.org.

References

Alvarez, Claudia (2007). Does philosophy improve critical thinking skills? Masters Thesis, Department of Philosophy, University of Melbourne.

Bohren, C. F. (2009). Physics textbook writing: Medieval, monastic mimicry. American Journal of Physics, 77(2), 101-103.

Carroll, L., aka C. Dodgson, (1895). What the Tortoise said to Achilles. Mind. Republished in Mind, 104 (416), 1995, 691–693, https://doi.org/10.1093/mind/104.416.691

Domestic violence. (n. d.). In Wikipedia, The Free Encyclopedia. Retrieved 16:49, March 29, 2016, from https://en.wikipedia.org/w/index.php?title=Domestic_violence&oldid=712521522

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological), 107-114.

Grimes, D. R. (2021). Medical disinformation and the unviable nature of COVID-19 conspiracy theories. PLoS One, 16(3).

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Hope, L. R., & Korb, K. B. (2004). A Bayesian metric for evaluating machine learning algorithms. In AI 2004: Advances in Artificial Intelligence (pp. 991-997). Springer Berlin Heidelberg.

Klein A. R. (2009). Practical Implications of Domestic Violence Research. National Institute of Justice Special Report. US Department of Justice. Retrieved from http://www.ncjrs.gov/pdffiles1/nij/225722.pdf

Korb, K. (2004). Bayesian informal logic and fallacy. Informal Logic, 24(1).

Korb, K. B., & Nicholson, A. E. (2010). Bayesian artificial intelligence. CRC press.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238(4827), 625-631.

Olding, R. and Benny-Morrison, A. (2015, Dec 16). The common misconception about domestic violence murders. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/nsw/the-common-misconception-about-domestic-violence-murders-20151216-glp7vm.html

Oreskes, N. and Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change. Bloomsbury Press.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto. CA: Morgan-Kaufmann.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of information.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Sturrock, P. A. (2013) AKA Shakespeare. Palo Alto, Exoscience.

Tetlock, Philip (2015). Philip Tetlock on superforecasting. Interview with the Library of Economics and Liberty. http://www.econtalk.org/archives/2015/12/philip_tetlock.html

Toulmin, S. (1958). The Uses of Argument. Cambridge University.

Wellman, M. P. (1990). Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence, 44(3), 257-303.

Wigmore, J. H. (1913). The principles of judicial proof: as given by logic, psychology, and general experience, and illustrated in judicial trials. Little, Brown.