
BayesianWatch

~ Bayesian argument analysis in action



Reliable Sources

Wednesday, 27 Jul 2022

Posted by kbkorb in Argument Analysis, Misinformation, Research, Writing



– Kevin B Korb

People have a right to speak; the question is, to whom should we be listening.
– Naomi Oreskes and Erik M. Conway (2010)

There is no doubt that knowing how to sort more reliable from less reliable sources, and giving the reliable ones the largest role in forming our opinions, is a key critical thinking skill. George W Bush infamously launched a war in Iraq, killing hundreds of thousands of people and inspiring the Islamic State of Iraq and Syria and decades of instability in the Mideast, after dismissing UN reports on the non-existence of weapons of mass destruction in Iraq, which turned out to be reliable, and accepting intelligence community reports to the contrary, partially based upon a since utterly discredited witness, “Curveball”. Had Bush, or those around him, been better critical thinkers, those extremely damaging effects could have been averted. In an era when unreliable sources abound, indeed are almost absurdly prolific, knowing something about these issues is more important than ever before.

Karl Popper pointed out that if the aim of science were simply to discover truths, science could stick to enumerating theorems of propositional logic, say, by adding more and more disjuncts to (P ∨ ¬ P). Of course, it doesn’t follow from the pointlessness of that exercise that we are uninterested in truths; rather, that we are interested in truths which actually help us with our problems, which can serve as premises to arguments of interest to us. The relevance of truths to problems is an important question, but one I am not addressing here. Instead I shall consider, on an assumption of relevance, how we decide whether premises are true, when we are not ourselves witnesses to their truth.

There are numerous negative indicators, as well as numerous positive indicators, of good sources of information in domains and problems foreign to us. I shall explain some that are especially useful to know.

The Problem

When you are dealing with an issue about which you have considerable hard-won expertise and understanding, it is usually not hard to decide whether a claim about it has merit. But, at least since the Renaissance, none of us has the time, or inclination, to become expert in every available subject, so we all have to rely on the expertise of others most of the time. The problem here is how to add to our pool of premises (working assumptions, points of departure) those claims most likely to be true, without going to the difficulty of directly testing or observing their truth. How can we reasonably decide whether someone or some source has the relevant expertise we lack? How do we recognize reliable sources and avoid unreliable ones?

The problem is already acute in an era of internet and social media echo chambers. For pretty much any idea, crazy or not, there will be a source for it somewhere. And the problem will only get worse with AI-driven Deep Fakes muddying public discourse further. The problem of identifying and verifying reliable sources is a traditional concern of argument analysis. I will go over some of the main issues here.

A Consumer Analogy

There is a close analogue in consumer behavior to the problem of assessing information sources. Indeed, finding reliable products and services for consumption is a special case of finding reliable sources for the sake of critical reasoning: reasoning to a purchase is a special case of reasoning, and to do it well you need to vet your sources. People who purchase cars, appliances, or IT services based upon how slick and impressive the advertising is are people who acquire rubbish products and shoddy services. Somewhat more cautious folk look at consumer ratings first, either instead of or supplementing their feelings about the ads. They will do somewhat better than the simply reactive kind of consumer, but the popularity of products and services is a limited guide to the truth. A more cautious consumer will find consumer guides or their equivalent, such as Consumer Reports or Choice, whose assessments are developed by testers in a lab, and see what they have to say. An exceedingly, perhaps improbably, cautious consumer will go further, looking at the technical and engineering literature to find what Consumer Reports and others have overlooked.

These correspond to the different levels of investigation that are, or might be, applied to assessing the sources of information for arguments in general. The argument consumer who adopts beliefs because of a slick YouTube presentation or an engaging Instagram video, or even a popular book, is quite likely the very same person who buys a car because of the memorable jingle that went with its ad. We should all hope to do better.

False Flags

First, I will point out some things reliability is not.

Authority

Perhaps foremost among these is “authority.” Authorities are those in positions of power and responsibility. They get into those positions for many reasons, often on the basis of merit. But the merits rarely include any deep understanding or expertise in outside fields. Hollywood actors rarely have expertise outside of how to act, or perhaps how to direct others to act. Politicians sometimes acquire expertise by being handed a portfolio and digging into it; otherwise, they mostly know about campaigning, raising money, debating and negotiating political deals. Popular media pay a lot of attention to the wider opinions of actors and politicians outside their direct expertise; except when they have the power to act upon those opinions, this attention is mostly unwarranted and often harmful.

Of course, there are acknowledged authorities within any field of study. The usual way of judging who they are is to have a look at their credentials. Does a claimed climate scientist have a PhD in a climate science? Does a media personality have a law degree? Examining credentials is a shortcut: if we have the time, we should prefer to test their expertise. That could involve, for example, tracing their stated conclusions backwards to discover what evidence they have relied upon and what arguments have led to those conclusions. That is the stuff of good argument analysis. To do such backfilling of how they reached their conclusions, you may first have to learn and practice argument analysis, before then doing the footwork of checking their reasoning. If we lack the opportunity for detailed investigation, as we will in most cases, we might opt for merely checking credentials, or asking about an expert’s reputation, etc. But these options are shortcuts and rationally undercut the degree of trust we should put in that expert’s claims. Too many people think credentialing is the beginning and end of checking sources for their reliability.

There are many well-known cases both of genuine experts who lack the usual credentials and of credentialed people who lack the expertise. Freeman Dyson, a famous physicist with no doctorate, is an example of the former. For the latter, consider Dr Ian Plimer, credentialed in an earth science, geology, with a PhD from Macquarie University, who has a long track record of demonstrably false claims about global warming; Skeptical Science, for example, has a web page identifying some, with references you can use to verify their falsity.

Authorities should neither be accepted nor rejected out of hand. (It seems to have become popular among conspiracy theorists to reject them out of hand, often while claiming thereby to be exercising “critical thinking” skills. The truth is the very opposite: critical thinking requires thinking, not an automatic response, whether that response is acceptance or rejection.) When the claim in question is of little consequence, a quick acceptance, perhaps as a working hypothesis, may be warranted. But where the matter is more important, then, depending on the degree of that importance, more thought or investigation into the reliability of the source is warranted.

Direct Observation

What else is reliability not? It is also not sourcing all of your evidence directly. If you could not believe anything you had not directly observed, then you would have to live in a closet, since you would be unable to trust most of what seems to be going on around you. As human actors, we simply must have indirect sources of information which we trust, and so we must agree upon methods of validating those sources. A constant, unslackening demand for direct evidence is the cant of the obfuscationist.

The Demand for Proof

A similar overreach is to insist that claims, to be acceptable, first be proven, a demand often used to reject scientific conclusions. This plays upon a well-known truth about science: its conclusions are almost always fallible and provisional, subject to retraction upon the discovery of disconfirming evidence. What goes unacknowledged is that scientific conclusions, while always subject to retraction in principle, come with different degrees of certainty and different degrees of acceptance by experts. The only place you’ll find serious debates about a flat earth, the germ theory of disease, heliocentrism, or smoking causing lung cancer is in the dark corners of the net, and not, for example, at scientific conferences (unless they are about social media misinformation!). Anthropogenic global warming almost deserves to be in that same list, but it’s fair to observe that a few fringe scientists still dispute it.

The demand for proof relies on people not understanding that science is simply not about stacking proofs on top of each other. It is a rhetorical ploy for generating and spreading FUD: fear, uncertainty and doubt.

Ad Populum

Hardly anyone has an unadulterated belief in the value of popularity as a guide to truth. Belief in the existence of witches was popular in medieval Europe; however, that belief led only to ashes. But, of course, in many circumstances popularity is an important guide to the truth. The popularity of books and films is a likely guide to entertainment for those whose tastes are near the norm. No doubt the popularity of beliefs also provides some guidance to the truth, when the popularity is measured across individuals who have a strong commitment to finding and exposing the truth. Consensus views amongst a group of experts are worth attending to. Unfortunately, this era of social media, with the most popular opinions being spread precisely because they outrage rather than inform, requires positive resistance to the pull of popularity.

It is only in combination with some independent indication of expertise, understanding and commitment to the truth that popularity or consensus becomes interesting. What we are really after is how we can find those who are committed to finding and reporting the truth and add them to our pool of preferred sources.

Self-Belief

People who think they have gotten hold of a truth that the experts largely, or perhaps universally, disagree with are almost certainly deluding themselves. As I said above, the knee-jerk rejection of expert opinion isn’t a sign of critical thinking or deep insight; it’s a sign of something gone wrong. If you are actually right and the existing experts wrong, then you should be able to justify your view by making a positive contribution to the field, for example by getting a PhD and writing up your justification as a thesis. Some who think themselves in that position will object that the whole academic world is in on a conspiracy, so they will have no opportunity to do that. In that case, what’s gone wrong is that the person is mentally ill. (The chances that thousands of scientists are in on a conspiracy that is yet to be revealed are essentially zero; see Grimes, 2021.)

True Flags

Genesis

The so-called Genetic Fallacy is, per Wikipedia (as of 22 Mar 2022):

A fallacy of irrelevance that is based solely on someone's or something's history, origin, or source rather than its current meaning or context. This overlooks any difference to be found in the present situation, typically transferring the positive or negative esteem from the earlier context. In other words, a claim is ignored in favor of attacking or championing its source. The fallacy therefore fails to assess the claim on its merit.

This entry is absurd. That is, taken as advice simply to ignore the source of a claim (which is suggested, if not stated, by calling it a fallacy of irrelevance), it is an expression of an impossible idealism about assessing information independently of its source and entirely on its own merits. However, assessing claims outside our own expertise without attending to the track record of their sources (or stand-ins for track records, such as reputations) would require acquiring knowledge comparable to that of the relevant experts themselves. Doing this beyond a few topics of interest is beyond the mental and physical resources of any of us. Hence, relying upon expert advice is the only way we can hope to learn about the vast majority of topics of interest to us. And why should we rely upon those experts? Because they have a track record of producing relevant and accurate statements in their domains, as verified by their supervisors and teachers during their education, by experimental tests, and by their peers during reviews. In other words, when we accept the next statement by an expert or group of experts, we are doing so precisely because the Genetic Inference is not fallacious at all. Origins of statements matter.

What is a fallacy, then? The genetic fallacy can be real. If your interlocutor’s intent is to distract from the content of a claim and cast doubt on it because of a suspect origin, then perhaps it is a genetic fallacy. An example from one of Douglas Walton’s books on argumentation (I forget which one) cites a Canadian parliamentarian’s comments on abortion being disparaged because they came from a male. That, of course, is a pure strawman and not the kind of genetics I am advocating here, since, so far as I know, being male is uncorrelated with truth or falsity, i.e., with reliability. In practical fact, what we are prepared to call a fallacy depends upon both the content of a supposedly fallacious claim and the intent behind it, rather than being purely a matter of logical form, as many would have it. So, the genetic fallacy is fallacious not because it draws attention to the source of a claim, but because it draws attention to characteristics of the source that are not indicative of reliability.

Origins also matter, of course, when we are not dealing with experts per se, but journalists and their kin, such as science popularizers. Reputations in journalism matter, because we don’t all have the time to fly off to a war zone to check things out for ourselves. Some news sources at least try to double-source their claims, such as the New York Times, while others are happy to publish the most absurd nonsense, such as Rupert Murdoch’s Fox News. It doesn’t take a genius to figure out which of these sources is the more reliable, even though every source will have its biases. Biases, by the way, can be dealt with, for example by checking alternative (but reliable) sources with different biases. Outright lies are hard to deal with, if you’re trusting the source irresponsibly.

Origins still matter when we are dealing with relatives, acquaintances and strangers on Twitter. In fact, contrary to Wikipedia, origins always matter. For the vast majority of what we see and hear, origins are all that we are likely to take into account. The more sensible of us filter out the junk from Facebook, YouTube and Twitter.

It would be fair to call the Genetic Inference a fallacy, if the claim at issue were, say, the primary matter of a dispute and if its source were not acceptable to any reasonable participants in the dispute. For example, if we are playing a trivia game and have agreed in advance to use Wikipedia to settle disputes, then citing Wikipedia to resolve a dispute is hardly fallacious, even if the subject matter were fallacies. Or again, if the issue at hand were, say, the reality of anthropogenic global warming, in a non-professional setting, and we cite Wikipedia to determine average surface temperatures for the last ten years, that would also not be fallacious, even if one can raise doubts about it. On the other hand, if we are pursuing serious research, citing Wikipedia as our source and leaving matters at that would, indeed, be fallacious. One can, and should, dig deeper.

Indeed, one can always dig deeper, as any three-year-old (and Lewis Carroll) can tell you. Claims can always be interrogated, except when we run out of breath to ask further questions.

In short, whether the referral to the origin of a claim is fallacious or not depends upon the context. If the claim is a minor premise whose merits we do not wish to investigate, then it is probably not fallacious. If the claim is a key premise for an important argument, then it might be right to label a reliance upon the source’s reputation “the Genetic Fallacy”. But for most common argumentative purposes, relying upon a known track record of sources, or their reputations for honesty and reliability, is what we will be doing most of the time. And calling that a fallacy is fallacious.

So, when searching for premises, identifying reliable origins is the name of the game, outside of special cases where we ourselves are witnesses or otherwise providing evidence. In other words, when we are trying to find reliable sources, the Genetic “Fallacy” is all there is.

The question is how to commit this “fallacy” in a way that reliably yields the truth.
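In Bayesian terms, committing the “fallacy” well is just applying Bayes’ theorem, with the source’s reliability supplying the likelihoods. A minimal sketch in Python (the reliability numbers are purely illustrative assumptions, not measurements of any real source):

```python
def posterior_given_assertion(prior, p_assert_if_true, p_assert_if_false):
    """Bayes' theorem: the probability that a claim is true, given that
    a source asserted it, where the source's reliability is captured by
    how often it asserts claims that are true vs. claims that are false."""
    numerator = p_assert_if_true * prior
    return numerator / (numerator + p_assert_if_false * (1 - prior))

# A fairly reliable source: asserts true claims 90% of the time and
# false ones 10% of the time. Starting from an even prior:
print(posterior_given_assertion(0.5, 0.9, 0.1))    # 0.9

# The same assertion from a barely-better-than-chance source
# hardly moves our belief at all:
print(posterior_given_assertion(0.5, 0.55, 0.45))  # 0.55
```

The point of the sketch is that attending to origins is not a distraction from the evidence; the origin’s track record is exactly what determines how much an assertion should move us.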

Testing Reliability

A poor indicator of reliability is apparent objectivity or neutrality. The admonition of some to write scientific reports using the plural “we” or the passive voice may give a veneer of objectivity to the reports, but does nothing to debias them. It is, in fact, a misleading affectation.

The reputation of a source is probably already a better indicator of its reliability than the cosmetic touches put on its claims, such as passive voice. Reputations are based on any number of random things, including things quite as cosmetic as voice, but at least there’s some chance a reputation derives from actual truth-telling, whereas superficial objectivity derives only from an intent to present an appearance of objectivity that may be entirely unfounded. But we can usually do better than relying on reputation.

Reliability can, and frequently should, be tested.

In some cases we can test the truth of claims ourselves. If you have relevant expertise, you may have sufficient knowledge to know whether a claim is true, or you may have ready access to sources that can be used to test it. Alternatively, you may have access to different sources whom you have good reason to believe will be reliable in the domain the claim refers to. For example, if you are given advice by a doctor and have a personal contact who is also a doctor, you may have immediate recourse to a second opinion, or at least to an opinion about seeking further opinions.

Fact checking is a slightly different activity which you might employ. Established fact checking organizations, of which there are a fair number since the rise of the internet (e.g., Snopes and RMIT FactLab; Wikipedia keeps a list of such sites), essentially do what used to be done by journalists: checking the sources of claims, attempting to confirm claims by identifying additional sources, providing the subjects of claims the opportunity to respond, and consulting with experts in relevant domains. Anybody can, and sometimes should, attempt these same activities where an important issue is at stake. First, though, you might want either to look to see whether existing fact checkers have already done this work, or even suggest that they fact check a claim for you. In general, established fact checkers will have an easier time of it, since they have sufficient reputation, for example, to gain access to relevant experts.

In addition to these kinds of activities, you can simply look at what alternative sources say about the issue at hand. If you see a claim made in the New York Times, you might see what the Washington Post has to say, since it may say something contrary to the NYT. To be sure, both newspapers share a good deal of their worldview, and so they share biases. So, you might want to find a source with a different worldview. The Economist, for example, is more conservative than either of those two. Better still is to see what reputable sources outside the Anglo world have to say, such as the Süddeutsche Zeitung or Le Monde. If you turn to a Rupert Murdoch source, however, you have probably gone too far and are now in an alternative universe: in recent decades, Murdoch’s venues, both print and cable, have largely replaced news reports with right-wing rants. In any case, different sources can confirm or contradict each other, and either outcome is helpful for understanding the merits of the original claim. Ideally, your multiple sources are not just operating on different worldviews, but are actually sourcing their claims from different original sources; otherwise, they may be effectively repeating each other.

An aside: one of the often remarked advantages of taking a “break year” after school and traveling overseas is that you can gain considerable insight into how different the world looks away from home. US news media, for example, look highly diverse from inside the US. Ignoring the extremists (such as many published by Murdoch), however, the vast bulk of US mainstream media shares a very strong bias, strongly favoring an American view of international politics, as well as exhibiting considerable ignorance of the world beyond US borders. This becomes apparent when you immerse yourself in a completely different set of biases. Of course, this kind of distortion is not peculiar to America. Indeed, the least exceptional thing about America may be American exceptionalism. Strangely, every country and culture is the very best in the world.

In summary, you can test the reliability of specific claims by specific sources in a variety of ways: relatively rarely, by your own direct observation or experiment, or by comparison with your own prior knowledge; more easily and commonly, by examining alternative sources; or by investigating and fact checking. The benefit of any of these activities is two-fold: first, you can better gauge the truthfulness of a specific claim; but, also importantly, you gain information about the reliability of the source of the claim. Reputations (other people’s opinions) are generally unreliable guides, but an understanding of the reliability of a source, gained through repeated truth testing of its claims, will provide you with what you need to perform the genetic non-fallacy correctly: your own informed opinion. Knowing the reliability of sources is essential to navigating our high-volume world of information and misinformation.
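That process of repeated truth testing has a natural Bayesian rendering: treat each checked claim as a Bernoulli trial and maintain a Beta distribution over the source’s hit rate. A minimal sketch, with a purely invented track record:

```python
def update_reliability(prior_true, prior_false, checks):
    """Beta-Bernoulli update: start with a Beta(prior_true, prior_false)
    belief about a source's reliability and update it with the results of
    fact checks (True = the claim checked out, False = it did not).
    Returns the posterior mean estimate of the source's reliability."""
    a, b = prior_true, prior_false
    for checked_out in checks:
        if checked_out:
            a += 1
        else:
            b += 1
    return a / (a + b)

# Start neutral (Beta(1, 1), i.e. uniform over reliability) and
# suppose ten of a source's claims have been checked, two failing:
track_record = [True, True, True, False, True,
                True, True, True, False, True]
print(update_reliability(1, 1, track_record))  # 0.75
```

Each check both tells you about that claim and sharpens your estimate of the source, which is exactly the two-fold benefit described above.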

A Hierarchy of Reliable Sources

There is no such thing as an infallible source. Eyewitness (first-person) testimony is, for example, notoriously unreliable. Given people’s proneness to confabulation, rationalization, confirmation bias and generating and explaining false memories, you should also treat your own recollections and firm beliefs with some skepticism. The confidence you have in your own beliefs is a very poor guide to their reliability (see Wikipedia’s “Overconfidence effect” and “Dunning–Kruger effect”, for example). Every source, however credible, should actually be tested when there is an opportunity to do so, assuming the time and trouble are worth the likely information gained. By doing so, you will gain the knowledge to make accurate judgments of the reliability of different sources when you do not have the opportunity to check them further.

Nevertheless, there clearly is an ordering from more to less reliable sources. Here is my attempt at one, together with some reasons for the order. You can check for yourself whether my ordering has any reliability, of course. Each step down the list represents at least somewhat lower reliability, as I explain; of course, these are just general categories, and there will be many specific exceptions, with either higher or lower reliability.

  1. Scientific reference material; resources of international scientific organizations.
    • Examples: World Health Organization, Institute of Electrical and Electronics Engineers, International Science Council.
    • These are the result of international collaborations of scientists and have generally been reviewed by multiple experts, before and after publication. Their material has likely been sourced from the tiers below originally, but subjected to additional checking.
  2. Scientific journals, conferences and reviews.
    • Examples: Science, Nature, PLOS ONE.
    • Journals and conferences come with different reputations; indeed, many academics spend many hours per year trying to rank them. However, those rankings are meant to establish a kind of pecking order for prestige and the apparent importance of publishing in one journal versus another; they are not specifically ranking reliability or the probability of assertions being true. The lowest ranked publications may indeed have trouble publishing the truth; however, the standard for most scientific journals and conferences is that accepted publications be peer reviewed by two or more experts. The claims made in their publications have thus been “tested” at a minimum by four eyeballs, presumably with attached brains. Articles with problematic aspects should have been reworked or rejected. Venues with lower standards will be ignored by most academics.
  3. Theses and dissertations; academic grant proposals.
    • PhD and Masters theses are often publicly available, or at least available through libraries. Recent ones can be very helpful in navigating current work in a field. They have the major advantage that, while all of them at least attempt to advance our knowledge, and so may spend their main chapters exploring a relatively narrow topic, they are written for examiners whose backgrounds may be centered elsewhere. In other words, the introductory and background chapters will be directed at a generally, but not specifically, informed audience, often serving as an introduction to a field as good as any textbook, but more up to date.
    • Grant proposals are even more targeted and usually written by more senior researchers.
  4. Educational material and resources, including textbooks.
    • Most universities require their academics to be ambidextrous, producing both original research and original educational material. This has the benefit that their educational material is infused with new ideas and information, which have also been pretested through refereed publications. Nevertheless, the authors are typically covering a much broader landscape in their teaching than in their own research, with the result that they sometimes take shortcuts, such as relying on prior textbooks without checking them, in other words violating my main advice here to test assertions (see, e.g., Bohren, 2009).
  5. Popular science writing, videos and podcasts.
    • Here, reliability is really tied to the individual source. Some popular science writers are also scientists, such as Neil deGrasse Tyson, A. K. Dewdney and Brian Cox, and draw upon their own hard-won knowledge in doing popular science. Other science writers, while not starting with great reputations, learn their fields and do a good job of checking their information, and so end up earning a good reputation. James Gleick and Martin Gardner come to mind. And then there are a host of writers who don’t come to mind at all. So this category really spans the entire range of source reliability from top to bottom.
  6. Newspapers and news magazines.
    • During the 20th century the best known newspapers adopted practices that separated their publications from the pamphleteers and rags of the 19th century. This included an explicit commitment to honesty and to finding multiple sources for controversial or contestable assertions. In the 21st century internet environment many of these commitments have been abandoned. Still, some news media do attempt to maintain their standards and can be found to be mostly reliable.
  7. Wikipedia.
    • Wikipedia has an internal structure that includes editing for accuracy. They at least make the attempt to be reliable. Partly because anyone can edit their articles, however, their claims need to be treated with caution. While there is such a thing as the “wisdom of the crowd”, it tends to operate as a longer term corrective. And, in any case, where an unfounded doctrine has gained widespread support, the crowd will only reinforce rather than correct it. (See above on the “Genetic Fallacy”; for a more general discussion of the fallacies, which Wikipedia tends to accept without question, see Korb, 2004.)
  8. Blogs, Instagram posts, YouTube videos.
    • You tend to get what you pay for.

You will observe the repeated reference to science near the top of the list. The human science project has been and remains our best collective effort to increase our understanding of the world. While all the sciences have problems, including occasional fraud and hoaxes, they are also fairly actively policed and have a pretty good track record of catching and exposing indiscretions. It’s certainly better than the track record of catching and exposing political corruption in most countries. To get some idea, you can look at Retraction Watch, which reports on scientific publications that have been retracted. Being peer reviewed and accepted by scientific experts is no guarantee of correctness. Scientists are subject to the same cognitive biases and faults as everyone else. However, if you test the claims made in categories 1, 2 or 3, for example, and find that they are wrong, then you should publish your results, because you will be both subtracting what is wrong and adding what is right to our collective understanding of the world.

Additional Reading

There are plenty of both good and bad writings on the subjects I’ve touched upon here. I list a few that are accessible and that, I find, have mostly good suggestions for how to source information or check information sources.

Fact Checking Resources

Many organizations have sprung up for fact checking, some sponsored by traditional news organizations. These often also provide guides or other information on how to do your own fact checking. Here are some good examples:

  • Norddeutscher Rundfunk (NDR) provides educational materials (in German) for teachers of journalism, Medienkompetenz.
  • Many good fact checking organizations explain their procedures, which can help others learn to use or adapt them, e.g., FactCheck.org.

References

Alvarez, Claudia (2007). Does philosophy improve critical thinking skills? Masters Thesis, Department of Philosophy, University of Melbourne.

Bohren, C. F. (2009). Physics textbook writing: Medieval, monastic mimicry. American Journal of Physics, 77(2), 101-103.

Carroll, L., aka C. Dodgson, (1895). What the Tortoise said to Achilles. Mind. Republished in Mind, 104 (416), 1995, 691–693, https://doi.org/10.1093/mind/104.416.691

Domestic violence. (n. d.). In Wikipedia, The Free Encyclopedia. Retrieved 16:49, March 29, 2016, from https://en.wikipedia.org/w/index.php?title=Domestic_violence&oldid=712521522

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological), 107-114.

Grimes, D. R. (2021). Medical disinformation and the unviable nature of COVID-19 conspiracy theories. PLoS One, 16(3).

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Hope, L. R., & Korb, K. B. (2004). A Bayesian metric for evaluating machine learning algorithms. In AI 2004: Advances in Artificial Intelligence (pp. 991-997). Springer Berlin Heidelberg.

Klein A. R. (2009). Practical Implications of Domestic Violence Research. National Institute of Justice Special Report. US Department of Justice. Retrieved from http://www.ncjrs.gov/pdffiles1/nij/225722.pdf

Korb, K. (2004). Bayesian informal logic and fallacy. Informal Logic, 24(1).

Korb, K. B., & Nicholson, A. E. (2010). Bayesian artificial intelligence. CRC Press.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238(4827), 625-631.

Olding, R. and Benny-Morrison, A. (2015, Dec 16). The common misconception about domestic violence murders. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/nsw/the-common-misconception-about-domestic-violence-murders-20151216-glp7vm.html

Oreskes, N. and Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change. Bloomsbury Press.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan Kaufmann.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Sturrock, P. A. (2013). AKA Shakespeare. Palo Alto, CA: Exoscience.

Tetlock, Philip (2015). Philip Tetlock on superforecasting. Interview with the Library of Economics and Liberty. http://www.econtalk.org/archives/2015/12/philip_tetlock.html

Toulmin, S. (1958). The Uses of Argument. Cambridge University Press.

Wellman, M. P. (1990). Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence, 44(3), 257-303.

Wigmore, J. H. (1913). The principles of judicial proof: as given by logic, psychology, and general experience, and illustrated in judicial trials. Little, Brown.

Research Writing in Computer Science

01 Tuesday Feb 2022

Posted by kbkorb in Higher Education, Research, Writing


Tags

argument analysis, argumentation, critical thinking, grammar, Informal Logic, research, style

I have uploaded this paper on Research Writing since Monash University has deleted the webpages on which it appeared.

The New Devil’s Dictionary

20 Wednesday Dec 2017

Posted by kbkorb in Politics, Semantics, Uncategorized, Writing


Ambrose Bierce’s Devil’s Dictionary is a fine entertainment. This derivative effort is intended to be at least mildly didactic, while no doubt being a little less amusing. The abuse of language for political effect has been going on a long while, but it has accelerated in recent times with the rise to large-scale dominance of the media by right-wing ideologues. Let us all do a little something about it, at least by speaking and writing properly.

Corruption

The innate behavior of liberal politicians and activists in advocating for the commonwealth, environmental sustainability or any regulation of markets.

Of course, the real meaning is the act of dishonest dealing in return for money or private gain, typically taking advantage of a position of power. The abuse of the word “corruption” has become typical of Trump and the Murdoch press. See, e.g., http://www.factcheck.org/2016/10/a-false-corruption-claim/

Political Correctness

Showing respect and common decency towards minorities or disadvantaged people, especially by leftists; a refusal to demonstrate the manly virtues of machismo, sexism, misogyny, racism and other forms of bigotry.

While the term “politically correct” was used by the left in a self-deprecating way in the 1970s, it has since been appropriated by the right wing to disparage those who supposedly go over the top in avoiding embarrassment to minorities, etc. The term is typically applied in response to someone simply showing ordinary courtesy and decency, revealing the lack thereof in the critic.

Reform

Government action to hobble its own ability to protect and promote the public interest, usually promoted on the grounds that government powers are abused and harmful. That is often true, especially when those powers are exercised in pursuit of “reform”.

To reform something is to improve it, for example by removing obstacles to its proper function. But right-wing politicians and media apply the word when their intent is to undermine or defeat proper function.

Socialist

Someone who opposes the obscenely rich taking full advantage of their wealth, for example, by wanting to tax them for the welfare of the commonwealth.

In common usage in the right-wing media, many who endorse regulated capitalist markets are routinely denounced as socialists, e.g., Barack Obama and Bernie Sanders (who, admittedly, falsely labels himself a socialist, without any implied denunciation). But socialists properly understood advocate public ownership and control of the means of production and oppose capitalist markets. Any dictionary will confirm this, but right-wing media commentators are not often found consulting dictionaries or other reliable sources of information.

Steven Pinker’s The Sense of Style

15 Sunday Mar 2015

Posted by kbkorb in Cognitive science, Semantics, Writing


Tags

#argument, #English, #language, #pinker, #style, #writing

Being a professional writer (as most academics are), I have read many books on style. Most of them are opinionated, fussy and annoying — such as the most (in)famous of them, Strunk and White’s The Elements of Style. Some are opinionated, fussy and amusing — such as Fowler’s A Dictionary of Modern English Usage, at least before its editing fell into others’ hands. But Steven Pinker’s recent book on style is one of the few I’ve seen that is opinionated, unfussy and well informed — and the first I’ve seen that reflects a deep understanding of both language and cognition, which is both unsurprising (Pinker is a leading cognitive linguist) and inspiring. Someone’s finally done style in style!

The Sense of Style: The Thinking Person’s Guide to Writing in the 21st Century (2014, Allen Lane) is filled (primarily) with good sense about style. To give you some idea, I’ll give you its parting message (in my own words, mostly): If you really want to improve your writing, consider the following principles.

0. Don’t pay attention to the anal-retentive Ms Thistlebottoms of the world who insist that splitting infinitives is evil or that punctuation is always thus-and-so. They are more often wrong than right.

1. Look things up. Humans are doubly cursed with fallible memories and overconfidence in their beliefs. Strong convictions about the meanings of words, correct usage, and how things hang together, in both yourself and others, are only weak indicators of truth. Dictionaries and thesauri (I looked it up) should be consulted when there is doubt.

2. Make sure your arguments are sound. Verify your sources and test your arguments before publishing them, if you can. If you can’t, then learn something about argument analysis.

3. Don’t mistake an anecdote or your personal experience for good evidence of a general proposition. A cold winter doesn’t mean global warming is unreal, contrary to a large number of dimwits active on Twitter. If this causes you problems, learn something about science and scientific method.

4. Avoid false dichotomies. Everyone has some impulse towards characterizing their enemies as subhuman or evil, but the world comes in more shades than black and white. Try to see a little better than your neighbor or interlocutor. Try to avoid the “fundamental attribution error”: assuming that whatever someone has done or said can only be due to their internal nature, their essence. Conservatives are not necessarily evil (or stupid), nor are liberals.

5. Follow Ann Farmer’s tagline: “It isn’t about being right, it’s about getting it right.” Don’t distract yourself with ad hominem arguments; focus on the reasoning in arguments.

Pinker brings considerations from cognitive science to bear on questions of language and communication to quite useful effect. One example is his treatment of the “Curse of Knowledge”: the tendency to assume that your audience is at a level similar to your own, so that things you take for granted are also well known to them. Pinker argues that this is the major cause of incomprehensible prose. The phrase was coined by economists trying to explain why some market players fail to take advantage of others’ ignorance: not sharing that ignorance, they act as though unaware of it. But related difficulties with empathy and understanding are well studied in young children and primates by cognitive scientists attempting to understand how people model and reason about the mental states of others. People mature into an understanding of the mental states of others, unless they are handicapped. But we all retain some tendency to assume more knowledge in our readers and listeners than we ought. And this explains more than a fair share of bad student reviews of lecturers who can’t stop talking well above the level of comprehension of their students.

The Curse of Knowledge is, like many of the biases cognitive psychologists study, very hard to cure. As Donald Rumsfeld infamously said, there are known unknowns and unknown unknowns, and the unknown unknowns are the hardest to deal with. Since the Curse concerns states of mind with which we are not directly familiar, we have to apply some imagination to cope with it. Being aware of the problem is a first step, after which, as Pinker writes, you can see it all around you: acronyms that are full of meaning to only some of the many people who encounter them (e.g., ADE, CD, VLE, IELTS, ATAR); a walking sign that tells you how long a walk takes, but not whether it is round-trip or one-way; innumerable gadgets with obscure combinations of buttons required to get things done; and innumerable computer applications likewise. If you notice the many ways other people’s knowledge is being used to stymie you, you may acquire a taste for catering to other people’s ignorance when you are communicating with them.

Perhaps the majority of style books document most closely the prejudices of their authors, rather than drawing upon much evidence. Pinker, rather more sensibly, refers to evidence both from the history of English usage and from cognitive science. This often clarifies matters of style, whether they are in dispute or not. For example, instead of simply stating that parallel constructions are often stylistically neat and dropping in a few illustrations, Pinker shows how parallel constructions aid the reader in parsing sentences and how failures of parallelism (“stylistic variations”) interfere.

The “Great War” of style is between Prescriptivists and Descriptivists. Prescriptivists believe that proper grammar and style can be codified in a set of rules, and that good writing is a matter of finding the right ones and adhering to them. Linguists, lexicographers and good writers don’t agree with them. There is no algorithm for good writing — not yet, anyway. Linguists can trace the historical and pre-historical relations between families of languages because of two things: there is continuity in the way language is used, and there is continuous change. Were the Prescriptivists to win their war, languages would cease to be useful, for nothing would stop the world from changing. On the other hand, Pinker is not so drenched in his descriptive studies of human cognition as to miss the advantages of some prescriptions. Take the word “disinterested”. Pinker points out that its earliest uses were in the sense of being uninterested, rather than in the sense of allowing an impartial judgment. The Oxford English Dictionary, at any rate, gives quotes from 1631 for the former and 1659 for the latter. A pure Descriptivist must accept both usages, but Pinker quite sensibly points out that “uninterested” works perfectly well for a lack of interest, so reserving “disinterested” as a compact way of expressing the second sense is the sensible course. As a language must strike a balance between its own past and its current surroundings, neither pure Prescriptivists nor pure Descriptivists can capture its essence.

When I read, I almost always find something to disagree about, and Pinker’s book, despite being first rate, is no exception. So in conclusion, I would like to register one complaint with Mr Pinker. While I think most of his judgments about language and style are right, and often well grounded in evidence, there is a principle which he ignores, and which I have held to over my career: language is first and foremost spoken and only secondarily written. If you find yourself tempted to write something which you cannot imagine yourself speaking in any circumstances, then you are being tempted into a stylistic error. For a few examples: according to Pinker, the genitive apostrophe always demands a following “s” (except for some historical special cases). But this (according to me!) is wrong. People say “Bayes’ Theorem”, not “Bayeses Theorem”, and so it should be written with no trailing “s”. Similarly, acronyms are pronounced a certain way, and how the indefinite article is used with them depends upon that. So, if someone writes “an NBN connection”, that means in their idiolect “NBN” is spelled out when speaking; if they write “a NBN connection”, that either means that they say “NBN” in some other way, perhaps like “nibbin”, or that they are not thinking about what they are writing. Many people miswrite the indefinite article, or English more generally, by not reflecting on how they speak.

The advice to read aloud your writings before finalizing them is not idle advice.
