The Reasoning Well

This will be a collection of hypothetical lectures that I might have delivered over the course of my academic career, but didn’t. The goal of this course of lectures is to introduce a broad array of tools, ideas and weapons for attacking reasoning problems, taking advantage of a broad range of disciplines. These are meant to be introductory: readily understood by intelligent laypeople who have never studied those disciplines, and representing general-purpose methods that might become available to anyone who studies those disciplines at an undergraduate level. So, this collection is envisioned as a kind of Swiss-army knife for your brain. While that is my intention, I do not pretend to cover all the major disciplines, but emphasize those which have had a substantial impact on my intellectual life.

I have taken inspiration from two prolific and excellent writers of articles for Scientific American, A.K. Dewdney and Martin Gardner. In partial consequence of their inspiration, these lectures are somewhat loosely connected; they are intended to largely be intelligible independently of one another, although cross-references will guide the reader through some kinds of dependencies. While this is not intended to be scholarly in the sense of detailing every historical line of thought behind these lectures, or attributing all details to their originators, I do indicate where readers might turn for additional information on these ideas.

The top-level topics I am covering (in tentative order) include: Philosophy, Bayesian Reasoning, Argumentation, Mathematics and Computer Science, Physical Thinking, Modeling and Simulation, Evolution Theory, Information, Ethics, Politics, Cognition and Inference. Posts will be “collected” using the tag #ReasoningWell.

The Power of the People


Kevin B Korb, 30 Aug 2022

How often do we hear things like “What’s the point? I’m only one person. What I do won’t make a difference.” I most recently heard this on ABC Radio National, in the context of an argument for not … I don’t even recall what. No matter: this Inertia Argument is a nearly universal prescription for inaction, serviceable on every occasion, from a refusal to vote, to saving one’s energy from social action for an alternative engagement with e-sports, to ex-PM Scott Morrison’s repeated assertion that, as a contributor of around 1% of CO2 emissions, Australia should be exempt from taking any serious climate action. The Inertia Argument is a Swiss army knife for the inert, who prefer not to be seen as shirkers, but instead as rational non-actors.

The logic of the argument is impeccable: in most cases where it’s employed, the action of a single human (or a single nation) would barely shift a needle; therefore, nothing should be done.

Interestingly, there is an equally compelling kind of argument that runs exactly in reverse while also advising inaction, the so-called Slippery Slope Argument. That is, if you should be willing to shift your position ever so slightly downhill, then you will inevitably slip and slide all the way to the bottom and a spectacular crash. For example, if you allow a blood alcohol concentration in drivers of, say, 0.01%, then before long you’ll be allowing 0.02% and then 0.05%. Or, if you allow abortions up to 6 weeks, then soon third trimester abortions will be legal, followed by infanticide. Oh wait! These are counterexamples! In both cases, societies have had no trouble at all intervening at some point, drawing a dividing line which, while fairly arbitrary compared to microshifts up or down the scale, makes for a legal divide that can be enforced and adhered to. So, the slippery slope isn’t nearly as slippery as its scaremongers generally assert.

The idea that Slippery Slope Arguments are compelling and that we must assert a stance of absolutely zero exceptions to a rule in order to maintain rules at all is deluded. It is akin to a common response of freshman undergraduates upon first being exposed to philosophy: they often take some principle they’ve been introduced to and apply it without restraint to every case they can dream of. Religious extremists do the same. Either end of the stick — that a rule must be applied universally or not at all — simply exposes the wielder’s naivete, an unwillingness to acknowledge that we live in a world of color, mixtures and gradations, rather than one of black and white.

The Inertia Argument is equally bad, despite having a seductive appeal that leads huge numbers of people into a world of delusional belief. The classic move in argument analysis is to take an argument and find minimal hidden premises (those that strain credulity the least when attributed to the author of the argument) to make the argument strictly valid, that is, such that there is no possibility of the conclusion being false when the premises are true. Finding those hidden premises exposes assumptions the author would, perhaps, prefer listeners to overlook. The Inertia Argument, treated thus, goes:

The action of a single actor would have a negligible impact.

[Actions with negligible impact are not worth doing.]


Therefore, nothing should be done.

The bracketed sentence is the hidden premise making this argument valid. At first glance, it may seem innocuous. Certainly, actions which have no impact whatsoever are not worth doing; that’s hardly in dispute. So, why not also those actions with a negligible impact? Surely, they also shouldn’t be done.
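
Validity in this strict sense can be checked mechanically. Here is a small sketch (my own formalization, not part of the original analysis), writing N for “the action would have a negligible impact” and W for “the action is worth doing”; the argument is valid just in case no assignment of truth values makes both premises true while the conclusion is false.

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    """Material conditional: p -> q."""
    return (not p) or q

# Search all truth assignments for a counterexample:
# both premises true, yet the conclusion ("not worth doing") false.
counterexamples = [
    (N, W)
    for N, W in product([True, False], repeat=2)
    if N                     # premise 1: the impact is negligible
    and implies(N, not W)    # premise 2 (hidden): negligible implies not worth doing
    and W                    # conclusion "not W" is false, i.e. W is true
]

print(counterexamples)  # prints [] — no counterexample, so the argument is valid
```

The empty result confirms the point in the text: with the bracketed premise added, the argument is formally impeccable, so any fault must lie in the premises themselves.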

The rub is that individually negligible actions are quite often not collectively negligible. Voting is a good example. The chances that your one vote will change the outcome of a large election are less than negligible; it’s akin to your chances of winning a lottery. And yet votes have frequently had dramatic and powerful consequences, throwing out hated corrupt governments, for example. If one Goth or Vandal had decided to attack the Roman Empire, that one barbarian would have died. But when their tribes and friends collectively attacked, even the Roman Empire could not stand. A single Egyptian could have hardly built a noticeable pyramid, but by now I’m sure you get the point.
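
That “less than negligible” chance can be quantified under a deliberately crude model. The sketch below is my illustration, not the author’s: it assumes every other voter independently flips a fair coin, and computes the probability that your single vote breaks an exact tie.

```python
from math import exp, lgamma, log, pi, sqrt

def pivotal_probability(n_other_voters: int) -> float:
    """Probability your vote breaks an exact tie among the other voters,
    assuming each votes either way with probability 1/2 (a toy model)."""
    n = n_other_voters - (n_other_voters % 2)  # a tie needs an even split
    # log of C(n, n/2) * 0.5**n, via log-gamma to avoid astronomically big integers
    log_p = lgamma(n + 1) - 2 * lgamma(n // 2 + 1) + n * log(0.5)
    return exp(log_p)

for n in (1_000, 100_000, 10_000_000):
    # Stirling's approximation gives the closed form sqrt(2 / (pi * n))
    print(f"{n:>10,} other voters: {pivotal_probability(n):.2e} "
          f"(approx. {sqrt(2 / (pi * n)):.2e})")
```

Even in this maximally favorable model (a knife-edge electorate), the chance of being pivotal shrinks like one over the square root of the electorate size; with realistic, non-even splits it becomes vastly smaller still, which is exactly the “lottery” intuition above.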

Individual actions can be coordinated. That’s exactly what societies are for. Even if the efforts of a singleton, acting alone, aren’t worth worrying about, the coordinated efforts of multiple singletons are. The employment of the specious Inertia Argument not just against individual action, but against collective action, as in Scott Morrison’s argument, is especially stupid. Collective action is where the payoff is, when the effects of individual actions are negligible.

So, the next time you hear the Inertia Argument used to defend inaction, perhaps you can ask your Arguer friend, “Why not take collective action and get a real payoff?” For example, the right response to Scott Morrison’s Inertia Argument that Australia can’t do anything significant about global warming would have been, “Why can’t Australia lead international action, rather than wallow in national inaction?” Apparently, this question never occurred to Australian journalists.

By the way, the theme that people who rely on argumentative form alone to tell them whether arguments are good or bad are, in fact, displaying very poor argument analysis skills is one you’ll find throughout this blog. Their argumentative world is as impoverished as those who think all rules are absolute and all principles universal.

Reliable Sources


Kevin B Korb

People have a right to speak; the question is, to whom should we be listening.
– Naomi Oreskes and Erik M. Conway (2010)

There is no doubt that knowing how to sort more reliable from less reliable sources, and giving the reliable ones the largest role in forming our opinions, is a key critical thinking skill. George W Bush infamously launched a war in Iraq killing hundreds of thousands of people, as well as inspiring the Islamic State of Iraq and Syria and decades of instability in the Mideast, after dismissing UN reports on the non-existence of weapons of mass destruction in Iraq, which turned out to be reliable, and accepting intelligence community reports to the contrary, partially based upon a since utterly discredited witness, “Curveball”. Had Bush, or those around him, been better critical thinkers, those extremely damaging effects could have been averted. In an era when unreliable sources abound, indeed are almost absurdly prolific, knowing something about these issues is more important than ever before.

Karl Popper pointed out that if the aim of science were simply to discover truths, science could stick to enumerating theorems of propositional logic, say, by adding more and more disjuncts to (P ∨ ¬ P). Of course, it doesn’t follow from the pointlessness of that exercise that we are uninterested in truths; rather, that we are interested in truths which actually help us with our problems, which can serve as premises to arguments of interest to us. The relevance of truths to problems is an important question, but one I am not addressing here. Instead I shall consider, on an assumption of relevance, how we decide whether premises are true, when we are not ourselves witnesses to their truth.

There are numerous negative indicators, as well as numerous positive indicators, of good sources of information in domains and problems foreign to us. I shall explain some that are especially useful to know.

The Problem

When you are dealing with an issue about which you have considerable hard-won expertise and understanding, it is usually not hard to decide whether a claim about it has merit. But, at least since the Renaissance, none of us has the time, or inclination, to become expert in every available subject, so we all have to rely on the expertise of others most of the time. The problem here is how to add to our pool of premises working assumptions, points of departure, claims which are most likely to be true, without going to the difficulty of directly testing or observing that they are true. How can we reasonably decide whether someone or some source has the relevant expertise we lack? How do we recognize reliable sources and avoid unreliable ones?

The problem is already acute in an era of internet and social media echo chambers. For pretty much any idea, crazy or not, there will be a source for it somewhere. And the problem will only get worse with AI-driven Deep Fakes muddying public discourse further. The problem of identifying and verifying reliable sources is a traditional concern of argument analysis. I will go over some of the main issues here.

A Consumer Analogy

There is a close analogue in consumer behavior to the problem of assessing information sources. Indeed, finding reliable products and services for consumption is actually a special case of finding reliable sources for the sake of critical reasoning. Reasoning to a purchase is a special case of reasoning, and to do it well you need to vet your sources. People who purchase cars, appliances or IT services based upon how slick and impressive their advertising is are people who acquire rubbish products and shoddy services. Somewhat more cautious folk look at consumer ratings first, either instead of or supplementing their feelings about the ads. They will do somewhat better than the simply reactive kind of consumer, but the popularity of products and services is a limited guide to the truth. A more cautious consumer will find consumer guides or their equivalent, such as Consumer Reports or Choice, which are developed by testers working in labs, and see what they have to say. An exceedingly, perhaps an improbably, cautious consumer will go further, looking at the technical and engineering literature to find what Consumer Reports and others have overlooked.

These correspond to the different levels of investigation that are, or might be, applied to investigating and assessing the sources of information for arguments in general. The argument consumer who adopts beliefs because of a slick YouTube presentation or an engaging Instagram video, or even a popular book, is quite likely the very same person who buys a car because of the memorable jingle that went with its ad. We should all hope to do better.

False Flags

First, I will point out some things reliability is not.

Authority

Perhaps foremost among these is “authority.” Authorities are those in positions of power and responsibility. They get into those positions for many reasons, often on the basis of merit. But the merits rarely include any deep understanding or expertise in outside fields. Hollywood actors rarely have expertise outside of how to act, or perhaps how to direct others to act. Politicians sometimes acquire expertise by being handed a portfolio and digging into it; otherwise, they mostly know about campaigning, raising money, debating and negotiating political deals. Popular media pay a lot of attention to the wider opinions of actors and politicians outside their direct expertise; except when they have the power to act upon those opinions, this attention is mostly unwarranted and often harmful.

Of course, there are acknowledged authorities within any field of study. The usual way of judging who they are is to have a look at their credentials. Does a claimed climate scientist have a PhD in climate science? Does a media personality have a law degree? Examining credentials is a shortcut: if we have the time, we should prefer to test their expertise. That could involve, for example, tracing their stated conclusions backwards to discover what evidence they have relied upon and what arguments have led to their conclusions. That is the stuff of good argument analysis. In order to do such backfilling of how they reached their conclusions, you may first have to learn and practice argument analysis, before then doing the footwork of checking their reasoning. If we lack the opportunity to do detailed investigation, as we will in most cases, we might opt for merely checking credentials, or asking about an expert’s reputation, etc. But these options are shortcuts and rationally undercut the degree of trust we should put in that expert’s claims. Too many people think credentialing is the beginning and end of checking sources for their reliability.

There are many well-known cases both of genuine experts who lack the usual credentials and of credentialed people who lack the expertise. Freeman Dyson was a famous physicist with no doctorate, for an example of the former. For the latter, consider Dr Ian Plimer, credentialed in an earth science – geology – with a PhD from Macquarie University, who has a long track record of claims about global warming that are demonstrably false; for example, Skeptical Science has a web page identifying some, with references you can use to verify their falsity.

Authorities should neither be accepted nor rejected out of hand. (It seems to have become popular to reject them out of hand by conspiracy theorists, who often claim to thereby be exercising “critical thinking” skills. The truth is the very opposite, however: critical thinking requires thinking, not an automatic response, whether that’s acceptance or rejection.) When the claim in question is of little consequence, then a quick acceptance, perhaps as a working hypothesis, may be warranted. But where the matter is more important, then, depending on the degree of that importance, more thought or investigation about the reliability of the source is warranted.

Direct Observation

What else is reliability not? It is also not sourcing all of your evidence directly. If you cannot believe anything you have not directly observed, then you must live in a closet, since you would be unable to trust most of what seems to be going on around you. As human actors, we simply must have indirect sources of information which we trust, and so we must agree upon methods of validating those sources. A constant, unslackening demand for direct evidence is the cant of the obfuscationist.

The Demand for Proof

A similar overreach is to insist that claims, to be acceptable, first be proven, a demand often used to reject a scientific conclusion. This plays upon a well-known truth about science, that its conclusions are almost always fallible and provisional, subject to retraction upon the discovery of disconfirming evidence. What goes unacknowledged is that scientific conclusions, while always subject to retraction in principle, come with different degrees of certainty and different degrees of acceptance by experts. The only place you’ll find serious debates about a flat earth, the germ theory of disease, a heliocentric solar system, or smoking causing lung cancer is in the dark corners of the net — and not, for example, at scientific conferences (unless they are about social media misinformation!). Anthropogenic global warming almost deserves to be in that same list, but it’s fair to observe there are still a few fringe scientists disputing it.

The demand for proof relies on people not understanding that science is simply not about stacking proofs on top of each other. It is a rhetorical ploy for generating and spreading FUD (fear, uncertainty and doubt).

Ad Populum

Hardly anyone has an unadulterated belief in the value of popularity as a guide to truth. Belief in the existence of witches was popular in medieval Europe, however that belief only led to ashes. But, of course, in many circumstances popularity is an important guide to the truth. The popularity of books and films is a likely guide to entertainment for those whose tastes are near the norm. No doubt, the popularity of beliefs also provides some guidance to the truth, when the popularity is measured across individuals who have a strong commitment to finding and exposing the truth. Consensus views amongst a group of experts are worth attending to. Unfortunately, this era of social media, with the most popular opinions being spread precisely because they outrage rather than inform, requires positive resistance to the pull of popularity.

It is only in combination with some independent indication of expertise, understanding and commitment to the truth that popularity or consensus becomes interesting. What we are really after is how we can find those who are committed to finding and reporting the truth and add them to our pool of preferred sources.

Self-Belief

People who think they have gotten hold of a truth that the experts largely, or perhaps universally, disagree with are almost certainly deluding themselves. As I said above, the knee-jerk rejection of expert opinion isn’t a sign of critical thinking or deep insight. It’s a sign of something gone wrong. If you really are right and the existing experts wrong, then you should be able to justify your view by actually making a positive contribution to the field, for example by getting a PhD and writing up your justification as a thesis. Some who think themselves in that position will object that the whole academic world is in on a conspiracy, so they will have no opportunity to do that. In that case, what’s gone wrong is that the person is mentally ill. (The chances that thousands of scientists are in on a conspiracy that is yet to be revealed is essentially zero; see Grimes, 2021.)

True Flags

Genesis

The so-called Genetic Fallacy is, per Wikipedia (as of 22 Mar 2022):

A fallacy of irrelevance that is based solely on someone's or something's history, origin, or source rather than its current meaning or context. This overlooks any difference to be found in the present situation, typically transferring the positive or negative esteem from the earlier context. In other words, a claim is ignored in favor of attacking or championing its source. The fallacy therefore fails to assess the claim on its merit.

This entry is absurd. That is, taken as advice to simply ignore the source of a claim (which is suggested, if not stated, by calling it a fallacy of irrelevance), it is an expression of an impossible idealism about assessing information independently of its source and entirely on its own merits. However, to assess claims outside of our own expertise without attending to the track record of their sources (or stand-ins for track records, such as reputations), we would have to acquire knowledge comparable to that of the relevant experts themselves. Doing this beyond a few topics of interest is beyond the mental and physical resources of any of us. Hence, relying upon expert advice is the only way we can hope to learn about the vast majority of topics of interest to us. And why should we rely upon those experts? Because they have a track record of producing relevant and accurate statements in their domains, as verified by: their supervisors and teachers during their education; experimental tests; their peers during reviews. In other words, when we accept the next statement by an expert or group of experts, we are doing so precisely because the Genetic Inference is not fallacious at all. Origins of statements matter.

When, then, is the genetic fallacy real? If your interlocutor's intent is to distract from the content of a claim and cast doubt on it because of a suspect origin, then perhaps it is a genetic fallacy. An example from one of Douglas Walton's books on argumentation (I forget which one) cites a Canadian parliamentarian's comments on abortion being disparaged because they came from a male. That, of course, is a pure strawman and not the kind of genetics I'm advocating for here, since, so far as I know, being male is uncorrelated with truth or falsity, i.e., reliability. In practical fact, what we are prepared to call a fallacy or not depends upon both the content of a supposedly fallacious claim and the intent behind it, rather than being purely a matter of logical form, as many would have it. So, the genetic fallacy is fallacious not because of drawing attention to the source of a claim, but rather because of drawing attention to characteristics of the source that are not indicative of reliability.

Origins also matter, of course, when we are not dealing with experts per se, but journalists and their kin, such as science popularizers. Reputations in journalism matter, because we don’t all have the time to fly off to a war zone to check things out for ourselves. Some news sources at least try to double-source their claims, such as the New York Times, while others are happy to publish the most absurd nonsense, such as Rupert Murdoch’s Fox News. It doesn’t take a genius to figure out which of these sources is the more reliable, even though every source will have its biases. Biases, by the way, can be dealt with, for example by checking alternative (but reliable) sources with different biases. Outright lies are hard to deal with, if you’re trusting the source irresponsibly.

Origins still matter when we are dealing with relatives, acquaintances and strangers on Twitter. In fact, contrary to Wikipedia, origins always matter. For the vast majority of what we see and hear, origins are all that we are likely to take into account. The more sensible of us filter out the junk from Facebook, Youtube and Twitter.

It would be fair to call the Genetic Inference a fallacy, if the claim at issue were, say, the primary matter of a dispute and if its source were not acceptable to any reasonable participants in the dispute. For example, if we are playing a trivia game and have agreed in advance to use Wikipedia to settle disputes, then citing Wikipedia to resolve a dispute is hardly fallacious, even if the subject matter were fallacies. Or again, if the issue at hand were, say, the reality of anthropogenic global warming, in a non-professional setting, and we cite Wikipedia to determine average surface temperatures for the last ten years, that would also not be fallacious, even if one can raise doubts about it. On the other hand, if we are pursuing serious research, citing Wikipedia as our source and leaving matters at that would, indeed, be fallacious. One can, and should, dig deeper.

Indeed, one can always dig deeper, as any three-year-old (and Lewis Carroll) can tell you. Claims can always be interrogated, except when we run out of breath to ask further questions.

In short, whether the referral to the origin of a claim is fallacious or not depends upon the context. If the claim is a minor premise whose merits we do not wish to investigate, then it is probably not fallacious. If the claim is a key premise for an important argument, then it might be right to label a reliance upon the source’s reputation “the Genetic Fallacy”. But for most common argumentative purposes, relying upon a known track record of sources, or their reputations for honesty and reliability, is what we will be doing most of the time. And calling that a fallacy is fallacious.

So, when searching for premises, identifying reliable origins is the name of the game, outside of special cases where we ourselves are witnesses or otherwise providing evidence. In other words, when we are trying to find reliable sources, the Genetic “Fallacy” is all there is.

The question is how to commit this “fallacy” in a way that reliably yields the truth.

Testing Reliability

A poor indicator of reliability is apparent objectivity or neutrality. The admonition of some to write scientific reports using the plural “we” or the passive voice may give a veneer of objectivity to the reports, but does nothing to debias them. It is, in fact, a misleading affectation.

The reputation of a source is probably already a better indicator of its reliability than the cosmetic touches they put on their claims, such as passive voice. Reputations are based on any number of random things, including things quite as cosmetic as voice, but at least there’s some chance someone’s reputation derives from actual truth-telling. A superficial objectivity derives only from an intent to present an appearance of objectivity that may be entirely unfounded. But we can usually do better than rely on reputation.

Reliability can, and frequently should, be tested.

In some cases we can test the truth of claims ourselves. If you have relevant expertise, you may have sufficient knowledge to know whether a relevant claim is true, or you may have ready access to sources that can be used to test it. Alternatively, you may have access to different sources whom you have good reason to believe will be reliable in the domain a claim refers to. For example, if you are given advice by a doctor and you have a personal contact who is also a doctor, you may have immediate recourse to a second opinion, or at least an opinion about seeking further opinions.

Fact Checking is a slightly different activity which you might employ. Established fact checking organizations, of which there are a fair number since the rise of the internet (e.g., Snopes, RMIT Factlab; Wikipedia keeps a list of such sites here), essentially do what used to be done by journalists: checking the sources of claims, attempting to confirm claims by identifying additional sources, providing the subjects of claims the opportunity to respond to any claim, and consulting with experts in relevant domains. Anybody can, and sometimes should, attempt these same activities where an important issue is at stake. First, though, you might want to either look to see if existing fact checkers have already done this work, or even suggest that they fact check a claim for you. In general, established fact checkers will have an easier time of it, since they have sufficient reputation, for example, to gain access to relevant experts.

In addition to these kinds of activities, you can simply look at what alternative sources say about the issue at hand. If you see a claim made in the New York Times, you might see what the Washington Post has to say, since they may say something contrary to the NYT. To be sure, both newspapers share a good deal of their worldview, and so they share biases. So, you might want to find a source with a different worldview. The Economist, for example, is a more conservative magazine than those two. Better still is to see what reputable sources outside the Anglo world have to say, such as Sueddeutsche Zeitung or Le Monde. If you turn to a Rupert Murdoch source, however, you have probably gone too far and are now in an alternative universe. In recent decades, Murdoch’s venues, both print and cable, have largely replaced news reports with rightwing rants. In any case, different sources can confirm or contradict each other, and either outcome is helpful for understanding the merits of the original claim. Ideally, your multiple sources are not just operating on different worldviews, but are actually sourcing their claims from different original sources; otherwise, they may be effectively repeating each other.

An aside: one of the often remarked advantages of taking a "break year" after school and traveling overseas is that you can gain considerable insight into how different the world looks away from home. US news media, for example, look highly diverse from inside the US. Ignoring the extremists (such as many published by Murdoch), however, the vast bulk of US mainstream media shares a very strong bias, strongly favoring an American view of international politics, as well as exhibiting considerable ignorance of the world beyond US borders. This becomes apparent when you immerse yourself in a completely different set of biases. Of course, this kind of distortion is not peculiar to America. Indeed, the least exceptional thing about America may be American exceptionalism. Strangely, every country and culture is the very best in the world.

In summary, you can test the reliability of specific claims by specific sources in a variety of ways: relatively rarely by your own direct observation or experiment or comparing to your own prior knowledge; more easily and commonly by examining alternative sources; or, by investigating and fact checking. The benefit of any of these activities is two-fold: first, you can better gauge the truthfulness of a specific claim; but also importantly, you gain information about the reliability of the source of the claim. Reputations (other people’s opinions) are generally unreliable guides, but an understanding of the reliability of a source gained through repeated truth testing of their claims will provide you with what you need to perform the genetic non-fallacy correctly, your own informed opinion. Knowing the reliability of sources is essential to navigating our high volume world of information and misinformation.
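
The “repeated truth testing” just described can be given simple bookkeeping. The sketch below is my illustration, not something prescribed in the text: it tracks a hypothetical source’s reliability with a Beta distribution, a standard Bayesian choice, updated each time one of the source’s claims is checked against other evidence.

```python
class SourceReliability:
    """Track a source's reliability as a Beta(a, b) distribution,
    updated each time one of its claims is checked."""

    def __init__(self, a: float = 1.0, b: float = 1.0):
        self.a, self.b = a, b  # Beta(1, 1) is a uniform (know-nothing) prior

    def record_check(self, claim_was_true: bool) -> None:
        """Bayesian update: count one more verified or falsified claim."""
        if claim_was_true:
            self.a += 1
        else:
            self.b += 1

    @property
    def estimate(self) -> float:
        """Posterior mean probability that the source's next claim is true."""
        return self.a / (self.a + self.b)

# A hypothetical source whose claims we've fact-checked five times:
tabloid = SourceReliability()
for verdict in [True, False, False, True, False]:
    tabloid.record_check(verdict)
print(round(tabloid.estimate, 3))  # (1 + 2) / (2 + 5) -> 0.429
```

The point of the exercise is the one made above: after enough checks, the estimate reflects the source's demonstrated track record rather than its reputation, which is exactly the information needed to perform the genetic non-fallacy correctly.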

A Hierarchy of Reliable Sources

There is no such thing as an infallible source. Eyewitness (first-person) testimony is, for example, notoriously unreliable. Given people’s proneness to confabulation, rationalization, confirmation bias and generating and explaining false memories, you should also treat your own recollections and firm beliefs with some skepticism. The confidence you have in your own beliefs is a very poor guide to their reliability (see Wikipedia’s “Overconfidence effect” and “Dunning-Kruger effect”, for example). Every source, however credible, should actually be tested when there is an opportunity to do so, assuming the time and trouble are worth the likely information gained. By doing so, you will gain the knowledge to make accurate judgments of the reliability of different sources when you do not have the opportunity to check them further.

Nevertheless, there clearly is an ordering from more to less reliable sources. Here is my attempt at such, together with some reasons for the order. You can check for yourself to see whether my ordering has any reliability, of course. Each step along the list represents at least some lower reliability, as I explain; of course, these are just general categories and there will be many specific exceptions, with either higher or lower reliability.

  1. Scientific reference material; resources of international scientific organizations.
  2. Scientific journals, conferences and reviews.
    • Examples: Science, Nature, PLOS ONE.
    • Journals and conferences come with different reputations; indeed, many academics spend many hours per year trying to rank them. However, those rankings are meant to establish a kind of pecking order for prestige and the apparent importance of publishing in one journal versus another; they are not specifically rankings of reliability or of the probability of assertions being true. The lowest-ranked publications may indeed have trouble publishing the truth; however, the standard for most scientific journals and conferences is that accepted publications be peer reviewed by two or more experts. The claims made in their publications have been “tested” at a minimum by four eyeballs, presumably with attached brains. Articles with problematic aspects should have been reworked or rejected. Venues with lower standards will be ignored by most academics.
  3. Theses and dissertations; academic grant proposals.
    • PhD and Masters theses are often publicly available, or at least available through libraries. Recent ones can be very helpful in navigating current work in a field. They have the major advantage that, while each of them at least attempts to advance our knowledge, and so may spend its main chapters exploring a relatively narrow topic, they are written for examiners who may have a background centered elsewhere. In other words, the introductory and background chapters will be directed at a generally, but not specifically, informed audience, often serving as an introduction to a field as good as any textbook’s, but more up to date.
    • Grant proposals are even more targeted and usually written by more senior researchers.
  4. Educational material and resources, including textbooks.
    • Most universities require their academics to be ambidextrous, producing both original research and original educational material. This has the benefit that their educational material is infused with new ideas and information, which have also been pretested through refereed publications. Nevertheless, the authors are typically covering a much broader landscape in their educating than in their own research, with the result that they sometimes take shortcuts, such as relying on prior textbooks without checking them, in other words violating my main advice here to test assertions (see, e.g., Bohren, 2009).
  5. Popular science writing, videos and podcasts.
    • Here, reliability is really tied to the individual source. Some popular science writers are also scientists, such as Neil deGrasse Tyson, A.K. Dewdney and Brian Cox, and draw upon their own hard-won knowledge in doing popular science. Other science writers, while not starting with great reputations, learn their fields and do a good job of checking their information, and so end up earning good reputations. James Gleick and Martin Gardner come to mind. And then there are a host of writers who don’t come to mind at all. So this category really spans the entire range of source reliability from top to bottom.
  6. Newspapers and news magazines.
    • During the 20th century the best known newspapers adopted practices that separated their publications from the pamphleteers and rags of the 19th century. This included an explicit commitment to honesty and to finding multiple sources for controversial or contestable assertions. In the 21st century internet environment many of these commitments have been abandoned. Still, some news media do attempt to maintain their standards and can be found to be mostly reliable.
  7. Wikipedia.
    • Wikipedia has an internal structure that includes editing for accuracy. Its editors at least make the attempt to be reliable. Partly because anyone can edit its articles, however, its claims need to be treated with caution. While there is such a thing as the “wisdom of the crowd”, that tends to operate as a longer-term corrective. And, in any case, where an unfounded doctrine has gained widespread support, the crowd will only reinforce rather than correct it. (See above on the “Genetic Fallacy”; for a more general discussion of the fallacies, which Wikipedia tends to accept without question, see Korb, 2004.)
  8. Blogs, Instagram posts, YouTube videos.
    • You tend to get what you pay for.

You will observe the repeated reference to science near the top of the list. The human science project has been and remains our best collective effort to increase our understanding of the world. While all the sciences have problems, including occasional fraud and hoaxes, they are also fairly actively policed and have a pretty good track record of catching and exposing indiscretions. It’s certainly better than the track record of catching and exposing political corruption in most countries. To get some idea, you can look at Retraction Watch, which reports on scientific publications that have been retracted. Being peer reviewed and accepted by scientific experts is no guarantee of correctness. Scientists are subject to the same cognitive biases and faults as everyone else. However, if you test the claims made in categories 1, 2 or 3, for example, and find that they are wrong, then you should publish your results, because you will be both subtracting what is wrong and adding what is right to our collective understanding of the world.

Additional Reading

There is plenty of both good and bad writing on the subjects I’ve touched upon here. I list a few pieces that are accessible and that I find have mostly good suggestions for how to source information or check information sources.

Fact Checking Resources

Many organizations have sprung up for fact checking, some sponsored by traditional news organizations. These often also provide guides or other information on how to do your own fact checking. Here are some good examples:

  • Norddeutsche Rundfunk (NDR) provides educational materials (in German) for teachers of journalism, Medienkompetenz.
  • Many good fact checking organizations explain their procedures, which can help others learn to use or adapt them. E.g., Factcheck.org.

References

Alvarez, Claudia (2007). Does philosophy improve critical thinking skills? Masters Thesis, Department of Philosophy, University of Melbourne.

Bohren, C. F. (2009). Physics textbook writing: Medieval, monastic mimicry. American Journal of Physics, 77(2), 101-103.

Carroll, L., aka C. Dodgson, (1895). What the Tortoise said to Achilles. Mind. Republished in Mind, 104 (416), 1995, 691–693, https://doi.org/10.1093/mind/104.416.691

Domestic violence. (n. d.). In Wikipedia, The Free Encyclopedia. Retrieved 16:49, March 29, 2016, from https://en.wikipedia.org/w/index.php?title=Domestic_violence&oldid=712521522

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological), 107-114.

Grimes, D. R. (2021). Medical disinformation and the unviable nature of COVID-19 conspiracy theories. PLoS One, 16(3).

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Hope, L. R., & Korb, K. B. (2004). A Bayesian metric for evaluating machine learning algorithms. In AI 2004: Advances in Artificial Intelligence (pp. 991-997). Springer Berlin Heidelberg.

Klein A. R. (2009). Practical Implications of Domestic Violence Research. National Institute of Justice Special Report. US Department of Justice. Retrieved from http://www.ncjrs.gov/pdffiles1/nij/225722.pdf

Korb, K. (2004). Bayesian informal logic and fallacy. Informal Logic, 24(1).

Korb, K. B., & Nicholson, A. E. (2010). Bayesian artificial intelligence. CRC press.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238(4827), 625-631.

Olding, R. and Benny-Morrison, A. (2015, Dec 16). The common misconception about domestic violence murders. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/nsw/the-common-misconception-about-domestic-violence-murders-20151216-glp7vm.html

Oreskes, N. and Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change. Bloomsbury Press.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. San Mateo, CA: Morgan Kaufmann.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Sturrock, P. A. (2013) AKA Shakespeare. Palo Alto, Exoscience.

Tetlock, Philip (2015). Philip Tetlock on superforecasting. Interview with the Library of Economics and Liberty. http://www.econtalk.org/archives/2015/12/philip_tetlock.html

Toulmin, S. (1958). The Uses of Argument. Cambridge University Press.

Wellman, M. P. (1990). Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence, 44(3), 257-303.

Wigmore, J. H. (1913). The principles of judicial proof: as given by logic, psychology, and general experience, and illustrated in judicial trials. Little, Brown.

Open Letter to RIPE NCC


Dear RIPE Executive Board,

I would like to call into question your decision of 28 Feb to simply dismiss Ukraine’s proposal to, in effect, cut Russia off from the internet.

In defending your decision you correctly point out that the historical standard of the internet has been that it should be open to all, and, in particular, that “the means to communicate should not be affected by domestic political disputes, international conflicts or war.” This is of a piece with claims about technological neutrality, that is, that technologies are not to blame for how people use them. While the underlying motives of your view are, I’m sure, good and noble, the effect in practice is not.

First, the internet simply is not, nor can it be, politically neutral. It is a primary means for a great many people to obtain their day-to-day understanding of what’s going on in the world. For precisely that reason, it has become a target for many governments in their efforts to manipulate and control their own people and to mislead or destabilize other peoples. You know as well as anyone that access to the internet varies not just by wealth, education, etc., but also by direct political intervention. You would like to avoid the internet becoming an explicit target of political action because its administration takes sides in any conflict. That is a very reasonable objective, but to pretend the internet is not already a target is duplicitous and unhelpful.

Of course, I agree that the more politically engaged the internet administration becomes, the greater the danger to an open internet. The risks and benefits of an intervention need to be carefully assessed, and it makes good sense to put the “burden of proof” on active interventions rather than maintaining neutrality. I am with you there. Unfortunately, you are not with yourself, if you accept that position, since nowhere in your letter reporting your decision is there any weight given to the costs of non-intervention. As far as your letter is concerned, there are no risks involved in letting Putin carry on with whatever he is doing. I think much of the rest of humanity disagrees.

Second, the underlying logic for technological neutrality is simply specious. It may be literally true that “guns don’t kill people, people kill people” (ignoring AI, anyway). But the metaphorical reality is the opposite: guns enable killing and are hardly a neutral technology. This point is very widely accepted, if not by the NRA and its fellow travelers. Nuclear weapons, biological weapons, chemical weapons are all examples of technologies coming out of 20th century science which people quite generally have demanded come under international controls, and do. There is little that is neutral about them, beyond the fact that they don’t themselves advocate an ideology. Anybody can use them to kill. However, those technologies are on a side: the side opposed to human life.

The internet can likewise be used to kill.

You may say the internet is just about humans communicating with humans. I hope you are not that naive. Means of communication have been the preferred means of controlling populaces since the beginning of nations, and that has not changed with the internet. It’s only become easier.

You suggest that a failure to maintain strict impartiality would “jeopardise the very model that has been key to the development of the Internet in our service region.” As I said above, I agree. Whatever you do will jeopardize that model, because that model is always going to be at risk. My question is whether you have even considered the risks associated with other alternatives. If Putin succeeds, not just locally in Ukraine, but globally in destabilizing both the international rule of law and democracies world-wide, what will become of your internet neutrality? It is at risk either way, and some risks are greater than others. You have not done a risk analysis and are probably incapable of doing one, especially not on behalf of all of your users.

Regards, K B Korb

Paul Krugman Argues with Zombies


A Brief Review of Arguing with Zombies: Economics, Politics and the Fight for a Better Future

Arguing with zombies would seem like a waste of time. They don’t listen; indeed, they don’t cogitate. Arguing may still seem the right thing to do, since what comes out of their mouths is pure nonsense. But all they want to do is rip into you while you talk. This is a quandary facing many of us in the Age of Social Media. Brain-dead arguments are all the rage. Why even bother?

Well, almost certainly, if your only goal, and only possible effect, is to persuade the conveyor of a brain-dead argument that the argument is meritless, then you are wasting your time. Zombies don’t listen, and they don’t have minds to change. Paul Krugman’s book nevertheless does suggest some value in arguing with them. He lays out his arguments so clearly and succinctly that it is a pleasure to read them, assuming you’re not a zombie. For those already familiar with the arguments that counter — destroy, really — the supply-side fanaticism of Ronald Reagan and his fellow Reaganauts, which is one of Krugman’s main targets, they serve to remind us of the main points. For the unfamiliar, they provide a great introduction to those issues. The value of arguing with zombies lies largely in providing guidance to newcomers to the arguments and to onlookers, perhaps inoculating them against brain diseases, but also in supplying handy supports for the familiar in their daily travails.

There is also potential to aid the author her or himself: collating and synthesizing the main arguments opposing the apparently ceaselessly rising tide of zombie arguments can serve the same purposes as teaching undergraduate classes. It aids the author in achieving clarity, focusing on key issues, as well as testing ideas and genuine arguments. As Krugman notes in this volume, achieving simplicity is itself hard work, and far more valuable than most people, lay or expert, appreciate.

These kinds of efforts are more important than ever before. In an age of Trump and Murdoch, Zuckerberg and Thiel, a good deal of the money, and so the time and effort, of those capable of influencing political decision making is flowing towards zombie ideas, giving them an afterlife and energy well beyond any rational justification.

There are many particular arguments that Krugman takes on, providing, if you like, model arguments for the rest of us to wield at picnics, cocktail parties or other gatherings. The rest of us using these models and spreading the word is of the essence, so long as democracy remains a viable political option. Among the many:

  • That economic austerity is not the panacea for economic woes Thatcherites claimed it is.
  • That government budgets are not just like household budgets, requiring us to maintain a balance. In particular, that the debt phobia encouraged by the right in opposition, but almost always forgotten about when they gain power (e.g., Reagan’s defence spending and tax cuts, Trump’s enormous tax cut), is grounded in nothing but a misunderstanding of government debt and economics.
  • That Obamacare is not the thin edge of the wedge of State Socialism, and that Denmark is not an authoritarian, State Socialist dictatorship in the same category as Cuba, Stalin’s Soviet Union and Venezuela. In general, Krugman takes down the idea that the government protecting the public through regulation is inimical to any variety of capitalism. The social democracies (not socialist states) of Europe are long-standing counterexamples to this right wing “meme”.
  • That tax cuts have padded the pockets of the rich, but, no, the money hasn’t trickled down, since those pockets are nearly waterproof. In consequence, both income and wealth inequality in America have been rapidly growing (see, e.g., Pew Research Center, 2020, Trends in US Income and Wealth Inequality).
  • That Trump is no fluke: the right side of American politics has long been drifting, and more recently surging, towards outright racism, xenophobia and fear of the future; Trump is only a high-water mark in galvanizing hatred and fear, likely to be surpassed sooner rather than later.
  • The mass media (including Krugman’s own New York Times) have often, but not always, done the public a great disservice, not by spreading fake news (so much), but by often omitting stories that undermine the right, and, in a misbegotten idea of “fair play”, selecting instead unfounded zombie arguments to represent “the other side”.
  • That the Norquist dogma that the only good government employee is a dead employee is a phony way of claiming that the right way to deal with a Commons is to run your cattle all over it until it is destroyed, so that you personally profit first and foremost, while your neighbors starve (“The Tragedy of the Commons”). All public goods should be captured by the powerful, and the weak or late should die.

The last idea is a misreading of Darwin (as in Social Darwinism), but a true reading of Ayn Rand’s “Objectivism”, as presented in her novel The Fountainhead. She has had a pernicious effect over the last seventy years, as the wealthy (i.e., potential financial supporters of rightwing propaganda) have read her and found very convenient arguments seeming to justify their wealth and their use of that wealth to screw everyone else. Hers, and theirs, is a morally vacuous universe — the alternative universe discerned by Kellyanne Conway and Donald Trump.

All of these messages (and more) will be familiar to many who follow the political debates. Krugman beefs them up with pointed and clear arguments that will make sense to almost anyone willing to digest them.

To take an example, consider the long-standing arguments over a claimed need to avoid large budget deficits by the right, which have recently heated up over $3.5T spending bills and the self-imposed punishment of requiring that a debt ceiling be raised to avoid defaulting on US government debt. Krugman illustrates the hypocrisy of these claims with the case of “Flimflam man” Paul Ryan, who decried Obama’s deficit spending even while advocating a $4T tax cut for the rich. Ryan’s proposed spending cuts, potentially hurting the poor and middle class, would have left a $1.3T hole in the budget, about which he was entirely unconcerned. The net effect of his plan would have simply been a huge new financial burdening of the poor and middle class for the benefit of the wealthy. Complaints about such proposals are inevitably labeled class warfare by our insightful mass media, of course.

It is worth pointing out that deficits incurred through tax cuts for the wealthy are very different from deficits incurred through public spending on health, welfare and infrastructure. The former put money into the pockets of the rich, who have no need to spend it, and generally do not. The latter directly grows the economy. Necessarily, therefore, the multiplier effect, the number of dollars circulating in the economy as a result of the additional deficit, will be smaller following a tax cut than following an equivalent public investment. Money stuffed in a bank is less active than money paying a contractor, who pays a subcontractor, and so on. This simple point has stumped economists from George Mason University, but it shouldn’t go unnoticed by intelligent lay people.
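The difference can be illustrated with the standard textbook multiplier arithmetic (a simplified model of my own choosing, not Krugman’s figures): each dollar received is re-spent at the recipient’s marginal propensity to consume (MPC), so the total activity generated is a geometric series approaching 1 / (1 - MPC).

```python
# A sketch of the textbook spending multiplier (a simplified illustration, not
# Krugman's own numbers): each dollar received is re-spent at the marginal
# propensity to consume (MPC), so total activity sums a geometric series
# approaching 1 / (1 - MPC).

def total_spending(initial, mpc, rounds=1000):
    total, injection = 0.0, float(initial)
    for _ in range(rounds):
        total += injection   # this round's spending enters the economy
        injection *= mpc     # recipients re-spend only this fraction
    return total

# Public investment paid to contractors who mostly re-spend it (MPC ~ 0.9)...
print(round(total_spending(1.0, 0.9), 2))  # prints 10.0
# ...versus a tax cut largely parked in the bank (MPC ~ 0.3).
print(round(total_spending(1.0, 0.3), 2))  # prints 1.43
```

The MPC values are illustrative assumptions, but the qualitative point stands for any plausible pair: money that keeps circulating multiplies far more than money that stops in a rich household’s savings.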

Related to this are specious claims that household and national finances work the same way, so that when a government invests in infrastructure through deficit spending, for example, it is “stealing from future generations”. Since investments just are investments in the future, this idea is nonsensical. Not only are the budgetary time scales of nations and families radically different; any family will eventually have to repay its debts or go bankrupt, whereas nations can and do carry debts indefinitely, so long as their economies have sufficient capacity. In particular, if an economy is growing at a rate higher than the interest rate on its debt, then the debt is likely to be manageable. Krugman points out that there are two distinct kinds of national economic conditions: normal conditions, when extra government spending can “crowd out” private borrowing by sending up interest rates, and depressed conditions, when reducing interest rates can fail to stimulate private borrowing due to a lack of confidence, and public spending may be needed to keep the economy from further tanking. Whereas many consider Obama’s spending during the Great Recession a failure, it was at least a half success, and the fully successful public spending response of that time was better exemplified by Australia under a Labor government. The corresponding considerations for family budgets are non-existent.
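The growth-versus-interest point can be checked with a line of arithmetic. Assuming a balanced primary budget (an idealization, for illustration only), the debt-to-GDP ratio is multiplied each year by (1 + r)/(1 + g), so it shrinks whenever the growth rate g exceeds the interest rate r:

```python
# A minimal sketch of standard debt dynamics (illustrative arithmetic, not from
# the book): with a balanced primary budget, the debt-to-GDP ratio is multiplied
# each year by (1 + r) / (1 + g), where r is the interest rate on the debt and
# g is the economy's growth rate.

def debt_to_gdp_path(d0, r, g, years):
    path, d = [d0], d0
    for _ in range(years):
        d *= (1 + r) / (1 + g)  # interest accrues; growth dilutes the ratio
        path.append(d)
    return path

# 100% debt-to-GDP with 2% interest but 4% growth: the ratio shrinks steadily
# over 30 years without a single dollar of net repayment.
print(round(debt_to_gdp_path(1.0, 0.02, 0.04, 30)[-1], 2))  # prints 0.56
```

With r above g the same arithmetic runs the other way, which is exactly why the relation between growth and interest rates, not the bare size of the debt, is what matters, and why the household analogy fails.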

QAnoners, conspiracy theorists and others committed to zombie arguments are much like cultists. With a strong commitment to seeing confirmations of their views and reinterpreting disconfirmations as neutral or even positive, directly confronting them with sensible arguments is actually counterproductive. It is almost certainly better to have a friendly chat about the weather (as long as the weather isn’t extreme!). But engaging with those who are not yet committed, who may even be toying with zombie arguments, will be purposeful as long as science, civilization and democracy have any remaining signs of life. We should all thank Krugman for showing us how to engage.

Social Media Need to Be Regulated


– Kevin B Korb

21 February 2021

Social Media

When Tim Berners-Lee invented the worldwide web in 1990 (not the internet! The internet was effectively invented in the 1960s and first given form as the “ARPANET”, well before Berners-Lee or Al Gore became involved), a starry-eyed idea was very prominent: that the internet would spread a love of knowledge and freedom around the world, if it were simply left alone by politicians. Most of us, having experienced the rise of social media on the back of the web and the internet, have since been disabused of such notions, if we ever had them. While the webnet has made science, journalism and entertainment very much more widely available than ever before, it has notoriously also made available huge amounts of misinformation and disinformation, as well as private and semi-private places in which correspondents from around the world can cooperate in burnishing the stories embodying them, spreading misunderstanding like a dark cloud over the world. Also notoriously, well-financed state organizations, such as St Petersburg’s IRA, can and do orchestrate disinformation campaigns using unsuspecting useful idiots. In short, much of the internet now operates as a kind of intellectual cesspool, one which no one is yet cleaning up.

Regulation

In keeping with this spirit of an unregulated wild west, social media have thus far escaped much of the burden of direct regulation. Google, Amazon, Facebook, Twitter, Netflix and others have captured huge amounts of personal data from their users and converted that information into huge revenue streams, in large part through capturing much of the worldwide advertising market. The regulatory environment that used to keep broadcasters and news organizations in check no longer applies. Of course, social media companies are corporations and come under existing regulations that apply to most corporations, such as tax and anti-trust laws. But their basis in new technology means that laws and regulations have, for the most part, yet to catch up with their behavior, influence and evasions. More than most traditional companies, for example, they have been highly adept at minimizing taxes, by having related companies provide services from low-tax countries and paying for them in high-tax countries, thus reducing profits where they hurt and maximizing them where they don’t. Of course, that’s an old game which manufacturing companies have played since well before Google or Twitter existed. However, moving manufacturing plants to low-tax districts is a good deal harder than moving around the nominal location of a web-based service, which can be provided from anywhere connected to the internet.

Not only do social media companies live in a low-regulatory environment, there are only poor prospects for that changing in an economic world largely dominated by an excess of a neoliberal ideology which views regulation as tantamount to corporate murder. However, it has never been more necessary to oppose this view: the threats to individual liberties and privacy posed by technology, both the communications technology of the internet and the emergence of applied AI, have never been greater and will not be controllable without proper regulation.

The Fairness Doctrine

The Fairness Doctrine was a part of the US Federal Communications Commission’s (FCC) regulatory framework from 1949 until 1987. The FCC did (and does) have regulatory authority over broadcast licences and used to enforce the Fairness Doctrine, which was:

The doctrine that imposes affirmative responsibilities on a broadcaster to provide coverage of issues of public importance that is adequate and fairly reflects differing viewpoints. In fulfilling its fairness doctrine obligations, a broadcaster must provide free time for the presentation of opposing views if a paid sponsor is unavailable and must initiate programming on public issues if no one else seeks to do so (The Fairness Doctrine, 2008).

In a shorter form, the Fairness Doctrine required broadcasters to cover issues of public interest in a manner that was fair and balanced. This was not interpreted as providing equal time for all points of view, but some coverage for important issues, plus some coverage for legitimate alternative points of view to what broadcasters had already presented. The doctrine had teeth and led to the cancellation of multiple licences (e.g., Watson, 2015; Parker, 2008). In fact, its effectiveness in supporting fair and balanced debate is arguably the reason that Ronald Reagan and his Republican supporters scrapped the rule in 1987.

USA Today has done a "Fact Check" on whether the scrapping of the Fairness Doctrine gave rise to the polarization in the US media most clearly exemplified by Fox News. They conclude that this is untrue, since the FCC's jurisdiction was limited to broadcasters, and cable news was not considered a broadcaster. Their argument is defective, however.

USA Today acknowledges that the Fairness Doctrine was effective in getting individual licensees to provide balanced coverage of issues. But they ignore the fact that the scope of the FCC's jurisdiction was in dispute in the 1980s. Already in 1968 the Supreme Court acknowledged the FCC's jurisdiction over cable, despite cable not technically being a broadcast medium, on the grounds that otherwise the FCC would be unable to fulfill its intended role. Then in 1972 the FCC explicitly imposed the Fairness Doctrine on cable operators. During the 70s and 80s these rules were slowly wound back until, under Reagan-appointed commissioners, the FCC scrapped the rule, with Reagan vetoing a Congressional attempt to retain the Fairness Doctrine. In other words, before 1987 the Fairness Doctrine was successfully applied to cable, and Reagan terminated that, not just for cable, but also for broadcasting.

The result was that a cultural acceptance of news programs being balanced dissipated, in both cable and broadcasting. Fox News would never have been possible without these actions, despite their 100% phony slogan of being "Fair and Balanced" themselves. The USA Today "Fact Check" is well worthy of Three Pinocchios.

However, what I want to target here is Oreskes and Conway’s (2010) argument, in their otherwise excellent Merchants of Doubt, that the Fairness Doctrine did a great deal of damage to public discourse by making false equivalencing (“whataboutism”) a kind of norm, in counterpoint to the criticism that its scrapping has done damage by fostering polarization (see box above). They provide a detailed and well-argued account of how false equivalencing has undermined the public discussion, and so the public decision making, surrounding the harms of tobacco use, acid rain, pesticides, ozone degradation through CFCs and anthropogenic global warming. These issues are all importantly linked. They have all spawned devoted groups of deniers who fervently oppose regulatory measures for minimizing the harm caused by the related industries — and these groups are largely overlapping, fueled by a common set of rightwing think tanks and common pools of money. (About the money, an especially revealing read is Nancy MacLean’s Democracy in Chains.) While it’s clear that the scrapping of the Fairness Doctrine has encouraged voices of extremism, especially those backed by Rupert Murdoch, it’s also arguable that the Fairness Doctrine itself gave cover to extremists demanding to be heard on these and other topics — because it’s only fair! — when by rights they would have had a much smaller voice, should volume be in any way proportional to the merits of the cases being advanced. In a nutshell, that is Oreskes and Conway’s argument.

For those who look to the potential value of regulation returning to the role of promoting effective and useful public discourse, and to the Fairness Doctrine specifically as a model for that, this is an argument that must be addressed. The primary weakness in it is Oreskes and Conway’s elision of two key features of the Fairness Doctrine (at any rate, on my interpretation and that of Wikipedia, 2021). It explicitly does not provide for equal time for differing viewpoints, but some reasonable, if lesser, amount of time for legitimate alternative viewpoints (see box below). It provides for no (mandatory) time for illegitimate points of view. The legitimacy of differing points of view is up for debate in many cases, of course, and, when the Fairness Doctrine was in existence, legitimacy was ultimately settled by the courts, which have always been a rational backstop for deciding the limits of public discourse. Where the claims of a faction have been thoroughly discredited by science — as they have been in all the cases discussed in Oreskes and Conway’s book, and indeed already were at the times of the debate over their regulation — there is no need under the Fairness Doctrine to give any time to those points of view, nor would the courts force the presentation of illegitimate nonsense, regardless of the funds behind it. The push for false equivalency is indeed a prominent tactic of deniers of science, but, if it drew upon the Fairness Doctrine before 1987, then it did so without justification and under false pretences.

I am not an expert in the law, let alone in FCC law, but there are clear indications in US Supreme Court findings supporting my lesser point that legitimacy to some (unspecified) standard was required of a thesis or point of view before the Fairness Doctrine could be invoked, which I quote below. Regardless, even if my interpretation is mistaken, the more important point is that it could be true. If we are to adopt some version of a Fairness Doctrine for use in regulating social media, it needs to be one which supports legitimacy and rules out the discredited. Here are some quotes pertinent to the lesser issue (note that allowing disproven, illegitimate points of view a significant voice is clearly not in the public interest):

Referring to legislation supporting the Fairness Doctrine, the US Supreme Court observed: ‘Senator Scott, another Senate manager [of the legislation], added that: “It is intended to encompass all legitimate areas of public importance which are controversial,” not just politics.’ (US Supreme Court, 1969)

‘The statutory authority of the FCC to promulgate these regulations derives from the mandate to the “Commission from time to time, as public convenience, interest, or necessity requires” to promulgate “such rules and regulations and prescribe such restrictions and conditions . . . as may be necessary to carry out the provisions of this chapter . . . .” 47 U.S.C. 303 and 303 (r). The Commission is specifically directed to consider the demands of the public interest... This mandate to the FCC to assure that broadcasters operate in the public interest is a broad one, a power “not niggardly but expansive.”’ (US Supreme Court, 1969)

The Fairness Doctrine is repeatedly described as supporting broadcasting on important public issues, which would rule out, for example, giving time to flat-earthers. For example, "[licensees have] assumed the obligation of presenting important public questions fairly and without bias." (US Supreme Court, 1969)

On the restrictions imposed by the Fairness Doctrine on broadcasters' freedom of choice: "Such restrictions have been upheld by this Court only when they were narrowly tailored to further a substantial governmental interest, such as ensuring adequate and balanced coverage of public issues." (US Supreme Court, 1984)

Given the widespread and growing flow of misinformation and disinformation on social media, the Fairness Doctrine, or rather some descendant of it that also incorporates protection of the public from promulgation of the illegitimate, could provide the justification and means of choking that flow and so allowing social media to serve the truly useful purpose of supporting a “marketplace of ideas” instead of being a poisonous wet market spawning misinformation pandemics.

In short, regulations sharing purpose with the Fairness Doctrine are fair game for nations wanting to foster valuable public debate, which is part of the foundation of any democracy. Such regulation is needed for traditional broadcasters. The US Supreme Court extended the doctrine to cable networks on the grounds that the FCC could not fulfill its function if cable were excluded. On the very same grounds, but with even stronger force, such regulation needs to be applied to internet- and web-based social media, which have collectively outgrown both broadcasting and cable in their reach and importance for public debate.

The GDPR

The EU’s General Data Protection Regulation (GDPR) was introduced in 2016, establishing EU-wide principles for the protection of personal data, including rights to informed consent to the collection of data and the restriction of its use to the purposes for which consent was given. The GDPR also provides for enforcement powers, with each member country having a Data Protection Authority (DPA) to investigate and prosecute violations. Of course, those US tech companies which have so successfully “monetized” your data objected long and loud to the GDPR. Once it became operational, however, they went quiet, since, while there are compliance costs, compliance is in fact feasible and doesn’t stop them earning money in Europe. The rest of the world benefits from EU regulation in a minimal way, when companies are either obliged to obey the GDPR because of doing business with the EU or where they simply prefer a uniform way of doing business across jurisdictions.

Social media’s goals for data acquisition are largely about (mis)using the data to better target advertising, because that’s largely where the revenue comes from. If users voluntarily agree to such use, knowing the scope of the usage in advance, that’s fair enough. And that’s exactly what the GDPR allows, as well as what it limits data usage to. But the threats involving data acquisition are now hugely greater than simply making money. Facial recognition software is now routinely used by police. With much of the world playing catch-up with Chinese-level camera surveillance, the potential for abuse of such information is enormous. Deep fake technology has the potential to weaponize personal data, directing far more effective and manipulative advertising at you, as well as spreading more persuasive misinformation about you and the groups you belong to. Identity theft using deep fake videos, for example, will be much easier than with earlier technology. As another example, blackmail and extortion based on compromising information have long been lucrative activities for criminals; blackmail and extortion based on compromising deep fake misinformation will be orders of magnitude easier. Deep fakes will not long be limited to passive video and audio; they will soon be extended to real-time interactive simulations of a targeted victim, providing even more persuasive power for fakery (Wang, 2019). With the near-term development of the “Internet of Things” — wiring all of our refrigerators, cars, air conditioning systems, etc. into the internet — the raw data on which Surveillance Capitalism operates will expand exponentially for the foreseeable future. The rise, and combination, of Big Data and Machine Learning using Big Data (e.g., Deep Fakery) portends parlous times on the net. Berners-Lee style enthusiasm for a “free range” on an internet wild west is no longer so much quaint as simply dangerous.

Real News

There is still news reporting and journalism in the world. There are both private and public organizations which put a good deal of effort and money into tracking what’s happening of interest around the world and presenting it to their audiences. This is so despite, for example, US newspaper advertising revenue having declined about 55% between the invention of the worldwide web and 2018 (per a Pew Center report), while over the same period US social media ad revenue grew from nothing to 3,571 times that of the newspapers (i.e., 357,100% of newspaper ad revenue). Since news organizations originate and curate their news and opinion reports, it is reasonable to hold them accountable for the content, for example by allowing some defamation actions against them. Social media, on the other hand, simply offer platforms for others to write or speak upon. Especially given the size of their memberships, it is both impossible and unreasonable to expect them to police the content of posts in the same way as news media. Or, at least, that is the common view.

Indeed, this is the rationale behind the now famous Section 230 of the Communications Decency Act, part of the US Telecommunications Act of 1996 (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). Making social media responsible for content, when that content is put out by millions or even billions of people, would make social media unviable. Not even the AI of the foreseeable future would be able to police their posts effectively enough to avoid huge legal costs. It’s possible, of course, that with Section 230 removed the courts would find a balance between the financial and operational health of social media organizations and their legal opponents, but there is no guarantee. The nominal reason Donald Trump wanted Section 230 deleted was that social media were censoring rightwing voices. The reality, of course, is that without the protection of Section 230 social media would be forced to censor even more heavily or else simply shut down.

What I am calling for here is an even heavier forced censorship, in addition to new privacy protections. Illegitimate claims pushed by Russia’s IRA, QAnon, climate deniers, and big monied interests must lose their voices. They are diminishing, not enriching, public debate. Illegitimacy, exposed by science and confirmed by courts, must not be heard. How exactly to make such a mandate operational is an open question. There need to be independent authorities for initial judgments of fairness and legitimacy, in analogy to the GDPR’s Data Protection Authorities, where independence means a lack of dependency upon both the social media organizations and any nation’s politics. In view of the latter, unlike the DPAs, it would be best if the new authority were explicitly international. There are plenty of precedents in international law for such organizations: successful examples include the UN’s Universal Postal Union, which coordinates worldwide postal systems, the UN’s International Maritime Organization, which regulates international shipping, and the World Trade Organization, which regulates international trade.

While forcing social media to report matters fairly, including intervening in their users’ mis/disinformation, would be a new burden on them, it is nothing like the threat that revocation of Section 230 would pose. If social media are judged directly responsible for misinformation, through negligence perhaps, then penalties might be in order. But if a UN authority points out that some accounts are spreading disinformation, the existing practice of deleting those accounts would likely suffice to contain the matter. There is no need to threaten social media with punitive monetary damages. What we need is for public discourse to converge on civility, not suppression.

Free Speech

Open platforms resemble the public square, and the free discussion of politics that takes place on these platforms resembles an open marketplace of ideas. (Schweppe, 2019).  

What about free speech? If social media organizations are to be made and held responsible for providing something akin to a digital public square — a forum where any public issue may be discussed within the bounds of public decency and fairness — then won’t our right to free speech be infringed? On any reasonable understanding of these terms, the answer is “No”. The requirement of public decency has always been maintained for public squares. Fairness was introduced in the US in the mid-twentieth century, but appropriately. It was always at least implicitly a requirement of real public squares in any case: any citizen who pulled out a bullhorn and spoke over everyone else would have been hauled off for disturbing the peace.

Democracy depends upon free speech, and it is fitting that it is protected in the very first amendment of the US Bill of Rights. But that right has never been absolute, nor can it be. The community decides what constraints to put upon it, but there is no community which allows an unfettered freedom to abuse, incite hatred, or endanger people. Older-style libertarianism asserted individual rights, including speech rights, up to, but not beyond, the boundaries of others’ rights (i.e., there is an obligation to “refrain from violating the rights of others”, van der Vossen, 2019). Since libertarianism recently married neoliberal fanaticism, however, it seems all constraints are off: individual rights, for example, now extend to refusing to wear masks during a pandemic, that is, to a newly invented right to infect and kill other people. The logical extension of such libertarianism to all varieties of behavior would turn libertarian moral philosophy into Thrasymachus’s “might makes right” — that is, a full-throated cry to be evil.

Oreskes and Conway meticulously trace much of this neoliberal-libertarian fusion back to the monied interests fighting against regulation in the public interest of the lucrative businesses of fossil fuel extraction, agriculture, manufacturing and tobacco. They maximize profits by putting all the burden of their “externalities” — pollution — on the public. Neoliberal libertarianism is a con.

Social media tech companies are playing an extension of that con. They adopt internal policing practices to monitor and control content exactly and only insofar as it is necessary to stave off the kind of regulation I’m calling for here. To the extent that regulation can be forestalled or avoided, the burdens of social media’s externalities can be foisted onto the public. These externalities include the polarization of public debate, the domination of monied interests of that debate through targeted advertising and the Murdoch press, the creation and magnification of extremely damaging conspiracy theories, the promotion of hate over cooperation. We cannot wait another generation to protect the public interest from these con artists.

Summary

Social media have grown from nothing to dominating public discussions around the world. They have evaded regulation so far very successfully in most cases. The growth in data collection, the rapid advance of AI technologies, the imminent flourishing of Deep Fake technology, the proven ability of interested parties to initiate and promote disinformation campaigns all point to an urgent and growing need for proper regulation of social media. The goals of such regulation should include at least the protection of personal data, the shackling of disinformation and the curbing of misinformation. The GDPR and the Fairness Doctrine provide some successful models — starting points — for considering such regulations. But the social media themselves are far richer and more far-reaching than the media of the past, spanning the worldwide web, so the regulations required must likewise be worldwide, preferably operating across borders as a neutral international body under international laws.

Acknowledgement

I thank anonymous reviewers for their helpful criticisms.

References

Fairness Doctrine (2008). West’s Encyclopedia of American Law, edition 2. Accessed February 7 2021 from https://legal-dictionary.thefreedictionary.com/Fairness+Doctrine

Parker, Everett (2008). The FCC & Censorship. Democracy Now. Accessed 7 February, 2021. https://www.democracynow.org/2008/3/6/the_fcc_censorship_legendary_media_activist

United States Supreme Court (1969). Red Lion Broadcasting Co. v. FCC, 395 U.S. 367 (1969). Decided June 9, 1969.

United States Supreme Court (1984). Federal Communications Commission v. League of Women Voters of California, 468 U.S. 364 (1984). Argued January 16, 1984; decided July 2, 1984.

Schweppe, J (2019). Hawley Defends the Public Space. First Things https://www.firstthings.com/web-exclusives/2019/06/hawley-defends-the-public-square. Accessed 16 Feb 2021.

van der Vossen, Bas (2019). “Libertarianism”, The Stanford Encyclopedia of Philosophy (Spring 2019 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2019/entries/libertarianism/>. Accessed 16 February, 2021.

Wang, G.E. (2019). Humans in the Loop: The Design of Interactive AI Systems, Stanford University Human-Centered AI.

Watson, Roxanne. “Red Lion Broadcasting Co. v. FCC”. Encyclopedia Britannica, 11 Sep. 2014, https://www.britannica.com/event/Red-Lion-Broadcasting-Co-v-FCC. Accessed 7 February, 2021.

Wikipedia (2021). “FCC Fairness Doctrine“. Wikipedia. Accessed 21 February 2021.

Some Clarifying Notes on Covid-19


— Kevin B Korb, 8 Dec 2020 (Revised 12 Dec 2020)

Here I put together some of the key arguments for some of the important issues concerning the Covid-19 pandemic (alternatively, the SARS-CoV-2 pandemic, since that is the virus causing Covid-19). (Nota Bene: Much of this was written well before the date of publication. Rather than update the content, which would take some time, I now fill it out and publish as is, since I believe it still makes a contribution.)

The arguments themselves are mostly quite simple. The disagreements about the issues largely lie in disagreements about what the underlying facts are, with covid deniers mostly using unreliable sources of information (I’ve had unsourced youtube videos offered as scientific evidence) or misunderstanding statistical reasoning or scientific methods. The fundamental solution, or mental repair work, has to do with learning methods of critical reasoning, properly checking sources, learning scientific and statistical methods, etc. I will point out specific problems of this kind, but readers may also wish to consult general guides to such matters. (I had written one, which Extinction Rebellion deleted without making any backup; I will recreate it someday.)

Some active commentators think that critical reasoning means rejecting anything “the authorities” might have to say, calling this “healthy skepticism”. In fact, it is unhealthy skepticism. Critical reasoning involves testing relevant propositions, neither rejecting them because you don’t like the source nor accepting them because you do. To be sure, critical reasoning is compatible with provisionally accepting or rejecting claims simply to save time and effort: no one can become an expert in every scientific field, which is why we have experts, and why sciences and other social institutions establish vetting and review processes to test and publicize their own standards of reliability. (If you’d like to learn about critical reasoning, the Stanford Encyclopedia of Philosophy article “Critical Thinking” is a good place to start.)

For my part, I give proper references, unlike conspiracy theorists.

It doesn’t help that both the CDC and WHO have lost a good deal of credibility on Covid-19. The CDC appears to have been captured by the Trump administration and is now taking political orders instead of (or more exactly, in addition to) promoting science-based policy. There are, of course, many good scientists remaining in the CDC, but their bosses are owned politically, with the result that pronouncements by CDC are more suspect than ever before. (See also CDC Director Redfield’s letter to governors of 17 August 2020, effectively announcing vaccines will be considered safe prior to Phase 3 trials.) The WHO depends upon financial support from member nations, with the result that their pronouncements are subject to influence by those nations. The silver lining to the US’s withdrawal from the WHO is that the US no longer has such influence.

Early doubts about masks expressed by US and WHO health authorities were partially motivated by political aims rather than science, such as Dr Anthony Fauci’s publicly stated goal of reserving better masks for health care workers. Unfortunately, he actually said, falsely, that there was no scientific evidence supporting the public use of masks. One of the major principles taught in public health education is to tell the public the truth: losing the public’s confidence is one of the sure-fire ways of losing the public health war. Dr Fauci violated the public trust. That does not, of course, mean that his subsequent statements are also false. For the most part, they appear to be accurate. Similarly, the WHO publicly repeated messages from the Chinese government uncritically, in particular claiming that there was no evidence that covid-19 is transmitted between humans and also claiming there was no evidence that covid-19 can be transmitted by pre- or asymptomatic people (e.g., “WHO Comments Breed Confusion Over Asymptomatic Spread of COVID-19“). Both claims were known to be false at the time. The WHO has, of course, retracted those comments, but only after much damage was done.

Where I reference the CDC or WHO below, I have found their comments to be well sourced in the case at hand; the reader can always follow those sources. I now briefly treat some of the more contentious public health claims about Covid-19.

Covid-19 is not a significantly harmful disease

Covid-19 is both highly infectious and, in comparison with the most common respiratory diseases, highly virulent. The median R0 estimate from a review of numerous studies (i.e., the expected number of people an infected person will infect without public health measures in place) is 2.8 (Liu et al., 2020). That rate implies rapid exponential growth in the early stages of an epidemic; indeed, any R0 above 1.0 implies exponential growth, and the larger the number, the more rapid the spread.

Common flus have R0s ranging from 0.9 to 2.1 (Coburn, Wagner and Blower, 2009), which, while lower than that of SARS-CoV-2, are generally enough to cause problems. The main relevant differences between these flus and Covid-19 are: there is considerable partial immunity to influenza in the population through prior exposure; there are vaccines to help protect vulnerable subpopulations; and the virulence, in both mortality and morbidity, is far less (multiple studies support an estimate of around 0.5% for the infection fatality rate of Covid-19; e.g., the meta-analysis by Meyerowitz-Katz and Merone, 2020).
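To see what these numbers mean in practice, the generational arithmetic can be sketched as a toy branching-process calculation. This illustration is my own, not from the cited studies: it assumes a fixed R0 with no immunity, no interventions, and no depletion of susceptibles, so it describes only the early phase of an epidemic.

```python
# Toy branching-process model of early epidemic growth: each case
# infects R0 others per generation. Assumes a fixed R0 with no
# immunity, interventions, or depletion of susceptibles.

def new_cases(r0: float, generation: int) -> float:
    """Expected new cases in a given generation, from one index case."""
    return r0 ** generation

def cumulative_cases(r0: float, generations: int) -> float:
    """Expected total cases after the given number of generations."""
    return sum(new_cases(r0, g) for g in range(generations + 1))

# Compare Covid-19's median R0 estimate (2.8) with a flu-like 1.4.
for r0 in (2.8, 1.4):
    print(f"R0 = {r0}: ~{cumulative_cases(r0, 10):,.0f} total cases in 10 generations")
```

With an R0 of 2.8 this toy model yields tens of thousands of cases within ten generations, versus roughly a hundred at 1.4, which is why even modest differences in R0 matter so much.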

Much of the outcry over public health measures is fueled by a denial that the mortality rate for Covid-19 is as high as some have claimed. The very first point to make is that this claim, even if true, would be insufficient to make the case that the common health measures, including wearing masks, are unnecessary. It entirely ignores the very large morbidity of the disease. To be sure, we do not yet know the long-term damage this disease does to survivors. But the simple-minded assumption that asymptomatic, or subclinical, victims bear no consequences (e.g., Trump claiming children are virtually immune) is, at best, willful ignorance. On the contrary, the growing weight of evidence is that subclinical victims suffer significant health damage (see, e.g., “Asymptomatic COVID: Silent, but Maybe Not Harmless”).

Schools should be open since children do not suffer significant harm from Covid-19

A recent BMJ study (27 Aug 2020) reinforces others showing that children and young people have less severe acute covid-19 than adults. Some early reports indicated that very few spreading events had been traced to schools; however, that has less evidentiary value than it might seem, since early on many schools were shut, and so could not have been sources of spreading events. Nevertheless, studies have shown that: when infected, children carry viral loads comparable to adults (Jones et al., 2020); children appear to spread the disease and have been the source of superspreader events (Kelly, 2020). Furthermore, the studies showing a high morbidity load for Covid-19 sufferers, including those with few or no symptoms, do not bode well for the future health of infected children. The disease affects every major organ in the body in many cases (e.g., Robba et al., 2020). Imposing those burdens on the children, and on their families and communities, is not a step to be taken lightly. Of course, as with all public health measures, the choice is not automatic; there must be a weighing up of benefits and harms. If the testing and contact tracing regime in a region or country is sufficiently robust, then schools may well be the first institution worth opening up.

Economics trumps health

It is widely and loudly argued that the health of the economy, affecting everyone and especially the poor, should come before the health of the few and, in particular, the health of the old and frail. The welfare of the 0.5% should not be allowed to dictate the lives of the remaining 99.5%.

This argument is fundamentally ignorant. The first thing it ignores is the very heavy morbidity load imposed on society by unchecked Covid-19. Subclinical sufferers may continue to work, but only by way of spreading the disease to coworkers. Assuming that’s not what “open economy” advocates have in mind, then subclinical victims will be out of the economy for the duration of their infectiousness only, one or two weeks. That’s around 40-50% of those infected. The rest will be out for the duration of their symptoms, ranging from a couple of weeks to many months. And there’s a very large tail of “long covid” patients who are incapacitated for at least months, perhaps years (Marshall, 2020). The “open economy” option thus implies allowing the spread of the disease and its consequent damage to the health of a very large percentage of the population, resulting in severe economic disruption for at least the duration of the pandemic.

The alternative view, one endorsed by many economists, is that caring for the health and well-being of society is the first step to sustaining, or rebuilding, the economy. A simulation study of the economics of pandemics by Barrett et al. (2011) directly supports this view. So too does the history of the 1918 Spanish Flu: a study of US cities shows that those which had more aggressive public health interventions, including masks and lockdowns, performed better economically (Hatchett, Mecher and Lipsitch, 2007).

Masks

Wearing masks is an individual choice, so the state has no right to mandate them

Assuming masks are effective in slowing a deadly pandemic, and that a deadly pandemic exists, this amounts to the claim that public health interests cannot override individual freedoms. Extreme libertarians might be enamored of such an argument, although libertarianism traditionally does not endorse the right to harm others, which violating mask mandates in these circumstances certainly can do. For example, the Stanford Encyclopedia of Philosophy article on Libertarianism states:

While people can be justifiably forced to do certain things (most obviously, to refrain from violating the rights of others) they cannot be coerced to serve the overall good of society, or even their own personal good.

Infecting others with a deadly disease violates others’ rights, of course. There is no accepted principle that absolutely asserts public health rights over individual rights, or vice versa. Society as a whole, through its institutions and public opinion, must adjudicate particular cases. But the claim of some that their individual freedoms always trump public health orders is simply stupid.

Masks are ineffective

Of course, mandating masks is pointless, an arbitrary and unnecessary restriction of people’s choices, if they have no effect on the disease. However, we have known for around one hundred years that they are effective in slowing and reducing the spread of many respiratory diseases such as Covid-19. The history of the 1918 flu epidemic includes an interesting episode in the response of San Francisco (see also the Anti-Mask League of San Francisco). The short version is that mask wearing was accepted initially, and the first wave of the flu was bad enough; but after the rules were relaxed, a second wave came, when resistance to masks was much greater. Partly as a result, the second wave was far more devastating.

More direct evidence has become available in the meantime. Respiratory diseases such as Covid-19 are spread in the first instance by air, through water droplets ranging from large to extremely small, the former generally being called “droplets” and the latter “aerosols”. There are notable differences between masks, with some being more effective than others. So, any claim that masks are helpful in reducing Covid-19 spread most likely is making some restricted claim about a subset of possible masks. Finding that, say, a shawl or balaclava doesn’t help does not negate the claim.

Most masks have been proven effective at inhibiting the spread of larger droplets (see the CDC’s Considerations for Wearing Masks).

UCSF has an overview report on the effectiveness of masks that is worth reading, “Still Confused About Masks? Here’s the Science Behind How Face Masks Prevent Coronavirus.” To be sure, their update, indicating that valved masks are ineffective, is mistaken on multiple points. First, they (along with the CDC and various other health authorities) ignore the simple and obvious point that if you do effective “sink control”, eliminating transmission at the recipient end, then you eliminate transmission. It takes two to tango. Second, there is in fact no evidence that significant (infectious) amounts of SARS-CoV-2 escape through the valves; this is possible, but the evidence is thin. (Here is an interesting Salon article on this subject.) On other matters, however, the UCSF report is solid, in my opinion.

Masks are dangerous

Granted that masks are effective, some have claimed that they are dangerous. The danger may well counterbalance, or overbalance, the benefits, so, if true, this would make existing mask advice and mandates suspect. On the face of it, the claim is absurd, since medical practitioners have been wearing masks without observed ill effect for over one hundred years. Beneath the face of it, the claim is still absurd. You can read this Fact Check put together by the BBC.

References

Calculated Surprises: A Philosophy of Computer Simulation — A Review


Johannes Lenhard, Calculated Surprises: A Philosophy of Computer Simulation, Oxford University Press, 2019, 256pp., £47.99 (hbk), ISBN 9780190873288.

Reviewed by Kevin B Korb, Monash University

First published in the Notre Dame Philosophical Review.

In the early days of electronic computers there was considerable doubt about their value to society, including a debate about whether they contributed to economic productivity at all (Brynjolfsson, 1993). A common view was that they made computations faster, but that they were not going to contribute anything fundamentally new to society. They were glorified punchcard machines. Such was the thinking behind such infamous predictions as the one attributed to the president of IBM in 1943, that there may be a world market for five computers. Of course, by now such views seem quaintly anachronistic. Quantum computers offer the potential for exponential increases in computing power – and “nothing more” – yet they are the only way hard encryption is ever likely to be broken. Computers and the internet are all the evidence needed that some qualitative differences are breached by sufficiently many quantitative steps.

While these general questions have been resolved, the debate still echoes elsewhere, including in the philosophy of simulation. Some insist that the role of scientific simulation demands a radical new epistemology, whereas others assert that simulation, while providing new techniques, changes nothing fundamental. This is the debate Johannes Lenhard engages in Calculated Surprises.

Lenhard lands on the side of a new epistemology for simulation, though without landing very far from the divide. Rather than claiming there is some one special feature of simulation that demands this new epistemology, as some have, he berates those who focus primarily on this or that specific feature that appears special; the significant features, rather, are all special together. Per Lenhard, those significant features are: the ability to experiment with complex chaotic systems, the ability to visualize simulations and interact with them in real time, the plasticity of computer simulations (the ability to reconfigure them structurally and parametrically), and their opacity, that is, our difficulty in comprehending them. It is the unique combination of all these new features which forces us onto new epistemological terrain.

More exactly, Lenhard’s central thesis is that this combination means simulation is a new, transformative kind of mathematical modeling. To see what the unique combination produces, one needs to consider the full range of features, and therefore also the full range of kinds of computer simulation. Focusing only on a single type of simulation is as limiting as focusing on a single feature, per Lenhard. For example, much existing work exclusively considers models using difference equation approximations of dynamic systems, such as climate models. But conclusions reached on that basis are likely to overlook the rich diversity of modeling characterized by such methods as Cellular Automata (CA), discrete event simulation, Agent-Based Modeling (ABM), neural networks, Bayesian networks, etc.
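As a concrete taste of the simplest of these model classes, a one-dimensional cellular automaton can be simulated in a few lines. This sketch is mine, not Lenhard’s; Rule 30 is chosen because, though strictly deterministic, it produces complex, hard-to-predict behavior from a trivial seed.

```python
# A one-dimensional (elementary) cellular automaton. Rule 30 is
# strictly deterministic, yet generates complex patterns from a
# single live cell.

RULE = 30  # Wolfram rule number encoding the 8-entry update table

def step(cells: list[int]) -> list[int]:
    """Apply the rule to every cell, with fixed zero boundaries."""
    padded = [0] + cells + [0]
    return [
        (RULE >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# Evolve a single live cell for a few generations and print the pattern.
row = [0] * 15 + [1] + [0] * 15
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Even this tiny deterministic system exhibits the opacity Lenhard emphasizes: the rule is fully known, yet the pattern it produces resists shortcut prediction.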

Striking the right level of generality in treating simulation is important. Clearly, one can be either too specific or too general. In this moderate stance, Lenhard is surely right.

Plausibly, the class of simulations is bound together by family resemblance, rather than by some clean set of necessary and sufficient conditions. It is a pity, then, that Lenhard rejects upfront any consideration of stochasticity as an important feature of simulation. He says, reasonably, that some sacrifices have to be made (“even Odysseus had to sacrifice six of his crew”). And it is true that some simulations are strictly deterministic, not even using pseudo-randomness, as with many CA. But it is also true that stochastic methods are key to most of the important simulations in science. Furthermore, they have opened up genuinely new varieties of investigation, including all the varieties of Monte Carlo estimation, and they are essential for meaningful Artificial Life and ABMs. This is a major and unhappy omission in Lenhard’s study.
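To make concrete what is set aside, here is a minimal sketch (my illustration, not from Lenhard) of the simplest kind of Monte Carlo estimation: sampling points uniformly in the unit square and counting how many land inside the quarter circle gives an estimate of π.

```python
import random

def estimate_pi(n_samples: int, seed: int = 0) -> float:
    """Monte Carlo estimate of pi from uniform samples in the unit square."""
    rng = random.Random(seed)  # fixed seed: pseudo-random, hence reproducible
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:  # point falls inside the quarter circle
            inside += 1
    # The fraction inside estimates pi/4, the area of the quarter circle.
    return 4 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159; error shrinks as ~1/sqrt(n)
```

Nothing in the method depends on knowing a closed-form answer, which is exactly why such estimators opened up new kinds of investigation in the sciences.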

One of the aspects of simulation Lenhard definitely gets right is the iterative and exploratory nature of much of it, with his emphasis on the process of simulation modeling. The ease of performing simulation experiments, compared to the expense and difficulty of experiments in real life, doesn’t just allow millions of experiments to be run per setup (routinely driving the confidence intervals of estimated values to negligible sizes, at least for stochastic simulations); it also allows early simulation runs to inform the redesign or reconfiguration of later simulations, in an exploratory interaction between experimenter and experiment. Instead of relying on the outcomes of a few experimental setups to provide clear evidence for or against some theory driving the experiment, simulation allows for an iterative development of the model, with early experiments correcting the trajectory of the overall program. This underwrites much of the “autonomy” of simulation from theory. If the theory behind a simulation is incomplete, or simply mistaken in part, simulation experiments may nevertheless direct the research program, with feedback from real-world observations, expert opinion, or subsequent efforts to repair the theory. As Lenhard writes, in simulation “scientific ways of proceeding draw close to engineering” (p. 214).
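The point about confidence intervals can be made concrete with a toy stochastic experiment (my illustration, with hypothetical numbers): estimating the bias of a fair coin from ever more simulated flips drives the 95% interval toward zero width, since the half-width shrinks as 1/√n.

```python
import math
import random

def estimate_with_ci(n_runs: int, seed: int = 1) -> tuple[float, float]:
    """Estimate P(heads) from n_runs simulated fair-coin flips, returning
    (estimate, 95% CI half-width via the normal approximation)."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n_runs))
    p_hat = heads / n_runs
    half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_runs)
    return p_hat, half_width

for n in (100, 10_000, 1_000_000):
    p_hat, hw = estimate_with_ci(n)
    print(f"n={n:>9}: estimate = {p_hat:.4f} +/- {hw:.4f}")
```

At a million runs the half-width is under a thousandth, which is what "negligible" means in practice: the remaining uncertainty is sampling noise, not model disagreement.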

Indeed, Lenhard points out that simulation science requires an iterative development of models. In many cases, the theory implemented in a simulation is very far from sufficient even to provide a qualitative prediction of the simulation’s behavior. In one example, Landman’s simulation of the development of a gold nanowire contradicted the underlying theory; only after the simulation produced the phenomenon was a physical experiment run that confirmed it (Landman, 2001). The underlying physical theory inspired the simulation, but the simulation itself forced further theoretical development. This aspect of simulation science explodes the traditional strict distinction in the philosophy of science between the contexts of discovery and justification. The distinction may retain analytic value, for example when identifying Bayesian priors and posteriors in an inductive inference, but in simulation practice the contexts of discovery and justification are one and the same. To be sure, Lakatos’s concept of scientific research programs throwing up anomalies and overcoming them (Lakatos, 1982) already weakens the distinction, but in simulation science the necessity of combined discovery and justification is ever present.

In connection with iterative development, scientific simulation has converged even more closely with engineering, widely adopting the “Spiral Model” for agile software development, which is precisely an iterative development process set in opposition to one-shot, severe tests of theoretical (program) correctness, i.e., in opposition to monolithic software QA testing. The Spiral loops through: entertaining a new (small) requirement, designing and coding to fulfill the requirement, testing the hoped-for fulfillment, and then looping back for a new requirement. This equivalence of process makes good sense given that simulations are software programs. To better understand simulation methods as scientific processes, a deeper exploration of this equivalence than Lenhard provides would be useful.

The epistemic opacity of simulation models is one of the notable features Lenhard highlights. Human insight into how a simulation works is often quite limited, a fact which elevates the importance of visualizing the intermediate and final results of a simulation and of interacting with them. Lenhard points out that this raises issues for our understanding of “scientific understanding”. Understanding is traditionally construed as a kind of epistemic state achieved within the confines of a brain. Talk of an “extended mind” brings home the important point that books, pens, computers and the cloud significantly enhance the range of our understanding, allowing us to “download” information we haven’t bothered to memorize, for example. But there still needs to be a central agent who is the focal point of understanding, at least in common parlance. Lenhard promotes a more radical reconception: that it is something like the system-as-a-whole that does the understanding. The human-cum-simulation can perform experiments, make predictions, advance science, even while the human, acting or examined solo, has no internal comprehension of what the hell the simulation is actually doing. Since successful predictions, engineering feats, etc. are standard criteria of human understanding, we should (on this view) happily attribute understanding to the humans in a simulation system satisfying those criteria. This seems to be much of the basis for Lenhard’s claim that simulation epistemology is a radical departure from existing scientific epistemologies: it radically extends our understanding of scientific understanding. I’m afraid I fail to see the radical shift, however. Anything described as understanding attributed to humans within a successful simulation system can just as easily be described as a successful simulation system whose human operators lack full understanding of the theory behind it. Lenhard fails to elucidate any clear benefit from this shift in language.
On the other hand, there is at least one clear benefit to conservatism, namely that we maintain a clear contact with existing language usage. We are all interested in advancing both our understanding of nature and our ability to engineer within and with it; it’s not obviously helpful to conflate the two.

Epistemic opacity also has epistemological consequences that Lenhard does not fully explore. While he emphasizes, even in his title, that simulation experiments often surprise, he does not point out that where the surprises are independently confirmed, as with the Landman case above, this provides significant confirmatory support for the correctness of the simulation, on clear Bayesian grounds. For those interested in this kind of issue, Volker Grimm et al. (2005) provide a clear explanation, from the point of view of Agent-Based Models (ABMs) in ecology.
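The Bayesian point fits in a few lines (my illustration, with made-up numbers): confirming a prediction that is surprising, i.e., improbable if the simulation were wrong, raises the probability of the simulation's correctness far more than confirming an unsurprising one, because the likelihood ratio P(E|H)/P(E|¬H) is much larger.

```python
def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """P(H|E) by Bayes' theorem, where H = 'the simulation is correct'
    and E = the prediction was confirmed by independent experiment."""
    p_e = prior * p_e_given_h + (1 - prior) * p_e_given_not_h
    return prior * p_e_given_h / p_e

prior = 0.5  # hypothetical prior that the simulation is correct

# A surprising prediction: very unlikely to be confirmed if H is false.
surprising = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.01)
# An unsurprising prediction: about as likely either way.
unsurprising = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.5)

print(f"surprising prediction confirmed:   P(H|E) = {surprising:.3f}")
print(f"unsurprising prediction confirmed: P(H|E) = {unsurprising:.3f}")
```

With these illustrative numbers, the surprising confirmation lifts the posterior to roughly 0.99, while the unsurprising one only reaches about 0.64: surprise, once independently confirmed, is evidence.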

Another unexplored topic is supervenience theory. This is more general than the theory of computer simulation, to be sure, but it is connected to the opacity of simulations and to complexity theory, and it arises especially acutely in the context of Artificial Life and Agent-Based Modeling, which provide not just an excuse but a pointed tool for considering supervenience. The very short form is: ABMs give rise to unexpected, difficult-to-explain high-level phenomena from possibly very simple low-level elements and their rules of operation (perhaps most famously in the “boids” simulating bird flocks; Reynolds, 1987). This goes by a variety of names, such as emergence, supervenience, implementation and multiple realization. It is not inevitable that a philosophy of simulation should encompass a theory of supervenience, but it is probably desirable.
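A minimal illustration of such emergence (my sketch, not an example from the book) is Conway's Game of Life, a CA whose two simple local rules give rise to persistent, mobile higher-level structures. The famous "glider" translates itself one cell diagonally every four generations, a fact nowhere visible in the rules themselves.

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One Game of Life generation over a set of live (row, col) cells."""
    neighbours = Counter(
        (r + dr, c + dc)
        for r, c in live
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is live next generation if it has exactly 3 live neighbours,
    # or 2 live neighbours and is currently alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(r + 1, c + 1) for r, c in glider})  # True: moved one cell down-right
```

The "glider" is a description at a higher level than anything in `step`; that gap between levels is exactly what emergence and supervenience talk tries to capture.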

It seems to me that in some respects an even more radical discussion of computer simulation than Lenhard’s is in order. Simulations are literally ubiquitous across the sciences; I am unaware of any scientific discipline which does not use them to advance knowledge. They are in wide use in astronomy, biology, chemistry, physics, climate science, mathematics, data science, social science, economics – and in many cases they are a primary and essential experimental method. Lenhard, oddly, at least appears to disagree, since he states that their common use has only reached “amazingly” many sciences, rather than simply all of them. I’d be interested to know which sciences remain immune to their advantages.

Lenhard’s Calculated Surprises introduces many of the issues that have been central to debates within the philosophy of simulation and adopts sensible positions on most. He points out, for example, that model validation grounds simulations in the real world, offering a methodological antidote to the flights of fancy of more extreme epistemologies. It is a book that patient beginners in the philosophy of simulation can profit from and that specialists should certainly look at. My main complaint, aside from its fairly turgid style (its German origin is clear enough), concerns the many important and interesting sides of simulation science that are simply ignored. The scope and limits of simulation are among them.

The ubiquity of simulation now goes even well beyond the domains of science themselves. It has recently found interesting and potentially important applications in history (e.g., University of York, 2020). Brian Skyrms has famously applied simulations to the study of philosophically interesting game theory (e.g., Skyrms, 2004). Social epistemology has employed simulation for some time already to answer questions about how collective beliefs and decisions may be arrived at (Douven, 2009; Salerno et al., 2017). I have applied simulation to the evolution of ethics and utility (Mascaro et al., 2011; Korb et al., 2016) and to studies in the philosophy of evolution (Woodberry et al., 2009; Korb & Dorin, 2011). I am presently attempting to build a computational tool for illustrating and testing various philosophical theories of causation. There is every reason to bring simulation into the heart of philosophical questions and especially into the philosophy of science. It is even plausible to me that instruction in simulation programming may become as necessary to graduate philosophical training as it already is in many of the sciences.

Paul Thagard formulated the key idea first: if you have a methodological idea of any merit, you should be able to turn it into a working algorithm (Thagard, 1993). Since a great deal of philosophy is about method, a great deal of philosophy not only can be, but needs to be, algorithmized. Simulation provides not just a test of the methodological ideas, and not just a demonstration of their potential, but also a test of the clarity of and relations between the underlying concepts, a test of the philosophizing itself. Who cannot simulate, cannot understand.

References

  • Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77.
  • Douven, I. (2009). Introduction: Computer simulations in social epistemology. Episteme, 6(2), 107-109.
  • Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H., Weiner, J., Wiegand, T. & DeAngelis, D. L. (2005). Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science, 310(5750), 987-991.
  • Korb, K. B., Brumley, L., & Kopp, C. (2016, July). An empirical study of the co-evolution of utility and predictive ability. In 2016 IEEE Congress on Evolutionary Computation (CEC) (pp. 703-710). IEEE.
  • Korb, K. B., & Dorin, A. (2011). Evolution unbound: Releasing the arrow of complexity. Biology & Philosophy, 26(3), 317-338.
  • Lakatos, I. (1982). Philosophical Papers. Volume I: The Methodology of Scientific Research Programmes (edited by Worrall, J., & Currie, G). Cambridge University Press.
  • Mascaro, S., Korb, K., Nicholson, A., & Woodberry, O. (2011). Evolving ethics: The new science of good and evil. Imprint Academic, UK.
  • Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th annual conference on Computer graphics and interactive techniques (pp. 25-34). ACM.
  • Skyrms, B. (2004). The stag hunt and the evolution of social structure. Cambridge University Press.
  • Salerno, J. M., Bottoms, B. L., & Peter-Hagene, L. C. (2017). Individual versus group decision making: Jurors’ reliance on central and peripheral information to evaluate expert testimony. PLoS ONE, 12(9).
  • Thagard, P. (1993). Computational philosophy of science. MIT Press.
  • Woodberry, O. G., Korb, K. B., & Nicholson, A. E. (2009). Testing punctuated equilibrium theory using evolutionary activity statistics. In Australian Conference on Artificial Life (pp. 86-95). Springer, Berlin, Heidelberg.