The Reasoning Well

This will be a collection of hypothetical lectures that I might have delivered over the course of my academic career, but didn’t. The goal of this course of lectures is to introduce a broad array of tools, ideas, or weapons for attacking reasoning problems, drawing on a broad range of disciplines. These lectures are meant to be introductory: readily understood by intelligent laypeople who have never studied those disciplines, and representing general-purpose methods that might become available to anyone who studies those disciplines at an undergraduate level. So, this collection is envisioned as a kind of Swiss-army knife for your brain. While that is my intention, I do not pretend to cover all the major disciplines; instead, I emphasize those which have had a substantial impact on my intellectual life.

I have taken inspiration from two prolific and excellent writers of articles for Scientific American, A.K. Dewdney and Martin Gardner. In partial consequence of their inspiration, these lectures are somewhat loosely connected; they are intended to be largely intelligible independently of one another, although cross-references will guide the reader through some kinds of dependencies. While this is not intended to be scholarly in the sense of detailing every historical line of thought behind these lectures, or attributing all details to their originators, I do indicate where readers might turn for additional information on these ideas.

The top-level topics I am covering (in tentative order) include: Philosophy, Bayesian Reasoning, Argumentation, Mathematics and Computer Science, Physical Thinking, Modeling and Simulation, Evolution Theory, Information, Ethics, Politics, Cognition and Inference. Posts will be “collected” using the tag #ReasoningWell.

Generative AI Does Not Violate Copyright, but Needs To 

— Kevin B Korb, 30 Mar 2024                   

There have been numerous lawsuits brought against generative AI companies by holders of copyright on creative works, claiming that generative AI infringes the copyright of material used in training those AIs. Some of those suits may succeed and some may fail. Furthermore, some of those that fail may well end up being widely acknowledged as deserving to have succeeded, even while failing. I will return to that last point at the end: it suggests that, perhaps entirely unsurprisingly, new technology requires new legislation. In the meantime, though, I want to argue that the underlying logic of many complaining copyright holders is flawed: they hold that even though generative AI companies have paid for access to the copyrighted material, the companies have no right to use it in training without explicit permission. The courts may end up disagreeing with me — indeed, in recent times US courts in particular have shown a distinct inclination to legislate on their own — but I think existing copyright law clearly cuts against their argument.

Note that I’m not saying that no copyright holder has a legitimate case to make under existing copyright law. There are two kinds of complaint that I address here. The first claims that some works of art or authorship produced by AI are so close to an original as to count as an illegitimate copy. These are the kinds of claims that creators have made ever since IP law was invented; indeed, providing protection against such infringement is the primary purpose of those laws. The second claims that using original works, whether access is paid for or not, to train AIs that then go on, not to copy those works, but to produce works that are not recognizable as copies, already violates copyright law. This kind of complaint is novel and, I will argue, unsupported by existing copyright law. There are now multiple lawsuits making this kind of claim, however (see, e.g., The Intercept, Raw Story and Alternet Sue OpenAI and New York Times Sues OpenAI and Microsoft).

There is some gray area between these two complaints. The boundary between infringement and fair use of existing works will likely always be litigated and always be adjusted. Existing IP law protects original works against derivative works that constitute infringement, wherever that line is drawn. But where people argue that the protection extends beyond what counts as infringement, that is overreach. For example,

In a case filed in late 2022, Andersen v. Stability AI et al., three artists formed a class to sue multiple generative AI platforms on the basis of the AI using their original works without license to train their AI in their styles, allowing users to generate works that may be insufficiently transformative from their existing, protected works, and, as a result, would be unauthorized derivative works. (Harvard Business Review, 7 Apr, 2023)

The judgment of whether generated works are insufficiently transformative is a normal one in IP law; the judgment that some of them may end up being insufficiently transformative — that the AI simply has the ability to infringe — is irrelevant to existing IP law, but nevertheless appears to be the motivating thought behind many complaints.

NYT in particular claims that OpenAI’s use of its reporting as training inputs “seek[s] to free-ride on the Times’s massive investment in its journalism by using it to build substitutive products without permission or payment.” (Guardian, 23 Dec 2023) Authors suing OpenAI have much the same motivation: “To preserve our literature, authors must have the ability to control if and how their works are used by generative AI.” (Guardian, 20 Sep 2023) The fear is not that GPT or other AIs will duplicate their training inputs, or produce modifications of them that are close enough to the original to count as infringements in court, but that the AIs will end up putting them out of their jobs by producing non-infringing works which perform the same roles in society that authors, musicians, newspaper publishers, etc. currently play. In other words, they fear for their incomes from future works, not the theft of their existing works.

The analogical case with humans would be where established authors, for example, sued new and upcoming authors for having learned their craft, in part, by reading the works of established authors. The new authors may well through their reading and study become better than the last generation of authors and so displace some of their sales. And they could not have done so without first reading those established authors. The same is true of painters who start out by copying copyrighted paintings to learn their craft. Needless to say, the same is true of every genre and form of creative work. If copyright law really prohibited learning from existing copyrighted works, without directly infringing on them, then creative arts would cease. That was never the intent, nor the practice, of existing copyright law.

Exactly the same reasoning applies to the Large Language Models of OpenAI and others, and to their training on existing copyrighted works. If existing copyright law can be used to put a stop to their training and becoming adept at their particular generative niche, then not only will this kind of AI cease to develop, but potentially lawyers will bring similar actions against new human artists and stop them from creating new artwork. This whole effort is ill conceived and should not (but may) succeed.

Regardless, there is a real problem, newly arisen with generative AI, that does need to be addressed. Human artists may slowly replace a prior generation of artists, but not, in general, by erasing their ability to earn an income; rather, through a natural replacement of the old with the new. Generative AI genuinely threatens to sweep away existing creative content in an unrestricted wave of similar, if non-infringing, new content. Much of the problem is just the unrestricted volume of similar work that can be rapidly created. It is a threat not just to mundane work and its producers, such as copy editing and report generation, but also to script writers, film directors, symphonic orchestras, painters, and so on. It is manifest that something needs to be done.

I suggest the answer, however, is not twisting existing copyright law into performing new functions badly, but in writing new laws that directly address the new problems.

New Law

I think the need for new copyright law is clear. As the Guardian has put it (28 Feb 2024):

The wave of lawsuits reflects a media industry-wide concern that generative AI will compete with established publishers as a source of information for internet users, while further sapping advertising revenues and undermining the quality of online news.

That’s a legitimate concern. Or, consider Aboriginal Australian painters who produce works in a particular range of styles and whose works bring high prices. Generative AIs can be trained to imitate them so well, and to produce similar works en masse, that their livelihoods might be destroyed. Society has an interest in protecting these kinds of businesses. While generative AI producers have a right to develop and apply their AI, creating new businesses or enhancing performance in old ones, existing and future artists have a similar right to continue, and to develop, their own businesses. Governments have the difficult, but necessary, task of balancing these two needs in new legislation. The time for prevarication on regulating new technology is well and truly over.

Current copyright law, being focused entirely on the similarity between specific end-products of generative AI and the original works they were trained on, can only cover these cases via an interventionist court that deliberately distorts the law. What is needed is, first, a general debate in society about what protections are required or desirable, and then new laws to achieve those ends without also killing off promising new technology. The expectation that courts will do the heavy legislative work misunderstands the role of the courts and, if met, will only lead to bad, unwritten IP law.

Why I am voting Yes for the Voice to Parliament

Kevin B Korb, 18 Sept 2023

The referendum on a Voice to Parliament will explicitly do two things: recognize Aboriginal and Torres Strait Islanders as the first peoples of Australia and establish a Voice to Parliament for them.

Many in the No campaign insist the former is a worthy act, while the second is not. Any number of reasons are given for the latter, a few of which I’ll review below.

A key argument is that a constitution is meant to be a foundational document for a democracy and ought to be limited to straightforward principles. In particular, it should not introduce distinctions between members of society but, as much as possible, provide a neutral grounding for citizenship for all. This argument makes very good sense to me. In an ideal world, the Australian constitution would not need to acknowledge anything about the diverse peoples of Australia.

However, we live in the real world. In the real world, not only did Britain invade Australia, it took the land and resources by force, largely taking them away from the original inhabitants. As many have pointed out, we — those living today — cannot relive the wars and conflicts of the past indefinitely. We can see every day what that leads to in Israel, for example — and it is nowhere good. But it does not follow that we should simply ignore the past and pretend that everything occurring today takes place on its own terms and merits. When the recent past directly influences the conditions in which current decisions are made, it becomes necessary to address those influences. If the less recent past sets conditions which impair, obstruct or injure those involved in current decisions, then that too must be addressed. We could simply ignore the British invasion if the descendants of the original inhabitants lived under circumstances, and with opportunities, equal to those of the descendants of the invaders. But that is so far from the case that no one believes it, except, apparently, Jacinta Price.

Everyone, except Price, accepts that “The Gap” is real and needs to be bridged. In particular, the disparities between indigenous and non-indigenous Australians in education, health, lifespan, economic well-being and incarceration rates are large and, at best, closing at rates that will bring Australians together only after hundreds of years. Addressing these disparities, indeed, is one of the motivating forces for the referendum in the first place.

In short, Australia has been and remains a very uneven society. The fundamental law of the nation is an appropriate place to start to redress that inequality.

The Uluru Statement from the Heart was produced by a convention of indigenous people and calls for a truth and reconciliation process, so that Australia can come together as a society and move forward as a society. A key part of the proposal is a Voice that will allow indigenous people to be heard on the decisions and actions of government, something that has been absent for over 200 years.

I believe that every nation on earth needs a truth and reconciliation process. No country has avoided war, oppression and hatred in the past, and almost all, if not exactly all, have indigenous people who have been disadvantaged. We can, of course, shrug our shoulders and say “c’est la vie”. But we can also say it’s time to get over our differences. Our collective danger in this initial era of global warming impacts gives us more than enough reason to start working together, within and between nations.

The Uluru Statement, and the resulting referendum, provide an opportunity that is important not to pass up. We can come together as a nation, or we can turn our backs on each other and continue living disparate lives with differing opportunities. The Voice to Parliament may, in its essence, be a symbolic act that offers no material benefits to our most disadvantaged people, as some have insisted. But for humans symbols matter. Aristotle and Descartes’ idea that language is an essential difference between humans and other animals may have been exaggerated, but not hugely: language and symbols are indeed our special skill and power. This is the right first step toward a healed community. 

I will also speak to some of the additional prominent objections to voting Yes.

  • Jacinta Price texted to me that the consequences of the referendum are unknown. “Don’t Know? Vote No.”

The full consequences of any action are unknown, so this advice is a recipe for a disastrous universal paralysis. It may also be an allusion to the point that the exact procedures of, powers of, and means of election to the Voice are left to Parliament to decide. This has been portrayed by many as an omission or defect of the referendum, but it is actually intentional. The people’s elected body, Parliament, is the proper place for such details to be worked out, not the nation’s foundational document.

  • The Voice will be divisive, singling out one race over others.

As I explained above, Australian history already has divided our races. The Voice will help bring the community together, which is the opposite of division.

In addition to that, the constitutional amendment makes no reference to race, but to the Aboriginal and Torres Strait Islander peoples. I think the right way to understand that isn’t racial, but as a recognition of the special historical place occupied by First Nations peoples. That is precisely how the United States frames its political relations with its First Nations peoples, with the US Supreme Court repeatedly rejecting a racial interpretation.

  • The powers of the Voice will be too extreme, allowing the indigenous community to object and stop not just parliamentary legislation, but also executive decisions of government.

As many constitutional experts have pointed out (e.g., Anne Twomey), this is simply untrue. The referendum provides only for a Voice, not an executive or parliamentary authority. If Parliament passes implementing legislation that confers these kinds of powers on the Voice, that is something it has always had the power to do. The referendum does not add anything to that power. Ensuring that Parliament doesn’t make a mistake of that kind is part of the normal political life of the country.

  • Another complaint is that the Voice doesn’t provide any power beyond a voice and therefore gets us nowhere towards closing the Gap.

Of course, this contradicts the point immediately prior. Nevertheless, it is false as well. As I pointed out above, symbols matter, and this referendum at least gets the symbols right. It is a small first step. Where we go after taking it is up to us, but it is less likely to be backwards once we’ve stepped forwards.

If Australia rejects this referendum, it will be telling the world that we are backwards and wish to remain so.

MacIntyre’s Dark Winter Puts the Question to Virology

Kevin B Korb, 3 Feb 2023

Raina MacIntyre has been one of the voices of reason amongst epidemiologists during the Covid pandemic. These voices were given a hearing in the first year or so by both media and politicians, followed, however, by a now longer period of waning immunity to irrational and foolish misinformation, leading to widespread encouragement and support for whatever SARS-Cov-2 variants are floating around.

Dr MacIntyre’s book Dark Winter reviews the Covid pandemic and, in particular, makes a case that it originated from a lab leak at the Wuhan Institute of Virology (WIV). That case is not exactly airtight; however, it does seem to support a preponderance-of-evidence kind of verdict. (I won’t commit to that: on my part it is a mere seeming; I have not investigated it myself.) She points out that the leak hypothesis received a dismissive response in the beginning, for three reasons: first, it was conflated with the entirely different hypothesis that the pandemic was intentionally caused by China in a bioterror attack (upon themselves, apparently); second, it was early on put forward by Donald Trump, in the conflated form, and many media commentators therefore immediately turned against it; and third, the community of virologists, acting not as a unity but independently in defence of their own interests, loudly and publicly denounced it as an impossibility, boosted by many science journalists. Virologists, despite having a clear conflict of interest (their research careers depend, in part, on lower levels of regulation of their labs), were repeatedly turned to as expert commentators on the issue and were even called upon by the WHO to participate in its investigation of Covid’s origins in Wuhan. That investigation was denied access by China to much of the most important information (e.g., the WIV’s genomic database), yet it reported that there was no credibility to the hypothesis of a lab leak at WIV as the source of the pandemic. According to the report, ignorance of evidence supporting a lab leak works just as well as evidence against it.

The conflicts of interest and the credibility of the virology community in dealing with Covid certainly deserve a much more critical examination by both media and politicians. Anthony Fauci, for example, has mostly gotten a free pass from the media on NIH funding of “gain-of-function” (GOF) research, which attempts to make viruses more dangerous, with a view to better understanding the threats they pose. NIH funding for it, during a declared moratorium on this kind of research, was apparently approved by Dr Fauci in the form of grants to EcoHealth Alliance, which in turn funded GOF research at WIV. Dr Fauci may not have known this, since there is at least one level of indirection; however, his flat claim to have not funded gain-of-function research during the moratorium on such funding appears to be false.

Dr MacIntyre goes on to review the long history of lab leaks in virological and bioweapons research. That history is far more alarming than most scientists have been willing to acknowledge, let alone communicate, and is a valuable contribution of the book. She quite rightly observes that much of the lab research of interest here is “dual-use” research, that is, research which has, perhaps, a primary civilian purpose, as in protecting public health, but potential military or terrorist uses as well. Whenever multiple different uses, or also accidental outcomes, are possible, there is a decision problem: do the likely benefits of the civil purpose outweigh the potential disvalue of accidental pandemics or intentional misuse?

Rather than MacIntyre’s question about Covid’s origin, I suggest the more important question put by her Dark Winter is:

Should we support lab virology in its gain-of-function or chimeric (recombinant) research into dangerous viruses or not? And, if so, with what controls?

This question has been raised before and argued around, without any clear resolution. Virologists, and before them bioengineers working on GMO products, have pointed to potential benefits of this kind of research. For a recent example, chimeric research on the Omicron variant determined that the coding for the spike protein cannot fully explain the lessened virulence of Omicron strains; the researchers went on to identify a distinct protein that is involved. Whatever the ramifications of this research, we can probably agree that knowing more about the mechanisms of transmission and virulence of SARS-Cov-2 is a good thing. A more generic defence of this kind of research is very commonly put, for example in this statement from the Washington Post: “The creation of recombinant or chimeric viruses in the laboratory is merely mimicking what happens naturally as viruses circulate, researchers say.” Or, as some have said, whatever is produced in the laboratory will eventually be produced by evolution anyway. Having done quite a lot of work with evolutionary algorithms, I can testify that that is just false. As Stephen Jay Gould liked to say, if you replay the tape of evolution, it will play back differently. Some evolutionary pathways are more accessible than others, and the less likely may never be traversed. In any case, even if we agreed that evolution would eventually find any GOF product, it makes a good deal of difference when that eventuality arises. If it is too soon, we may be woefully unready.

There are many ways of framing MacIntyre’s question, but putting it as a formal decision problem can help stop endless, spinning arguments. Dr MacIntyre certainly has the right general idea here, suggesting it be analysed using epidemiological risk analysis. However, she fails to complete this thought in her book, saying only “I use this risk analysis as an exercise in the course Bioterrorism and Health Intelligence” at UNSW (p. 60) and giving only vague and under-supported indications of some of the needed probabilities.

I will below set up the problem as a choice between alternative public policies, which can help make the most important issues clear and help focus debate on them. It is really a question for the entire world, the international community, and perhaps that’s the very reason it has been neglected: there is no natural seat of responsibility for dealing with such problems. 

MacIntyre’s problem can be presented in either greater or lesser detail as a formal decision problem, but to provide a bound on discussion, let’s consider this a decision between three actions: let the research proceed without oversight or regulation; stop it by international agreement worldwide (at least amongst countries and labs that obey international law); or adopt some international regulatory regime which, for example, requires regular independent inspections of labs to ensure they meet agreed standards. Currently, without debate, or even much acknowledgement, we are pursuing something like the first path, for, although there are WHO recommendations, they are entirely voluntary (and not even maintained on the WHO website). To be sure, the US’s NIH does have its own regulatory mechanism for US-funded GOF research (currently under review; see, e.g., this Science article), and no doubt so too do some other national governments. But pandemics, by definition, are a worldwide problem, not national problems, and so require an international response.

To solve the decision problem properly, we need some hard-to-come-by numbers. For example, we would need to know the probability that, over some period, say a year, a virology lab conducting research according to some given standard of safety would accidentally release viruses capable of causing a pandemic. Many other questions like this would also need to be answered; for example, what is the disvalue of an average pandemic under different circumstances, as measured perhaps in quality-adjusted life years (QALYs, a common measure of value which assesses not just life-years saved, but the quality of those years when diminished by disability or illness)?

As I said, Dr MacIntyre doesn’t provide the ingredients needed to solve this decision problem, one of global significance and interest. She does, however, raise the problem explicitly, which is already exceptional. For what it’s worth, I shall put the question even more explicitly, in the form of this decision table:

Action          | Pandemic | ¬Pandemic  | Expected Value
GOF             | -V × p   | V × (1-p)  | (-V × p) + (V × (1-p))
GOF-R           | -V × q   | V × (1-q)  | (-V × q) + (V × (1-q))
¬(GOF ∨ GOF-R)  | -V × r   | V × (1-r)  | (-V × r) + (V × (1-r))

Each row presents one of the mutually exclusive actions. I use abbreviations for concision. GOF stands for allowing unregulated gain-of-function and chimeric research; GOF-R for allowing them only under an international regulatory regime, incorporating biosecurity standards and independent inspections; ¬(GOF ∨ GOF-R) for not allowing them at all. “Pandemic” means a pandemic ensues during some time period, say a year, although not necessarily due to that research: it could be natural, or it could be someone deploying a virus as a bioweapon outside regulated laboratories. -V is the disvalue of an average pandemic; therefore, V is the value of avoiding it. p is the probability of such a pandemic arising given unregulated GOF research; q is the probability given regulated GOF research; r is the probability given a prohibition on such research. These probabilities are unlikely to be the same, as argued by both proponents and detractors of GOF. The former claim GOF allows us to better understand, prevent and/or prepare for and respond to future pandemics, while the latter suggest the well-established human ability to make mistakes will lead to a higher probability of pandemics. Rather than engage in ungrounded disputation, as has occurred up to now, I propose that a serious effort be made to estimate those numbers based upon a review of the history of labs and their leaks, as well as the history of both natural and artificial epidemics.
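
Note, incidentally, that each expected value in the table simplifies algebraically: (-V × p) + (V × (1-p)) = V × (1 - 2p), and likewise with q and r. So, on the simplifying assumption that V is the same whichever action is taken (an assumption taken up in the next paragraph), the decision reduces to choosing the action with the lowest probability of a pandemic.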

As in all real-world modeling, there are many different choices in both level of detail and the approach to modeling implied by the above decision table. For example, one could argue that the value/disvalue of a pandemic varies depending upon the action taken, since one of the goals of GOF is to find means to mitigate pandemics. For simplicity, for now at least, I would prefer to let such considerations impact the probability of an average pandemic arising (i.e., p, q, r). In any case, a serious research project would have to consider these things. Here, I’m only presenting the problem, and in a summary form.

In any case, if we can settle upon the numbers, whether well-justified or simply as working hypotheses, then we can find the maximum expected value in the last column above, and so choose an action.
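
To make this concrete, here is a minimal sketch, in Python, of how the choice would be computed once the numbers were settled upon. The probabilities and names below are hypothetical placeholders of my own, not estimates or anyone’s published figures; the point is only the mechanics of choosing by maximum expected value.

    # Decision sketch: choose among GOF policies by maximum expected value.
    # All numbers here are hypothetical placeholders, not serious estimates.

    V = 1.0  # value of avoiding an average pandemic (arbitrary units, e.g., QALYs)

    # Hypothetical annual probabilities of a pandemic under each policy:
    p_pandemic = {
        "GOF (unregulated)": 0.05,
        "GOF-R (regulated)": 0.02,
        "neither (prohibited)": 0.03,
    }

    def expected_value(p, value=V):
        # A pandemic costs V (with probability p); avoiding one is worth V.
        return (-value * p) + (value * (1 - p))

    for action, p in p_pandemic.items():
        print(f"{action}: EV = {expected_value(p):+.2f}")

    best = max(p_pandemic, key=lambda action: expected_value(p_pandemic[action]))
    print("Maximum expected value action:", best)

With these made-up numbers the regulated regime comes out on top, but the ranking is entirely hostage to the probabilities, which is exactly why estimating them seriously matters.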

My real point is that the public deserves, and should also demand, an answer to this decision problem from health experts, the World Health Organization and our politicians. Crickets is not an answer. The virologists’ nearly unanimous and unreflective defence of their own research is also not an answer.

Just recently, two of the leaders of the UNEP investigations into Covid origins have written in indirect support for this idea (Butler and Randolph, “There has been a suppression of the truth, secrecy and cover-ups on an Orwellian scale over the origin of Covid-19 in China”; note also the editorial at the bottom by Ian Birrell):

Irrespective of the origin of the pandemic, however, this debate has exposed that self-regulation of ‘gain of function’ research has been a dismal failure.

Self-regulation, indeed, is an indicator of the public, and especially of the media, failing to take a public interest stance on a topic.

The analysis Dr MacIntyre omits would be of interest, if only because any and all informed analyses of the problem will aid public understanding of it. The more the better. Nevertheless, the most serious answer to the decision problem requires a properly funded, interdisciplinary research program, one that everyone should be demanding of the UN and WHO, in my opinion. It should be run by experts, but excluding those who have professional conflicts of interest, especially virologists; for example, it should include experts in epidemiology and its history, and in decision modeling and analysis.

Dr MacIntyre puts the case for a public debate over the decision problem as clearly as can be done (p. 230):

This is a threat that affects everyone, crosses national boundaries, and which cannot be effectively managed either in the traditional disciplinary silos or by individual nation states. Instead, it requires coordination, thought leadership and novel, cross-disciplinary, global solutions.

The Power of the People

Kevin B Korb, 30 Aug 2022

How often do we hear things like “What’s the point? I’m only one person. What I do won’t make a difference.” I most recently heard this on ABC Radio National, in the context of an argument for not … I don’t even recall what. No matter: this Inertia Argument is a nearly universal prescription for inaction, serviceable on every occasion, from a refusal to vote, to saving one’s energy from social action for an alternative engagement with e-sports, to ex-PM Scott Morrison’s repeated assertion that, as a contributor of around 1% of CO2 emissions, Australia should be exempt from taking any serious climate action. The Inertia Argument is a Swiss army knife for the inert, who prefer not to be seen as shirkers, but instead as rational non-actors.

The logic of the argument is impeccable: in most cases where it’s employed, the action of a single human (or a single nation) would barely shift a needle; therefore, nothing should be done.

Interestingly, there is an equally compelling kind of argument that runs exactly in reverse while also advising inaction, the so-called Slippery Slope Argument. That is, if you are willing to shift your position ever so slightly downhill, then you will inevitably slip and slide all the way to the bottom and a spectacular crash. For example, if you allow a blood alcohol concentration in drivers of, say, 0.01%, then before long you’ll be allowing 2% and then 5%. Or, if you allow abortions up to 6 weeks, then soon third-trimester abortions will be legal, followed by infanticide. Oh wait! These are counterexamples! In both cases, societies have had no trouble at all intervening at some point, drawing a dividing line which, while fairly arbitrary compared to microshifts up or down the scale, makes for a legal divide that can be enforced and adhered to. So, the slippery slope isn’t nearly as slippery as its scaremongers generally assert.

The idea that Slippery Slope Arguments are compelling, and that we must therefore assert a stance of absolutely zero exceptions to a rule in order to maintain rules at all, is deluded. It is akin to a common response of freshman undergraduates upon first being exposed to philosophy: they often take some principle they’ve been introduced to and apply it without restraint to every case they can dream of. Religious extremists do the same. Either end of the stick — that a rule must be applied universally or not at all — simply exposes the wielder’s naivete, an unwillingness to acknowledge that we live in a world of color, mixtures and gradations, rather than one of black and white.

The Inertia Argument is equally bad, despite having a seductive appeal that leads huge numbers of people into a world of delusional belief. The classic move in argument analysis is to take an argument and find minimal hidden premises (those that strain credulity the least in attributing beliefs to the author of the argument) which make the argument strictly valid, that is, such that there is no possibility of the conclusion being false when the premises are true. Finding those hidden premises exposes assumptions the author would, perhaps, prefer listeners to overlook. The Inertia Argument, treated thus, goes:

The action of a single actor would have a negligible impact.

[Actions with negligible impact are not worth doing.]


Therefore, nothing should be done.

The bracketed sentence is the hidden premise making this argument valid. At first glance, it may seem innocuous. Certainly, actions which have no impact whatsoever are not worth doing; that’s hardly in dispute. So, why not also those actions with a negligible impact? Surely, they also shouldn’t be done.

The rub is that individually negligible actions are quite often not collectively negligible. Voting is a good example. The chances that your one vote will change the outcome of a large election are less than negligible; it’s akin to your chances of winning a lottery. And yet votes have frequently had dramatic and powerful consequences, throwing out hated corrupt governments, for example. If one Goth or Vandal had decided to attack the Roman Empire, that one barbarian would have died. But when their tribes and friends collectively attacked, even the Roman Empire could not stand. A single Egyptian could have hardly built a noticeable pyramid, but by now I’m sure you get the point.
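
As an aside, the “less than negligible” claim about voting can be given a rough number. Under the highly idealized assumption of n voters, each independently equally likely to vote either way, the chance that a single extra vote breaks an exact tie is approximately √(2/(π × n)), a standard binomial approximation. A small illustrative sketch in Python:

    import math

    def prob_pivotal(n):
        """Approximate chance one vote breaks an exact tie among n other
        voters, each voting 50/50 independently: sqrt(2 / (pi * n))."""
        return math.sqrt(2 / (math.pi * n))

    for n in (1_000, 100_000, 10_000_000):
        print(f"{n:>10,} voters: P(decisive vote) ~ {prob_pivotal(n):.5f}")

Even on these most favorable assumptions, the chance is a small fraction of a percent for any sizable electorate, and realistic polling margins drive it far lower still. The rescue for voting lies not in the individual odds but in coordination, as discussed next.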

Individual actions can be coordinated. That’s exactly what societies are for. Even if the efforts of a single person acting alone aren’t worth worrying about, the coordinated efforts of many individuals are. The employment of the specious Inertia Argument not just against individual action, but against collective action, as in Scott Morrison’s argument, is especially stupid. Collective action is where the payoff is, when the effects of individual actions are negligible.

So, the next time you hear the Inertia Argument used to defend inaction, perhaps you can ask your Arguer friend, “Why not take collective action and get a real payoff?” For example, the right response to Scott Morrison’s Inertia Argument that Australia can’t do anything significant about global warming would have been, “Why can’t Australia lead international action, rather than wallow in national inaction?” Apparently, this question never occurred to Australian journalists.

By the way, a theme you’ll find throughout this blog is that people who rely on argumentative form alone to tell them whether arguments are good or bad are, in fact, displaying very poor argument analysis skills. Their argumentative world is as impoverished as that of those who think all rules are absolute and all principles universal.

Reliable Sources

Kevin B Korb

People have a right to speak; the question is, to whom should we be listening.
– Naomi Oreskes and Erik M. Conway (2010)

There is no doubt that knowing how to sort more reliable from less reliable sources, and giving the reliable ones the largest role in forming our opinions, is a key critical thinking skill. George W Bush infamously launched a war in Iraq, killing hundreds of thousands of people, as well as inspiring the Islamic State of Iraq and Syria and decades of instability in the Mideast, after dismissing UN reports on the non-existence of weapons of mass destruction in Iraq, which turned out to be reliable, and accepting intelligence community reports to the contrary, partially based upon a since utterly discredited witness, “Curveball”. Had Bush, or those around him, been better critical thinkers, those extremely damaging effects could have been averted. In an era when unreliable sources abound, indeed are almost absurdly prolific, knowing something about these issues is more important than ever before.

Karl Popper pointed out that if the aim of science were simply to discover truths, science could stick to enumerating theorems of propositional logic, say, by adding more and more disjuncts to (P ∨ ¬ P). Of course, it doesn’t follow from the pointlessness of that exercise that we are uninterested in truths; rather, that we are interested in truths which actually help us with our problems, which can serve as premises to arguments of interest to us. The relevance of truths to problems is an important question, but one I am not addressing here. Instead I shall consider, on an assumption of relevance, how we decide whether premises are true, when we are not ourselves witnesses to their truth.

There are numerous negative indicators, as well as numerous positive indicators, of good sources of information in domains and problems foreign to us. I shall explain some that are especially useful to know.

The Problem

When you are dealing with an issue about which you have considerable hard-won expertise and understanding, it is usually not hard to decide whether a claim about it has merit. But, at least since the Renaissance, none of us has the time, or inclination, to become expert in every available subject, so we all have to rely on the expertise of others most of the time. The problem here is how to add to our pool of premises (working assumptions, points of departure) claims which are most likely to be true, without going to the difficulty of directly testing or observing that they are true. How can we reasonably decide whether someone or some source has the relevant expertise we lack? How do we recognize reliable sources and avoid unreliable ones?

The problem is already acute in an era of internet and social media echo chambers. For pretty much any idea, crazy or not, there will be a source for it somewhere. And the problem will only get worse with AI-driven Deep Fakes muddying public discourse further. The problem of identifying and verifying reliable sources is a traditional concern of argument analysis. I will go over some of the main issues here.

A Consumer Analogy

There is a close analogue in consumer behavior to the problem of assessing information sources. Indeed, finding reliable products and services for consumption is a special case of finding reliable sources for the sake of critical reasoning. Reasoning to a purchase is a special case of reasoning, and to do it well you need to vet your sources. People who purchase cars, appliances or IT services based upon how slick and impressive the advertising is are people who acquire rubbish products and shoddy services. Somewhat more cautious folk look at consumer ratings first, either instead of or supplementing their feelings about the ads. They will do somewhat better than the simply reactive kind of consumer, but the popularity of products and services is a limited guide to the truth. A more cautious consumer will find consumer guides or their equivalent, such as Consumer Reports or Choice, which are developed by testers using a lab, and see what they have to say. An exceedingly, perhaps an improbably, cautious consumer will go further, looking at the technical and engineering literature to find what Consumer Reports and others have overlooked.

These correspond to the different levels of investigation that are, or might be, applied to assessing the sources of information for arguments in general. The argument consumer who adopts beliefs because of a slick YouTube presentation or an engaging Instagram video, or even a popular book, is quite likely the very same person who buys a car because of the memorable jingle that went with its ad. We should all hope to do better.

False Flags

First, I will point out some things reliability is not.

Authority

Perhaps foremost among these is “authority.” Authorities are those in positions of power and responsibility. They get into those positions for many reasons, often on the basis of merit. But the merits rarely include any deep understanding or expertise in outside fields. Hollywood actors rarely have expertise outside of how to act, or perhaps how to direct others to act. Politicians sometimes acquire expertise by being handed a portfolio and digging into it; otherwise, they mostly know about campaigning, raising money, debating and negotiating political deals. Popular media pay a lot of attention to the wider opinions of actors and politicians outside their direct expertise; except when they have the power to act upon those opinions, this attention is mostly unwarranted and often harmful.

Of course, there are acknowledged authorities within any field of study. The usual way of judging who they are is to have a look at their credentials. Does a claimed climate scientist have a PhD in a climate science? Does a media personality have a law degree? Examining credentials is a shortcut: if we have the time, we should prefer to test their expertise. That could involve, for example, tracing their stated conclusions backwards to discover what evidence they have relied upon and what arguments have led to their conclusions. That is the stuff of good argument analysis. In order to do such backfilling of how they reached their conclusions, you may first have to learn and practice argument analysis, before then doing the footwork of checking their reasoning. If we lack the opportunity to do detailed investigation, as we will in most cases, we might opt for merely checking credentials, or asking about an expert’s reputation, etc. But these options are shortcuts and rationally undercut the degree of trust we should put in that expert’s claims. Too many people think credentialing is the beginning and end of checking sources for their reliability.

There are many well-known cases both of genuine experts who lack the usual credentials and of credentialed people who lack the expertise. Freeman Dyson, a famous physicist with no doctorate, is an example of the former. For the latter, consider Dr Ian Plimer, credentialed in an earth science – geology – with a PhD from Macquarie University, who has a long track record of demonstrably false claims about global warming; Skeptical Science, for example, has a web page identifying some, with references you can use to verify their falsity.

Authorities should neither be accepted nor rejected out of hand. (It seems to have become popular to reject them out of hand by conspiracy theorists, who often claim to thereby be exercising “critical thinking” skills. The truth is the very opposite, however: critical thinking requires thinking, not an automatic response, whether that’s acceptance or rejection.) When the claim in question is of little consequence, then a quick acceptance, perhaps as a working hypothesis, may be warranted. But where the matter is more important, then, depending on the degree of that importance, more thought or investigation about the reliability of the source is warranted.

Direct Observation

What else is reliability not? It is also not sourcing all of your evidence directly. If you cannot believe anything you have not directly observed, then you must live in a closet, since you would be unable to trust most of what seems to be going on around you. As human actors, we simply must have indirect sources of information which we trust, and so we must agree upon methods of validating those sources. A constant, unrelenting demand for direct evidence is the cant of the obfuscationist.

The Demand for Proof

A similar overreach is to insist that claims, to be acceptable, must first be proven, a demand often used to reject a scientific conclusion. This plays upon a well-known truth about science: its conclusions are almost always fallible and provisional, subject to retraction upon the discovery of disconfirming evidence. What goes unacknowledged is that scientific conclusions, while always subject to retraction in principle, come with different degrees of certainty and different degrees of acceptance by experts. The only place you’ll find serious debates about a flat earth, the germ theory of disease, heliocentrism, or smoking causing lung cancer is in the dark corners of the net — and not, for example, at scientific conferences (unless they are about social media misinformation!). Anthropogenic global warming almost deserves to be in that same list, but it’s fair to observe there are still a few fringe scientists disputing it.

The demand for proof relies on people not understanding that science is simply not about stacking proofs on top of each other. It is a rhetorical ploy for generating and spreading FUD (fear, uncertainty and doubt).

Ad Populum

Hardly anyone has an unadulterated belief in the value of popularity as a guide to truth. Belief in the existence of witches was popular in medieval Europe, but that belief only led to ashes. Yet, of course, in many circumstances popularity is an important guide to the truth. The popularity of books and films is a likely guide to entertainment for those whose tastes are near the norm. No doubt, the popularity of beliefs also provides some guidance to the truth, when the popularity is measured across individuals who have a strong commitment to finding and exposing the truth. Consensus views amongst a group of experts are worth attending to. Unfortunately, this era of social media, with the most popular opinions being spread precisely because they outrage rather than inform, requires positive resistance to the pull of popularity.

It is only in combination with some independent indication of expertise, understanding and commitment to the truth that popularity or consensus becomes interesting. What we are really after is how we can find those who are committed to finding and reporting the truth and add them to our pool of preferred sources.

Self-Belief

People who think they have gotten hold of a truth that the experts largely, or perhaps universally, disagree with are almost certainly deluding themselves. As I said above, the knee-jerk rejection of expert opinion isn’t a sign of critical thinking or deep insight. It’s a sign of something gone wrong. If you are actually right and the existing experts wrong, then you should be able to justify your view by making a positive contribution to the field, for example by getting a PhD and writing up your justification as a thesis. Some who think themselves in that position will object that the whole academic world is in on a conspiracy, so they will have no opportunity to do that. In that case, what’s gone wrong is that the person is mentally ill. (The chances that thousands of scientists are in on a conspiracy that is yet to be revealed are essentially zero; see Grimes, 2021.)

True Flags

Genesis

The so-called Genetic Fallacy is, per Wikipedia (as of 22 Mar 2022):

A fallacy of irrelevance that is based solely on someone's or something's history, origin, or source rather than its current meaning or context. This overlooks any difference to be found in the present situation, typically transferring the positive or negative esteem from the earlier context. In other words, a claim is ignored in favor of attacking or championing its source. The fallacy therefore fails to assess the claim on its merit.

This entry is absurd. That is, taken as advice simply to ignore the source of a claim (which is suggested, if not stated, by calling it a fallacy of irrelevance), it expresses an impossible idealism about assessing information independently of its source and entirely on its own merits. Assessing claims outside of our own expertise without attending to the track record of their sources (or stand-ins for track records, such as reputations) would require acquiring knowledge comparable to that of the relevant experts themselves. Doing this beyond a few topics of interest is beyond the mental and physical resources of any of us. Hence, relying upon expert advice is the only way we can hope to learn about the vast majority of topics of interest to us. And why should we rely upon those experts? Because they have a track record of producing relevant and accurate statements in their domains, as verified by their supervisors and teachers during their education, by experimental tests, and by their peers during reviews. In other words, when we accept the next statement by an expert or group of experts, we are doing so precisely because the Genetic Inference is not fallacious at all. The origins of statements matter.

What, then, is a fallacy? The genetic fallacy can be real. If your interlocutor's intent is to distract from the content of a claim and cast doubt on it because of a suspect origin, then perhaps it is a genetic fallacy. An example from one of Douglas Walton's books on argumentation (I forget which one) cites a Canadian parliamentarian's comments on abortion being disparaged because they came from a male. That, of course, is a pure strawman and not the kind of genetics I'm advocating here, since, so far as I know, being male is uncorrelated with truth or falsity, i.e., with reliability. In practical fact, what we are prepared to call a fallacy depends upon both the content of a supposedly fallacious claim and the intent behind it, rather than being purely a matter of logical form, as many would have it. So, the genetic fallacy is fallacious not because it draws attention to the source of a claim, but because it draws attention to characteristics of the source that are not indicative of reliability.

Origins also matter, of course, when we are not dealing with experts per se, but journalists and their kin, such as science popularizers. Reputations in journalism matter, because we don’t all have the time to fly off to a war zone to check things out for ourselves. Some news sources at least try to double-source their claims, such as the New York Times, while others are happy to publish the most absurd nonsense, such as Rupert Murdoch’s Fox News. It doesn’t take a genius to figure out which of these sources is the more reliable, even though every source will have its biases. Biases, by the way, can be dealt with, for example by checking alternative (but reliable) sources with different biases. Outright lies are hard to deal with, if you’re trusting the source irresponsibly.

Origins still matter when we are dealing with relatives, acquaintances and strangers on Twitter. In fact, contrary to Wikipedia, origins always matter. For the vast majority of what we see and hear, origins are all that we are likely to take into account. The more sensible of us filter out the junk from Facebook, Youtube and Twitter.

It would be fair to call the Genetic Inference a fallacy, if the claim at issue were, say, the primary matter of a dispute and if its source were not acceptable to any reasonable participants in the dispute. For example, if we are playing a trivia game and have agreed in advance to use Wikipedia to settle disputes, then citing Wikipedia to resolve a dispute is hardly fallacious, even if the subject matter were fallacies. Or again, if the issue at hand were, say, the reality of anthropogenic global warming, in a non-professional setting, and we cite Wikipedia to determine average surface temperatures for the last ten years, that would also not be fallacious, even if one can raise doubts about it. On the other hand, if we are pursuing serious research, citing Wikipedia as our source and leaving matters at that would, indeed, be fallacious. One can, and should, dig deeper.

Indeed, one can always dig deeper, as any three year old (and Lewis Carroll) can tell you. Claims can always be interrogated, except when we run out of breath to ask further questions.

In short, whether the referral to the origin of a claim is fallacious or not depends upon the context. If the claim is a minor premise whose merits we do not wish to investigate, then it is probably not fallacious. If the claim is a key premise for an important argument, then it might be right to label a reliance upon the source’s reputation “the Genetic Fallacy”. But for most common argumentative purposes, relying upon a known track record of sources, or their reputations for honesty and reliability, is what we will be doing most of the time. And calling that a fallacy is fallacious.

So, when searching for premises, identifying reliable origins is the name of the game, outside of special cases where we ourselves are witnesses or otherwise providing evidence. In other words, when we are trying to find reliable sources, the Genetic “Fallacy” is all there is.

The question is how to commit this “fallacy” in a way that reliably yields the truth.

Testing Reliability

A poor indicator of reliability is apparent objectivity or neutrality. The admonition of some to write scientific reports using the plural “we” or the passive voice may give a veneer of objectivity to the reports, but does nothing to debias them. It is, in fact, a misleading affectation.

The reputation of a source is probably already a better indicator of its reliability than the cosmetic touches they put on their claims, such as passive voice. Reputations are based on any number of random things, including things quite as cosmetic as voice, but at least there’s some chance someone’s reputation derives from actual truth-telling. A superficial objectivity only derives from an intent to present a degree of objectivity that may be entirely unfounded. But we can usually do better than rely on reputation.

Reliability can, and frequently should, be tested.

In some cases we can test the truth of claims ourselves. If you have relevant expertise, you may have sufficient knowledge to know whether a relevant claim is true, or you may have ready access to sources that can be used to test it. Alternatively, you may have access to different sources whom you have good reason to believe will be reliable in the domain the claim refers to. For example, if you are given advice by a doctor and you have a personal contact who is also a doctor, you may have immediate recourse to a second opinion, or at least to an opinion about seeking further opinions.

Fact checking is a slightly different activity which you might employ. Established fact-checking organizations, of which there are a fair number since the rise of the internet (e.g., Snopes, RMIT Factlab; Wikipedia keeps a list of such sites here), essentially do what used to be done by journalists: checking the sources of claims, attempting to confirm claims by identifying additional sources, providing the subjects of claims the opportunity to respond, and consulting with experts in relevant domains. Anybody can, and sometimes should, attempt these same activities where an important issue is at stake. First, though, you might want either to look to see whether existing fact checkers have already done this work, or even to suggest that they fact check a claim for you. In general, established fact checkers will have an easier time of it, since they have sufficient reputation, for example, to gain access to relevant experts.

In addition to these kinds of activities, you can simply look at what alternative sources say about the issue at hand. If you see a claim made in the New York Times, you might check what the Washington Post has to say, since it may say something contrary to the NYT. To be sure, both newspapers share a good deal of their worldview, and so they share biases. So, you might want to find a source with a different worldview. The Economist, for example, is a more conservative magazine than those two. Better still is to see what reputable sources outside the Anglo world have to say, such as Sueddeutsche Zeitung or Le Monde. If you turn to a Rupert Murdoch source, however, you have probably gone too far and are now in an alternative universe. In recent decades, Murdoch’s venues, both print and cable, have largely replaced news reports with rightwing rants. In any case, different sources can confirm or contradict each other, and either outcome is helpful for understanding the merits of the original claim. Ideally, your multiple sources are not just operating on different worldviews, but are actually sourcing their claims from different original sources; otherwise, they may be effectively repeating each other.

An aside: one of the often remarked advantages of taking a "gap year" after school and traveling overseas is that you can gain considerable insight into how different the world looks away from home. US news media, for example, looks highly diverse from inside the US. Ignoring the extremists (such as many published by Murdoch), however, the vast bulk of US mainstream media shares a very strong bias, strongly favoring an American view of international politics, as well as exhibiting considerable ignorance of the world beyond US borders. This becomes apparent when you immerse yourself in a completely different set of biases. Of course, this kind of distortion is not peculiar to America. Indeed, the least exceptional thing about America may be American exceptionalism. Strangely, every country and culture is the very best in the world.

In summary, you can test the reliability of specific claims by specific sources in a variety of ways: relatively rarely by your own direct observation or experiment, or by comparison with your own prior knowledge; more easily and commonly by examining alternative sources; or by investigating and fact checking. The benefit of any of these activities is two-fold: first, you can better gauge the truthfulness of a specific claim; but, just as importantly, you gain information about the reliability of the source of the claim. Reputations (other people’s opinions) are generally unreliable guides, but an understanding of a source’s reliability gained through repeated truth testing of its claims provides what you need to apply the genetic non-fallacy correctly: your own informed opinion. Knowing the reliability of sources is essential to navigating our high-volume world of information and misinformation.
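
For those who want the bookkeeping behind "repeated truth testing" made explicit, here is a minimal sketch of Bayesian reliability tracking; the Beta-Bernoulli model and the spot-check outcomes are my own illustrative assumptions, not a recipe from any particular source.

```python
# Illustrative only: Beta-Bernoulli updating of a source's reliability.
# Start with a uniform Beta(1, 1) prior; each claim we manage to check
# nudges our estimate of how often the source tells the truth.

class SourceReliability:
    def __init__(self):
        self.true_count = 1   # Beta prior pseudo-counts
        self.false_count = 1

    def record_check(self, claim_was_true: bool) -> None:
        if claim_was_true:
            self.true_count += 1
        else:
            self.false_count += 1

    def estimate(self) -> float:
        """Posterior mean probability that the source's next claim is true."""
        return self.true_count / (self.true_count + self.false_count)

tabloid = SourceReliability()
for outcome in [True, False, False, True, False]:  # five hypothetical spot checks
    tabloid.record_check(outcome)
print(tabloid.estimate())  # ~0.43: an informed opinion, not mere reputation
```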

A Hierarchy of Reliable Sources

There is no such thing as an infallible source. Eyewitness (first-person) testimony is, for example, notoriously unreliable. Given people’s proneness to confabulation, rationalization, confirmation bias and generating and explaining false memories, you should also treat your own recollections and firm beliefs with some skepticism. The confidence you have in your own beliefs is a very poor guide to their reliability (see Wikipedia’s “Overconfidence effect” and “Dunning-Kruger effect”, for example). Every source, however credible, should actually be tested when there is an opportunity to do so, assuming the time and trouble are worth the likely information gained. By doing so, you will gain the knowledge to make accurate judgments of the reliability of different sources when you do not have the opportunity to check them further.

Nevertheless, there clearly is an ordering from more to less reliable sources. Here is my attempt at such, together with some reasons for the order. You can check for yourself to see whether my ordering has any reliability, of course. Each step down the list represents at least somewhat lower reliability, as I explain; of course, these are just general categories and there will be many specific exceptions, with either higher or lower reliability.

  1. Scientific reference material; resources of international scientific organizations.
  2. Scientific journals, conferences and reviews.
    • Examples: Science, Nature, PLOS ONE.
    • Journals and conferences come with different reputations; indeed, many academics spend many hours per year trying to rank them. However, those rankings judge a kind of pecking order of prestige and the apparent importance of publishing in one journal versus another; they do not specifically rank reliability or the probability of assertions being true. The lowest-ranked publications may indeed have trouble publishing the truth; however, the standard for most scientific journals and conferences is that accepted publications be peer reviewed by two or more experts. The claims made in their publications have thus been “tested” at a minimum by four eyeballs, presumably with attached brains. Articles with problematic aspects should have been reworked or rejected. Venues with lower standards will be ignored by most academics.
  3. Theses and dissertations; academic grant proposals.
    • PhD and Masters theses are often publicly available, or at least available through libraries. Recent ones can be very helpful in navigating current work in a field. They have the major advantage that, while all of them at least attempt to advance our knowledge, and so may spend their main chapters exploring a relatively narrow topic, they are written for examiners whose backgrounds may be centered elsewhere. In other words, the introductory and background chapters will be directed at a generally, but not specifically, informed audience, often serving as an introduction to a field as good as any textbook’s, but more up to date.
    • Grant proposals are even more targeted and usually written by more senior researchers.
  4. Educational material and resources, including textbooks.
    • Most universities require their academics to be ambidextrous, producing both original research and original educational material. This has the benefit of their educational material being infused with new ideas and information, which has also been pretested through refereed publications. Nevertheless, the authors are typically covering a much broader landscape in their educating than in their own research, resulting in them sometimes taking shortcuts, such as relying on prior textbooks without checking them, in other words, violating my main advice here to test assertions (see, e.g., Bohren, 2009).
  5. Popular science writing, videos and podcasts.
    • Here, reliability is really tied to the individual source. Some popular science writers are also scientists, such as Neil deGrasse Tyson, A.K. Dewdney and Brian Cox, and draw upon their own hard-won knowledge in doing popular science. Other science writers, while not starting with great reputations, learn their fields and do a good job of checking their information and so end up earning a good reputation. James Gleick and Martin Gardner come to mind. And then there are a host of writers who don’t come to mind at all. So this category really spans the entire range of source reliability from top to bottom.
  6. Newspapers and news magazines.
    • During the 20th century the best known newspapers adopted practices that separated their publications from the pamphleteers and rags of the 19th century. This included an explicit commitment to honesty and to finding multiple sources for controversial or contestable assertions. In the 21st century internet environment many of these commitments have been abandoned. Still, some news media do attempt to maintain their standards and can be found to be mostly reliable.
  7. Wikipedia.
    • Wikipedia has an internal structure that includes editing for accuracy. Its editors at least make the attempt to be reliable. Partly because anyone can edit its articles, however, its claims need to be treated with caution. While there is such a thing as the “wisdom of the crowd”, that tends to operate as a longer-term corrective. And, in any case, where an unfounded doctrine has gained widespread support, the crowd will only reinforce rather than correct it. (See above on the “Genetic Fallacy”; for a more general discussion of the fallacies, which Wikipedia tends to accept without question, see Korb, 2004.)
  8. Blogs, Instagram posts, YouTube videos.
    • You tend to get what you pay for.

You will observe the repeated reference to science near the top of the list. The human science project has been and remains our best collective effort to increase our understanding of the world. While all the sciences have problems, including occasional fraud and hoaxes, they are also fairly actively policed and have a pretty good track record of catching and exposing indiscretions. It’s certainly better than the track record of catching and exposing political corruption in most countries. To get some idea, you can look at Retraction Watch, which reports on scientific publications that have been retracted. Being peer reviewed and accepted by scientific experts is no guarantee of correctness. Scientists are subject to the same cognitive biases and faults as everyone else. However, if you test the claims made in categories 1, 2 or 3, for example, and find that they are wrong, then you should publish your results, because you will be both subtracting what is wrong and adding what is right to our collective understanding of the world.

Additional Reading

There is plenty of both good and bad writing on the subjects I’ve touched upon here. I list a few pieces that are accessible and that I find have mostly good suggestions for how to source information or check information sources.

Fact Checking Resources

Many organizations have sprung up for fact checking, some sponsored by traditional news organizations. These often also provide guides or other information for how to do your own fact checking. Here are some good examples:

  • Norddeutscher Rundfunk (NDR) provides educational materials (in German) for teachers of journalism, Medienkompetenz (“media literacy”).
  • Many good fact checking organizations explain their procedures, which can help others learn to use or adapt them, e.g., Factcheck.org.

References

Alvarez, Claudia (2007). Does philosophy improve critical thinking skills? Masters Thesis, Department of Philosophy, University of Melbourne.

Bohren, C. F. (2009). Physics textbook writing: Medieval, monastic mimicry. American Journal of Physics, 77(2), 101-103.

Carroll, L., aka C. Dodgson, (1895). What the Tortoise said to Achilles. Mind. Republished in Mind, 104 (416), 1995, 691–693, https://doi.org/10.1093/mind/104.416.691

Domestic violence. (n. d.). In Wikipedia, The Free Encyclopedia. Retrieved 16:49, March 29, 2016, from https://en.wikipedia.org/w/index.php?title=Domestic_violence&oldid=712521522

Good, I. J. (1952). Rational decisions. Journal of the Royal Statistical Society. Series B (Methodological), 107-114.

Grimes, D. R. (2021). Medical disinformation and the unviable nature of COVID-19 conspiracy theories. PLoS One, 16(3).

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Hope, L. R., & Korb, K. B. (2004). A Bayesian metric for evaluating machine learning algorithms. In AI 2004: Advances in Artificial Intelligence (pp. 991-997). Springer Berlin Heidelberg.

Klein A. R. (2009). Practical Implications of Domestic Violence Research. National Institute of Justice Special Report. US Department of Justice. Retrieved from http://www.ncjrs.gov/pdffiles1/nij/225722.pdf

Korb, K. (2004). Bayesian informal logic and fallacy. Informal Logic, 24(1).

Korb, K. B., & Nicholson, A. E. (2010). Bayesian artificial intelligence. CRC Press.

Nisbett, R. E., Fong, G. T., Lehman, D. R., & Cheng, P. W. (1987). Teaching reasoning. Science, 238(4827), 625-631.

Olding, R. and Benny-Morrison, A. (2015, Dec 16). The common misconception about domestic violence murders. The Sydney Morning Herald. Retrieved from http://www.smh.com.au/nsw/the-common-misconception-about-domestic-violence-murders-20151216-glp7vm.html

Oreskes, N. and Conway, E. M. (2010). Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change. Bloomsbury Press.

Pearl, J. (1988). Probabilistic reasoning in intelligent systems. Palo Alto, CA: Morgan Kaufmann.

Shannon, C. E., & Weaver, W. (1949). The mathematical theory of communication. University of Illinois Press.

Silver, N. (2012). The signal and the noise: the art and science of prediction. Penguin UK.

Sturrock, P. A. (2013) AKA Shakespeare. Palo Alto, Exoscience.

Tetlock, Philip (2015). Philip Tetlock on superforecasting. Interview with the Library of Economics and Liberty. http://www.econtalk.org/archives/2015/12/philip_tetlock.html

Toulmin, S. (1958). The Uses of Argument. Cambridge University Press.

Wellman, M. P. (1990). Fundamental concepts of qualitative probabilistic networks. Artificial Intelligence, 44(3), 257-303.

Wigmore, J. H. (1913). The principles of judicial proof: as given by logic, psychology, and general experience, and illustrated in judicial trials. Little, Brown.

Open Letter to RIPE NCC

Tags


Dear RIPE Executive Board,

I would like to call into question your decision of 28 Feb to simply dismiss Ukraine’s proposal to, in effect, cut Russia off from the internet.

In defending your decision you correctly point out that the historical standard of the internet has been that it should be open to all, and, in particular, that “the means to communicate should not be affected by domestic political disputes, international conflicts or war.” This is of a piece with claims about technological neutrality, that is, that technologies are not to blame for how people use them. While the underlying motives of your view are, I’m sure, good and noble, the effect in practice is not.

First, the internet simply is not, nor can it be, politically neutral. It is a primary means for a great many people to obtain their day-to-day understanding of what’s going on in the world. For precisely that reason, it has become a target for many governments in their efforts to manipulate and control their own people and to mislead or destabilize other peoples. You know as well as anyone that access to the internet varies not just by wealth, education, etc., but also by direct political intervention. You would like to avoid the internet becoming an explicit target of political action because its administration takes sides in a conflict. That is a very reasonable objective, but to pretend the internet is not already a target is duplicitous and unhelpful.

Of course, I agree that the more politically engaged the internet administration becomes, the greater the danger to an open internet. The risks and benefits of an intervention need to be carefully assessed, and it makes good sense to put the “burden of proof” on active interventions rather than on maintaining neutrality. I am with you there. Unfortunately, you are not consistent with yourself, if you accept that position, since nowhere in your letter reporting your decision is there any weight given to the costs of non-intervention. As far as your letter is concerned, there are no risks involved in letting Putin carry on with whatever he is doing. I think much of the rest of humanity disagrees.

Second, the underlying logic for technological neutrality is simply specious. It may be literally true that “guns don’t kill people, people kill people” (ignoring AI, anyway). But the metaphorical reality is the opposite: guns enable killing and are hardly a neutral technology. This point is very widely accepted, if not by the NRA and its fellow travelers. Nuclear weapons, biological weapons, chemical weapons are all examples of technologies coming out of 20th century science which people quite generally have demanded come under international controls, and do. There is little that is neutral about them, beyond the fact that they don’t themselves advocate an ideology. Anybody can use them to kill. However, those technologies are on a side: the side opposed to human life.

The internet can likewise be used to kill.

You may say the internet is just about humans communicating with humans. I hope you are not that naive. Means of communication have been the preferred means of controlling populaces since the beginning of nations, and that has not changed with the internet. It’s only become easier.

You suggest that a failure to maintain strict impartiality would “jeopardise the very model that has been key to the development of the Internet in our service region.” As I said above, I agree. Whatever you do will jeopardize that model, because that model is always going to be at risk. My question is whether you have even considered the risks associated with other alternatives. If Putin succeeds, not just locally in Ukraine, but globally in destabilizing both the international rule of law and democracies world-wide, what will become of your internet neutrality? It is at risk either way, and some risks are greater than others. You have not done a risk analysis and are probably incapable of doing one, especially not on behalf of all of your users.

Regards, K B Korb

Paul Krugman Argues with Zombies

Tags


A Brief Review of Arguing with Zombies: Economics, Politics and the Fight for a Better Future

Arguing with zombies would seem like a waste of time. They don’t listen; indeed, they don’t cogitate. Arguing back may seem right, since what you’ve heard come out of their mouths is pure nonsense. But all they want to do is rip into you while you talk. This is a quandary facing many of us in the Age of Social Media. Brain-dead arguments are all the rage. Why even bother?

Well, if your only goal, and only possible effect, is to persuade the conveyor of a brain-dead argument that the argument is meritless, then you are almost certainly wasting your time. Zombies don’t listen, and they don’t have minds to change. Paul Krugman’s book nevertheless suggests some value in arguing with them. He lays out his arguments so clearly and succinctly that it is a pleasure to read them, assuming you’re not a zombie. For those already familiar with the arguments that counter — destroy, really — the supply-side fanaticism of Ronald Reagan and his fellow Reaganauts, which is one of Krugman’s main targets, they serve as a reminder of the main points. For the unfamiliar, they provide a great introduction to those issues. The value of arguing with zombies lies largely in providing guidance to newcomers to the arguments and to onlookers, perhaps inoculating them against brain diseases, but also in supplying handy supports for the familiar in their daily travails.

There is also potential to aid authors themselves: collating and synthesizing the main arguments opposing the apparently ceaselessly rising tide of zombie arguments can serve the same purposes as teaching undergraduate classes. It aids the author in achieving clarity and focusing on key issues, as well as in testing ideas and genuine arguments. As Krugman notes in this volume, achieving simplicity is itself hard work, and far more valuable than most people, lay or expert, appreciate.

These kinds of efforts are more important than ever before. In an age of Trump and Murdoch, Zuckerberg and Thiel, a good deal of the money, and so the time and effort, of those capable of influencing political decision making is flowing towards zombie ideas, giving them an afterlife and energy well beyond any rational justification.

There are many particular arguments that Krugman takes on, providing, if you like, model arguments for the rest of us to wield at picnics, cocktail parties or other gatherings. The rest of us using these models and spreading the word is of the essence, so long as democracy remains a viable political option. Among the many:

  • That economic austerity is not the panacea for economic woes Thatcherites claimed it is.
  • That government budgets are not just like household budgets, requiring us to maintain a balance. In particular, that the debt phobia encouraged by the right in opposition, but almost always forgotten about when they gain power (e.g., Reagan’s defence spending and tax cuts, Trump’s enormous tax cut), is grounded in nothing but a misunderstanding of government debt and economics.
  • That Obamacare is not the thin edge of the wedge of State Socialism, nor is Denmark an authoritarian, State Socialist dictatorship in the same category as Cuba, Stalin’s Soviet Union and Venezuela. In general, Krugman takes down the idea that government protection of the public through regulation is inimical to any variety of capitalism. The social democracies (not socialist states) of Europe are long-standing counterexamples to this right wing “meme”.
  • That tax cuts have padded the pockets of the rich, but, no, the money hasn’t trickled down, since those pockets are nearly waterproof. In consequence, both income and wealth inequality in America have been rapidly growing (see, e.g., Pew Research Center, 2020, Trends in US Income and Wealth Inequality).
  • That Trump is no fluke: the right side of American politics has long been drifting, and more recently surging, towards outright racism, xenophobia and fear of the future; Trump is only a high-water mark in galvanizing hatred and fear, likely to be surpassed sooner rather than later.
  • The mass media (including Krugman’s own New York Times) have often, but not always, done the public a great disservice, not by spreading fake news (so much), but by often omitting stories that undermine the right, and, in a misbegotten idea of “fair play”, selecting instead unfounded zombie arguments to represent “the other side”.
  • That the Norquist dogma that the only good government employee is a dead employee is a phony way of claiming that the right way to deal with a Commons is to run your cattle all over it until it is destroyed, so that you personally profit first and foremost, while your neighbors starve (“The Tragedy of the Commons”). All public goods should be captured by the powerful, and the weak or late should die.

The last idea is a misreading of Darwin (as in Social Darwinism), but a true reading of Ayn Rand’s “Objectivism”, as presented in her novel The Fountainhead. She has had a pernicious effect over the last seventy years, as the wealthy (i.e., potential financial supporters of rightwing propaganda) have read her and found very convenient arguments seeming to justify their wealth and their use of that wealth to screw everyone else. Hers, and theirs, is a morally vacuous universe — the alternative universe discerned by Kellyanne Conway and Donald Trump.

All of these messages (and more) will be familiar to many who follow the political debates. Krugman beefs them up with pointed and clear arguments that will make sense to almost anyone willing to digest them.

To take an example, consider the long-standing arguments over a claimed need to avoid large budget deficits by the right, which have recently heated up over $3.5T spending bills and the self-imposed punishment of requiring that a debt ceiling be raised to avoid defaulting on US government debt. Krugman illustrates the hypocrisy of these claims with the case of “Flimflam man” Paul Ryan, who decried Obama’s deficit spending even while advocating a $4T tax cut for the rich. Ryan’s proposed spending cuts, potentially hurting the poor and middle class, would have left a $1.3T hole in the budget, about which he was entirely unconcerned. The net effect of his plan would have simply been a huge new financial burdening of the poor and middle class for the benefit of the wealthy. Complaints about such proposals are inevitably labeled class warfare by our insightful mass media, of course.

It is worth pointing out that deficits incurred through tax cuts for the wealthy are very different from deficits incurred through public spending on health, welfare and infrastructure. The former put money into the pockets of the rich, who have no need to spend it, and generally do not. The latter directly grows the economy. Necessarily, therefore, the multiplier effect, the number of dollars circulating in the economy as a result of the additional deficit, will be smaller following a tax cut than an equivalent public investment. Money stuffed in a bank is less active than that paying a contractor, who pays a subcontractor, etc. This simple point has stumped economists from George Mason University, but shouldn’t go unnoticed by intelligent lay people. 
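
To make the arithmetic concrete: in the textbook geometric-series model, a dollar injected into the economy generates 1/(1 - MPC) dollars of total spending, where MPC is the marginal propensity to consume of the recipients. A minimal sketch, with MPC values chosen purely for illustration:

```python
# Illustrative only: the standard geometric-series spending multiplier,
# total = 1 / (1 - MPC), with made-up MPC values for the two cases.

def multiplier(mpc: float) -> float:
    """Total spending generated per dollar injected, if every
    recipient re-spends the fraction `mpc` of what they receive."""
    return 1.0 / (1.0 - mpc)

# Hypothetical values: public investment is paid to contractors and
# workers who re-spend most of it; a tax cut for the wealthy is largely
# saved, so little of it recirculates.
print(multiplier(0.8))   # public spending: each dollar yields ~5 dollars
print(multiplier(0.2))   # tax cut to the rich: each dollar yields ~1.25
```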

Related to this are specious claims that household and national finances work the same way, so that when a government invests in infrastructure through deficit spending, for example, it is “stealing from future generations”. Since investments just are investments in the future, this idea is nonsensical. Not only are the budgetary time scales of nations and families radically different; a family will eventually have to repay its debts, or go bankrupt, whereas nations can and do carry debts indefinitely, so long as their economies have sufficient capacity. In particular, if an economy is growing at a rate higher than the interest rate on its debt, then the debt is likely to be manageable. Krugman points out that there are two distinct kinds of national economic conditions: normal conditions, when extra government spending can “crowd out” private borrowing by sending up interest rates, and depressed conditions, when reducing interest rates can fail to stimulate private borrowing due to a lack of confidence, and public spending may be needed to keep the economy from further tanking. Whereas many consider Obama’s spending during the Great Recession a failure, it was at least a half success; the fully successful public spending response of that time was better exemplified by Australia’s under a Labor government. The corresponding considerations for family budgets are non-existent.
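
The growth-versus-interest point can be made precise: with no new primary deficits, the debt-to-GDP ratio evolves as d(t+1) = d(t) × (1 + r)/(1 + g), shrinking whenever growth g exceeds the interest rate r. A minimal sketch, with rates invented purely for illustration:

```python
# Illustrative only: debt-to-GDP ratio under constant (made-up) interest
# rate r and growth rate g, with no new primary deficits. The ratio
# follows d_next = d * (1 + r) / (1 + g), shrinking whenever g > r.

def debt_to_gdp_path(d0: float, r: float, g: float, years: int) -> list[float]:
    path = [d0]
    for _ in range(years):
        path.append(path[-1] * (1 + r) / (1 + g))
    return path

# Hypothetical economy: 100% debt-to-GDP, 2% interest, 3% growth.
print(debt_to_gdp_path(1.00, 0.02, 0.03, 30)[-1])  # falls to ~0.75 in 30 years
```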

QAnoners, conspiracy theorists and others committed to zombie arguments are much like cultists. Given their strong commitment to seeing confirmations of their views and to reinterpreting disconfirmations as neutral or even positive, directly confronting them with sensible arguments is actually counterproductive. It is almost certainly better to have a friendly chat about the weather (as long as the weather isn’t extreme!). But engaging with those who are not yet committed, who may even be toying with zombie arguments, will be purposeful as long as science, civilization and democracy have any remaining signs of life. We should all thank Krugman for showing us how to engage.

Social Media Need to Be Regulated

Tags


– Kevin B Korb

21 February 2021

Social Media

When Tim Berners-Lee invented the worldwide web in 1990 (not the internet! The internet was effectively invented in the 1960s and first given form as the “ARPANET”, well before Berners-Lee or Al Gore became involved), a starry-eyed idea was prominent: that the internet, if simply left alone by politicians, would spread a love of knowledge and freedom around the world. Most of us, having experienced the rise of social media on the back of the web and the internet, have since been disabused of such notions, if we ever had them. While the web has made science, journalism and entertainment very much more widely available than ever before, it has notoriously also made available huge amounts of misinformation and disinformation, as well as private and semi-private places in which correspondents from around the world can cooperate in burnishing stories embodying them, spreading misunderstanding like a dark cloud over the world. Also notoriously, well-financed state organizations, such as St Petersburg’s IRA, can and do orchestrate disinformation campaigns using unsuspecting useful idiots. In short, much of the internet now operates as a kind of intellectual cesspool, one which no one is yet cleaning up.

Regulation

In keeping with this spirit of an unregulated wild west, social media have thus far escaped much of the burden of direct regulation. Google, Amazon, Facebook, Twitter, Netflix and others have captured huge amounts of personal data from their users and converted that information into huge revenue streams, in large part through capturing much of the worldwide advertising market. The regulatory environment that used to keep broadcasters and news organizations in check no longer applies. Of course, social media companies are corporations and come under existing regulations that apply to most corporations, such as tax and anti-trust laws. But their basis in new technology means that laws and regulations have, for the most part, yet to catch up with their behavior, influence and evasions. More than most traditional companies, for example, they have been highly adept at minimizing taxes, by having related companies provide services from low-tax countries and paying for them in high-tax countries, thus reducing profits where they hurt and maximizing them where they don’t. Of course, that’s an old game which manufacturing companies have played since well before Google or Twitter existed. However, moving manufacturing plants to low-tax districts is a good deal harder than moving around the nominal location of a web-based service, which can be provided from anywhere connected to the internet.

Not only do social media companies live in a low-regulation environment, there are only poor prospects for that changing in an economic world largely dominated by an excess of neoliberal ideology, which views regulation as tantamount to corporate murder. However, it has never been more necessary to oppose this view: the threats to individual liberties and privacy posed by technology, both the communications technology of the internet and the emergence of applied AI, have never been greater and will not be controllable without proper regulation.

The Fairness Doctrine

The Fairness Doctrine was a part of the US Federal Communications Commission’s (FCC) regulatory framework from 1949 until 1987. The FCC did (and does) have regulatory authority over broadcast licences and used to enforce the Fairness Doctrine, which was:

The doctrine that imposes affirmative responsibilities on a broadcaster to provide coverage of issues of public importance that is adequate and fairly reflects differing viewpoints. In fulfilling its fairness doctrine obligations, a broadcaster must provide free time for the presentation of opposing views if a paid sponsor is unavailable and must initiate programming on public issues if no one else seeks to do so (The Fairness Doctrine, 2008).

In a shorter form, the Fairness Doctrine required broadcasters to cover issues of public interest in a manner that was fair and balanced. This was not interpreted as providing equal time for all points of view, but some coverage for important issues, plus some coverage for legitimate alternative points of view to what broadcasters had already presented. The doctrine had teeth and led to the cancellation of multiple licences (e.g., Watson, 2015; Parker, 2008). In fact, its effectiveness in supporting fair and balanced debate is arguably the reason that Ronald Reagan and his Republican supporters scrapped the rule in 1987.

USA Today has done a "Fact Check" on whether the scrapping of the Fairness Doctrine gave rise to the polarization in the US media most clearly exemplified by Fox News. They conclude that this is untrue, since the FCC's jurisdiction was limited to broadcasters, and cable news was not considered a broadcaster. Their argument is defective, however.

USA Today acknowledges that the Fairness Doctrine was effective in getting individual licensees to provide balanced coverage of issues. But they ignore the fact that the scope of the FCC's jurisdiction was in dispute in the 1980s. Already in 1968 the Supreme Court acknowledged its jurisdiction over cable, despite cable not technically being a broadcast medium, on the grounds that otherwise the FCC would be unable to fulfill its intended role. Then in 1972 the FCC explicitly imposed the Fairness Doctrine on cable operators. During the 70s and 80s these rules were slowly wound back until, under Reagan-appointed commissioners, the FCC scrapped the rule, with Reagan vetoing a Congressional attempt to retain the Fairness Doctrine. In other words, before 1987 the Fairness Doctrine was successfully applied to cable, and Reagan terminated that, not just for cable, but also for broadcasting.

The result was that a cultural acceptance of news programs being balanced dissipated, in both cable and broadcasting. Fox News would never have been possible without these actions, despite their 100% phony slogan of being "Fair and Balanced" themselves. The USA Today "Fact Check" is well worthy of Three Pinocchios.

However, what I want to target here is Oreskes and Conway’s (2010) argument, in their otherwise excellent Merchants of Doubt, that the Fairness Doctrine did a great deal of damage to public discourse by making false equivalencing (“whataboutism”) a kind of norm, in counterpoint to the criticism that its scrapping has done damage by fostering polarization (see box above). They provide a detailed and well-argued account of how false equivalencing has undermined the public discussion, and so the public decision making, surrounding the harms of tobacco use, acid rain, pesticides, ozone degradation through CFCs and anthropogenic global warming. These issues are all importantly linked. They all have spawned devoted groups of deniers who fervently oppose regulatory measures for minimizing the harm caused by related industries — and these groups are largely overlapping, fueled by a common set of rightwing think tanks and common pools of money. (About the money, an especially revealing read is Nancy MacLean’s Democracy in Chains.) While it’s clear that the scrapping of the Fairness Doctrine has encouraged voices of extremism, especially those backed by Rupert Murdoch, it’s also arguable that the Fairness Doctrine itself gave cover to extremists demanding to be heard on these and other topics — because it’s only fair! — when by rights they would have had a much smaller voice, should volume be in any way proportional to the merits of the cases being advanced. In a nutshell, that is Oreskes and Conway’s argument.

For those who look to the potential value of regulation returning to the role of promoting effective and useful public discourse, and to the Fairness Doctrine specifically as a model for that, this is an argument that must be addressed. The primary weakness in it is Oreskes and Conway’s elision of two key features of the Fairness Doctrine (at any rate, on my interpretation and that of Wikipedia, 2021). It explicitly does not provide for equal time for differing viewpoints, but some reasonable, if lesser, amount of time for legitimate alternative viewpoints (see box below). It provides for no (mandatory) time for illegitimate points of view. The legitimacy of differing points of view is up for debate in many cases, of course, and, when the Fairness Doctrine was in existence, legitimacy was ultimately settled by the courts, which have always been a rational backstop for deciding the limits of public discourse. Where the claims of a faction have been thoroughly discredited by science — as they have been in all the cases discussed in Oreskes and Conway’s book, and indeed already were at the times of the debate over their regulation — there is no need under the Fairness Doctrine to give any time to those points of view, nor would the courts force the presentation of illegitimate nonsense, regardless of the funds behind it. The push for false equivalency is indeed a prominent tactic of deniers of science, but, if it drew upon the Fairness Doctrine before 1987, then it did so without justification and under false pretences.

I am not an expert in the law, let alone in FCC law, but there are clear indications in US Supreme Court findings supporting my lesser point that legitimacy to some (unspecified) standard was required of a thesis or point of view before the Fairness Doctrine could be invoked, which I quote below. Regardless, even if my interpretation is mistaken, the more important point is that it could be true. If we are to adopt some version of a Fairness Doctrine for use in regulating social media, it needs to be one which supports legitimacy and rules out the discredited. Here are some quotes pertinent to the lesser issue (note that allowing disproven, illegitimate points of view a significant voice is clearly not in the public interest):

Referring to legislation supporting the Fairness Doctrine, the US Supreme Court observed: `Senator Scott, another Senate manager [of the legislation], added that: "It is intended to encompass all legitimate areas of public importance which are controversial," not just politics.' (US Supreme Court, 1969)

`The statutory authority of the FCC to promulgate these regulations derives from the mandate to the "Commission from time to time, as public convenience, interest, or necessity requires" to promulgate "such rules and regulations and prescribe such restrictions and conditions . . . as may be necessary to carry out the provisions of this chapter . . . ." 47 U.S.C. 303 and 303 (r).[note 7] The Commission is specifically directed to consider the demands of the public interest... This mandate to the FCC to assure that broadcasters operate in the public interest is a broad one, a power "not niggardly but expansive."' (US Supreme Court, 1969)

The Fairness Doctrine is repeatedly described as supporting broadcasting on important public issues, which would rule out, for example, giving time to flat-earthers. For example, "[licensees have] assumed the obligation of presenting important public questions fairly and without bias." (US Supreme Court, 1969)

On the restrictions imposed by the Fairness Doctrine on broadcasters' freedom of choice: "Such restrictions have been upheld by this Court only when they were narrowly tailored to further a substantial governmental interest, such as ensuring adequate and balanced coverage of public issues." (US Supreme Court, 1984)

Given the widespread and growing flow of misinformation and disinformation on social media, the Fairness Doctrine, or rather some descendant of it that also incorporates protection of the public from promulgation of the illegitimate, could provide the justification and means of choking that flow, allowing social media to serve the truly useful purpose of supporting a “marketplace of ideas” instead of being a poisonous wet market spawning misinformation pandemics.

In short, regulations sharing purpose with the Fairness Doctrine are fair game for nations wanting to foster valuable public debate, which is part of the foundation of any democracy. Such regulation is needed for traditional broadcasters. The US Supreme Court extended the doctrine to cable networks on the grounds that the FCC could not fulfill its function if cable were excluded. On the very same grounds, but with even stronger force, such regulation needs to be applied to internet- and web-based social media, which have collectively outgrown both broadcasting and cable in their reach and importance for public debate.

The GDPR

The EU’s General Data Protection Regulation (GDPR) was introduced in 2016, establishing EU-wide principles for the protection of personal data, including rights to informed consent to the collection of data and the restriction of its use to the purposes for which consent was given. The GDPR also provides for enforcement powers, with each member country having a Data Protection Authority (DPA) to investigate and prosecute violations. Of course, those US tech companies which have so successfully “monetized” your data objected long and loud to the GDPR. Once it became operational, however, they went quiet, since, while there are compliance costs, compliance is in fact feasible and doesn’t stop them earning money in Europe. The rest of the world benefits from EU regulation in a minimal way, when companies are either obliged to obey the GDPR because of doing business with the EU or where they simply prefer a uniform way of doing business across jurisdictions.

The social media goals for data acquisition are largely to do with (mis)using the data for better targeting advertising, because that’s largely where the revenue comes from. If users voluntarily agree to such use, knowing the scope of the usage in advance, that’s fair enough. And that’s exactly what the GDPR allows, as well as what it limits data usage to. But the threats involving data acquisition are now hugely greater than simply making money. Facial recognition software is now routinely used by police. With much of the world playing catch up with Chinese-level camera surveillance, the potential for abuse of such information is enormous. Deep fake technology has the potential to weaponize personal data, directing much more effectively manipulative advertising at you, as well as using your data to spread more effective and manipulative misinformation about you and groups you belong to. Identity theft using deep fake videos will be much easier than that using earlier technology, for example. As another example, blackmail and extortion based on compromising information have long been lucrative activities for criminals; blackmail and extortion based on compromising deep fake misinformation will be orders of magnitude easier. Deep Fakes will not for long be limited to passive videos and audios; they will soon be extended to real-time interactive simulations of a targeted victim, providing even more persuasive power for fakery (Wang, 2019). With the near-term development of the “Internet of Things” — wiring all of our refrigerators, cars, air conditioning systems, etc. into the internet — the raw data on which Surveillance Capitalism operates will expand exponentially for the foreseeable future. The rise, and combination, of Big Data and Machine Learning using Big Data (e.g., Deep Fakery) portends parlous times on the net. Berners-Lee style enthusiasm for a “free range” on an internet wild west is no longer so much quaint as simply dangerous.

Real News

There is still news reporting and journalism in the world. There are both private and public organizations which put a good deal of effort and money into tracking what’s happening of interest around the world and presenting it to their audiences. This is so despite, for example, US newspaper advertising revenue having declined about 55% between the invention of the worldwide web and 2018 (per a Pew Center report), while over the same period US social media ad revenue grew from nothing to 3,571 times that of the newspapers. Since news organizations originate and curate their news and opinion reports, it is reasonable to hold them accountable for the content, for example by allowing some defamation actions against them. Social media, on the other hand, simply offer platforms for others to write or speak upon. Especially given the size of their memberships, it is both impossible and unreasonable to expect them to police the content of posts in the same way as news media. Or, at least, that is the common view.

Indeed, this is the rationale behind the now famous Section 230 of the US Telecommunications Act of 1996 (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”) Making social media responsible for content, when that content is put out by millions or even billions of people, would make social media unviable. Not even the AI of the foreseeable future would be able to police their posts effectively enough to avoid huge legal costs. It’s possible, of course, that the courts would find a balance between the financial and operational health of social media organizations and their legal opponents in such a new environment, with Section 230 removed, but there is no guarantee. The nominal reason that Donald Trump wanted Section 230 deleted was that social media were censoring rightwing voices. But the reality is, of course, that without the protection of Section 230 social media would be forced to censor even more heavily or else just be shut down.

What I am calling for here is an even heavier forced censorship, in addition to new privacy protections. Illegitimate claims pushed by Russia’s IRA, Q, climate deniers and big monied interests must lose their voices. They are diminishing, not enriching, public debate. Illegitimacy, exposed by science and confirmed by courts, must not be heard. How exactly to make such a mandate operational is an open question. There need to be independent authorities for initial judgments of fairness and legitimacy, in analogy to the GDPR’s Data Protection Authorities, where independence means a lack of dependency upon both the social media organizations and any nation’s politics. In view of the latter, unlike the DPAs, it would be best if the new authority were explicitly international. There are plenty of precedents in international law for such organizations. Successful examples of international regulatory bodies include the UN’s Universal Postal Union, which coordinates worldwide postal systems, the UN’s International Maritime Organization, which regulates international shipping, and the World Trade Organization, which regulates international trade.

While forcing social media to report matters fairly, including intervening in their users’ mis/disinformation, would be a new burden on them, it is nothing like the threat that revocation of Section 230 would raise. If social media are judged directly responsible for misinformation, through negligence perhaps, then penalties might be in order. But if a UN authority points out that some accounts are spreading disinformation, the existing practice of deleting those accounts would likely suffice to contain the matter. There is no need to threaten social media with punitive monetary damages. What we need is for public discourse to converge on civility, not suppression.

Free Speech

Open platforms resemble the public square, and the free discussion of politics that takes place on these platforms resembles an open marketplace of ideas. (Schweppe, 2019).  

What about free speech? If social media organizations are to be made and held responsible for providing something akin to a digital public square — a forum where any public issue may be discussed within the bounds of public decency and fairness — then won’t our right to free speech be infringed? On any reasonable understanding of these terms, the answer is “No”. The requirement of public decency has always been maintained for public squares. Fairness was introduced in the US in the mid-twentieth century, but appropriately. It was always at least implicitly a requirement of real public squares in any case: any citizen who pulled out a bullhorn and spoke over everyone else would have been hauled off for disturbing the peace.

Democracy depends upon free speech. And it is fitting that it is included in the very first amendment in the US Bill of Rights. But that right has never been absolute, nor can it be. The community decides what constraints to put upon it, but there is no community which allows an unfettered freedom to abuse, incite hatred, or endanger people. Somewhat older-style libertarianism asserted individual rights, including speech rights, up to, but not beyond, the boundaries of others’ rights (i.e., there is an obligation to “refrain from violating the rights of others”, van der Vossen, 2019). Since libertarianism recently married neoliberal fanaticism, however, it seems like all constraints are off: individual rights, for example, now extend to refusing to wear masks during a pandemic, that is, to a newly invented right to infect and kill other people. The logical extension of such libertarianism to all varieties of behavior would turn libertarian moral philosophy into Thrasymachus’s “might makes right” — that is, a full-throated cry to be evil.

Oreskes and Conway meticulously trace much of this neoliberal-libertarian fusion back to the monied interests fighting against regulation in the public interest of the lucrative businesses of fossil fuel extraction, agriculture, manufacturing and tobacco. They maximize profits by putting all the burden of their “externalities” — pollution — on the public. Neoliberal libertarianism is a con.

Social media tech companies are playing an extension of that con. They adopt internal policing practices to monitor and control content exactly and only insofar as it is necessary to stave off the kind of regulation I’m calling for here. To the extent that regulation can be forestalled or avoided, the burdens of social media’s externalities can be foisted onto the public. These externalities include the polarization of public debate, the domination of that debate by monied interests through targeted advertising and the Murdoch press, the creation and magnification of extremely damaging conspiracy theories, and the promotion of hate over cooperation. We cannot wait another generation to protect the public interest from these con artists.

Summary

Social media have grown from nothing to dominating public discussions around the world. They have evaded regulation so far very successfully in most cases. The growth in data collection, the rapid advance of AI technologies, the imminent flourishing of Deep Fake technology, the proven ability of interested parties to initiate and promote disinformation campaigns all point to an urgent and growing need for proper regulation of social media. The goals of such regulation should include at least the protection of personal data, the shackling of disinformation and the curbing of misinformation. The GDPR and the Fairness Doctrine provide some successful models — starting points — for considering such regulations. But the social media themselves are far richer and more far-reaching than the media of the past, spanning the worldwide web, so the regulations required must likewise be worldwide, preferably operating across borders as a neutral international body under international laws.

Acknowledgement

I thank anonymous reviewers for their helpful criticisms.

References

Fairness Doctrine (2008). West’s Encyclopedia of American Law, edition 2. Accessed February 7 2021 from https://legal-dictionary.thefreedictionary.com/Fairness+Doctrine

Parker, Everett (2008). The FCC & Censorship. Democracy Now. Accessed 7 February, 2021. https://www.democracynow.org/2008/3/6/the_fcc_censorship_legendary_media_activist

United States Supreme Court (1969). Red Lion Broadcasting Co. v. FCC, No. 717, 395 U.S. 367 (1969). Decided June 9, 1969.

United States Supreme Court (1984). Federal Communications Commission v. League of Women Voters of California et al., No. 82-912, 468 U.S. 364 (1984). Argued January 16, 1984; decided July 2, 1984.

Schweppe, J. (2019). Hawley Defends the Public Square. First Things. https://www.firstthings.com/web-exclusives/2019/06/hawley-defends-the-public-square. Accessed 16 Feb 2021.

van der Vossen, Bas (2019). “Libertarianism”, The Stanford Encyclopedia of Philosophy (Spring 2019 Edition), Edward N. Zalta (ed.). https://plato.stanford.edu/archives/spr2019/entries/libertarianism/. Accessed 16 February, 2021.

Wang, G.E. (2019). Humans in the Loop: The Design of Interactive AI Systems, Stanford University Human-Centered AI.

Watson, Roxanne. “Red Lion Broadcasting Co. v. FCC”. Encyclopedia Britannica, 11 Sep. 2014, https://www.britannica.com/event/Red-Lion-Broadcasting-Co-v-FCC. Accessed 7 February, 2021.

Wikipedia (2021). “FCC Fairness Doctrine“. Wikipedia. Accessed 21 February 2021.