The Reasoning Well

This will be a collection of hypothetical lectures that I might have delivered over the course of my academic career, but didn’t. The goal of this course of lectures is to introduce a broad array of tools, or ideas, or weapons for attacking reasoning problems, drawing on a wide range of disciplines. The lectures are meant to be introductory: readily understood by intelligent laypeople who have never studied those disciplines, and presenting general-purpose methods that would become available to anyone who does study them at an undergraduate level. So, this collection is envisioned as a kind of Swiss-army knife for your brain. While that is my intention, I do not pretend to cover all the major disciplines, but emphasize those which have had a substantial impact on my intellectual life.

I have taken inspiration from two prolific and excellent writers of articles for Scientific American, A.K. Dewdney and Martin Gardner. In partial consequence of their inspiration, these lectures are somewhat loosely connected; they are intended to be largely intelligible independently of one another, although cross-references will guide the reader through some kinds of dependencies. While this is not intended to be scholarly in the sense of detailing every historical line of thought behind these lectures, or attributing all details to their originators, I do indicate where readers might turn for additional information on these ideas.

The top-level topics I am covering (in tentative order) include: Philosophy, Bayesian Reasoning, Argumentation, Mathematics and Computer Science, Physical Thinking, Modeling and Simulation, Evolution Theory, Information, Ethics, Politics, Cognition and Inference. Posts will be “collected” using the tag #ReasoningWell.

Social Media Need to Be Regulated

– Kevin B Korb

21 February 2021

Social Media

When Tim Berners-Lee invented the worldwide web in 1990 (not the internet! The internet was effectively invented in the 1960s and first given form as the “ARPANET”, well before Berners-Lee or Al Gore became involved), a starry-eyed idea was very prominent: that the internet would spread a love of knowledge and freedom around the world, if only it were left alone by politicians. Most of us, having experienced the rise of social media on the back of the web and the internet, have since been disabused of such notions, if we ever had them. While the web has made science, journalism and entertainment very much more widely available than ever before, it has notoriously also made available huge amounts of misinformation and disinformation, as well as private and semi-private places in which correspondents from around the world can cooperate in burnishing stories built on them, spreading misunderstanding like a dark cloud over the world. Also notoriously, well-financed state organizations, such as St Petersburg’s IRA, can and do orchestrate disinformation campaigns using unsuspecting useful idiots. In short, much of the internet now operates as a kind of intellectual cesspool, one which no one is yet cleaning up.

Regulation

In keeping with this spirit of an unregulated wild west, social media have thus far escaped much of the burden of direct regulation. Google, Amazon, Facebook, Twitter, Netflix and others have captured huge amounts of personal data from their users and converted that information into huge revenue streams, in large part through capturing much of the worldwide advertising market. The regulatory environment that used to keep broadcasters and news organizations in check no longer applies. Of course, social media companies are corporations and come under existing regulations that apply to most corporations, such as tax and anti-trust laws. But their basis in new technology means that laws and regulations have, for the most part, yet to catch up with their behavior, influence and evasions. More than most traditional companies, for example, they have been highly adept at minimizing taxes, by having related companies provide services from low-tax countries and paying for them in high-tax countries, thus reducing profits where they hurt and maximizing them where they don’t. Of course, that’s an old game which manufacturing companies have played since well before Google or Twitter existed. However, moving manufacturing plants to low-tax districts is a good deal harder than moving around the nominal location of a web-based service, which can be provided from anywhere connected to the internet.

Not only do social media companies live in a lightly regulated environment; the prospects of that changing are poor in an economic world largely dominated by an excess of neoliberal ideology which views regulation as tantamount to corporate murder. However, it has never been more necessary to oppose this view: the threats to individual liberties and privacy posed by technology, both the communications technology of the internet and the emergence of applied AI, have never been greater and will not be controllable without proper regulation.

The Fairness Doctrine

The Fairness Doctrine was a part of the US Federal Communications Commission’s (FCC) regulatory framework from 1949 until 1987. The FCC did (and does) have regulatory authority over broadcast licences and used to enforce the Fairness Doctrine, which was:

The doctrine that imposes affirmative responsibilities on a broadcaster to provide coverage of issues of public importance that is adequate and fairly reflects differing viewpoints. In fulfilling its fairness doctrine obligations, a broadcaster must provide free time for the presentation of opposing views if a paid sponsor is unavailable and must initiate programming on public issues if no one else seeks to do so (The Fairness Doctrine, 2008).

In a shorter form, the Fairness Doctrine required broadcasters to cover issues of public interest in a manner that was fair and balanced. This was not interpreted as providing equal time for all points of view, but as requiring some coverage of important issues, plus some coverage of legitimate alternative points of view to what broadcasters had already presented. The doctrine had teeth and led to the cancellation of multiple licences (e.g., Watson, 2014; Parker, 2008). In fact, its effectiveness in supporting fair and balanced debate is arguably the reason that Ronald Reagan and his Republican supporters scrapped the rule in 1987.

USA Today has done a "Fact Check" on whether the scrapping of the Fairness Doctrine gave rise to the polarization in the US media most clearly exemplified by Fox News. They conclude that this is untrue, since the FCC's jurisdiction was limited to broadcasters, and cable news was not considered a broadcaster. Their argument is defective, however.

USA Today acknowledges that the Fairness Doctrine was effective in getting individual licensees to provide balanced coverage of issues. But they ignore the fact that the scope of the FCC’s jurisdiction was in dispute in the 1980s. Already in 1968 the Supreme Court acknowledged the FCC’s jurisdiction over cable, despite cable not technically being a broadcast medium, on the grounds that otherwise the FCC would be unable to fulfill its intended role. Then in 1972 the FCC explicitly imposed the Fairness Doctrine on cable operators. During the 70s and 80s these rules were slowly wound back, until, under Reagan-appointed commissioners, the FCC scrapped the rule, with Reagan vetoing a Congressional attempt to retain the Fairness Doctrine. In other words, before 1987 the Fairness Doctrine was successfully applied to cable, and Reagan terminated that, not just for cable, but also for broadcasting.

The result was that a cultural acceptance of news programs being balanced dissipated, in both cable and broadcasting. Fox News would never have been possible without these actions, despite their 100% phony slogan of being "Fair and Balanced" themselves. The USA Today "Fact Check" is well worthy of Three Pinocchios.

However, what I want to target here is Oreskes and Conway’s (2010) argument, in their otherwise excellent Merchants of Doubt, that the Fairness Doctrine did a great deal of damage to public discourse by making false equivalencing (“whataboutism”) a kind of norm, in counterpoint to the criticism that its scrapping has done damage by fostering polarization (see box above). They provide a detailed and well-argued account of how false equivalencing has undermined the public discussion, and so the public decision making, surrounding the harms of tobacco use, acid rain, pesticides, ozone degradation through CFCs and anthropogenic global warming. These issues are all importantly linked. They have all spawned devoted groups of deniers who fervently oppose regulatory measures for minimizing the harm caused by related industries — and these groups are largely overlapping, fueled by a common set of rightwing think tanks and common pools of money. (About the money, an especially revealing read is Nancy MacLean’s Democracy in Chains.) While it’s clear that the scrapping of the Fairness Doctrine has encouraged voices of extremism, especially those backed by Rupert Murdoch, it’s also arguable that the Fairness Doctrine itself gave cover to extremists demanding to be heard on these and other topics — because it’s only fair! — when by rights they would have had a much smaller voice, should volume be in any way proportional to the merits of the cases being advanced. In a nutshell, that is Oreskes and Conway’s argument.

For those who look to the potential value of regulation returning to the role of promoting effective and useful public discourse, and to the Fairness Doctrine specifically as a model for that, this is an argument that must be addressed. The primary weakness in it is Oreskes and Conway’s elision of two key features of the Fairness Doctrine (at any rate, on my interpretation and that of Wikipedia, 2021). First, it explicitly does not provide for equal time for differing viewpoints, but only for some reasonable, if lesser, amount of time for legitimate alternative viewpoints (see box below). Second, it provides for no (mandatory) time for illegitimate points of view. The legitimacy of differing points of view is up for debate in many cases, of course, and, when the Fairness Doctrine was in existence, legitimacy was ultimately settled by the courts, which have always been a rational backstop for deciding the limits of public discourse. Where the claims of a faction have been thoroughly discredited by science — as they have been in all the cases discussed in Oreskes and Conway’s book, and indeed already were at the times of the debate over their regulation — there is no need under the Fairness Doctrine to give any time to those points of view, nor would the courts force the presentation of illegitimate nonsense, regardless of the funds behind it. The push for false equivalency is indeed a prominent tactic of deniers of science, but, if it drew upon the Fairness Doctrine before 1987, then it did so without justification and under false pretences.

I am not an expert in the law, let alone in FCC law, but there are clear indications in US Supreme Court findings supporting my lesser point that legitimacy to some (unspecified) standard was required of a thesis or point of view before the Fairness Doctrine could be invoked, which I quote below. Regardless, even if my interpretation is mistaken, the more important point is that it could be true. If we are to adopt some version of a Fairness Doctrine for use in regulating social media, it needs to be one which supports legitimacy and rules out the discredited. Here are some quotes pertinent to the lesser issue (note that allowing disproven, illegitimate points of view a significant voice is clearly not in the public interest):

Referring to legislation supporting the Fairness Doctrine, the US Supreme Court observed: `Senator Scott, another Senate manager [of the legislation], added that: "It is intended to encompass all legitimate areas of public importance which are controversial," not just politics.' (US Supreme Court, 1969)

`The statutory authority of the FCC to promulgate these regulations derives from the mandate to the "Commission from time to time, as public convenience, interest, or necessity requires" to promulgate "such rules and regulations and prescribe such restrictions and conditions . . . as may be necessary to carry out the provisions of this chapter . . . ." 47 U.S.C. 303 and 303 (r).[note 7] The Commission is specifically directed to consider the demands of the public interest... This mandate to the FCC to assure that broadcasters operate in the public interest is a broad one, a power "not niggardly but expansive."' (US Supreme Court, 1969)

The Fairness Doctrine is repeatedly described as supporting broadcasting on important public issues, which would rule out, for example, giving time to flat-earthers. For example, "[licensees have] assumed the obligation of presenting important public questions fairly and without bias." (US Supreme Court, 1969)

On the restrictions imposed by the Fairness Doctrine on broadcasters' freedom of choice: "Such restrictions have been upheld by this Court only when they were narrowly tailored to further a substantial governmental interest, such as ensuring adequate and balanced coverage of public issues." (US Supreme Court, 1984)

Given the widespread and growing flow of misinformation and disinformation on social media, the Fairness Doctrine, or rather some descendant of it that also incorporates protection of the public from promulgation of the illegitimate, could provide the justification and means for choking that flow, allowing social media to serve the truly useful purpose of supporting a “marketplace of ideas” instead of being a poisonous wet market spawning misinformation pandemics.

In short, regulations sharing purpose with the Fairness Doctrine are fair game for nations wanting to foster valuable public debate, which is part of the foundation of any democracy. Such regulation is needed for traditional broadcasters. The US Supreme Court extended the doctrine to cable networks on the grounds that the FCC could not fulfill its function if cable were excluded. On the very same grounds, but with even stronger force, such regulation needs to be applied to internet- and web-based social media, which have collectively outgrown both broadcasting and cable in their reach and importance for public debate.

The GDPR

The EU’s General Data Protection Regulation (GDPR) was adopted in 2016 and came into force in 2018, establishing EU-wide principles for the protection of personal data, including rights to informed consent to the collection of data and the restriction of its use to the purposes for which consent was given. The GDPR also provides for enforcement powers, with each member country having a Data Protection Authority (DPA) to investigate and prosecute violations. Of course, those US tech companies which have so successfully “monetized” your data objected long and loud to the GDPR. Once it became operational, however, they went quiet, since, while there are compliance costs, compliance is in fact feasible and doesn’t stop them earning money in Europe. The rest of the world benefits from EU regulation in a minimal way, when companies are either obliged to obey the GDPR because they do business with the EU or simply prefer a uniform way of doing business across jurisdictions.

The social media goals for data acquisition are largely to do with (mis)using the data to target advertising better, because that’s largely where the revenue comes from. If users voluntarily agree to such use, knowing the scope of the usage in advance, that’s fair enough. And that’s exactly what the GDPR allows, as well as what it limits data usage to. But the threats involving data acquisition are now hugely greater than simply making money. Facial recognition software is now routinely used by police. With much of the world playing catch-up with Chinese-level camera surveillance, the potential for abuse of such information is enormous. Deep fake technology has the potential to weaponize personal data, directing much more effective and manipulative advertising at you, as well as using your data to spread more effective and manipulative misinformation about you and groups you belong to. Identity theft using deep fake videos will be much easier than identity theft using earlier technology, for example. As another example, blackmail and extortion based on compromising information have long been lucrative activities for criminals; blackmail and extortion based on compromising deep fake misinformation will be orders of magnitude easier. Deep fakes will not for long be limited to passive video and audio; they will soon be extended to real-time interactive simulations of a targeted victim, providing even more persuasive power for fakery (Wang, 2019). With the near-term development of the “Internet of Things” — wiring all of our refrigerators, cars, air conditioning systems, etc. into the internet — the raw data on which Surveillance Capitalism operates will expand exponentially for the foreseeable future. The rise, and combination, of Big Data and Machine Learning using Big Data (e.g., deep fakery) portends parlous times on the net. Berners-Lee style enthusiasm for a “free range” on an internet wild west is no longer so much quaint as simply dangerous.

Real News

There is still news reporting and journalism in the world. There are both private and public organizations which put a good deal of effort and money into tracking what’s happening of interest around the world and presenting it to their audiences. This is so despite, for example, US newspaper advertising revenue having declined by about 55% between the invention of the worldwide web and 2018 (per a Pew Research Center report), while over the same period US social media ad revenue grew from nothing to 3,571 times that of the newspapers (i.e., to 357,100% of newspaper ad revenue). Since news organizations originate and curate their news and opinion reports, it is reasonable to hold them accountable for the content, for example by allowing some defamation actions against them. Social media, on the other hand, simply offer platforms for others to write or speak upon. Especially given the size of their memberships, it is both impossible and unreasonable to expect them to police the content of posts in the same way as news media. Or, at least, that is the common view.

Indeed, this is the rationale behind the now famous Section 230 of the US Communications Decency Act, part of the Telecommunications Act of 1996 (“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). Making social media responsible for content, when that content is put out by millions or even billions of people, would make social media unviable. Not even the AI of the foreseeable future would be able to police their posts effectively enough to avoid huge legal costs. It’s possible, of course, that the courts would find a balance between the financial and operational health of social media organizations and their legal opponents in such a new environment, with Section 230 removed, but there is no guarantee. The nominal reason that Donald Trump wanted Section 230 deleted was that social media were censoring rightwing voices. But the reality is, of course, that without the protection of Section 230 social media would be forced to censor even more heavily or else just be shut down.

What I am calling for here is an even heavier forced censorship, in addition to new privacy protections. Illegitimate claims pushed by Russia’s IRA, QAnon, climate deniers and big monied interests must lose their voices. They are diminishing, not enriching, public debate. Illegitimacy, exposed by science and confirmed by courts, must not be heard. How exactly to make such a mandate operational is an open question. There need to be independent authorities for initial judgments of fairness and legitimacy, in analogy to the GDPR’s Data Protection Authorities, where independence refers to a lack of dependency upon both the social media organizations and a nation’s politics. In view of the latter, unlike the DPAs, it would be best if the new authority were explicitly international. There are plenty of precedents in international law for such organizations. Successful examples of international regulatory bodies include the UN’s Universal Postal Union, which coordinates worldwide postal systems, the UN’s International Maritime Organization, which regulates international shipping, and the World Trade Organization, which regulates international trade.

While forcing social media to report matters fairly, including intervening in their users’ mis/disinformation, would be a new burden on them, it is nothing like the threat that revoking Section 230 would pose. If social media are judged directly responsible for misinformation, perhaps through negligence, then penalties might be in order. But if a UN authority points out that some accounts are spreading disinformation, the existing practice of deleting those accounts would likely suffice to contain the matter. There is no need to threaten social media with punitive monetary damages. What we need is for public discourse to converge on civility, not suppression.

Free Speech

Open platforms resemble the public square, and the free discussion of politics that takes place on these platforms resembles an open marketplace of ideas. (Schweppe, 2019).  

What about free speech? If social media organizations are to be made and held responsible for providing something akin to a digital public square — a forum where any public issue may be discussed within the bounds of public decency and fairness — then won’t our right to free speech be infringed? On any reasonable understanding of these terms, the answer is “No”. The requirement of public decency has always been maintained for public squares. Fairness was introduced in the US in the mid-twentieth century, but appropriately. It was always at least implicitly a requirement of real public squares in any case: any citizen who pulled out a bullhorn and spoke over everyone else would have been hauled off for disturbing the peace.

Democracy depends upon free speech. And it is fitting that it is included in the very first amendment in the US Bill of Rights. But that right has never been absolute, nor can it be. The community decides what constraints to put upon it, but there is no community which allows an unfettered freedom to abuse, incite hatred, or endanger people. Somewhat older style libertarianism asserted individual rights, including speech rights, up to, but not beyond, the boundaries of others’ rights (i.e., there is an obligation to “refrain from violating the rights of others”, van der Vossen, 2019). Since libertarianism recently married neoliberal fanaticism, however, it seems like all constraints are off: individual rights, for example, now extend to refusing to wear masks during a pandemic, that is, to a newly invented right to infect and kill other people. The logical extension of such libertarianism to all varieties of behavior would turn libertarian moral philosophy into Thrasymachus’s “might makes right” — that is, a full-throated cry to be evil.

Oreskes and Conway meticulously trace much of this neoliberal-libertarian fusion back to the monied interests fighting against public-interest regulation of the lucrative businesses of fossil fuel extraction, agriculture, manufacturing and tobacco. Those businesses maximize profits by putting all the burden of their “externalities” — pollution — on the public. Neoliberal libertarianism is a con.

Social media tech companies are playing an extension of that con. They adopt internal policing practices to monitor and control content exactly and only insofar as is necessary to stave off the kind of regulation I’m calling for here. To the extent that regulation can be forestalled or avoided, the burdens of social media’s externalities can be foisted onto the public. These externalities include the polarization of public debate, the domination of that debate by monied interests through targeted advertising and the Murdoch press, the creation and magnification of extremely damaging conspiracy theories, and the promotion of hate over cooperation. We cannot wait another generation to protect the public interest from these con artists.

Summary

Social media have grown from nothing to dominating public discussions around the world. They have evaded regulation so far very successfully in most cases. The growth in data collection, the rapid advance of AI technologies, the imminent flourishing of Deep Fake technology, the proven ability of interested parties to initiate and promote disinformation campaigns all point to an urgent and growing need for proper regulation of social media. The goals of such regulation should include at least the protection of personal data, the shackling of disinformation and the curbing of misinformation. The GDPR and the Fairness Doctrine provide some successful models — starting points — for considering such regulations. But the social media themselves are far richer and more far-reaching than the media of the past, spanning the worldwide web, so the regulations required must likewise be worldwide, preferably operating across borders as a neutral international body under international laws.

Acknowledgement

I thank anonymous reviewers for their helpful criticisms.

References

Fairness Doctrine (2008). West’s Encyclopedia of American Law, edition 2. Accessed February 7 2021 from https://legal-dictionary.thefreedictionary.com/Fairness+Doctrine

Parker, Everett (2008). The FCC & Censorship. Democracy Now. Accessed 7 February, 2021. https://www.democracynow.org/2008/3/6/the_fcc_censorship_legendary_media_activist

United States Supreme Court (1969). Red Lion Broadcasting Co. v. FCC, No. 717, 395 U.S. 367; 89 S. Ct. 1794; 23 L. Ed. 2d 371; 1 Med. L. Rptr. 2053. Decided June 9, 1969.

United States Supreme Court (1984). Federal Communications Commission v. League of Women Voters of California et al., No. 82-912, 468 U.S. 364; 104 S. Ct. 3106; 82 L. Ed. 2d 278. Argued January 16, 1984; decided July 2, 1984.

Schweppe, J. (2019). Hawley Defends the Public Square. First Things. https://www.firstthings.com/web-exclusives/2019/06/hawley-defends-the-public-square. Accessed 16 February 2021.

van der Vossen, Bas (2019). “Libertarianism”, The Stanford Encyclopedia of Philosophy (Spring 2019 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2019/entries/libertarianism/>. Accessed 16 February 2021.

Wang, G.E. (2019). Humans in the Loop: The Design of Interactive AI Systems, Stanford University Human-Centered AI.

Watson, Roxanne (2014). “Red Lion Broadcasting Co. v. FCC”. Encyclopedia Britannica, 11 Sep. 2014. https://www.britannica.com/event/Red-Lion-Broadcasting-Co-v-FCC. Accessed 7 February 2021.

Wikipedia (2021). “FCC Fairness Doctrine”. Wikipedia. Accessed 21 February 2021.

Some Clarifying Notes on Covid-19

— Kevin B Korb, 8 Dec 2020 (Revised 12 Dec 2020)

Here I put together some of the key arguments for some of the important issues concerning the Covid-19 pandemic (alternatively, the SARS-CoV-2 pandemic, since that is the virus causing Covid-19). (Nota Bene: Much of this was written well before the date of publication. Rather than update the content, which would take some time, I now fill it out and publish as is, since I believe it still makes a contribution.)

The arguments themselves are mostly quite simple. The disagreements about the issues largely lie in disagreements about what the underlying facts are, with covid deniers mostly using unreliable sources of information (I’ve had unsourced YouTube videos offered as scientific evidence) or misunderstanding statistical reasoning or scientific methods. The fundamental solution, or mental repair work, has to do with learning methods of critical reasoning, properly checking sources, learning scientific and statistical methods, etc. I will point out specific problems of this kind, but readers may also wish to consult general guides to such matters. (I had written one, which Extinction Rebellion deleted without making any backup; I will recreate it someday.)

Some active commentators think that critical reasoning means rejecting anything “the authorities” might have to say, calling this “healthy skepticism”. In fact, it is unhealthy skepticism. Critical reasoning involves testing relevant propositions, neither rejecting them because you don’t like the source nor accepting them because you do. To be sure, critical reasoning is also compatible with some out-of-hand acceptance or rejection on the grounds of limited time and effort: no one can become an expert in every scientific field, which is why we have experts and why sciences and other social activities establish vetting and review processes to test and publicize their own standards for reliability. (If you’d like to learn about critical reasoning, the Stanford Encyclopedia of Philosophy article “Critical Thinking” is a good place to start.)

For my part, I give proper references, unlike conspiracy theorists.

It doesn’t help that both the CDC and WHO have lost a good deal of credibility on Covid-19. The CDC appears to have been captured by the Trump administration and is now taking political orders instead of (or more exactly, in addition to) promoting science-based policy. There are, of course, many good scientists remaining in the CDC, but their bosses are owned politically, with the result that pronouncements by CDC are more suspect than ever before. (See also CDC Director Redfield’s letter to governors of 17 August 2020, effectively announcing vaccines will be considered safe prior to Phase 3 trials.) The WHO depends upon financial support from member nations, with the result that their pronouncements are subject to influence by those nations. The silver lining to the US’s withdrawal from the WHO is that the US no longer has such influence.

Early doubts about masks expressed by US and WHO health authorities were partially motivated by political aims rather than science, such as Dr Anthony Fauci’s publicly stated goal of reserving better masks for health care workers. Unfortunately, he actually said, falsely, that there was no scientific evidence supporting the public use of masks. One of the major principles taught in public health education is to tell the public the truth: losing the public’s confidence is one of the sure-fire ways of losing the public health war. Dr Fauci violated the public trust. That does not, of course, mean that his subsequent statements are also false. For the most part, they appear to be accurate. Similarly, the WHO publicly repeated messages from the Chinese government uncritically, in particular claiming that there was no evidence that Covid-19 is transmitted between humans and also claiming there was no evidence that Covid-19 can be transmitted by pre- or asymptomatic people (e.g., “WHO Comments Breed Confusion Over Asymptomatic Spread of COVID-19“). Both claims were known to be false at the time. The WHO has, of course, retracted those comments, but only after much damage was done.

Where I reference the CDC or WHO below, I have found their comments to be well sourced in the case at hand; the reader can always follow those sources. I now briefly treat some of the more contentious public health claims about Covid-19.

Covid-19 is not a significantly harmful disease

Covid-19 is both highly infectious and, in comparison with the most common respiratory diseases, highly virulent. The median estimate of R0 (the expected number of people an infected person will infect without public health measures being put in place) from a review of numerous studies is 2.8 (Liu et al., 2020). That rate implies rapid exponential growth in the early stages of an epidemic; indeed, anything above 1.0 does that, though the larger the number, the more rapid the spread.
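
To make the arithmetic concrete, here is a minimal sketch in Python of what different reproduction numbers imply for unchecked spread. The starting case count and the number of generations are illustrative assumptions, not figures from Liu et al.

```python
# Toy illustration: expected new infections per generation of spread under
# different R0 values, assuming no public health measures and no immunity.
# The initial case count and number of generations are illustrative assumptions.

def expected_cases(r0, generations, initial_cases=10):
    """Expected new infections in each generation, given unchecked spread."""
    cases = [initial_cases]
    for _ in range(generations):
        cases.append(cases[-1] * r0)
    return cases

for r0 in (1.0, 1.4, 2.8):
    trajectory = expected_cases(r0, generations=10)
    print(f"R0 = {r0}: about {trajectory[-1]:,.0f} new cases in generation 10")
```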

Common flus have R0s ranging from 0.9 to 2.1 (Coburn, Wagner and Blower, 2009); while lower than the figure for SARS-CoV-2, such rates are generally enough to cause problems. The main relevant differences between these flus and Covid-19 are: there is considerable partial immunity to influenza through prior exposure in the population; there are vaccines to help protect vulnerable subpopulations; and the virulence, in both mortality and morbidity, is far less (multiple studies support an estimate of around 0.5% for the infection fatality rate of Covid-19; e.g., the meta-analysis by Meyerowitz-Katz and Merone, 2020).
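
To see what such differences in fatality rates mean at scale, here is some back-of-the-envelope arithmetic. The population, attack rate and flu figure below are illustrative assumptions of mine, not estimates from the studies just cited.

```python
# Back-of-the-envelope arithmetic: what a ~0.5% infection fatality rate means
# at the scale of an unchecked epidemic. All figures are illustrative assumptions.

population = 330_000_000   # roughly the US population
attack_rate = 0.60         # assumed fraction ultimately infected if spread is unchecked
ifr_covid = 0.005          # ~0.5% infection fatality rate for Covid-19 (see text)
ifr_flu = 0.001            # an assumed, commonly quoted order of magnitude for flu

infected = population * attack_rate
print(f"Expected deaths at a 0.5% IFR: {infected * ifr_covid:,.0f}")
print(f"Expected deaths at a 0.1% IFR: {infected * ifr_flu:,.0f}")
```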

Much of the outcry over public health measures is fueled by a denial that the mortality rate for Covid-19 is as high as some have claimed. The very first point to make is that this claim, even if true, would be insufficient to make the case that the common health measures, including wearing masks, are unnecessary: it entirely ignores the very large morbidity of the disease. To be sure, we do not yet know the long-term damage this disease does to survivors. But the simple-minded assumption that asymptomatic, or subclinical, victims bear no consequences (e.g., Trump claiming children are virtually immune) is, at best, willful ignorance. On the contrary, the growing weight of the evidence is that subclinical victims suffer significant health damage (see, e.g., “Asymptomatic COVID: Silent, but Maybe Not Harmless”).

Schools should be open since children do not suffer significant harm from Covid-19

A recent BMJ study (27 Aug 2020) reinforces others showing that children and young people have less severe acute Covid-19 than adults. Some early reports indicated that very few spreading events had been traced to schools; however, that has less evidentiary value than it might seem, since early on many schools were shut, and so could not have been sources of spreading events. Nevertheless, studies have shown that: when infected, children carry viral loads comparable to adults (Jones et al., 2020); and children appear to spread the disease and have been the source of superspreader events (Kelly, 2020). Furthermore, the studies showing a high morbidity load for Covid-19 sufferers, including those with few or no symptoms, do not bode well for the future health of infected children. The disease affects every major organ in the body in many cases (e.g., Robba et al., 2020). Imposing those burdens on the children, and on their families and communities, is not a step to be taken lightly. Of course, as with all public health measures, the choice is not automatic; there must be a weighing up of benefits and harms. If the testing and contact tracing regime in a region or country is sufficiently robust, then schools may well be the first institution worth opening up.

Economics trumps health

It is widely and loudly argued that the health of the economy, affecting everyone and especially the poor, should come before the health of the few and, in particular, the health of the old and frail. The welfare of the 0.5% should not be allowed to dictate the lives of the remaining 99.5%.

This argument is fundamentally ignorant. The first thing it ignores is the very heavy morbidity load imposed on society by unchecked Covid-19. Subclinical sufferers may continue to work, but only by way of spreading the disease to coworkers. Assuming that’s not what “open economy” advocates have in mind, subclinical victims will be out of the economy for the duration of their infectiousness only, one or two weeks. That’s around 40-50% of those infected. The rest will be out for the duration of their symptoms, ranging from a couple of weeks to many months. And there’s a very large tail of “long covid” patients who are incapacitated for at least months, perhaps years (Marshall, 2020). The “open economy” option implies allowing the spread of the disease and its consequent damage to the health of a very large percentage of the population, resulting in severe economic disruption for at least the duration of the pandemic.

The alternative view, one endorsed by many economists, is that caring for the health and well-being of society is the first step to sustaining, or rebuilding, the economy. A simulation study of the economics of pandemics by Barrett et al. (2011) directly supports this view. So too does the history of the 1918 Spanish Flu: a study of US cities shows that those which had more aggressive public health interventions, including masks and lockdowns, performed better economically (Hatchett, Mecher and Lipsitch, 2007).
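
The underlying logic is easy to see even in a toy model. The sketch below is not Barrett et al.’s model, nor Hatchett et al.’s analysis; it is a minimal SIR-style simulation under parameters I have assumed for illustration, showing how cutting transmission lowers the peak fraction of the population that is sick, and hence out of the workforce, at the same time.

```python
# Minimal discrete-time SIR sketch (illustrative assumptions only): how reducing
# transmission changes the peak fraction of the population sick at the same time.

def peak_infected(beta, gamma=0.1, days=365, i0=1e-4):
    """Run a simple SIR model and return the peak infected fraction."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(days):
        new_infections = beta * s * i   # transmission
        new_recoveries = gamma * i      # recovery (mean infectious period 1/gamma days)
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak = max(peak, i)
    return peak

unchecked = peak_infected(beta=0.28)   # roughly R0 = beta/gamma = 2.8
mitigated = peak_infected(beta=0.13)   # transmission reduced by masks, distancing, etc.
print(f"Peak simultaneously infected, unchecked:  {unchecked:.1%}")
print(f"Peak simultaneously infected, mitigated:  {mitigated:.1%}")
```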

Masks

Wearing masks is an individual choice, so the state has no right to mandate them

Assuming masks are effective in slowing a deadly pandemic, and that a deadly pandemic exists, this amounts to the claim that public health interests cannot override individual freedoms. Extreme libertarians might be enamored of such an argument, although libertarianism traditionally does not endorse the right to harm others, which violating mask mandates in these circumstances certainly can do. For example, the Stanford Encyclopedia of Philosophy article on Libertarianism states:

While people can be justifiably forced to do certain things (most obviously, to refrain from violating the rights of others) they cannot be coerced to serve the overall good of society, or even their own personal good.

Infecting others with a deadly disease violates others’ rights, of course. There is no accepted principle that absolutely asserts public health rights over individual rights, or vice versa. Society as a whole, through its institutions and public opinion, must adjudicate particular cases. But the claim of some that their individual freedoms always trump public health orders is simply stupid.

Masks are ineffective

Of course, mandating masks is pointless, an arbitrary and unnecessary restriction of people’s choices, if they have no effect on the disease. However, we have known for around one hundred years that they are effective in slowing and reducing the spread of many respiratory diseases such as Covid-19. The history of the 1918 flu epidemic includes an interesting episode concerning the response in San Francisco (see also Anti-Mask League of San Francisco). The short version is that mask wearing was accepted initially, and the first wave of the flu was bad enough, but after the rules were relaxed a second wave came, when resistance to masks was much greater. Partly as a result, the second wave was far more devastating.

More direct evidence has become available in the meantime. Respiratory diseases such as Covid-19 are spread in the first instance by air, through water droplets ranging from large to extremely small, the former generally being called “droplets” and the latter “aerosols”. There are notable differences between masks, with some being more effective than others. So, any claim that masks are helpful in reducing Covid-19 spread is most likely a restricted claim about a subset of possible masks. Finding that, say, a shawl or balaclava doesn’t help does not negate the claim.

Most masks have been shown to be effective at inhibiting the spread of larger droplets (see the CDC’s Considerations for Wearing Masks).

UCSF has an overview report on the effectiveness of masks that is worth reading, “Still Confused About Masks? Here’s the Science Behind How Face Masks Prevent Coronavirus.” To be sure, their update, indicating that valved masks are ineffective, is mistaken on multiple points. First, they (along with the CDC and various other health authorities) ignore the simple and obvious point that if you do effective “sink control”, eliminating transmission at the recipient end, then you eliminate transmission. It takes two to tango. Second, there is in fact no evidence that significant (infectious) amounts of SARS-CoV-2 escape through the valves; this is possible, but the evidence is thin. (Here is an interesting Salon article on this subject.) On other matters, however, the UCSF report is solid, in my opinion.

Masks are dangerous

Granted that masks are effective, some have claimed that they are dangerous. If real, the danger might counterbalance, or outweigh, the benefits, which would make existing mask advice and mandates suspect. On the face of it, the claim is absurd, since medical practitioners have been wearing masks without observed ill effect for over one hundred years. Beneath the face of it, the claim is still absurd. You can read this Fact Check put together by the BBC.

References

Calculated Surprises: A Philosophy of Computer Simulation — A Review

Johannes Lenhard, Calculated Surprises: A Philosophy of Computer Simulation, Oxford University Press, 2019, 256pp., £47.99 (hbk), ISBN 9780190873288.

Reviewed by Kevin B Korb, Monash University

First published in Notre Dame Philosophical Reviews.

In the early days of electronic computers there was considerable doubt about their value to society, including a debate about whether they contributed to economic productivity at all (Brynjolfsson, 1993). A common view was that they made computations faster, but that they were not going to contribute anything fundamentally new to society. They were glorified punchcard machines. Such was the thinking behind such infamous predictions as that attributed to the president of IBM in 1943 that there might be a world market for five computers. Of course, by now such views seem quaintly anachronistic. Quantum computers offer the potential for exponential increases in computing power – and “nothing more” – yet they are the only way hard encryption is ever likely to be broken. Computers and the internet are all the evidence needed that some qualitative differences are bridged by sufficiently many quantitative steps.

While these general questions are resolved, the debate still echoes elsewhere, including in the philosophy of simulation. Some insist that the role of scientific simulation demands a radical new epistemology, whereas others assert that simulation, while providing new techniques, changes nothing fundamental. This is the debate Johannes Lenhard engages with in Calculated Surprises.

Lenhard lands on the side of a new epistemology for simulation, while not landing very far from the divide. Rather than claiming, as some have, that there is some one special feature of simulation that demands this new epistemology, he berates those who focus primarily on this or that specific feature that appears special: the significant features are all special together. Per Lenhard, those significant features are: the ability to experiment with complex chaotic systems, the ability to visualize simulations and interact with them in real time, the plasticity of computer simulations (the ability to reconfigure them structurally and parametrically), and their opacity, that is, our difficulty in comprehending them. It is the unique combination of all these new features which forces us onto new epistemological terrain.

More exactly, Lenhard’s central thesis is that this combination means simulation is a new, transformative kind of mathematical modeling. To see what the unique combination produces, one needs to consider the full range of features, and therefore also the full range of kinds of computer simulation. Focusing only on a single type of simulation is as limiting as focusing on a single feature, per Lenhard. For example, much existing work exclusively considers models using difference equation approximations of dynamic systems, such as climate models. But conclusions reached on that basis are likely to overlook the rich diversity of modeling characterized by such methods as Cellular Automata (CA), discrete event simulation, Agent-Based Modeling (ABM), neural networks, Bayesian networks, etc.

Striking the right level of generality in treating simulation is important. Clearly, one can be either too specific or too general. In this moderate stance, Lenhard is surely right.

Plausibly, the class of simulations is bound together by family resemblance, rather than some clean set of necessary and sufficient conditions. It is a pity, then, that Lenhard simply rejects upfront any consideration of stochasticity as an important feature of simulation. He says, reasonably, that some sacrifices have to be made (“even Odysseus had to sacrifice six of his crew”). And it’s true that some simulations are strictly deterministic, not even using pseudo-indeterminism, such as many CA. But it’s also true that stochastic methods are key for most of the important simulations in science. Furthermore, they have opened up genuinely new varieties of investigation, including all the varieties of Monte Carlo estimation, and are essential for meaningful Artificial Life and ABMs. This is a major and unhappy omission in Lenhard’s study.

One of the aspects of simulation Lenhard definitely gets right is the iterative and exploratory nature of much of it, emphasizing the process of simulation modeling. The ease of performing simulation experiments, compared to the expense and difficulty of experiments in real life, doesn’t just allow for millions of experiments to be run per setup (routinely driving confidence intervals of estimated values to negligible sizes, assuming we’re talking about stochastic simulations), but also allows for using early simulation runs to inform the redesign or reconfiguration of later simulations, in an exploratory interaction of experimenter and experiment. Instead of simply relying on the outcomes of a few experimental setups to provide clear evidence for or against some theory driving the experiment, simulation allows for an iterative development of the model, with early experiments correcting the trajectory of the overall program. This underwrites much of the “autonomy” of simulation from theory. If a theory behind a simulation is incomplete, or simply in part mistaken, simulation experiments may nevertheless direct the research program, with feedback from real-world observations, expert opinion, or subsequent efforts to repair the theory. As Lenhard writes, in simulation “scientific ways of proceeding draw close to engineering” (p. 214).
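
The point about confidence intervals is worth making concrete. The following is a minimal sketch, with an entirely artificial stochastic “simulation” standing in for the real thing, of how the half-width of a 95% confidence interval shrinks roughly as one over the square root of the number of runs; the toy outcome distribution is an assumption for illustration only.

```python
# Minimal sketch: why cheap replication drives confidence intervals toward zero.
# A toy Gaussian outcome stands in for a single stochastic simulation run.

import random
import statistics

def one_run(rng):
    """A stand-in for one stochastic simulation run, returning a scalar outcome."""
    return rng.gauss(mu=10.0, sigma=2.0)

rng = random.Random(42)
for n in (100, 10_000, 1_000_000):
    outcomes = [one_run(rng) for _ in range(n)]
    mean = statistics.fmean(outcomes)
    half_width = 1.96 * statistics.stdev(outcomes) / n ** 0.5   # approx. 95% CI
    print(f"n = {n:>9,}: mean ≈ {mean:6.3f}, 95% CI half-width ≈ {half_width:.4f}")
```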

Indeed, Lenhard points out that simulation science requires an iterative development of models. In many cases, the theory implemented in a simulation is very far from being sufficient even to provide a qualitative prediction of a simulation’s behavior. In one example given, Landman’s simulation of the development of a gold nanowire contradicted the underlying theory; only after the simulation produced it was a physical experiment run which confirmed the phenomenon (Landman, 2001). The underlying physical theory inspired the simulation, but the simulation itself forced further theoretical development. This aspect of simulation science explodes the traditional strict distinction in the philosophy of science between contexts of discovery and justification. This distinction may be of analytic value, for example when identifying Bayesian priors and posteriors in an inductive inference, but in simulation practice the contexts themselves of discovery and justification are one and the same. To be sure, Lakatos’s concept of scientific research programs throwing up anomalies (Lakatos, 1982) and overcoming them already weakens the distinction, but in simulation science the necessity of combined discovery and justification is ever present.

In connection with iterative development, scientific simulation has converged even more closely with engineering, widely adopting the “Spiral Model” for agile software development, which is precisely an iterative development process set in opposition to one-shot, severe tests of theoretical (program) correctness, i.e., in opposition to monolithic software QA testing. The Spiral loops through: entertaining a new (small) requirement, designing and coding to fulfill the requirement, testing the hoped-for fulfillment, and then looping back for a new requirement. This equivalence of process makes good sense given that simulations are software programs. To better understand simulation methods as scientific processes, a deeper exploration of this equivalence than Lenhard provides would be useful.

The epistemic opacity of simulation models is one of their notable features Lenhard highlights. It is very common that human insights into how a simulation works are limited, a fact which elevates the importance of visualizations of the intermediate and final results of a simulation and of interacting with them. Lenhard points out that this raises issues for our understanding of “scientific understanding”. Understanding is traditionally construed as a kind of epistemic state achieved within the confines of a brain. Talk of an “extended mind” brings home the important point that books, pens, computers and the cloud significantly enhance the range of our understanding, allowing us to “download” information we haven’t bothered to memorize, for example. But there still needs to be a central agent who is the focal point of understanding, at least in common parlance. Lenhard promotes a more radical reconception: that it is something like the system-as-a-whole that does the understanding. The human-cum-simulation can perform experiments, make predictions, advance science, even while the human acting, or examined, solo has no internal comprehension of what the hell the simulation is actually doing. Since successful predictions, engineering feats, etc. are standard criteria of human understanding, we should happily attribute understanding to the humans in the simulation system satisfying these criteria. This seems to be much of the basis for Lenhard’s claim that simulation epistemology is a radical departure from existing scientific epistemologies, since it radically extends our understanding of scientific understanding. I’m afraid I fail to see the radical shift, however. Anything described as understanding attributed to humans within a successful simulation system can as easily be described as a successful simulation system lacking full human understanding of a theory behind it. Lenhard fails to elucidate any clear benefits from a shift in language here. On the other hand, there is at least one clear benefit to conservatism, namely that we maintain a clear contact with existing language usage. We are all interested in advancing both our understanding of nature and our ability to engineer within and with it; it’s not obviously helpful to conflate the two.

Epistemic opacity also has epistemological consequences that Lenhard does not fully explore. While he emphasizes, even in his title, that simulation experiments often surprise, he does not point out that where the surprises are independently confirmed, as with the Landman case above, this provides significant confirmatory support for the correctness of the simulation, on clear Bayesian grounds. For those interested in this kind of issue, Volker Grimm et al. (2005) provide a clear explanation, from the point of view of Agent-Based Models (ABMs) in ecology.

Another unexplored topic is supervenience theory. This is more general than computer simulation theory, to be sure, but it is connected to the opacity of simulations and complexity theory, and is especially acutely raised in the context of Artificial Life and Agent-Based Modeling, which provide not just an excuse but a pointed tool for considering supervenience. The very short form is: ABMs give rise to unexpected, difficult-to-explain high-level phenomena from possibly very simple low-level elements and their rules of operation (perhaps most famously in “boids” simulating bird flocks; Reynolds, 1987). This is known by a variety of names, such as emergence, supervenience, implementation and multiple realization. It is not inevitable that a philosophy of simulation should encompass a theory of supervenience, but it is probably desirable.
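
For readers who have not played with such models, here is a minimal, graphics-free sketch in the spirit of flocking models like Reynolds’ boids. It implements only a single local rule (align with nearby neighbours, plus noise), with parameters chosen by me for illustration rather than taken from Reynolds; yet a global pattern, the whole flock heading the same way, emerges without being written into any agent’s rule.

```python
# A graphics-free flocking sketch: one local alignment rule, global order emerges.
# All parameters are illustrative assumptions.

import cmath
import math
import random

N, WORLD, RADIUS, SPEED, NOISE, STEPS = 100, 10.0, 1.0, 0.05, 0.3, 200
rng = random.Random(0)
pos = [complex(rng.uniform(0, WORLD), rng.uniform(0, WORLD)) for _ in range(N)]
heading = [rng.uniform(-math.pi, math.pi) for _ in range(N)]

def polarization(headings):
    """1.0 when every agent points the same way, near 0 when headings are random."""
    return abs(sum(cmath.exp(1j * h) for h in headings)) / len(headings)

for step in range(STEPS + 1):
    if step % 50 == 0:
        print(f"step {step:3d}: polarization = {polarization(heading):.2f}")
    # Local rule: adopt the average heading of agents within RADIUS (self included),
    # perturbed by a little noise.
    heading = [cmath.phase(sum(cmath.exp(1j * heading[j]) for j in range(N)
                               if abs(pos[i] - pos[j]) < RADIUS))
               + rng.uniform(-NOISE, NOISE)
               for i in range(N)]
    # Move forward; positions wrap around the edges of the square world.
    pos = [complex((p.real + SPEED * math.cos(h)) % WORLD,
                   (p.imag + SPEED * math.sin(h)) % WORLD)
           for p, h in zip(pos, heading)]
```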

It seems to me that in some respects an even more radical discussion of computer simulation than that pursued by Lenhard is in order. Simulations are literally ubiquitous across the sciences. That is, I’m unaware of any scientific discipline which does not use them to advance knowledge. Simulation is in wide use in astronomy, biology, chemistry, physics, climate science, mathematics, data science, social science, economics – and in many cases it is a primary and essential experimental method. Lenhard, oddly, at least appears to disagree, since he states that their common use has only reached “amazingly” many sciences, rather than simply all of them. I’d be interested to know which sciences remain immune to their advantages.

Lenhard’s Calculated Surprises introduces many of the issues that have been central to the debates within the philosophy of simulation and adopts sensible positions on most. He, for example, points out that model validation grounds simulations in the real world, offering a methodological antidote to extremist epistemologies’ flights of fancy. Lenhard’s is a book that patient beginners to the philosophy of simulation can profit from and that specialists should certainly look at. My main complaint, aside from its fairly turgid style (its German origin is clear enough), is the many important and interesting sides to simulation science that are simply ignored. A lack of examination of the scope and limits of simulation is one of those.

The ubiquity of simulation now goes even well beyond the domains of science themselves. It has recently found interesting and potentially important applications in history (e.g., University of York, 2020). Brian Skyrms has famously applied simulations to the study of philosophically interesting game theory (e.g., Skyrms, 2004). Social epistemology has employed simulation for some time already to answer questions about how collective beliefs and decisions may be arrived at (Douven, 2009; Salerno et al., 2017). I have applied simulation to the evolution of ethics and utility (Mascaro et al., 2011; Korb et al., 2016) and to studies in the philosophy of evolution (Woodberry et al., 2009; Korb & Dorin, 2011). I am presently attempting to build a computational tool for illustrating and testing various philosophical theories of causation. There is every reason to bring simulation into the heart of philosophical questions and especially into the philosophy of science. It is even plausible to me that instruction in simulation programming may become as necessary to graduate philosophical training as it already is in many of the sciences.

Paul Thagard formulated the key idea first: if you have a methodological idea of any merit, you should be able to turn it into a working algorithm (Thagard, 1993). Since a great deal of philosophy is about method, a great deal of philosophy not only can be, but needs to be, algorithmized. Simulation provides not just a test of the methodological ideas, and not just a demonstration of their potential, but also a test of the clarity of and relations between the underlying concepts, a test of the philosophizing itself. Who cannot simulate, cannot understand.

References

  • Brynjolfsson, E. (1993). The productivity paradox of information technology. Communications of the ACM, 36(12), 66-77.
  • Douven, I. (2009). Introduction: Computer simulations in social epistemology. Episteme, 6(2), 107-109.
  • Grimm, V., Revilla, E., Berger, U., Jeltsch, F., Mooij, W. M., Railsback, S. F., Thulke, H., Weiner, J., Wiegand, T. & DeAngelis, D. L. (2005). Pattern-oriented modeling of agent-based complex systems: lessons from ecology. Science, 310(5750), 987-991.
  • Korb, K. B., Brumley, L., & Kopp, C. (2016, July). An empirical study of the co-evolution of utility and predictive ability. In 2016 IEEE Congress on Evolutionary Computation (CEC) (pp. 703-710). IEEE.
  • Korb, K. B., & Dorin, A. (2011). Evolution unbound: Releasing the arrow of complexity. Biology & Philosophy, 26(3), 317-338.
  • Lakatos, I. (1982). Philosophical Papers. Volume I: The Methodology of Scientific Research Programmes (edited by Worrall, J., & Currie, G.). Cambridge University Press.
  • Mascaro, S., Korb, K., Nicholson, A., & Woodberry, O. (2011). Evolving ethics: The new science of good and evil. Imprint Academic, UK.
  • Reynolds, C. W. (1987). Flocks, herds and schools: A distributed behavioral model. In Proceedings of the 14th Annual Conference on Computer Graphics and Interactive Techniques (pp. 25-34). ACM.
  • Salerno, J. M., Bottoms, B. L., & Peter-Hagene, L. C. (2017). Individual versus group decision making: Jurors’ reliance on central and peripheral information to evaluate expert testimony. PLoS ONE, 12(9).
  • Skyrms, B. (2004). The stag hunt and the evolution of social structure. Cambridge University Press.
  • Thagard, P. (1993). Computational philosophy of science. MIT Press.
  • Woodberry, O. G., Korb, K. B., & Nicholson, A. E. (2009). Testing punctuated equilibrium theory using evolutionary activity statistics. In Australian Conference on Artificial Life (pp. 86-95). Springer, Berlin, Heidelberg.

How Extreme Weather Events Are Attributed to Anthropogenic Global Warming

Tags

, ,

— Kevin B Korb, 27 Jan, 2020

Many politicians and media personalities continue to cast doubt on the idea that anthropogenic global warming (AGW) – the primary driver of current global climate change – could possibly be behind the growing frequency and severity of extreme weather events – the droughts, heatwaves, flooding, etc. that are every year breaking 100-year or greater historical records. This takes the form not just of a straightforward denial of climate change, but also of a more plausible denial of a connection between climate change and individual extreme events. Until five or ten years ago, many climate scientists themselves would have agreed in rejecting such a connection, and some journalists and politicians followed them then and continue to follow them now, even though climate scientists have stopped leading anyone in that direction (see the quotations below). Climate scientists have changed their minds because, in the meantime, a new subdiscipline has been developed specifically for attributing extreme weather events to AGW or to natural variation, depending upon the specifics of the case. While it may suit the political preferences of some commentators to ignore this development, it is not in the general interest. Here I present a brief and simple introduction to the main ideas in current work on attributing individual events to global warming. (An even simpler introduction to attribution science, emphasizing legal liability, can be found in Colman, 2019.)

Climate versus Weather

It has become a commonplace to point out that weather is not climate: climate refers to a long-term pattern of weather, not individual events. Usually the intended point is that some hot, or cold, weather is not evidence for, or against, anthropogenic global warming or significant climate change. That, however, is not true. Long-term patterns influence short-term events, whether or not the short-term events are classified as “extreme”. As one of the original researchers on weather attribution put it:

In practice, all we can ever observe directly is weather, meaning the actual trajectory of the system over the climate attractor during a limited period of time. Hence we can never be sure, with finite observations and imperfect models, of what the climate is or how it is changing. (Allen, 2003)

This actually describes the relation between theories (or models, or simulations) and evidence in science quite generally. Claims about the state of the climate are theoretical, rather than observational. Theoretical claims cannot be directly observed to be true or false; but they do give rise to predictions whose probabilities can be calculated and whose outcomes can be observed. The probabilities of those outcomes provide support for and against our theories. There is always some uncertainty, but the uncertainty pertaining to the earth’s rotation around the sun, the harmfulness of bleeding sick patients and the reality of AGW has been driven to near zero.

Certainly, larger and more frequent storms are one of the consequences that the climate models and climate scientists predict from global warming but you cannot attribute any particular storm to global warming, so let’s be quite clear about that. And the same scientists would agree with that. – Australian PM Malcolm Turnbull, 2016

It is problematic to directly attribute individual weather events, such as the current heatwave, to climate change because extreme weather events do occur as a part of natural climate variability. – Climate Change Minister Greg Combet, 2013

Scientists and the Bureau of Meteorology have repeatedly warned that individual events, be they the record cold temperatures and snow of late 2012 or heatwaves should not be attributed to any particular source. – Opposition spokesman for environment, Greg Hunt, 2013

I don’t think you can at all, at this stage, link individual events to [climate change]. – Australian Minister for Resources, Matt Canavan, 2019

You can’t blame individual weather events, such as the Queensland floods, on climate change. – Norelle Towie, journalist, 2011

Individual weather events may be too isolated to link directly to climate change. – Larry West, educational writer, 2017

The only special difficulty in understanding the relation between climate and weather lies in the high degree of variability in the weather; discerning the signal buried within the stochastic noise is non-trivial (aka “the detection problem”), which is one reason why climate science and data analysis should be relied upon instead of laypersons’ “gut feel”. Denialists often want to play this distinction both ways: when the weather is excessively hot, variability means there is no evidence for AGW; when the weather is excessively cold, that shows AGW is not real.

What matters is what the overall trends are, and the overall trends include increasing numbers of new record high temperatures being set and decreasing numbers of new record low temperatures being set at like locations and seasons, worldwide. For example, the ratio of new record highs to new record lows was about 2:1 in the US from 2000 to 2010 (Climate Nexus, 2019). More generally, we see this in the continuing phenomenon of the latest ten years including nine of the ten hottest years globally on record (NOAA “Global Climate Report 2018”).

The analogy with the arguments about tobacco and cancer is a strong one. For decades, tobacco companies claimed that since the connection between smoking and cancers is stochastic (probabilistic, uncertain), individual cases of cancer could never be attributed to smoking, so liability in individual cases could not be proven (aka “the attribution problem”). The tobacco companies lost that argument: specific means of causal attribution have been developed for smoking (e.g., “relative risk”, which is closely related to the methods discussed below for weather attribution; O’Keeffe et al., 2018). Likewise, there are now accepted methods of attributing weather events to global warming, which I will describe below.

Rejecting the connection between weather and climate, aside from often being an act of hypocrisy, implies a rejection of the connection between evidence and theory: ultimately, it leads to a rejection of science and scientific method.

Weather Severity Is Increasing

Logically before attributing extreme weather to human activity (“attribution”) comes finding that extreme weather is occurring more frequently than is natural (“detection”). Denialism regarding AGW of course extends to denialism of such increasing frequency of weather extremes. There are two main kinds of evidence of the worsening of weather worldwide.

Direct evidence includes straightforward measurements of weather. For example, measurements of the worldwide average temperature anomalies (departures from the mean temperature over some range of years) themselves have the extreme feature of showing ever hotter years, as noted above (NOAA “Global Climate Report 2018”). Simple statistics will report many of these kinds of measurements as exceedingly unlikely on the “null hypothesis” that the climate isn’t changing. More dramatic evidence comes in the form of increased frequency and intensity of flooding, droughts, etc. (IPCC AR5 WG2 Technical Summary 2014, Section A-1). There is considerable natural variability in such extremes, meaning there is some uncertainty about some types of extreme weather. The NOAA, for example, refuses to commit to there being any increased frequency or intensity of tropical storms; however, many other cases of extreme weather are clear and undisputed by scientists, as we shall see.

Indirect evidence includes claims and costs associated with insuring businesses, private properties and lives around the world. While the population size and the size of economies around the world have been increasing along with CO2 in the atmosphere – resulting in increased insurance exposure – the actual costs of natural disasters have increased at a rate greater than the simple economic increase would explain (see Figure 1). In consequence, for example, “many insurers are throwing out decades of outdated weather actuarial data and hiring teams of in-house climatologists, computer scientists and statisticians to redesign their risk models.” (Hoffman, 2018). The excess increase in costs, i.e., that beyond the underlying increase in the value of infrastructure and goods, can be attributed to climate change, as can the excess increase (beyond inflation) in the rates charged by insurers.



Figure 1. World-wide economic losses in billions of 2014 US$ due to natural disasters, insured and uninsured, with their five-year moving average (Holzheu, 2015). (Note that world GDP during this period has grown around 3% per year, which is much lower than the trend line above; World Bank, 2020.)

Another category of indirect argument for the increasing severity of weather comes from the theory of anthropogenic global warming itself. AGW implies a long-term shift in weather as the world heats, which in turn implies a succession of “new normals” – more extreme weather becoming normal until even more extreme weather replaces that norm – and hence a greater frequency of extreme weather events from the point of view of the old normal. In other words, everything that supports AGW, from validated general circulation models (GCMs) to observations, supports a general case that a variety of weather extremes is growing in frequency, intensity or both.

Is Anthropogenic Global Warming Real?

So, AGW implies an increase in many kinds of extreme weather; hence evidence for AGW also amounts to evidence that increases in extreme weather are real. That raises the question of AGW and the evidence for it. This article isn’t the best place to address this issue, so I’d simply like to remind people of a few basic points, in case, for example, you’re talking with someone rational:

  • Skepticism and denialism are not the same. Skeptics test claims to knowledge; denialists deny them. No (living) philosophical skeptic, for example, would refuse to look around before attempting to cross a busy road.
  • Science lives and breathes by skeptical challenges to received opinions. That’s not the same as holding all scientific propositions in equal contempt. Our technical civilization – almost everything about it – was generated by applying established science. It is not activists who are hypocrites for using trains, the internet and cars to spread their message; the hypocrites are those who use the same technology, but deny the science behind that technology.
  • Denialism requires adopting the belief that thousands of scientists from around the world are conspiring together to perpetrate a lie upon the public. David Grimes has an interesting probabilistic analysis of the longevity of unrevealed conspiracies (ones in which no insider has blabbed), estimating that a climate conspiracy of this kind would require about 400,000 participants and that its probability of enduring beyond a year or two is essentially zero (Grimes, 2016); a toy version of that kind of calculation is sketched after this list. In other words, the absence of any insider revealing such a conspiracy is compelling evidence that there is no such conspiracy.
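To see how numbers of that scale behave, here is a toy version of this kind of calculation in Python. It is not Grimes’s actual model: it simply assumes, purely for illustration, that each participant independently has some tiny chance per year of exposing the conspiracy.

```python
# Toy conspiracy-longevity calculation (not Grimes's model).
# Assumption for illustration: each insider independently leaks
# with a small fixed probability per year.

p_leak_per_person_per_year = 1e-4   # assumed tiny annual chance any one insider blabs
n_participants = 400_000            # the scale the text cites for a climate conspiracy

for years in (1, 2, 5):
    # Probability that *nobody* has leaked after the given number of years.
    p_secret = (1 - p_leak_per_person_per_year) ** (n_participants * years)
    print(f"P(still secret after {years} year(s)) = {p_secret:.3e}")
```

Even with an implausibly discreet leak rate, the probability of such a large conspiracy staying secret collapses to effectively zero within a year.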

The Detection of Extreme Weather

The first issue to consider here is what to count as extreme weather – effectively a “Detection Problem” of distinguishing the “signal” of climate change from the “noise” of natural variation. The usual answer is to identify some probability threshold such that a kind of event having that probability on the assumption of a “null hypothesis” of natural variation would count as extreme. Different researchers will identify different thresholds. We might take, for example, a 1% chance of occurrence in a time interval under “natural” conditions as a threshold (which is not quite the same as a 1-in-100 interval event, by the way). “Natural” here needs to mean the conditions which would prevail were AGW not happening; ordinarily the average pre-industrial climate is taken as describing those conditions, since the few hundred years since then is too short a time period for natural processes to have changed earth’s climate much, going on historical observations (chapter 4, Houghton, 2009). The cycle of ice ages works, for example, on periods of tens of thousands of years.
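The parenthetical distinction above can be made concrete with a little arithmetic. The Python sketch below uses an assumed 1%-per-year event to show that such an event is more likely than not to occur at least once over a century, even though its return period is 100 years.

```python
# Distinguishing a 1%-per-year chance from a "1-in-100-year" event.

p_annual = 0.01        # assumed probability of the event in any given year
years = 100

# Chance of at least one occurrence over a century.
p_at_least_once = 1 - (1 - p_annual) ** years
print(f"P(at least one occurrence in {years} years) = {p_at_least_once:.3f}")  # roughly 0.63

# Return period (mean waiting time between occurrences) for the same event.
return_period = 1 / p_annual
print(f"Return period = {return_period:.0f} years")
```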

Of course, a one percent event will happen eventually. But the additional idea here, which I elaborate upon below, is to compare the probability of an event happening under the assumption of natural variation to its probability assuming anthropogenic global warming. The latter probability I will write P(E|AGW) – the probability of event E assuming that AGW is known to be true; the former I will write P(E|¬AGW) – the probability of E assuming that AGW is known to be false. These kinds of probabilities (of events given some hypothesis) are called likelihoods in statistics. The likelihood ratio of interest is P(E|¬AGW)/P(E|AGW); the extent to which this ratio falls short of 1 (assuming it does) is the extent to which the occurrence of the extreme event supports the anthropogenic global warming hypothesis over the alternative no-warming (natural variation only) hypothesis. (The inverse ratio is also known as “relative risk” in, e.g., epidemiology, where analogous attribution studies are done.) A single such event may not make much of a difference to our opinion about global warming, but a glut of them, which is what we have seen over the decades, leaves adherence to a non-warming world hypothesis simply a manifestation of irrationality. As scientists are not, for the most part, irrational, that is exactly why the scientific consensus on global warming is so strong.
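As a rough illustration of how such a “glut” of events works on a rational agent, the following Python sketch updates the odds on AGW after a series of extreme events. The likelihoods are hypothetical (a 1% chance under natural variation, 10% under AGW, matching the worked example later in this article), the prior odds are set to 1:1 purely for illustration, and the events are treated as independent for simplicity.

```python
# Toy Bayesian odds update using the likelihood ratio P(E|not-AGW)/P(E|AGW).
# All numbers are illustrative assumptions, and events are treated as independent.

p_e_given_not_agw = 0.01    # assumed probability of the event under natural variation
p_e_given_agw = 0.10        # assumed probability of the same event under AGW

likelihood_ratio = p_e_given_not_agw / p_e_given_agw   # 0.1, i.e., well below 1

odds = 1.0                  # start from even (1:1) prior odds, for illustration only
for n_events in range(1, 6):
    odds /= likelihood_ratio            # each event multiplies the odds by 1/ratio
    prob_agw = odds / (1 + odds)
    print(f"after {n_events} such event(s): odds {odds:,.0f}:1, "
          f"P(AGW | evidence) = {prob_agw:.6f}")
```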

Varieties of Extreme Weather

There is a large variety of types of extreme weather which appear likely to have been the result of global warming. A recent IPCC study found the following changes at the global scale likely to very likely to have been caused by AGW: increases in the length and frequency of heat waves, increases in surface temperature extremes (both high and low), and increased frequency of floods. They express low confidence in observed increases in the intensity of tropical cyclones – which does not mean that they don’t believe it, but that the evidence, while supporting the claim, is not sufficiently compelling. On the other hand, there is no evidence for increased frequency of cyclones (Seneviratne et al., 2012). They don’t address other extremes, but the frequency (return period) and intensity of droughts, increases in ocean extreme temperatures, and increases in mean land and ocean temperatures have elsewhere been attributed to AGW (some references below).

In addition to measurements of extreme events, there is some theoretical basis for predicting their greater occurrence. For example, changes to ocean temperatures, and especially ice melt changing the density of water in the Arctic, are known to affect ocean currents, which, depending upon the degree of change, will likely have effects on weather patterns (e.g., NOAA, 2019). Again, warmer air is well known to hold more water vapor, leading to larger precipitation events, resulting in more floods (Coumou and Rahmstorf, 2012). Warmer water feeds cyclonic storms, likely increasing their intensity, if not their frequency (e.g., Zielinski, 2015).

Causal Attribution Theory

If we can agree that detection has occurred – that is, that weather extremes are increasing beyond what background variability would explain – then we need to move on to attribution, explaining that increase. There will always be some who claim that individual events that are “merely” probabilistically related to causes can never be explained in terms of those causes. For example, insurers and manufacturers and their spokespersons can often be heard to say such things as that, while asbestos (smoking, etc.) causes cancer – raising its probability – this individual case of cancer could never be safely attributed to the proposed cause. This stance is contradicted by both the theory and practice of causal attribution.

What is Causation?

The traditional philosophy of causation, going back arguably to Aristotle and certainly to David Hume, was a deterministic theory that attempted to find necessary and sufficient conditions for one event to be a cause of another. That analytic approach to philosophy was itself exemplified in Plato’s Socratic dialogues, which, ironically, were mostly dialogues showing the futility of trying to capture concepts in a tight set of necessary and sufficient conditions. Nevertheless, determinism dominated both philosophy and society at large for many centuries. It took until the rise of probabilistic theories within science, and especially quantum theory, before a deterministic understanding of causality began to lose its grip, first to the wholly philosophical movement of “probabilistic causality” and subsequently to the development of probabilistic artificial intelligence – Bayesian network technology – which subsumed probabilistic causal theories and applied computational modeling approaches to the philosophical theory of causality. Formal probabilistic theories of causal attribution have flowed out of this research. Defences of inaction, or of refusals to pay out insurance, that rely upon deterministic causality are at least a century out of date.

Describing the interventionist theory of causality based upon Bayesian network models is beyond my scope here. (If you are interested, see Judea Pearl’s Causality, James Woodward’s Making Things Happen, or my own [Handfield, Twardy, Korb and Oppy’s] “The Metaphysics of Causal Models,” Erkenntnis.)

Instead I will describe an accepted theory of causal attribution in climate science, which provides a clear criterion for ascribing extreme weather events to AGW.

Attribution Theory

The most widely used attribution method for extreme weather is the Fraction of Attributable Risk (FAR), which ascribes a portion of the responsibility for an event to AGW (Stott et al., 2004). It has a clear interpretation and justification, and it has the advantage of presenting attribution as a percentage of responsibility, similar to percentages of explained variation in statistics (as Sewall Wright, 1934, pioneered). That is, it can apportion, e.g., 80% of the responsibility for a flooding event to AGW and 20% to natural variation (¬AGW) in some particular case, which makes intuitive sense. So, I will primarily discuss FAR in reference to attributing specific events to AGW. It should be borne in mind, however, that there are alternative attribution methods with good claims to validity (including my own, currently in development, based upon Korb et al., 2011), as well as some criticism of FAR in the scientific literature. The methodological science of causal attribution is not as settled as the science of global warming more generally, but it is clear enough to support the claims of climate scientists that extreme weather is increasing due to climate change and that many individual cases can be directly attributed to that climate change.

FAR compares the probability of an extreme event E under AGW – i.e., P(E|AGW) – and under a “null hypothesis” of no global warming (the negation of AGW, i.e., ¬AGW), by taking their ratio in:

FAR = 1 – P(E|¬AGW)/P(E|AGW)

As is common in statistics, E is taken as the set of events of a certain extremity or greater. For example, if there is a day in some region, say Sydney, Australia, with a high temperature of 48.9°C, then E would be the set of days with highs ≥ 48.9°C.

Assuming there are no “acts of god”, any event can be 100% attributed to prior causes; that is, the maximum proportion of risk that could possibly be explained is 1. FAR splits that attribution into two parts, that reflecting AGW and that reflecting everything else, i.e., natural variation in a pre-industrial climate (e.g., Schaller et al., 2016); it does so by subtracting from the maximum 1 that proportion that can fairly be allocated to the null hypothesis. To take a simple example (see Figure 2), suppose we are talking about an event with a 1% chance, assuming no AGW; i.e., P(E|¬AGW) = 0.01. Suppose that in fact AGW has raised the chances ten-fold; that is, P(E|AGW) = 0.1. Then the proportion FAR attributes to the null hypothesis is 0.01/0.1 = 0.1, and the fraction FAR attributes to AGW is the remainder, namely 0.9. Since AGW has raised the probability of events of this particular extremity – of E’s kind – 10 fold, it indeed seems fair to attribute 10% of the causation to natural variation and 90% to unnatural variation.
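The worked example translates directly into code. The following minimal Python sketch simply encodes the FAR formula given above; the probabilities are the illustrative ones from the example, not outputs of any climate model.

```python
# Minimal sketch of the FAR calculation from the worked example in the text.

def fraction_of_attributable_risk(p_event_given_not_agw: float,
                                  p_event_given_agw: float) -> float:
    """FAR = 1 - P(E|not-AGW) / P(E|AGW)."""
    return 1 - p_event_given_not_agw / p_event_given_agw

# The example in the text: natural variation gives the event a 1% chance,
# AGW raises that to 10%.
far = fraction_of_attributable_risk(0.01, 0.10)
print(f"FAR = {far:.2f}")   # 0.90: 90% of the risk is attributed to AGW
```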

Figure 2. The region (event) E on the right shows where the natural distribution of weather (e.g., temperature) has no more than a 1% chance of producing that extreme an event, as determined by the left distribution. If AGW shifts the distribution to the right as in the figure, this extreme an event is 10 times more likely; that is, the area under the distribution to the right of the line is 10%, rather than 1%.

In order to compute FAR, we first need these probabilities of the extreme event. It’s natural to wonder where they come from, since we are talking about extreme events, and thus unlikely events that we wouldn’t have had the time and opportunity to measure. (To be sure, if good statistics have been collected historically, they may be used, especially for estimating P(E|¬AGW); some studies cited below have done that.) In fact, however, these likelihoods are derivable from the theories themselves, or simulations that represent such theories. GCMs are used to model anthropogenic global warming scenarios with different assumptions about the extent to which human economic behavior changes in the future, or fails to change. If we are interested in current extreme events, we can use such a model without any of the future scenarios: sampling the GCM model for the present will tell us how likely events of type E will be under current circumstances, with AGW. But we can also use the model to estimate P(E|¬AGW) by running it without the past human history of climate forcing, to see how likely E would be without humanity’s contributions. Since the GCMs are well validated, this is a perfectly good way to obtain the necessary likelihoods. (However, some caveats are raised below.)
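To give a concrete, if toy, picture of the procedure just described, the following Python sketch stands in for the pair of simulations: two normal distributions play the roles of the factual (forced) and counterfactual (natural-only) climates, and the two likelihoods are estimated by counting threshold exceedances in the samples. All the distributions, parameters and the threshold are illustrative assumptions, not values from any GCM.

```python
# Toy stand-in for GCM-based likelihood estimation (illustrative parameters only).

import numpy as np

rng = np.random.default_rng(0)
n_runs = 1_000_000
threshold = 43.0   # hypothetical extreme daily maximum temperature, in deg C

# Counterfactual (no anthropogenic forcing) and factual (with forcing) "climates".
natural = rng.normal(loc=35.0, scale=3.0, size=n_runs)
forced = rng.normal(loc=36.5, scale=3.0, size=n_runs)

# Estimate the likelihoods as the fraction of simulated days exceeding the threshold.
p_e_not_agw = np.mean(natural >= threshold)
p_e_agw = np.mean(forced >= threshold)
far = 1 - p_e_not_agw / p_e_agw

print(f"P(E|not-AGW) ~ {p_e_not_agw:.4f}, P(E|AGW) ~ {p_e_agw:.4f}, FAR ~ {far:.2f}")
```

In real attribution studies, the sampled quantities come from validated GCMs (often downscaled with regional models), but the logic of the estimate is the same.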

Since individual weather events occur in specific locations, or at least specific regions, in order to best estimate the probabilities of such events, GCMs are typically used in combination with regional weather models, which can achieve greater resolutions than GCMs alone. (GCMs can also be modified to have finer resolutions over a particular region.) Regional models have been improving more rapidly than GCMs in recent years, which is one reason that FAR attributions are becoming both more accurate and more common (e.g., Black et al., 2016).

Attribution of Individual Weather Events

Thus, there is a growing body of work attributing specific extreme weather events to anthropogenic global warming using FAR, which represents the “fraction” of the responsibility for an event of the given extremity, or greater, that can be attributed to anthropogenic global warming versus natural variation in a pre-industrial climate. Much of this work is being coordinated and publicized by the World Weather Attribution organization, which is a consortium of research organizations around the world.

I note some recent examples of FAR attributions (with confidence intervals for the estimates when reported up front). I do not intend to explain these specific attributions here; you can follow the links, which lead to summary reports explaining them. Those summaries cite the formal academic publications, which detail the methods and simulations used and the relevant statistics concerning the results.

  • Flooding from tropical storm Imelda in September, 2019: FAR of 0.505 (± 0.12) (World Weather Attribution, 2019). [Note: This was not reported as FAR, but in likelihoods; conversion to FAR is straightforward. Links are to specific reports, which themselves link to academic publications.]
  • Heatwave in Germany and the UK, July, 2019: FAR between 0.67 and 0.9. The FARs for other parts of Europe were higher (but not specified in their summary) (World Weather Attribution, 2019).
  • Heatwave in France, June, 2019: FAR about 0.9 (World Weather Attribution, 2019).
  • Extreme rainfall from UK storm Desmond, December, 2015: FAR of about 0.375 (World Weather Attribution, 2019).
  • Drought in the Western Cape of South Africa from 2015-2017, leading to a potential “Day Zero” for Cape Town, when the water would run out (averted by rainfall in June, 2018). This extreme drought had an estimated FAR of about 0.67 (World Weather Attribution, 2019).
  • Extreme rainfall events in New Zealand from 2007-2017: FARs ranging from 0.10 to 0.40 (± 0.20 in each case). These fractions accounted for NZ$140.5M in insured costs, which was computed by multiplying the FARs by the actual recorded costs (Noy, 2019); a sketch of that calculation appears after this list. [NB: uninsured and non-dollar costs are ignored.] The application of FARs by economists to compute responsibility for insurance costs is a new initiative.
  • The 2016 marine heatwave that caused severe bleaching of the Great Barrier Reef was estimated to have a FAR of about 0.95 for maximum temperature and about 0.99 for duration of the heatwave by Oliver et al. (2018). Their report is part of an (approximately) annual report in the Bulletin of the American Meteorological Society that reports on a prior year’s extreme weather events attributable to human factors, the latest of which is Herring et al. (2018), a collection of thirty reports on events of 2016.
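As promised above, here is a minimal sketch of the cost-attribution step used in the New Zealand study: the loss attributed to AGW for each event is simply its FAR multiplied by its recorded insured loss. The event names and figures below are invented for illustration and are not Noy’s actual data.

```python
# Illustrative cost attribution: attributed loss = FAR * recorded insured loss.
# Events and figures are hypothetical placeholders, not data from Noy (2019).

events = [
    {"name": "flood A", "far": 0.30, "insured_loss_nzd_m": 120.0},
    {"name": "storm B", "far": 0.15, "insured_loss_nzd_m": 80.0},
]

total_attributed = 0.0
for e in events:
    attributed = e["far"] * e["insured_loss_nzd_m"]
    total_attributed += attributed
    print(f'{e["name"]}: NZ${attributed:.1f}M attributed to AGW')

print(f"Total attributed to AGW: NZ${total_attributed:.1f}M")
```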

A recent review – re-examining FAR calculations via new simulations – of three dozen studies of droughts, heat waves, cold waves and precipitation events found numerous substantial FARs, ranging up to 0.99 in many cases, as well as a few with negative FARs, indicating events made less likely by anthropogenic global warming (Angélil et al., 2017).

The recent fires in Australia are being given a FAR analysis as I write this (see https://www.worldweatherattribution.org/bushfires-in-australia-2019-2020/). There is widespread agreement that the intensity of wildfires is increasing, and that the fire seasons in which they take place are lengthening. Fire simulation models capable of incorporating the observed consequences of climate change (droughts, heatwaves, etc.) are in use and can be applied to this kind of estimation, although that is not yet being done. The forthcoming analysis is limited to the precursors of the fires, drought and heat, though it also includes the Forest Fire Weather Index (from a personal communication).

Despite the apparent precision of some of these FAR estimates, they all come with confidence intervals, i.e., ranges within which we would expect to find the true value. They are not all recorded above, but those who wish to find them can go to the original sources.

Another kind of uncertainty applies to these estimates, concerning the variations in the distributions used to estimate FARs, such as those of Figure 2. Some suggest that AGW itself brings greater variation in the weather, fattening the tails of any probability distribution over weather events and so making extremes on both sides more likely. So, for example, Figure 2 might more properly show a flatter (fatter-tailed) distribution associated with AGW, in addition to its being shifted to the right of the distribution for ¬AGW. This, however, would not affect the appropriateness of a FAR estimation: whether the likelihood ratio for E is determined by a shift in mean, a change in the tails, or both, that ratio nevertheless correctly reports the probabilities of the observed weather event relative to each alternative.
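The point can be checked directly. In the Python sketch below (using purely illustrative normal distributions and parameters), FAR is computed in exactly the same way whether the AGW distribution is shifted, widened, or both; only the resulting tail probabilities differ.

```python
# FAR under a mean shift, a wider (fatter) distribution, or both.
# Distributions, parameters and the threshold are illustrative assumptions.

from scipy.stats import norm

threshold = 43.0
natural = norm(loc=35.0, scale=3.0)            # assumed pre-industrial distribution

scenarios = {
    "mean shift only": norm(loc=36.5, scale=3.0),
    "wider distribution only": norm(loc=35.0, scale=3.6),
    "shift + wider distribution": norm(loc=36.5, scale=3.6),
}

p_nat = natural.sf(threshold)                  # P(E | not-AGW), upper tail probability
for name, dist in scenarios.items():
    p_agw = dist.sf(threshold)                 # P(E | AGW) under this scenario
    far = 1 - p_nat / p_agw
    print(f"{name}: P(E|AGW) = {p_agw:.4f}, FAR = {far:.2f}")
```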

A potentially more pointed criticism is that GCMs may be more variable than the real weather (e.g., Osborn, 2004). Higher variability implies reaching extremes more often (on both ends of the scale). This is exacerbated if using multiple GCMs in an ensemble prediction. Such increased variance may apply more to simulations of AGW than to ¬AGW, although that’s unclear. In any case, this is a fair criticism and suggests somewhat greater uncertainty in FAR attributions than may have been reported. It would be best addressed by improved validation of GCMs, whether individually or in ensemble. The science of weather attribution is relatively new and not entirely settled; nevertheless, the methods and results in qualitative terms are well tested and clear. Many individual extreme weather events can be attributed largely to human-induced climate change.

The Future of Extreme Weather

The future of extreme weather appears to be spectacular. Given the overwhelming scientific evidence for the existence and continued development of anthropogenic global warming, and the clear evidence of tepid commitment or positive opposition to action from political leaders around the world, climate change is not just baked in for the next few decades, but is likely to be accelerating during that time. The baking period will be the few hundred years thereafter. Extreme pessimism, however, should be discouraged. It really does matter just when, and how, national, regional and global activities to reduce or reverse greenhouse gas emissions are undertaken. Our choices could well determine whether we face only severe difficulties, or instead global chaos, or perhaps civilizational collapse, or even human extinction. It is certain that earth’s biosphere will recover to some equilibrium eventually; it’s not so certain whether that equilibrium will include us.

For the short term, at least, climate science will continue to make progress, including improved understanding of weather attribution. Our current understanding is already good enough to give strong support to the case for action, as put in a recent excellent review of the state of the art in weather attribution circa 2015 or so:

Event attribution studies …  have shown clear evidence for human influence having increased the probability of many extremely warm seasonal temperatures and reduced the probability of extremely cold seasonal temperatures in many parts of the world. The evidence for human influence on the probability of extreme precipitation events, droughts, and storms is more mixed. (Stott et al., 2016)

As I’ve shown above, since that review, attribution research has been extended to show considerable human influence on many cases of extreme rainfall, droughts and storms. While uncertainties remain, as regional and dynamic circulation models continue to improve, it seems certain that extreme weather attributions to anthropogenic causes will become both more pervasive and more definite in the near future. These improvements will enable us to better target our efforts at adaptation, as well as better understand the moral and legal responsibility for the damage done by unabated emissions.

Despite well-funded and entrenched opposition, we must push ahead with parallel projects to reduce and reverse the drivers of climate change, and to adapt to its effects, in order to minimize the damage to our heirs, as well as to our future selves.

Acknowledgements

I would like to acknowledge the helpful comments of Steven Mascaro, Erik P Nyberg, Bruce Marcot, Lloyd Allison and anonymous reviewers to earlier versions of this article.

References

Allen, M. (2003). Liability for climate change. Nature, 421(6926), 891.

Angélil, O., Stone, D., Wehner, M., Paciorek, C. J., Krishnan, H. and Collins, W. (2017). An independent assessment of anthropogenic attribution statements for recent extreme temperature and rainfall events. Journal of Climate, 30, 5–16, doi:10.1175/JCLI-D-16-0077.1.

Bindoff, N.L., Stott, P.A., AchutaRao, K.M., Allen, M.R., Gillett, N.G., Gutzler, D., Hansingo, K., Hegerl, G., Hu, Y., Jain, S., Mokhov, I.I., Overland, J., Perlwitz, J., Sebbari, R., & Zhang, X. (2013). Detection and attribution of climate change: from global to regional climate. In Climate Change 2013 – The Physical Science Basis: Working Group I Contribution to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (T. Stocker, D. Qin, G.-K. Plattner et al., eds). Cambridge, UK: Cambridge University Press, 867-952.

Black, M. T., Karoly, D. J., Rosier, S. M., Dean, S. M., King, A. D., Massey, N. R., … & Otto, F. E. (2016). The weather@home regional climate modelling project for Australia and New Zealand. Geoscientific Model Development, 9(9).

Climate Central (2019). The 10 Hottest Global Years on Record, 6 Feb, 2019. https://www.climatecentral.org/gallery/graphics/the-10-hottest-global-years-on-record

Climate Nexus (2019). Record High Temps vs. Record Low Temps. https://www.climatesignals.org/data/record-high-temps-vs-record-low-temps, accessed 6 Dec, 2019.

Colman, Z (2019). The new science fossil fuel companies fear. Politico, 22 Oct 2019. https://www.politico.com/agenda/story/2019/10/22/attribution-science-fossil-fuels-climate-change-001290

Coumou, D., & Rahmstorf, S. (2012). A decade of weather extremes. Nature Climate Change, 2(7), 491.

Faust, E. & Steuer, M. (2019). Climate Change Increases Wildfire Risk in California, Munich Re. url: https://www.munichre.com/topics-online/en/climate-change-and-natural-disasters/climate-change/climate-change-has-increased-wildfire-risk.html. Accessed 20 Nov 2019.

Grimes, D. R. (2016). On the viability of conspiratorial beliefs. PLoS ONE, 11(1), e0147905.

Handfield, T., Twardy, C. R., Korb, K. B., & Oppy, G. (2008). The metaphysics of causal models. Erkenntnis, 68(2), 149-168.

Herring, S. C., Christidis, N., Hoell, A., Kossin, J. P., Schreck III, C. J., & Stott, P. A. (2018). Explaining extreme events of 2016 from a climate perspective. Bulletin of the American Meteorological Society, 99(1), S1-S157.

Hoffman, A.J. (2018). Rising insurance costs may convince Americans that climate change risks are real. The Conversation, 22 Oct, 2018. https://theconversation.com/rising-insurance-costs-may-convince-americans-that-climate-change-risks-are-real-105192

Holzheu, T (2015). Underinsurance of property risks: closing the gap. Swiss Re, Sigma No 5/2015.

Houghton, J. (2009). Global warming: the complete briefing. Cambridge University Press.

IPCC (2014). AR5 Climate Change 2014: Impacts, Adaptation, and Vulnerability.

Korb, K. B., Nyberg, E. P., & Hope, L. (2011). A new causal power theory. Illari, Russo and Williamson (Eds) Causality in the Sciences, Oxford University Press, pp. 628-652.

McAneney, J., Sandercock, B., Crompton, R., Mortlock, T., Musulin, R., Pielke Jr, R., & Gissing, A. (2019). Normalised insurance losses from Australian natural disasters: 1966–2017. Environmental Hazards, 1-20.

NOAA (2018). Global Climate Report – Annual 2018. url: https://www.ncdc.noaa.gov/sotc/global/201813. Accessed 20 Nov 2019.

NOAA (2019). How does sea ice affect global climate? National Ocean Service website, https://oceanservice.noaa.gov/facts/sea-ice-climate.html, 11/15/19.

Noy, I. (2019). The economic costs of extreme weather events caused by climate change. Australasian Bayesian Network Modelling Society Conference, Wellington, New Zealand, 13-14 November, 2019.

Oliver, E. C., Perkins-Kirkpatrick, S. E., Holbrook, N. J., & Bindoff, N. L. (2018). Anthropogenic and natural influences on record 2016 marine heat waves. Bulletin of the American Meteorological Society, 99(1), S44-S48.

O’Keeffe, L. M., Taylor, G., Huxley, R. R., Mitchell, P., Woodward, M., & Peters, S. A. (2018). Smoking as a risk factor for lung cancer in women and men: a systematic review and meta-analysis. BMJ Open, 8(10), https://bmjopen.bmj.com/content/8/10/e021611.

Osborn, T. J. (2004). Simulating the winter North Atlantic Oscillation: the roles of internal variability and greenhouse gas forcing. Climate Dynamics, 22(6-7), 605-623.

Pearl, J. (2000). Causality: Models, Reasoning, and Inference. Cambridge University Press.

Schaller, N., Kay, A. L., Lamb, R., Massey, N. R., Van Oldenborgh, G. J., Otto, F. E., Sparrow, S. N., Vautard, R., Yiou, P., Ashpole, I., Bowery, A., Crooks, S. M., Haustein, K., Huntingford, C., Ingram, W. J., Jones, R. G., Legg, T., Miller, J., Skeggs, J., Wallom, D., Weisheimer, A., Wilson, S., Stott, P. A., Allen, M. R. (2016). Human influence on climate in the 2014 southern England winter floods and their impacts. Nature Climate Change, 6(6), 627.

Seneviratne, S.I., N. Nicholls, D. Easterling, C.M. Goodess, S. Kanae, J. Kossin, Y. Luo, J. Marengo, K. McInnes, M. Rahimi, M. Reichstein, A. Sorteberg, C. Vera, and X. Zhang, 2012: Changes in climate extremes and their impacts on the natural physical environment. In: Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation [Field, C.B., V. Barros, T.F. Stocker, D. Qin, D.J. Dokken, K.L. Ebi, M.D. Mastrandrea, K.J. Mach, G.-K. Plattner, S.K. Allen, M. Tignor, and P.M. Midgley (eds.)]. A Special Report of Working Groups I and II of the Intergovernmental Panel on Climate Change (IPCC). Cambridge University Press, Cambridge, UK, and New York, NY, USA, pp. 109-230.

Stott, P. A., Christidis, N., Otto, F. E., Sun, Y., Vanderlinden, J. P., Van Oldenborgh, G. J., … & Zwiers, F. W. (2016). Attribution of extreme weather and climate-related events. Wiley Interdisciplinary Reviews: Climate Change, 7(1), 23-41.

Stott, P. A., Stone, D. A., & Allen, M. R. (2004). Human contribution to the European heatwave of 2003. Nature, 432(7017), 610.

US Global Change Research Program (2017). Climate Science Special Report: Fourth National Climate Assessment, Volume I  [Wuebbles, D.J., D.W. Fahey, K.A. Hibbard, D.J. Dokken, B.C. Stewart, and T.K. Maycock (eds.)] doi: 10.7930/J0J964J6.

Wallace, C. S. (2005). Statistical and Inductive Inference by Minimum Message Length. Springer Verlag.

Woodward, J. (2005). Making Things Happen: A Theory of Causal Explanation. Oxford University Press.

World Bank (2020). Data: GDP annual growth. https://data.worldbank.org/indicator/NY.GDP.MKTP.KD.ZG, accessed 18 Jan 2020.

World Weather Attribution (2019). World Weather Attribution. http://www.worldweatherattribution.org/.

Wright, S. (1934). The method of path coefficients. The Annals of Mathematical Statistics, 5(3), 161-215.

Zielinski, S. (2015). Warmer Waters Are Making Pacific Typhoons Stronger. Smithsonian Magazine.

The Green New Deal

Tags

, ,

We have known collectively the dangers posed by the combination of modern civilization and human population growth since at least the 1960s. During that decade Paul Ehrlich published The Population Bomb (1968), which carried forward Thomas Malthus’s argument from the turn of the 19th century that exponential population growth models apply as much to humans as to other life forms, and that relaxing the natural limits on resources and their utilization would provide only temporary material comforts, soon overwhelmed by an expanding population. In The Limits to Growth (1972) the Club of Rome modelers expanded on these ideas by developing and testing a computer simulation of human population and economic activity incorporating natural resources and pollution. While their model was crude by recent standards, it did behave in qualitatively sensible ways. The story it told was that however you varied the inputs within anything like reasonable bounds (extending resource limits, say, or slowing population growth rates), the model showed a collapse of the population, through impossible levels of pollution for example, sometime during the 21st century. Neither of these pivotal books dealt with anthropogenic global warming explicitly, but the message was clear: unfettered population and economic growth, at least on the models of both that we have so far adopted, will be a disaster for our species and our environment. Nothing much has changed since.

Rep Alexandria Ocasio-Cortez and Sen Ed Markey’s Green New Deal (GND) seeks genuine change. It is modeled on Franklin Delano Roosevelt’s New Deal in the sense that it aspires to the kind of radical change that FDR’s New Deal brought to America for generations. The name also evokes the mobilization behind the World War II effort that followed shortly thereafter. The point is that radical mobilization efforts are eminently possible when the threat to a nation is existential, and human-driven climate change certainly poses an existential threat. The GND, if passed, would be a clear, resounding dual statement of intent: first, the intent to counter the threat to civilization posed by the combination of human population growth and current economic activities; second, a more local statement of intent to achieve economic and political justice for American minorities.

The bill is strictly aspirational, calling out the urgency of the situation rather than laying out a specific pathway. Its stated goals are not of a kind that could lead to direct actions. The GND shares with Extinction Rebellion a common view of the urgency of the situation and the optimism that, if there is a common will to respond, we can do something worthwhile to diminish the worst outcomes of anthropogenic global warming.

Some of the Main Goals laid out in the GND are:

  • Guaranteed jobs with family-sustaining wages for all people of the US
  • Maximizing the energy efficiency of all existing buildings in the US
  • Moving to electric cars and high-speed rail and away from air transport
  • Universal health care
  • Moving to sustainable farming
  • Moving to 100% renewable energy

Of course, the introduction of the GND has provoked a vigorous response from opponents. The most prominent objection, perhaps, is that it would be too expensive to be practicable. Certainly, refurbishing every building in America to maximize energy efficiency can’t be cheap. The obvious rebuttal, however, has been voiced by Greta Thunberg and other young activists: inaction will be far more expensive than action. Indeed, the GND in its initial “Whereas” clauses states that inaction will lead to $500B in lost annual economic output in the US by 2100. Such a sum applied now, on the other hand, would clearly make a strong start on doing something about climate change. Aside from that, any dollar estimate of harm is never going to be a worst-case estimate, since severe climate change is fully capable not just of direct economic impacts, but also of spurring warfare and social collapse, where the real costs entirely outstrip any speculative dollar valuations of harm. Those on the right who harp about the expense are simply not yet prepared to think clearly about the consequences of the choices in front of us. (In my view, it is well past time for decision-making to bypass their obstruction.)

The whole point of the GND is that what is practicable depends upon the context, and what is practicable in times of war is of an entirely different scale to what is practicable in normal times. We are not in normal times. This is a time of war, and our enemy is us.


Interview on Machine Understanding

Tags

, , ,

Produced by Adam Ford:

If you are interested, some relevant references are:

Post Hoc Ergo Propter Hoc, or Correlation Implies Causation

Tags

, , , , ,

Wikipedia confidently explains this in its first sentence for this entry: “Post hoc ergo propter hoc (Latin: “after this, therefore because of this”) is a logical fallacy that states ‘Since event Y followed event X, event Y must have been caused by event X.’” This so-called fallacy is curious for a number of reasons. Taken literally it is a fallacy that is almost never committed, at least relative to the opportunities to commit it. There are (literally, perhaps) uncountably many events succeeding other events where no one does, nor would, invoke causality. Tides followed by stock market changes, cloud formation followed by earthquakes, and so on and so on. People do attribute causality to successive events of course: bumping a glass causing it to spill, slipping on a kitchen floor followed by a bump to the head. In fact, that’s how as infants we learn to get about in the world. Generally speaking, it is not merely temporal proximity that leads us to infer a causal relation. Other factors, including spatial proximity and the ability to recreate the succession under some range of circumstances, figure prominently in our causal attributions.

Of course, people also make mistakes with this kind of inference. In the early 1980s AIDS was attributed by some specifically to homosexual behavior. The two were correlated in some western countries, but the attribution was more a matter of ignorance of the earlier spread of the disease in Africa than of fallacious reasoning. Or, anti-vaxxers infer a causal relation between vaccines and autism. In that case, there is not even a correlation to be explained, but still the supposed conjunction of the two is meant to confer support on the causal claim. The mistake here is likely due to some array of cognitive problems, including confirmation bias and, more generally, conspiratorial reasoning (which I will address on another occasion). But mistakes with any type of inductive reasoning, which inference to a causal relation certainly is, are inevitable. If you simply must avoid making mistakes, become a mathematician (where, at least, you likely won’t publish them!). The very idea of fallacies is misbegotten: there are (almost) no kinds of inference which are faulty because of their logical form alone (see my “Bayesian Informal Logic and Fallacy”). What makes these examples of post hoc wrong is particular to the examples themselves, not their form.

The more general complaint hereabouts is that “correlation doesn’t imply causation”, and it is accordingly more commonly abused than the objection to post hoc reasoning. Any number of deniers have appealed to it as a supposed fallacy in debates over gun control or the anthropogenic origins of global warming. It is well past time for methodologists to put down this kind of cognitive crime.

This supposed disconnect between correlation and causation has been the orthodox statistician’s mantra at least since Sir Ronald Fisher (“If we are studying a phenomenon with no prior knowledge of its causal structure, then calculations of total or partial correlations will not advance us one step” [Fisher, Statistical Methods for Research Workers, 1925] – a statement thoroughly debunked by the many subsequent decades of causal inference based on observational data alone). While there are more circumspect readings of this slogan than proscribing any causal inference from evidence of correlation, that overly ambitious reading is quite common and does much harm. It is unsupportable by any statistical or methodological considerations.

The key to seeing through the appearance of sobriety in the mantra is Hans Reichenbach’s Principle of the Common Cause (in his The Direction of Time, 1956). Reichenbach argued that any correlation between A and B must be explained in one of three ways: the correlation is spurious and will disappear upon further examination; A and B are causally related, with one a direct or indirect cause of the other, or both common effects of a common cause (or ancestor); or the correlation is the result of magic. The last he ruled out as contrary to science.

Of course, apparent associations are often spurious, the result of noise in measurement or small samples. The “crisis of replicability” widely discussed now in academic psychology is largely based upon tests of low power, i.e., small samples. If a correlation doesn’t exist, it doesn’t need to be explained.

It’s also true that an enduring correlation between A and B is often the result of some causal connection other than A directly causing B. For example, B may directly cause A, or there may be a convoluted chain of causes between them. Or, again, they may have a common cause, directly or remotely. The latter case is often called “confounding” and dismissed as showing no causal relation between A and B. But it is confounding only if the common cause cannot be located (and, for example, held constant) and what we really want to know is, say, how much any causal chain from A to B explains B’s state. Finding a common cause that explains the correlation between A and B is just as much a causal discovery as any other.
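Reichenbach’s point is easy to demonstrate by simulation. In the Python sketch below (with invented parameters), A and B are each generated from a common cause C plus independent noise: they are strongly correlated overall, even though neither causes the other, and the correlation disappears when C is (nearly) held constant.

```python
# Toy common-cause simulation: C causes both A and B; A and B do not cause each other.
# All coefficients and noise levels are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(42)
n = 100_000

c = rng.normal(size=n)                        # common cause
a = 0.8 * c + rng.normal(scale=0.5, size=n)   # A depends only on C plus noise
b = 0.8 * c + rng.normal(scale=0.5, size=n)   # B depends only on C plus noise

print("corr(A, B) overall:", round(np.corrcoef(a, b)[0, 1], 3))   # roughly 0.7

# "Hold C constant" by restricting attention to a narrow band of C values.
band = np.abs(c - 1.0) < 0.05
print("corr(A, B) with C held (nearly) constant:",
      round(np.corrcoef(a[band], b[band])[0, 1], 3))              # roughly 0
```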

I do not wish to be taken as suggesting that causal inference is simple. There are many other complications and difficulties to causal inference. For example, selection biases, including self-selection biases, can and do muck up any number of experiments, leading to incorrect conclusions. But nowhere amongst such cases will you find biases operating which are not themselves part of the causal story. Human experimenters are very complex causal stories themselves, and as much subject to bias as anyone else. So, our causal inferences often go wrong. That’s probably one reason why replicability is taken seriously by most scientists; it is no reason at all to dismiss the search for causal understanding.

There is now a science of causal discovery applying these ideas for data analysis in computer programs, one that has become a highly successful subdiscipline of machine learning, at least since Glymour, Scheines, Spirtes and Kelly’s Discovering Causal Structure (1987). (Their Part I, by the way, is a magnificent debunking of the orthodox mantra.)

The general application of “correlation doesn’t imply causation” to dismiss causal attributions is an example of a little learning being a dangerous thing – also known as the Dunning-Kruger effect.

 

The Sixth Extinction: A Review

Tags

, ,

Elizabeth Kolbert’s The Sixth Extinction is a highly readable, discursive review of the state of the biosphere in the Anthropocene — i.e., now. It’s aimed at a general audience and entertains as much as it informs, relating a wide variety of anecdotes, mostly derived from Kolbert’s travels and investigations while writing this book. I think it a very worthwhile book, especially perhaps as a present for those in your life who are skeptical about global warming or science in general. Not that Kolbert is a scientist or pretends to be one, but the book offers an outsider’s view of a fair few scientists in action, chronicling the decline of many species.

Kolbert’s report is necessarily pessimistic about the general prospects for a healthy biosphere, given that the evidence of species endangerment and decline is all around and she has spent some years now documenting it. But she tries to be as optimistic as possible. She points out a variety of successes in evading or mitigating other “tragedies of the commons”, such as the banning of DDT after Rachel Carson’s warning that our springs risked going silent. Or the prominent case of the missing (well, smaller) ozone hole.

On matters that are contentious within science, Ms Kolbert aims for neutrality. For example, what killed off the megafauna (the marsupial lion in Australia, saber-toothed cats in America, cave bears, mammoths and aurochs in Europe) that was widespread prior to the arrival of Homo sapiens? One school suggests that climate change, say in the form of retreating ice sheets, was the culprit. She points out the doubts arising from the fact that the extinctions of the megafauna occurred at quite different times on different continents and, in each case, shortly after the arrival of humans, which militates against climate change as a sole cause. The main alternative is, of course, that these were the first extinctions due to human activity, so that the Sixth Extinction began well before the industrial age. Kolbert notes that advocates of climatic causation accuse the anthropogenic crowd of having fallen for the post hoc ergo propter hoc fallacy. But neutrality on this point is a mistake. While correlation doesn’t strictly imply direct causation, it does strictly imply direct-or-indirect causation: Hans Reichenbach in The Direction of Time made the compelling point that if there is an enduring correlation between event types (not some haphazard result of small samples and noise), then there is either a direct causal chain, a common cause, or an indirect causal chain that will explain the correlation. Everything else is magic, and science abhors magic. Given that the extinctions and the arrivals of humans fit like a hand in a glove, it is implausible that there is no causal relationship between them. As sane Bayesians (i.e., weighers of evidence), we must at a minimum consider it the leading hypothesis until evidence against it is discovered. Of course, the existence of one cause does not preclude another (even if it makes it less likely); that is, climatic changes may well have contributed to human-induced extinctions in some cases.

On a final point Kolbert again opts for neutrality: does the Sixth Extinction imply our own? Can we survive the removal of so many plants and animals that the Anthropocene should be counted as one of the Great Extinction events? Will humanity’s seemingly boundless technological creativity find us a collective escape route?

I find the enthusiasm of some futurologists for planetary escape a bit baffling. The crunch of Global Warming will be hitting civilization pretty hard within 50 years, judging by anything but the most extremely optimistic projections. The ability to deal directly with Global Warming, and the related phenomena of overutilization of earth’s resources to support around 10 billion people at an advanced economic level of activity, is possibly within our grasp, but it is very much in doubt that we will collectively grasp that option. The ability to terraform and make, say, Mars habitable in a long-term sustainable way is not within our grasp and is not in any near term prospect. Simply escaping from our own earthly crematorium is not (yet) an option. If Elon Musk succeeds in reaching Mars, he will almost certainly soon thereafter die there.

The situation on earth isn’t so dissimilar. If Global Warming leads to massive agricultural failure, the watery entombment of half the major cities on earth, unheard of droughts, floods and typhoons, resource wars and human migrations, the strain on the instruments and processes of civilization is reasonably likely to break them. If civilization comes undone, it will be impossible to avoid massive starvation and societal collapse. The dream of some to wait it out in a bunker and emerge to a new utopia thereafter is about as likely as the descendants of Musk building a new civilization on Mars. Whether the extinction of civilization entails the final extinction of humanity is a moot point. But human life after civilization will surely be nasty, brutish and short.

The best alternative is to put a stop to Global Warming now, and use the energy and human resources that effort saves to solve the remaining problems of resource depletion, habitat destruction and human overpopulation. That requires a sense of urgency and a collective will so far absent.