Epistemology refers to standards for knowledge, evidence, and justification. Political epistemology focuses on how those standards are negotiated and updated in public life, and how debates about knowledge and reasoning affect political decisions. Digital political epistemology examines how new computer technologies – especially social media and algorithmic mediation – disrupt existing epistemic norms.
Much of my work focuses on how social media has changed the ways we hold ourselves and others accountable for spreading misinformation. I think that the big problem with digital misinformation is not that it leads to more people believing false things. The big problem is that pervasive online misinformation leads us to distrust reliable information sources – and worst of all, to feel we cannot rely on our fellow democratic citizens to make good choices.
Fake news
Fake News and Partisan Epistemology (2017) Kennedy Institute of Ethics Journal free access
In this paper, written just a few months after people first began to use the phrase ‘fake news’, I define the idea and explain why fake news shared on social media is such a difficult problem. The problem comes down to what I call ‘bent testimony’. On social media we have unstable norms for holding people accountable for bad information; people say ‘a retweet is not an endorsement’ and sometimes this is accepted as a good excuse. Yet, even though we all realize that online word-of-mouth is unreliable in this way, we are tempted to trust it regardless. I argue that this is often a result of partisanship: when we know that someone shares our political values, we tend to treat their claims as more reliable than we otherwise would. This sort of partisan epistemology is individually reasonable for busy citizens overburdened by a crowded information environment, but socially it leads to irrational and fractious decisions. To deal effectively with fake news, we need to redesign social media platforms to sustain norms of epistemic accountability. For a short statement of one proposal, see my op-ed here:
How to fix fake news (2018) New York Times
There have been a number of published responses to my writing on fake news. One is by Michel Croce and Tommaso Piazza (2021) ‘Consuming Fake News: Can we do any better?’ Social Epistemology. I wrote a response to their paper, which you can find here:
Confronting Fake News Through Non-Ideal Epistemology (2022) Social Epistemology Review and Reply Collective free access
Deepfakes
Deepfakes and the Epistemic Backstop (2020) Philosophers’ Imprint free access
Deepfakes are computer-generated video or audio recordings that seem to show real people doing or saying things they never did. By now you’ve probably heard the worry that deepfakes will be used to trick people into believing disinformation. I argue that there is an even more worrying problem looming. Once deepfakes are widely known to be possible, we will be unable to trust any recordings – even truthful ones! This will erode what I call the ‘epistemic backstop’ role that audio and video recordings have played in public life for more than a century. When we face conflicting testimony about an important event, we tend to look for recordings to establish objective facts (think of the famous ‘smoking gun’ tape that ended Richard Nixon’s presidency). Once it becomes possible to cast deepfake doubt on any politically inconvenient recording, we will lose access to this stabilizing force. Our public debates will sink even more deeply into irresolvable disagreement and we will become even less trusting. In this related op-ed, I argue that we need to start getting smarter about tracing sources of online videos now, before deepfakes are everywhere:
Deepfakes are coming. We can no longer believe what we see. (2019) New York Times
Deepfakes, Deep Harms (forthcoming) Journal of Ethics and Social Philosophy free access
This paper is co-authored with my former student Leah Cohen. Here we set aside the political effects of deepfakes (see above) and focus on their personal, ethical costs. We identify three important ways that deepfakes may amplify existing ethical challenges or create new ones. (1) Deepfakes unsettle the ongoing debate in feminist theory about whether pornography is intrinsically objectifying. The people depicted in deepfake porn are usually composites of multiple individuals, and these stitched-together entities are not capable of giving any sort of consent. (2) Deepfakes can be used to force people to speak about personal topics (e.g. their sexual orientation). Even if the claims in a deepfake are true, the person presented as saying them may not wish to discuss the topic at all. (3) Deepfakes enable a novel sort of psychological torture, which we call ‘panoptic gaslighting’, where a person’s memory and sense of identity can be gradually undermined through subtle deepfaked changes in recordings of their day-to-day life.
Weaponized Skepticism
We have become more skeptical about social media information than we were a few years ago. Unfortunately, this skepticism can be dangerous as well. In this paper I argue that authoritarians can weaponize skepticism to damage democratic civil society. Starting from a close analysis of Russia’s social media interference operations (2014-2018), I argue that anti-democratic actors often want to be caught planting disinformation in social media. They are not trying to get people to believe lies. Instead, they are trying to make people distrust their fellow citizens. If the people around you seem to be idiots who share obvious propaganda online, why should you trust them to vote sensibly? Democracy depends for its survival on mutual trust among citizens; if authoritarians can undermine this, then it doesn’t matter to them if most people see through their lies. Once we appreciate this weaponization strategy, we can see that dismantling online misinformation is a much more delicate problem than simply warning people to be wary. I develop this point a bit further, with specific application to online misinformation as a security threat to democratic states, in this short piece for the NATO Association of Canada (scroll to page 11 in the pdf):
Technology and society
I’ve published several short pieces on the ethical, political, and epistemic implications of digital technology in my regular ‘Morals of the Story’ column for the Times Literary Supplement. Here are a few highlights:
Moral truth from an algorithm? (October 2021) – Can machines learn to make moral judgments like humans by generalizing from thousands of crowdsourced opinions? The work of moral philosopher John Rawls suggests this isn’t as bizarre as it sounds.
Millions for bragging rights (March 2021) – Why would anyone want an NFT artwork when the visual image is freely available online? The answer seems to be: social status. But the art-collecting world’s shift away from physical objects loses touch with how art connects us to the past.
Tweets of the deceased (September 2020) – American politician Herman Cain died from Covid-19, but kept on tweeting about the news. Is it acceptable for a dead public figure’s family or coworkers to operate their social media account like a macabre puppet? There may be an upside: social media can allow us all to live, in a sense, through ongoing expression of the things we cared about in life.
The internet is an angry and capricious god (June 2020) – We can use social media to bring crowdsourced justice down on people who treat others badly. But sometimes the mob gets it wrong. The internet has turned into the kind of ‘Big God’ described in psychological theory: an all-seeing avatar of justice. Except it isn’t always right, and it is accountable to no one.
A twenty-first-century Platonic Republic (May 2020) – Sidewalk Labs (part of Google) canceled its plan to build a digital city-of-the-future in downtown Toronto. Many privacy advocates celebrated the demise of a project that threatened to gather data on every aspect of residents’ lives. I am not so sure: wouldn’t it have been better to run this experiment in Toronto than wait for it to be done in a more authoritarian place?