I previously wrote about the morality of pornographic AI deepfakes. But other worries about deepfakes—political deepfakes—are menacing society, too. This presidential cycle is the first real taste of an expansive future reckoning with AI-generated disinformation, but Williams recently argued that fears about this are overblown. He makes the following arguments against the AI disinformation threat to democracy (I've reworded them a bit):

1. Disinformation isn't the root of modern political issues.
2. Political persuasion is very difficult.
3. The competitive media environment isn't conducive to disinformation.
4. The establishment's AI tools will outclass untrustworthy renegades.
It's a worthwhile read, and these points merit articulation, but ultimately they don't do much to assuage my worries about AI disinformation. It's true that disinformation isn't the root of our modern political maladies, but that doesn't prevent it from exacerbating already perilous conditions.
AI Political Persuasion
It's worth dissecting the idea of political persuasion a bit. Political conversions—persuading someone to realize their wrongheadedness about a particular topic and betray their political allegiance—seem rare. However, intra-party persuasion, which merely adjusts (rather than abandons) one's outlook, is more commonplace. I'm unsure whether such allegiance-preserving adjustments more often push people toward radicalization than toward moderation, but the echo-chambering of modern society has plainly steered us toward a more polarized world.
People rarely concede territory to political opponents based on information or reasoning, but that's partly because we're so adept at cultivating a diet of confirmatory information and limiting exposure to unpleasant realizations. People will gravitate toward AI that reinforces their existing outlook rather than AI that presents a more truthful impression of the world, just as they currently do with non-AI media. So while tribal loyalties will likely prohibit a future wherein people readily swap out worldviews to match the predilections of whichever AI-disinformation operators are most skillful, that obstinacy won't similarly thwart further polarization.
Williams is likewise correct that the media landscape is competitive, but the lodestar of that competition is profit. Major news organizations aren't fiercely protecting their reputations for evenhandedness and accuracy by straightforwardly adhering to those values—they're maximizing profits by securing viewership, and viewers don't cherish unbiased reportage enough to override their affinity for reassurance. Many Fox News viewers defected to more radical outfits after the network called Arizona for Biden in 2020, for example. And to the extent viewers do value trustworthiness, they often judge whether a news outlet is trustworthy by how well it conforms to their worldview rather than by its objectivity.
Fox News recently settled with Dominion Voting Systems for $787.5 million for spreading false information about that election. If something like that doesn't weaken a news organization, why retain confidence in consumers' epistemic prudence? Instead, many right-wingers now view Fox as a bunch of turncoats who aren't radical enough. If worries about AI's effects on politics are to be mollified, it can't be because media consumers are too judicious to tolerate disinformation.
People don't always favor whatever is healthiest. We haven't instilled within society the same type of guilt and stigma around watching partisan news as we have around pigging out on heavily processed foods, and even with that stigma, we still have an obesity problem. Moreover, cultivating a sense of embarrassment about watching something unreliable or misleading is tough when viewers are in denial about it. Even the politically neutral idea that people ought to skew their news intake away from their own preconceptions seems like a daunting request—I've seen the discomfort that encountering politically inconvenient information causes people—getting them to exclusively eat salads might be easier.
And while I appreciate the notion that established institutions (presumably with greater resources) will be able to outmatch unreliable renegades, establishment narratives aren't always the most saleable. Plus, isn't it imaginable that we end up with an asymmetry where the content-generating tech vastly outclasses the content-policing tech? I also wonder whether white-hat establishment players could end up wielding inferior AI because they must comply with safety precautions and the like. Unscrupulous training and operation of AI might prove a troublesome and stubborn advantage.
Other Dangers
Even if AI rarely succeeds at persuading people, it could still prove deleterious to modern politics. First, deepfake tech can frustrate democracy by confusing portions of the electorate about the appropriate procedure for voting (someone already tried this in New Hampshire). Second, I'm worried about ubiquitous AI disinformation degrading the signal-to-noise ratio and engendering permanent confusion about the legitimacy of newfound information. When confronted with an untrustworthy or ambiguous landscape of beliefs, folks unfortunately substitute intuition for (what should be) humility.
You see this nonsensical tendency plenty with something like climate change or vaccines. Conspiracy theorists will eagerly promote heterodox scientists, not just to backfill their precooked, amateurish worldview with some veneer of expertise, but as evidence of scientific uncertainty. People treat expert disagreement as a green light to ignore science entirely and afford credence to their naive instincts,1 as though disagreement among experts were permission for unconstrained belief formation.
If experts are truly perplexed about something, laypersons should be especially doubtful of their own opinions rather than treat it as an opening to bypass evidence-based reasoning altogether. Alas, expert uncertainty paradoxically seems to make laypersons more confident in their guesswork and even less careful in their beliefs. And if AI succeeds in creating a false impression of ambiguity everywhere, then whatever muted instinct for deference toward epistemic superiors currently exists might disappear entirely, and AI disinformation could worsen the already-too-common tendency of people to shoot from the hip with their political beliefs.
Ultimately, the political risk of AI isn't misinformation converting the unwary—it's misinformation calcifying biases and unduly reinforcing misplaced convictions, either by creating misleading impressions about the world or by manufacturing such informational anarchy that voters abandon the project of connecting their decisions to facts altogether.
1. Paradoxically, widespread scientific agreement can be portrayed as evidence of groupthink, while disagreement is painted as cluelessness.