Reevaluating the Threat of Deep Fakes in the 2024 Elections
January 4, 2024
Deep fakes are a political threat in 2024, but not the threat we imagine. Contemporary humans detect deep fakes well through contextual understanding, yet watch them anyway, like so many Walter Mittys who no longer even need to bother to dream. Watching them leads not so much to a decision to choose one candidate as to a growing disaffection with political reality and democracy. This edition of the Hybrid Intelligencer aims to shift the discourse around deep fakes from a focus on misinformation to a broader understanding of their psychological and societal impacts. By reassessing the risks they pose, we can better prepare ourselves to mitigate their more subtle but longer-lasting harms. And then I'll give you some GPTs to make it more fun…
Recent studies, and Peter Carlyon in the RAND Blog, suggest that the short-term influence of deep fakes on voter choices and election outcomes is significantly overestimated. Instead, the real danger may lie elsewhere, in a more insidious and perhaps more profound effect on the public psyche.
The ability of contemporary humans to detect deep fakes is surprisingly robust. Contrary to the alarmists, studies indicate that people usually discern these fabrications (much better than AIs can) through contextual understanding. The minority who share them often do so without questioning the authority of the sources and without regard to “media literacy.” These findings undermine the assumption that such content persuades, opening a Pandora’s box of psycho-social explanations for why it is shared and watched. Here is one: deep fakes represent not so much a deception as a form of escapism, fueling a broader disengagement and disenchantment with the political process and democracy.
1. Human Proficiency in Detecting Deep Fakes
The MIT Study: A pivotal study sheds light on human proficiency in detecting fake videos. In a remarkable display of discernment, participants identified a deep fake video of Vladimir Putin with a 70% success rate, far surpassing the AI model’s judgment, which rated the video as having only an 8% chance of being fake.
Contextual Understanding in Deep Fake Detection: Humans use a myriad of subtle cues that AI currently struggles to replicate or understand. These include inconsistencies in lighting, unusual facial expressions, or mismatches between the spoken word and lip movement. More importantly, humans consider the broader context: the likelihood of a public figure making a particular statement or acting in a certain way. This ability to contextualize is not something AI can easily mimic, giving humans a distinct edge in detecting deep fakes.
The Limits of Technical Sophistication: While deep fakes have grown increasingly sophisticated, they are far from flawless. Technical imperfections often linger — a slight uncanniness, an odd glitch, or an unnatural movement. But beyond these technical telltales, it is the narrative incongruities that often give them away. Humans, as natural story-weavers and pattern-seekers, are adept at noticing when a narrative seems contrived or when a depicted scenario deviates markedly from established character or historical patterns.
Implications for the Electoral Process: The recognition of this human skill is crucial in the context of elections. It suggests that the threat of deep fakes decisively swaying public opinion or voter behavior may be overstated. While they undoubtedly represent a concerning evolution in misinformation, their effectiveness is tempered by an inherent human skepticism and an ability to critique and question what is seen.
2. The Walter Mitty Effect — Escapism through Deep Fakes
Long before AI, James Thurber’s fictional character Walter Mitty escaped his mundane reality through vivid daydreams, living out his fantasies and desires in an imagined world. Just as reality TV obviated the need for ideologies in the march toward totalitarianism, generative AI can be misused as an even lazier substitute for Mitty’s daydreaming, letting users inhabit their fantasy worlds, deep fakes included, simply by watching passively.
Engagement Despite Awareness: Contrary to the assumption that deep fakes primarily deceive, many individuals knowingly interact with these creations. This interaction is less about being convinced of their realness and more about indulging in a narrative that aligns with one’s pre-existing beliefs or desires. In a world increasingly characterized by polarized views and echo chambers, deep fakes offer a surreal extension of those chambers, where one’s fantasies can be visualized and experienced.
Deep Fakes as Political Fan Fiction: Deep fakes in the political arena can be seen as a form of “political fan fiction,” in which supporters or detractors of a political figure or ideology create or consume content that reinforces their views. This content, often exaggerated or blatantly false, isn’t meant to convince the undecided. Instead, it serves as a rallying point or a means of catharsis for those already entrenched in their beliefs. The danger here is not direct persuasion but the intensification of existing biases and the widening of ideological divides.
The Escapist’s Lure and Its Consequences: As individuals consume and engage with content they know to be false, the line between fact and fiction in public discourse gradually erodes. This erosion doesn’t necessarily lead to an immediate change in political choice or allegiance. Instead, it fosters a growing cynicism toward all forms of political content, deep fake or not. The result is an increasing disaffection with the political process and a skepticism toward democratic institutions and their ability to convey the truth.
3. Getting Back in the Critical Political Game Before It’s Too Late
Since deep fakes and the broader disinformation environment are human rather than tech problems, I can deal with them the way I deal with almost everything these days: by making new GPTs to encourage critical thinking and action. This week I have two for you:
- Deep Faker, a maker of deep fakes so bad that they may cure you of gazing passively at deep fakes; it also knows the information sources cited here (and more) and offers thoughtful commentary on deep fake risks and how to address them; and
- 2024 Authoritarian Political Campaign Consultant, for all the t-shirts, posters, and bumper stickers you need to expose the hypocrisy and pandering that often accompany authoritarian and populist rhetoric.
Both of these GPTs sometimes run up against what appear to be shifting content restrictions, but they have their moments. Let’s help Walter Mitty out of the easy chair and back into the arena.
Every week I try to suggest good new things to do with generative AI here.