That tool is here, and it is called the deepfake: a piece of synthetic media crafted by artificial intelligence that renders its viewer incapable of telling whether it is authentic. Authorities first recognised deepfakes as a serious instrument of destruction when they were used for financial fraud and celebrity porn; then, in 2018, manipulated videos of state leaders and big tech founders such as Barack Obama and Mark Zuckerberg emerged, sending the web into a frenzy.
Our democracies were built on the enlightened idea that if we all commit to a social contract founded on facts and sound logic, then we will be all right. Now that very proposition is being threatened. In fact, look hard enough into the disinformation telescope and, some might say, it has the potential to threaten the very fabric of our reality. What we do now matters; the tools we create to mitigate the damage matter.
Experts in government and cybersecurity are deeply anxious because this technology is not only cheap to produce but can also be co-opted by a range of bad-faith actors, from hackers to nation-states and political agitators.
Someone who knows the dangers of this tool all too well is Nina Schick, author of Deep Fakes and the Infocalypse. She warns that an information overload looms and explains its dangerous political consequences, both for national security and for public trust in politics.
Part of her work involves advising global leaders on these dangers, from Joe Biden, the Democratic candidate who may be the next president of the United States, to Anders Fogh Rasmussen, the former Secretary-General of NATO. What we’re seeing now is more than just a Nicolas Cage meme or acts of revenge porn; these are the early signs of a sinister science fiction novel coming to life, and if we’re not ready for it, Schick says, the consequences will be unimaginable.
“Deepfakes could give everyone the power to fake anything, and if everything can be faked then everyone has plausible deniability.”
Illustration: Joan Wong
The idea of weaponising information or harnessing propaganda as a tool for political means isn’t something new. I think about deepfakes and disinformation within this current paradigm shift, and it doesn’t feel that different.
In fact, you only have to go back to the era of Stalin and Hitler, or to the age of the Roman Empire, to see what I mean. So what is different about deepfakes and the age of disinformation we are living through?
I absolutely agree with your starting premise that the idea of disinformation and misinformation has been going on since time immemorial. It’s something that has been with us since the start of human civilisation, an inherent phenomenon of human communication.
The reason I think it’s different today is twofold. Firstly, the significant technological changes that have emerged in the information ecosystem have transformed human history. Take the printing press, arguably the most significant technological innovation of the last 1,000 years before we hit the 20th century: if it hadn’t been invented, we wouldn’t have had the Reformation and everything else that happened afterwards.
From the invention of the printing press to the invention of photography, you had 400 years in which society was able to keep up with the challenges and adapt alongside the significant changes transforming our information ecosystem.
If you look at what’s happened in the age of information, I’m talking about the advent of the internet, the advent of the smartphone, the advent of social media, all of which happened in the last 30 years, it hasn’t taken a lot for the age of information to become an age of disinformation.
Video has become the most important medium of communication. So, what’s happening now is that there’s a video revolution taking place. We are all able to consume video online, but we are also all making videos too and uploading them online. Everyone is becoming a producer of video as well as a consumer of video.
We all thought this would be a utopia but have quickly seen its darker side. We’re already struggling, and now we’re getting this technology where AI can subvert video as a medium of communication. What’s more, is that it democratises what would have been the domain of Hollywood Studios. Some of these technologies were previously confined to multi-million dollar budgets and a team of special effects artists. Now anyone is going to be able to do it.
In the past ten years, these technological advances have transformed our information ecosystem to the point where we haven’t had time to keep up. The question is: is society ready for this?
In your book, you mention that we are 5-7 years away from AI being able to generate synthetic media for commercial purposes. My most immediate concern is that the damage this medium will create is incalculable; it is incredibly dangerous and existentially threatening.
We need to think of a way to challenge these forces; we need a technology that can play defence as much as offence. There must be a way for us to mitigate the damage these nefarious technologies such as deepfakes will do. What’s the general consensus about how we can fight this?
To a certain extent, the point at which synthetic media generation becomes undetectable is moot. We know it’s coming, and scientists think it’s within the next decade. But it’s a red herring to the extent that the technology doesn’t even need to be perfect to be effective. We’ve seen that in the last ten years with the forebears of deepfakes, something I call cheapfakes: manipulated media that involves no AI at all. They’re sophisticated and can also cause damage.
But the second point is, how do we start protecting society against it? We have to recognise that there isn’t going to be one silver bullet answer. It’s about rebuilding society in a way that puts the information ecosystem into a safer place. When you start to drill down into what that looks like it would be very encouraging for readers to know that the heavy lifting has already started.
There are amazing people who have been working in this area for even longer than me, tackling the problem from both sides. For that approach to be viable, we need buy-in, and I think you only start getting that buy-in when you recognise how important this challenge is for humanity.
You need the whole tech community on board. You need governments, yes, but I’m hesitant to say that this is something that can be regulated out of existence. Some countries, like South Korea and China, have banned deepfakes, but that is a slippery slope. Take the case of China: when the government has the authority to say which media is authentic and which is not, that becomes a very dangerous path.
You do need regulators and policymakers, but you also need civil society. The incredible people I interviewed for the book are approaching the problem from various different angles.
For example, there’s the human rights organisation Witness, which has been operating in the Global South for many years, teaching activists and human rights organisations how to use video to document human rights abuses and hold those in power to account. They recognise that, with the power of deepfakes, the very fragile consensus we have in some parts of the world can be completely destroyed. What I will say is that when synthetic media started emerging, it was so exciting that most research was on the generation side, but now more and more people are interested in the detection side.
In terms of technical solutions, there may come a point where a generator can beat any detector; that’s still an open question in the research community. As for the network approach society needs to take, we’re already starting to see it happen. There’s an incredible initiative launched by Adobe with buy-in from Twitter and companies like Truepic, which works on building verification into the hardware of your phone or device so that the media it captures cannot be corrupted. I would imagine that as this problem becomes better articulated, we will see more of these network approaches, which are the only way in the end.
I want to talk about who is really accountable here. You talk about Donald Trump and Vladimir Putin sowing the seeds of chaos in your book, but I actually want to talk about the people who have an equal share of the responsibility for helping sow those seeds. I interviewed the founder of Reddit a few years ago, and when I pressed him on his thoughts about his platform helping get Donald Trump elected, he downplayed its effectiveness.
You yourself have called Reddit ground zero for deepfakes. Big tech founders like Jack Dorsey of Twitter, Mark Zuckerberg of Facebook, and the founders of Instagram might be viewed decades from now as the real villains: the people who set up the distribution networks, the railroads so to speak, that enable all of this.
How accountable, at a policy level and at a more cultural level, should we be holding these companies and their tools?
I agree with you that they are accountable in all of this. But I do think that when they started building their platforms, they genuinely set out with good intentions. But this strikes at the heart of the fundamental problem when it comes to the exponential technological advances of the information age. A lot of the founders have this utopian vision, and when it very quickly turns out that there’s a negative side, they are like, ‘it’s not how we envisioned this’.
Look at Mark Zuckerberg, for example: given Facebook’s reach, there is arguably no global leader more powerful than him. If we don’t like our political leaders, we can vote them out, as may happen to Trump come November. That goes for the UK, for Sweden, and for every liberal Western democracy. Big tech leaders, by contrast, have so much power and are completely unaccountable to us; they’re accountable only to their shareholders. So I don’t buy at all the argument that they are simply creators of technology who bear no responsibility, because they do.
"Within the next 5 to 7 years any YouTuber will have the kind of power that Hollywood producers have now."
They’re not impartial.
They’re not impartial, and their inventions have entirely changed the trajectory of human society, in ways we haven’t even begun to understand yet. I think you’re right that there may be a future in which these guys are seen as the villains. In the Gilded Age, at the end of the 19th century and the beginning of the 20th, there were barons in the United States who monopolised all of the new industries, such as railroads, steam and coal. Eventually those monopolies were broken up; policymakers regulated the Gilded Age. I think something like that needs to happen with the big tech companies.
It feels like the very idea of reality itself is at stake here. I think a lot of what you talk about in your book is centred around our shared understanding of what constitutes reality. Objectively speaking in a democracy, we all agree on a shared framework, or what facts of our everyday life constitute our reality.
Could you discuss a little bit about this idea of reality and what deepfakes potentially can do in subverting the very notion of that idea?
Yes, it’s a brilliant question, and I suppose you could get very philosophical about it. From a Buddhist perspective, what is reality? Reality is nothing but your perception. But for the interests of this conversation, and indeed for the interests of society, of a country and of a democracy that works, you absolutely need some basis of objectivity so that you can debate the facts, come to a consensus, and move society forward in that way.
The problem with deepfakes is that they erode that consensus. This was already happening in our political debates over the last few years, even before deepfakes emerged. Deepfakes are going to take it one step further.
What I mean by that is, deepfakes could give everyone the power to fake anything, and if everything can be faked, then everyone has plausible deniability. One place where you already see this happening, and again I’m talking about the Western democratic world, it’s from arguably the most important person in the Western democratic world because he’s the leader of the United States, Donald Trump himself.
In 2016, you may recall, a video emerged of him bragging about “grabbing a woman by the pussy”. At the time he grudgingly apologised, dismissing it as locker room talk. People thought it would derail his presidential bid. It didn’t, but he still admitted that he had done it and he apologised. If that video were to emerge today, he would just dismiss it as fake.
In subsequent interviews, when that video has come up, that’s exactly what he’s done. Another example is the death of George Floyd and the harrowing eight-minute viral video that showed him begging for his life. I understand how powerful that video was.
But I did come across an African American woman who is standing for Senate for the Republican party, and she has written an entire paper and uses her platform on Twitter, her following on YouTube, to make the argument that the entire George Floyd video was a deepfake.
Yes, she’s still fringe and her platform is still small. But I knew this was going to happen, and I wrote about it in my book: I said it was only a matter of time before something like this was denied. The point is that when fake media becomes ubiquitous, it increases something called ‘the liar’s dividend’, whereby anything can be dismissed as fake. So take the video that sparked a global movement for social justice: if it had come out in 2024 or 2028, you would probably see a countermovement of people saying it was disinformation and a deepfake. Even before deepfakes disrupt society directly, their mere existence amplifies the liar’s dividend.
One thing I would say to the readers of this interview is that a liberal democratic tradition is not something to be taken for granted, and I know that because I’m half Nepalese. I grew up in a part of the world where the state has failed, and liberal democracy doesn’t exist. Sometimes it seems that these values we hold are immutable, that no one can take them away, and that is simply not true.
This interview has been condensed and edited for clarity.