Tristan Harris
'We're 10 years into this mass hypnosis.'
Many of us know that we are, in one way or another, dependent on our phones and the code written on them. In fact, we check our phones on average 75 times a day and spend an average of 22 hours a week on social media. It's those Pavlovian bells, ringing every 10 minutes, that bring us one step closer to a type of godliness.
How were we to know that the utopia of big tech would turn dark so quickly, that the promise of boundless friends, connections and useful information would lead us down rabbit holes and into systemic polarisation? All the while, its founding fathers continue to sell us the bountiful pleasures of their programs.
Underneath it all, in the place where this revolution was born, Silicon Valley, a dystopian mantra echoes out: “if you aren’t paying for the product, then you are the product”. Like a unit programmed to hit the lever for a rush, we now know that even its founders won’t allow their kids to use their own products. It has all become the perfect storm: an industry drunk on profit and a global population all too willing to be its rats in a cage.
At this moment, the world in desperation turns to Tristan Harris for clarity. Described as the closest thing Silicon Valley has to a conscience, he noticed the storm clouds brewing on the horizon early on, while working at Google as a design ethicist in 2013. He rang alarm bells; no one listened.
Years later, in a series of told-you-so moments, he has become the clergyman for reason and logic. Leading an army under the banner of the Center for Humane Technology, Harris regularly appears before millions in TED Talks, on 60 Minutes, and even before the U.S. Congress.
What makes Tristan so unique is that he is a student of the dark arts of the algorithmic business, someone who helped aim its powers directly at you; he can see right through your YouTube or Instagram binges.
The forthcoming documentary The Social Dilemma is just the latest clarion call for us to change our behaviour, the culmination of years of work by Tristan to make people realise the damage this technology is doing to us. He speaks like a philosopher but behaves more like a lobbyist, someone ready to open our eyes to what is really going on. The question remains for us all: is it too late?
There is a sense of paralysis at the moment, a standoff, so to speak, between the users of social media around the world who don’t want to alter their behaviour and the tech companies who also refuse to change their code. Are we all just waiting for some miracle adjustment to happen?
That’s a good question. I am assuming from it that you’re very aware of how hard it is for people to change their behaviour related to social media. The reality is, we’re only going to get there if we change the competitive environment that technology companies operate in. It’s the equivalent of going back in time to the first moment we discovered Exxon was causing climate change. At that moment, you needed a standoff between everyone in their cars and Exxon.
That’s what makes today’s technology so inhumane. The fact that we’re forced to use infrastructure which is contaminated, and toxic, not just for us, but for society at large.
Kids don’t get to choose if they want to keep in touch with their friends who are all using Instagram; they have to keep using Instagram to reach them. So it’s not just about thinking, is my social media use good for me?
Even if you do the right thing, and everyone else keeps using it, it can still harm you. A good example of that is in India. There was fake news about Muslims who killed sacred cows in India, then that fake news spread on WhatsApp, and it led to a whole bunch of mob lynchings. Those Muslims didn’t have to be on Facebook to be impacted by Facebook’s construction of reality. The current technology we have doesn’t give us a choice. We are forced to use inhumane technology. That said, we often say this problem is like climate change.
Unlike climate change though, instead of having to change three billion people’s individual behaviours, plus hundreds of companies in thousands of sectors, in this case with the tech companies there are probably only 100 people at the top of the tech industry that you’d have to convince, plus another 50 policymakers across the world who could fix this. We’re talking about 150 people who, if there were the collective will, could change what they are doing. That still sounds optimistic, because they would be going against the financial incentives of trillion-dollar companies—specifically the advertising model of Facebook, Twitter, Google, and so on.
This is not going to be an easy battle, but ultimately I think what we’ve been missing is a shared truth about the breakdown of shared truth.
Tristan Harris testifies in Congress, 2019 (C-Span)
Here is what infuriates me. We know these platforms so well; they are highly visible, and the networks are used every moment of every day by billions of people. But the founders, their VPs, the technologists, the engineers behind it are totally absent from this discussion. You might see a blog post here, a tweet there, testimony in the Senate, perhaps all a form of virtue signalling. But it feels like the organisers of this rally are largely invisible. That’s frightening and frustrating to me. How do we draw them out of the darkness to help fix this problem?
Well, that’s actually one thing that’s been surprising: how few people in the tech industry have been speaking out about these issues.
One critique you could make would be to ask, why are there so few voices in the film from the technology world talking about this problem? Luckily Chris Hughes, the co-founder of Facebook, wrote that op-ed in support of our work, and Roger McNamee, the renowned investor, as well. You also see moments such as Brian Acton, the co-founder of WhatsApp, tweeting #DeleteFacebook.
It is time. #deletefacebook
— Brian Acton (@brianacton) March 20, 2018
But it’s always these little siren songs. They’re not really asking, what would it really take to make us change this? I think they’re trapped. When I look at people in the tech industry, it’s like watching a hostage video. The things they say don’t make much sense. And you’re like, why are they saying these bizarre things? And then you realise someone is holding a gun to their head off-camera. That gun is their business model. It’s shareholder pressure. It’s the fact that they are trapped in this position.
These are people who almost escape responsibility. They can fly anywhere in the world at any moment as part of a breakaway society, but if they wanted to, they could actually show up. I know personally, having interacted with some of them in the past, that on a human level they are concerned. It’s very hard for them to make public choices in which they won’t be trashed for doing so. We don’t live in a culture of forgiveness. These social platforms are about hate all the time, and so no matter what Facebook does, we’re probably going to trash them and say, too little too late.
It is interesting because I started reading Brave New World by Aldous Huxley for the first time – when I interviewed Yuval Noah Harari, he said that’s his favourite book. It’s striking how prophetic Aldous Huxley was almost 100 years ago, especially when it comes to the idea of engineering humans. Do you think the attention economy is just a theoretical framework for what is going to come next?
I would say it’s more the forces of unbounded, infinite growth economics that are driving these things.
My inspiration for much of this work is down to Neil Postman. If he were alive, he should be leading this project. He wrote the book Amusing Ourselves to Death.
Yuval and I are friends, and we have talked extensively about how hard it is to see what is really going on, because as he says in 21 Lessons for the 21st Century, on what basis can you say that someone happy and satisfied by the perfect stimulation, the perfect pornography, the perfect video on YouTube is wrong? As technology starts to compete with reality in delivering sweetness and benefits that are irresistible, where is the harm in that? That quandary is why I think Aldous Huxley’s book is so powerful.
In terms of why this is happening: a person who gets diabetes and is sold a prescription for diabetes treatment remains much more profitable than someone who never got diabetes, because the system selects for the most profitable outcome. That’s what capitalism is doing.
A human being in the attention capitalism model is worth more if they are addicted, outraged, polarised and disinformed than if they are a human citizen of a democracy, and a free being. They’re worth more if they are domesticated into a ‘rapidly attention switching, don’t-know-what’s-true, confused, distracted, don’t go on camping trips with their friends’ type of environment. It’s much more profitable if people are in this amusing-themselves-to-death reality that Huxley warned us about.
So, just for the sake of this conversation: if we were to line up a hundred 15- to 25-year-olds and show them the full range of manipulation and addiction-type behaviours from AI and social media that they’re going through, the kind of cult machine they are being inducted into, I honestly think they would be ok with the emotional, intellectual trade-off they would have to make.
In other words, they would be like the character Cypher in The Matrix, who says: I know when I eat this steak it’s just the Matrix telling my brain it’s juicy and delicious, but you know what, I’ll choose the steak anyway.
"When I look at people in the tech industry, it's like watching a hostage video."
Tristan Harris discusses the inside workings of big tech
Yes, for the sake of this technological program and the dopamine hit people experience, I do think people would be ok with making the trade-off. What do you think?
First I want to be clear about what trade-off we’re talking about. This is not an anti-technology conversation. It’s not the trade-off about whether I use technology or not, because I use technology.
But in the film, we talk a lot about Palaeolithic emotions. Your brain is running ancient code that comes from the savanna, and you can’t change the emotions you have. Hyper-normal stimuli, novelty-seeking and social approval feel good; loss aversion feels bad. Those things are built into our brains.
Even if you’ve been enhanced, I liken it to an ant’s world. Say an ant downloads the secrets of the universe, and it knows all the equations of motion and Einstein’s theory of relativity, but it’s still trapped in an ant’s body. When pheromones come along, it is still going to walk its little legs in that direction, because billions of years of evolution have forced it to do that. So I’m not denying that it is a seductive default pathway for all of humanity to walk. Children will be sucked into the seductive effects, but one thing that is also universal about human nature is what’s called moral reactance, which is what was used in the tobacco campaigns of the 1990s.
Which is that nobody wants to be manipulated. Compare the two tobacco campaigns: one says, the Surgeon General warns that this is bad for you – that doesn’t work at all, because it sounds like someone telling you what to do. But if you say, they knew it was manipulative, they knew that tobacco was addictive, and they did it anyway, and here are the documents showing that they did it – then it will be different.
With that said, I hope the film will pull back the curtain on the magic trick. We showed this film at a high school in Salt Lake City, and the kids’ response to it was off the charts, because they are suffering from this every single day. It’s very real for them. They have friends who have committed suicide or have had mental health problems because of the issues in the film.
I’m a bit hopeful that at least some people will see this film and wake up and say, the Matrix really is real, I just didn’t know it looked like this. Hopefully, that will create a conversation about what it means to be truly free. And hopefully, they will ask themselves the question, what is truly worth my attention?
Just to shift gears a bit, more of a self-reflection question. Before you went to work at Google, I understand you were working at a place called the Persuasive Technology Lab at Stanford, under Professor B.J. Fogg. This was a place where a lot of tech founders initially came to understand human behaviour.
At some point were alarm bells ringing for you even back then?
It is first important to note that persuasion is built into all language. There’s something called Russell conjugation, from Bertrand Russell, where the emotional tone of how I make you feel is pre-conjugated into the language. So if I use the word ‘manipulated’, I’m telling you to feel bad. You should feel bad about this persuasive transaction, because ‘manipulation’ is a stronger, negative form of a persuasive transaction.
If I gave you a choice architecture where certain items are on the menu, presented in a certain way, and other items are not on the menu, or presented in a different way, that would be a type of persuasive transaction. But that doesn’t conjugate itself as being manipulative. So the real root of this work, for me, is philosophy.
When I was at Google, I was studying the ethics of human persuasion because these questions are so profound, and I never really had lasting answers.
The lab was built with positive intentions. In fact, the Instagram co-founder Mike Krieger and I were in a class together where we worked on a project called In the Sunshine, which was about persuasive technology to get you to send a photo of the sunshine. If it knew you were in a zip code which had good weather and your friend was in a zip code that had bad weather, it would say, hey Ari, you should take a photo of the sunshine and send it to your friend Mike. That was the original intention. Obviously, as you’re learning about this material, it starts to become frightening, how we can use and abuse the secrets of how the mind works.
In our final class, there was a three-hour exercise on the ethics of human persuasion. One project group came up with the idea: what if, in the future, you had a persuasive profile for every human being on Earth? You would know exactly what kind of triggers, or cues, or biases they have, you would know perfectly how to play into them, and how dangerous that would be. And of course, that was Cambridge Analytica in the long run. That’s actually what Facebook as a machine is enabling: mass-market, industrial-grade, military-grade, semi-automated, AI-powered manipulation. That’s incredibly dangerous, and I think we need to ban it from our society in the long run.
“We are escalating towards civil war in the United States.” – Tristan Harris on the extreme polarisation in the U.S.
So Facebook essentially has become a loaded AI gun in many ways?
Well, put it this way: in mathematics, if you point the input back at the output, you get chaos. And that’s exactly what we’re saying. We pointed Facebook, which is literally an AI, back at our own brains, to figure out what makes a human being’s nervous system respond, what will get the spider senses to twitch. It will start to figure that out, and we don’t really know what we’re doing; we will not be aware of how our own intentionality is being hijacked. Then we’re in this chaotic loop where you get infinite regress and chaos. And suddenly that’s the world. That’s what’s damaging.
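Harris’s mathematical aside – that feeding a system’s output back into its input produces chaos – can be illustrated with the classic logistic map. This is a generic, hypothetical sketch of that idea only; it has nothing to do with Facebook’s actual systems:

```python
# Sketch of the feedback-loop point: iterating a simple map feeds each
# output back in as the next input. For the logistic map
# x -> r * x * (1 - x) with r = 4, the orbit is chaotic: two starting
# points differing by one part in a billion soon diverge completely.

def logistic_orbit(x0, r=4.0, steps=50):
    """Return the trajectory of x -> r*x*(1-x) starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.300000000)
b = logistic_orbit(0.300000001)  # perturbed by one part in a billion

# Early on the two trajectories are indistinguishable; within a few
# dozen iterations they bear no resemblance to each other.
print(abs(a[1] - b[1]))
print(max(abs(x - y) for x, y in zip(a, b)))
```

The same sensitivity is the point of the analogy: close a feedback loop around a system you don’t fully understand, and tiny nudges compound into unpredictable, runaway behaviour.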
That said, just because you know it doesn’t make you immune, but if the whole world can become aware of this, awareness creates the opportunity for choice.
If you are unaware that this is happening, you are being programmed as an automaton by the system. Collectively, we can either be unaware of the breakdown of truth, living inside a fragmented hall of mirrors where everyone believes a different reality, or, if we’ve all seen the film, we will have a shared truth about the breakdown of shared truth.
So then this whole conversation really does loop back to my first question, this idea of paralysis. We know the issue; we know what needs to be done. Ultimately, I feel this problem falls more in the domain of government. What’s really weird about it all is that government seems as hypnotised by the power and influence of these tech companies as we all are.
What kind of policies or laws do you think governments need to enact in order to mitigate some of this incredible damage that we’re seeing play out? Because it’s clear the tech industry is unable to self-regulate.
First of all, you’re right; we’re absolutely going to need governments to act to protect us and create a western digital infrastructure that outcompetes the closed, authoritarian digital infrastructure of the evolving China and Russia model.
It’s also really important to frame it this way because it’s a global geopolitical competition.
We are in a digital World War Three, in which the western digital infrastructure, open and available for anyone to walk through, is being massively outcompeted and exploited by actors with closed information networks. So it’s not just a matter of reducing the mental health harms or mending the fragmented shared truth that has broken down.
It’s a matter of governments acting to protect the future of western democracy in a digital democratic model that outcompetes the authoritarian model.
We need to go from an inhumane technology structure to one that is humane, regenerative, and safe. To do that we need comprehensive, sweeping tech reforms. Think of the Basel III reforms in the EU, or the Dodd-Frank Act in the United States, which created comprehensive financial reform.
That will involve things like protection for kids, privacy for people, transparency for platforms, accountability for harms, freedom from manipulation, enabling more competition, and then tax justice. Then there is the problem of asymmetric power: platforms have asymmetric knowledge about us; they have incredible capacity and control to render behavioural outcomes for us; they have asymmetric size and scale; and they have asymmetric resources to counter-lobby.
"Facebook as a machine is enabling mass-market, industrial-grade, military-grade, semi-automated, AI-powered manipulation. That's incredibly dangerous and I think we need to ban it from our society in the long run."
What do you mean by asymmetric?
Asymmetric in the sense that they have more knowledge about us than we have about ourselves. They have more capacity to control and influence us than we have to influence ourselves because they control the means by which we communicate and the core information.
We need governments that can help even the playing field so we can actually end up with something better.
I actually believe that people in the tech industry have people’s best interests at heart. That’s why they got into the business. But they’ve been led astray by these runaway systems. That’s the core point for parents to take home. The people who make these products don’t even let their own kids use them.
The best test of whether we’re in a humane world is whether the people in the tech industry would happily give all of the products they make to their own children, because they know it improves their lives.
I just want to end on less of a question and more of a note. You seem like a great optimist, someone who believes things can change. In the documentary, the latter half gets very dystopian, and very dark very quickly. We see the metastasising effects of these instruments in our society: addiction, rabbit holes, polarisation, politicisation, turbulence in democracies. Would it be a rhetorical question to ask: do we have a choice to change this, or is it too late?
I think it’s important for people to reflect. We’ve been under this hypnotic spell of technology, and it’s a way of sense-making.
We’re 10 years into conspiracy theories being recommended to people. We’re 10 years into kids thinking it’s normal to get the social approval of 10,000 people they’ve never met, and that it’s more profitable to spend time on an Instagram account selling make-up than it is just to be a human girl growing up with her friends. I’m a little bit of a traditionalist, but we’re 10 years into this mass hypnosis. This is the greatest psychological experiment we’ve ever run on humanity.
I think if everyone could rewind the clock, they would see how this has driven us crazy. That is why everywhere you look in the world it feels like everything is going crazy all at once.
We need this moment to pause and say, we are escalating towards civil war in the United States, we are escalating towards extremism, we are escalating towards conspiracy groups being the default way of making sense in the world. This film might be the moment to hit the reset button, to cool off, and ask, what do I really believe?
So if you unplug that cable from The Matrix, there might be an uncomfortable first few days, as you realise: this world is not what I believed it to be.
Yes, well I’m always secretly hoping that Laurence Fishburne is going to knock on my door. Thanks, Tristan for a fascinating chat.
The Social Dilemma Is Out on September 9th on Netflix. For more information about Tristan’s work, head here.