Max Tegmark
The Intelligence Explosion
A professor at MIT and a co-founder of the Future of Life Institute, Swedish-American cosmologist Max Tegmark has become something of a node connecting the world's most intelligent people.
He regularly assembles tech VIPs such as Larry Page, Ray Kurzweil, and Elon Musk at closed-off, cliquish gatherings; his influence runs deep, and he knows that with it comes responsibility. Tegmark and his cohort spend their time discussing the existential risks of an intelligence far smarter than anyone on earth. He remains adamant that AGI (artificial general intelligence) will allow humanity to flourish like never before, but that it will come with no safety manual.
If it concerns Tegmark, it should concern all of us. At a time when we're preoccupied with environmental crises and political 'pissing contests', Tegmark wants us to pay more attention to the most important conversation of our time, which no one is having.
You say this is the most important conversation of our time. Can you give us an outline of why that is?
There has been a lot of talk about AI destroying jobs and enabling new weapons, but I feel we're ignoring the elephant in the room: the real question is what will happen when machines outsmart us at all tasks.
I think a lot of people really don't see this coming. The problem is that we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms – especially humans. But from my perspective as a physicist, intelligence is simply a certain kind of information processing performed by elementary particles moving around according to the laws of physics, and there is no law of physics that says we cannot build a machine smarter than us in all ways. I think we've only just seen the tip of the intelligence iceberg; there is amazing potential to unlock the full intelligence that is latent in nature and to use it to help humanity flourish or screw it up in new ways.
I'm confused because, on the one hand, you talk about all the ways AGI can help humanity flourish, and on the other, it feels as if every day we see a new accord signed by Elon Musk et al. warning us of a new imminent danger?
You are right, we have a love/hate relationship with this, but that is nothing new; we've had a love/hate relationship with all technology. We loved fire because it kept us warm at night and kept predators away; we also hated it when it burned our houses down. The more powerful our technology got, the stronger the love and hate got. What we're talking about here is the emergence of the most powerful technology ever, and no wonder we're excited and nervous. Everything I love about civilization is the product of intelligence. So if we can amplify our own intelligence with AI, it gives humanity the opportunity to flourish like never before and solve all these pesky questions stumping us. But it also gives us the opportunity to screw up like never before.
I'm optimistic we can create an awesome technology if we win the race between its growing power and the wisdom with which we manage it, but in the past we have always stayed ahead in that race through the strategy of learning from mistakes. We invented fire, screwed up, and invented the fire extinguisher. We invented the car and then invented the seatbelt. But with more powerful technologies like nuclear weapons and superintelligence, we don't want to learn from mistakes. It's a terrible strategy. We want to plan ahead and get things right the first time. That's why Elon Musk and I and so many other people in the community have been saying, "Look, now is the time to start actually planning ahead". I'm not interested in arguing about whether we should worry, but in what concrete steps we can take today to create a good tomorrow. There are research problems we need to answer in time, not just the night before a bunch of dudes with Red Bull switch on a superintelligence.
“I think we’ve only just seen the tip of the intelligence iceberg; there is amazing potential to unlock the full intelligence that is latent in nature and to use it to help humanity flourish or screw it up in new ways.”
A panel you recently assembled shocked me; a gathering of highly intelligent individuals in the field, including Google's Demis Hassabis and Ray Kurzweil. I was amazed at the level of fogginess around the topic, as well as how little diversity there seems to be around it.
This is a technology that will affect us all, so we need everybody to join the conversation. The problem is that a lot of people are not able to join it because they have not been given even the most basic knowledge about what's actually happening and what the choices are. It's so easy to get lost in the gigabytes and teraflops, so I've tried very hard to make my book Life 3.0 accessible.
Being a physicist enables me to bring a different voice: I talk about memory, computation, and learning in terms of fundamental principles instead of the usual technological 'geek speak'. I also tried very hard not to shy away from the social implications, because ultimately, as with anything, it's about how we are going to use it. What also bothers me is that the media focuses almost exclusively on the negatives, especially if you look at Hollywood's depiction of all this.
Would I be correct to say that in order to make AGI a true reality, everything needs to be moving in one direction, including our technological and scientific capabilities?
There is no guarantee that we're going to get to human-level AI. I respect people who think it's not going to happen in the next 100 years, but there are also a lot of leading researchers at the top companies building it who think it will happen within decades, so I think it's a real possibility. As I said, I think many people are stuck in this mindset that we need a special sauce to have intelligence, and that somehow if you're not made of flesh and blood it's impossible to be intelligent, because there is something magical about carbon atoms or the soul or something like that. My perspective is that we have way too much hubris. We used to say, "Oh, the sun orbits around the earth and we are so different from everything else", and, "We are made of carbon, that's why only we can be intelligent", and I think there is no scientific basis for this. We should be more humble and realize that life is a beautiful thing that can occur with or without carbon, and that we're on a trajectory towards perhaps building other kinds of life, but that's not something we should take lightly.
“It looks like most of our universe still hasn’t woken up yet.”
You talk about building Life 1.0, 2.0, and 3.0, and it reminds me of the Kardashev scale. So let's talk about design – what does a designed Life 3.0 look like?
When I talk about designing life, I talk about something taking control of its own destiny and deciding how it wants to be. Bacteria did not design their hardware or software; both evolved through natural selection. A bacterium has hardware that is a bunch of atoms arranged in a particular way, determined by DNA, and software that implements certain algorithms, which are also determined by DNA. So if a bacterium tends to swim towards more sugar rather than less sugar, that's just a simple little program hard-wired into it. A bacterium cannot decide to become antibiotic-resistant – it doesn't learn anything. It is only over the course of many generations that the software evolves to be different. Life 2.0 – which is us – has a remarkable opportunity to design software of its choosing, so if someone decides they want to become a lawyer, they can install a software module in their brain by going to law school, and suddenly they have all this additional knowledge and these abilities they didn't have before. Or they can choose to learn a foreign language.
In fact, your DNA stores about a gigabyte of information, but your brain stores about 100 terabytes of information, vastly more, 100,000 times more, and that information is largely installed in your brain during life. We call it learning, but I call it software installation to make that analogy, and it gives us incredible freedom to take control of our lives. It allows us to ask, "What do we want to learn and who do we want to become?". It's this freedom that made us the rulers of this planet. It's also known as cultural evolution, where we can learn things from other people. What about our hardware? We can upgrade it a little bit, so maybe we're Life 2.1: we can put in artificial kneecaps, pacemakers, and cochlear implants. But we can't do very much to live to a million years, and we can't have a million times better memory. That's not a fundamental law of physics, though; it's just our kind of life, which unfortunately has hardware that's tough to upgrade. When we die, almost all the information we amass gets destroyed again. We can't just make a backup of ourselves.
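(A quick arithmetic check on those figures, taking the stated estimates of roughly 1 gigabyte for DNA and 100 terabytes for the brain at face value:

\[
\frac{100\ \text{TB}}{1\ \text{GB}} = \frac{10^{14}\ \text{bytes}}{10^{9}\ \text{bytes}} = 10^{5},
\]

which is the 100,000-fold ratio Tegmark cites.)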
Does it depend on what legacy you leave behind?
Well, some of Mozart stays behind, but still, most of the information in your brain is destroyed. Life 3.0 is the ultimate life, which breaks completely free and takes control of its own destiny. Then the potential for what life can do is truly mind-boggling – not only can it live for as long as it wants, but if it wants to flourish throughout our cosmos, then literally the sky is no longer the limit.
You talk about matter and this idea that something inanimate can become alive, so to speak, like Frankenstein's monster ending up possessing intelligence. As you say, "there is no law of physics preventing us from building quark blobs more intelligent than us".
I talked about this because I think it is the reason so many people don't see advanced AI coming: they think there must be something magical that happens when we go from ordinary matter to something intelligent. In Frankenstein, there is a lightning strike and something magical takes place. People used to think the difference between a dead bug and a living bug was that the living bug contained some kind of fairy dust; modern biologists think the bug is a mechanism, and the dead bug is simply a broken mechanism whose particles are arranged differently. So if you look at a blob of matter and ask what the difference is between your brain and a watermelon, is your brain intelligent and the watermelon not because they are made of different elementary particles? No, they are both made of the same three elementary particles: up quarks, down quarks, and electrons. The only difference is how they are arranged. In fact, if you go on a watermelon-only diet for a while, you will basically be a watermelon rearranged.
When IBM's Deep Blue dethroned Garry Kasparov at chess, humans had programmed its computations, and the only reason it did better was that it could compute faster. But when Magnus Carlsen played chess for the first time in his life, at five years old, he wasn't very good; after a lot of input data, though, he became world champion. Deep Blue could not do that. Today's AI systems are beginning to learn the way people do, and that's the key reason they will eventually overtake humans and become smarter than their programmers.
It seems a bizarre time to be talking about this, almost supernaturally coincidental. It feels a little too neat. Maybe a simulated universe?
As you know, I have an argument for why we're probably not in a simulated universe, but as I said before, if you think you are, then enjoy life to the fullest, so that your simulators don't get bored and switch you off.
But what we know for sure is that, given the laws that govern the universe we're in (simulated or not), life is nowhere near its full potential. We have this hubristic idea that we are the pinnacle of evolution and that we're as smart as it gets, and that's a ridiculous idea from a cosmic perspective. Why should we, little pipsqueaks on this little planet, 13.8 billion years after our big bang, be the end? I think it's a very arrogant idea. It looks like most of our universe still hasn't woken up yet. But if we can help our universe come alive more and help life flourish here and elsewhere for billions of years, then I think that would be beautiful.
“Life 3.0 is the ultimate life, which breaks completely free and takes control of its own destiny. Then the potential of what life can do is truly mind-boggling; not only can it live for as long as it wants, but it can flourish throughout our cosmos.”
Well, yes, there is always the hubris issue?
Well, I think to be on the safe side we should try to manage our planet more responsibly. People obsess over the threat of future intelligent systems, asking questions like, "Are our jobs going to go?". But we face even more urgent questions, like: are we going to start an arms race with killer robots?
There's a UN meeting happening in November where that specific issue will be discussed, and it will become clear whether or not we will have an international treaty about it. That's not science fiction; that is happening right now. And if we go full tilt with this technology, then forget about terrorist attacks with cars, Kalashnikovs, and vans: the terrorists of tomorrow will be using AI drones, weapons perfect for assassinations or for killing a specific ethnic group, capable of killing millions of people.
We're going to end up in a horrible situation where nation states basically cede a lot of influence to terrorist groups and other non-state actors, so there's a big push from the AI community to stop this. This open letter came from the people building the AI, saying we want our AI to be used to create a better future, not just to start a new arms race.
We failed epically with nuclear bombs; we have 15,000 of them now, with Kim Jong-un and Donald Trump in a nuclear pissing contest. With AI technology, we don't want it to end up like that. Let's try to be more proactive.
The scientists building AI technology are very idealistic. They want to cure cancer, eliminate poverty, and create a better future, but there are other people who want to use it for their own, less noble ends. We can't just leave this discussion to the generals of the world.
“If you think you are in a simulated universe then my advice to you is to enjoy life to the fullest, so that your simulators don’t get bored and switch you off.”
Max Tegmark on a simulated universe
It's such a bizarre contradiction: these tech titans have unlimited amounts of money to create world-changing technologies with potentially nefarious purposes, yet they're also telling everyone not to do it. Something doesn't make sense.
There's nothing new about that. Who were the first people to warn about the nuclear bomb? It was we physicists, because we were the first to understand the risks.
But if I had a 5-year-old kid, I wouldn't give them a toy they weren't ready to handle.
It's a valid point. You wouldn't give a 5-year-old kid a box of hand grenades to play with, but that's precisely what we will be doing if we give humanity technology we're not wise enough to handle.
I feel this way about Kim Jong-un and Donald Trump playing with the nuclear codes; I don’t trust either of them with this kind of advanced technology.
In a world where AGI is prominent, what happens to infrastructure, nation states, the whole landscape of how we live?
Most people take for granted that if we eventually build human-level AI, it will not go on to supersede us, but that's arrogance again. What happens to humanity then? Things didn't go so well for the Neanderthals when we showed up. The difference is that if we are the ones who create the AI, we of course have the opportunity to make things better for ourselves, and there is no reason why we can't co-exist with another intelligence. A one-year-old child lives with its parents, who are far smarter and who make its life better. This is the positive vision for AI: we can create intelligent machines that have goals aligned with our own.
Let's talk about the future, because I know that's one thing your group likes to do; we're predictive creatures. Talk about the future and the intelligence explosion 100 years from now, or even 1,000 years from now.
I describe a broad spectrum of scenarios in the book for how things may end up, but 'what will happen' is the wrong question to ask. It's like launching a rocket and asking, "I wonder where this rocket will go?"; where it goes depends on how you programmed it. That's why it's crucial to ask: where do we want to go? Today, we worry about loved ones dying of cancer and wish we could cure them. We haven't been smart enough to figure it out yet, but we can use AI to help. Why do we have problems with climate change? Why haven't we figured out how to make solar panels more efficient? Why haven't we figured out how to make energy sustainable at a very cheap price? Because we're not smart enough. AI can help us with all of that. I would argue that for every question we care about – from poverty to social justice to disease and famine – we can get great help from AI.
My main takeaway from writing all these thought experiments in the book is that it's very hard to come up with a scenario where you don't have some misgivings. I think we should all come together to help figure this out.
Life 3.0: Being Human in the Age of Artificial Intelligence is out now through Penguin