
The race to copy your brain before AI replaces you

Uploading your brain could be the only way to avoid human obsolescence, says Netholabs’ Christian Larsen

Christian Larsen does not hedge when the conversation turns to AI.

If a superintelligent AI “species” emerges, one that is faster, more capable and operating at a fundamentally different level of capacity, he believes the outcome is already decided. “There’s no question that they will have the upper hand,” he says. Not in ten years. Not as a distant possibility. As a basic consequence of physics.

In Larsen’s framing, this is not a debate about models, benchmarks or whether the latest release beats the last. It is a shift in the balance of power between two forms of intelligence. One biological. One not. And the biological one, for the first time, is under threat.

Larsen’s response is not to slow AI down. It is to change what it means to be human.

As co-founder of Netholabs, he is working on whole brain emulation. Whole brain emulation is the process of creating a digital version of a human mind by replicating how the brain works. Not a chatbot approximation. Not a personality clone trained on your emails. A system that captures the underlying patterns that make you you, then runs them on an entirely different substrate.

The idea sounds like science fiction because, for now, it is. His grand work in progress also comes with a kind of cold internal logic. If intelligence is becoming digital, then staying purely biological starts to look like a constraint. As AI systems rapidly improve, the idea of human obsolescence moves from theory to something more pressing.

Larsen’s suspicion is that humanity may have already left this problem too late. Work that should have started years ago is now compressed into a narrow window. “Two to three years,” he says, when asked how long humanity has to guard against being subjugated by future AI models. After that, the ability gap may be too wide to close.

In a world where machines think faster, learn faster and scale without friction, remaining tied to a slow, fragile biological system starts to look less like a defence of identity and more like denial. Whole brain emulation offers a way out of that constraint. A way to become, in Larsen’s words, “substrate independent”; your mind, no longer tied to your body.

It’s an idea that immediately raises uncomfortable questions. About identity. About suffering. About whether a copy is still you, or something else entirely. Larsen is aware of and sensitive to all of those practical and philosophical questions. He just sees them as secondary to the risk of doing nothing. 

There is a clear pragmatism in what he is building, but also something human beneath it. He comes across as someone trying to give us all our best chance of long-term survival, not someone indifferent to what it means to be human. The technology may sound radical, but the intent is protective. Keep options open and avoid the worst-case scenario.

Because if AI does become a second species, the question is not whether it will outthink us. It is whether we will still count.

What is whole brain emulation?

What is whole brain emulation, as far as you’re concerned?

So whole brain emulation is really the technical term that people use when they’re referring to uploading. This is making digital representations of animals and of people.

What problem do you think that solves if you’re able to do that successfully? 

We face a big existential problem now that has become more pressing, which is basically the “second species” argument in relation to AI, or obsolescence. So obsolescence of all of humanity.

And we think that there are very grave risks that come with the AI systems being developed now. While we support a lot of the great work in other parts of the AI safety field – whether that be governance, alignment research or control – that research doesn’t directly address some parts of the equation.

It definitely does not address the second species argument directly, which is this idea that, as [AI philosopher] Joe Carlsmith puts it, if you have a second species that is far more intelligent than all of humanity put together, then humanity is at the whim of whatever system or organism that would be.

How does what you are doing address that?

What we think is important, in terms of a problem that whole brain emulation can solve, is that once humanity can become substrate independent [humanity existing beyond biology], you gain much more capability in terms of monitoring, interacting with and generally subsisting alongside potentially superintelligent systems, rather than being their pets or worse.

So that is primarily the angle that we work on this problem from, as a pressing existential problem. But of course, it goes without saying that you can have a very flourishing human future if we can become substrate independent. You could do radical neuromodulation, you can have backups. Truly radical longevity can only come from being substrate independent.

There are great efforts happening in the biological life sciences to achieve that, and that’s fantastic. But the personal journey of a lot of people who work on whole brain emulation is that they look at the problem and see that we can actually get there more quickly, and more directly, through whole brain emulation.

Could AI replace humans?

If a parallel species does emerge, how likely is it that it takes the upper hand?

Oh, 100%. There’s no question that they will have the upper hand. Consider how each system transmits information: axonal propagation [signals travelling along nerve cells] in our brain tops out at roughly 100 metres per second, versus photons at about 300 million metres per second. There’s no question that digital systems, as they continue to grow – if everything is not shut down – will gain this upper hand.
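The speed gap Larsen gestures at can be made concrete with a rough back-of-the-envelope calculation. The figures below are approximate assumptions, not measurements from the interview: axonal conduction actually ranges from under 1 m/s to around 120 m/s depending on the fibre, and light travels somewhat slower in optical fibre than in a vacuum.

```python
# Rough comparison of biological vs digital signal speeds.
# Figures are approximate, illustrative assumptions.
AXON_SPEED_M_S = 100.0     # fast myelinated axon, near the upper end (~1-120 m/s)
PHOTON_SPEED_M_S = 3.0e8   # light in a vacuum; ~2e8 m/s in optical fibre

ratio = PHOTON_SPEED_M_S / AXON_SPEED_M_S
print(f"Digital signals are roughly {ratio:,.0f}x faster")  # ~3,000,000x
```

Even with generous assumptions in biology’s favour, the gap is around six orders of magnitude, which is the point behind Larsen’s claim.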

If you have a much more powerful system that you don’t understand – whether that is a person or an AI that can really control your entire society – then that is something that is dangerous no matter what.

How does whole brain emulation safeguard against some of the potentially negative effects?

There are risks when it comes to whole brain emulation. I’m here at Brain Mind, a neuroethics event at Asilomar in Monterey, where we are discussing exactly that this week. People are trying to think about how we can put guardrails in place.

It’s very difficult for institutions that are 300 years old – governments – which move very slowly, to keep up with the pace of technology. But that does need to happen. We do need to install positive legislation very soon.

What does a world with functioning whole brain emulation look like?

Ultimately, we don’t know yet. It can go in several different directions. It really depends a lot upon what regulations exist. And we are still discovering a lot of answers about how fast this technology can develop. It also depends a little bit on how AI progresses.

If you look at Richard Ngo’s The Gentle Romance as a focal point, one example of how this could look, in that story there are human beings that gradually record more and more neurobehavioural data and have a form of exo-brain that is very tightly coupled with the individual. It’s a symbiotic relationship, an extension of yourself.

What that story describes is the extension of yourself getting the capability to also really sense and feel like you, a true extension, and to be able to predict what one wants to do and start acting with us as well. So it’s really just an extension of self. Then you have a gradual transition from biology to something that is substrate independent.

There are also ways where you can imagine that a lot of data is gathered over time, models are made, and then you have what people call a branching identity. You have your biological branch and a digital branch, and at one point in time they really are both you, and then those paths diverge.

If you have a non-essentialist view of identity, you would say that it’s more the pattern or the content and the feelings that you experience in your relationships, rather than the actual carbon, water, and hydrogen atoms that make up your body, that really matter. 

It’s more about our values, our emotions, our relationships, and perhaps something beyond the individual self, like the values of humanity or the values of people that you care about, and the search for some objective good, curiosity. Maybe that’s what we think about as the valuable parts of the self.

I also really want to stress that this has to be something that people choose for themselves, and they can shape what that is. It’s core to this. What we want to do is try to make defensive technologies.

Does humanity have a sentimental attachment to biology because it’s all we’ve ever known?

Definitely. But make no mistake that biology doesn’t care about you. We might care about biology, but biology doesn’t care about us.

Biology is basically a selfish transport mechanism for our genes that evolved through inclusive fitness, but it did not have any concern for the minds that inhabit and transport those genes.

The reason our brains are so efficient is that we evolved in a very energy-constrained environment, where we had to hunt and food was very difficult to find. (If you slowed a computer down to the bit operations and speed at which a human mind computes, it would probably also draw far less power.)

Chronic pain is similar. It’s very important to test and cure these diseases, a lot of the technology developed in the longevity field can help, and we need to fix chronic pain. But we shouldn’t lose sight of why we have it as a society: we evolved in an environment where, if you put your hand in the fire a little too long, you would get a bacterial infection and die. So we over-index pain relative to what is necessary in today’s world.

Can you upload your brain?

If this progresses, could there be a digital version of me that exists independently? Is that theoretically possible?

Yes.

Would that digital version of me be aware that it’s a digital version?

Yes. I think that at this stage, it would be immoral not to inform it.

When you start thinking about the paradoxes surrounding simulation, people ask, are we in a simulation? What are the chances of that? Once you can simulate people, then you start to face the question of whether it is moral to even simulate human life. Human life is full of pain and suffering. So if you are simulating that and haven’t made hedonic adjustments to raise the hedonic floor, then is that even a moral thing?

But I think certainly informing it as soon as possible would be best.


How would you ideally like to use the technology, if it becomes available to you? 

We differ a little bit from most of the longevity community in the sense that we definitely think that this technology can provide identity continuation and be the most robust way to do so.

But we have a lot of problems to solve before I would feel comfortable saying I’m doing this for myself because of all these exciting future possibilities. Once we do, we will be engaging in the most exciting frontier in all of humanity’s history in terms of exploring what types of minds can and will exist, and what possible experiences we can enjoy together.

We will be far more able to explore the universe than ever before. It’s all well and good to try to go to Mars in a tin can. But to really capture the light cone [the region of spacetime you can influence or reach, given the speed of light] – which, in a rapidly expanding universe, our current understanding of physics strongly incentivises us to do, sending out probes to gather matter so that we can use it later, whether we want to or not – and to do so while having a human intelligence experience, whole brain emulation is the only way we can really achieve that.

Would it be possible to get to a point where we move beyond biological existence?

Yes.

Would there be a downside to that?

If you look at sci-fi novels like Diaspora or various others, there are always some pockets of sentient life that decide, for whatever reason, whether religious or otherwise, to remain biological.

I think it is super important that in the future we respect all forms of diverse minds, whether they are digital or biological, different types of digital minds, different biological minds. It is very important that we learn to accept and respect different ways of living.

I am worried because historically humanity has not been excellent at that. If whole brain emulation could allow us to better understand ourselves and each other, then I think that would be really good. It would allow us to coordinate better and ensure that if people want to live in biological or digital forms, we can do so peacefully.

Regarding near-term applications, could the work you’re doing help people better understand themselves and make better decisions?

Yes, definitely. We work with partner companies on this, and there will be more news in the near future.

Just yesterday, I was working with one of our partners on tools where you have a local program that captures all the text you see. Our memory is quite limited, so having something that unifies everything you’ve seen and helps you recall and use it effectively is incredibly powerful.
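Larsen doesn’t describe the partner tool’s internals, but the core idea – a local program that stores text you’ve seen and makes it searchable later – can be sketched in a few lines. The class and method names here are purely illustrative, not from any Netholabs product:

```python
# Minimal sketch of a local "exo-memory": capture text snippets, recall by keyword.
# All data stays in the local process; names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ExoMemory:
    snippets: list[str] = field(default_factory=list)

    def capture(self, text: str) -> None:
        """Store a piece of text the user has seen."""
        self.snippets.append(text)

    def recall(self, query: str) -> list[str]:
        """Return stored snippets containing the query (case-insensitive)."""
        q = query.lower()
        return [s for s in self.snippets if q in s.lower()]

memory = ExoMemory()
memory.capture("Whole brain emulation is the technical term for uploading.")
memory.capture("Axonal propagation is far slower than photonic signalling.")
print(memory.recall("uploading"))  # matches the first snippet only
```

A real tool of this kind would add persistent storage and semantic rather than keyword search, but the capture-then-recall loop is the essential shape.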

In terms of near-term applications, if you take detailed behavioural and neural measurements, you can understand yourself much better.

We also collect some of the largest complex datasets on the planet and can apply automated interpretability to them today. Scientific knowledge, human and non-human, is going to be increasingly derived from automated analysis of these massive datasets.

On the way to emulation, we think we can help many industries: fine motor control for robotics, multi-scale probabilistic models to help cure diseases like schizophrenia, and more aligned human-like intelligence. There are many near-term applications.

Linking back to what you said earlier about existential risk from AI, how much time do we have to make the right decisions and implement safeguards?

Two to three years.

Why do you say that?

We should have started working on this problem more seriously five years ago. So if the question is when we need to start taking it seriously, I would say that point is already in the past. But if we don’t act within the next two to three years, that would be very alarming.

How confident are you that we will pay sufficient attention and devote sufficient resources?

I’m generally an optimist; I have belief in humanity. While this is a much bigger risk than nuclear war, during the nuclear arms race we were able, to some degree, to avoid catastrophe. We need to do more this time, but I think we have to be optimistic.

Frequently asked questions

Can you upload your brain today?

No. Whole brain emulation does not yet exist in a functional form.

Would a digital version of you be conscious?

That remains an open scientific and philosophical question.

What does “substrate independent” mean?

It means your mind is no longer tied to your biological body and could exist on another system.

Why are people worried about AI replacing humans?

Because advanced AI systems could become more capable than humans across most tasks, shifting control away from us.

Photograph: Getty
