Will the current path to building AI wipe out the human race? Is OpenAI’s governance structure broken? And is Emmett Shear back on the job market?
In part two of this exclusive Q&A, Emmett Shear, former interim CEO of OpenAI, covers these questions and more, sharing his own predictions for when AI will take off and what it might take to actually get there.
And be sure to catch up with part one to learn all about the AI culture wars and uncover the various stances on AI’s potential risk to humanity. If you’re wondering what “e/acc” or “⏸️” mean on people’s X handles and bios, that’s the piece for you.
~
Vitalik Buterin, co-founder of Ethereum, says in his response to Marc Andreessen’s Techno-Optimist Manifesto, “It seems very hard to have a ‘friendly’ superintelligent-AI-dominated world where humans are anything other than pets.” Do you think that's true?
Trying to predict what the world looks like with something significantly smarter than we are is mostly a fool's errand. They call it the singularity because, like a singularity, it's an event horizon past which you can't really see. And everyone's rediscovering the same things that [Ray] Kurzweil was talking about in the 80s. None of this is actually new. Kurzweil looks like a real genius now, I have to say. He even called the timeline [of technological progress toward building advanced AI] approximately right. Truly amazing.
If you really confront what it means to build something significantly smarter than us, it is obvious that after that point we just don't really know anymore. You could be asking the apes, “Well, when these humans are way smarter than you, what's the world going to look like after that?” They have no idea.
"I think the successful thing's going to look much more bio-inspired than the current stuff does."
It turns out we are perfectly capable of living peacefully with certain kinds of animals. We wound up killing more of them than we'd like before we realized, oh, maybe we should stop that. But now we care about preserving the whales and trying to help them. And I would imagine something much smarter than us would be able to do the same.
One of our problems is we're only kind of barely smart enough to do any of this. Human beings are running at the ragged edge of what our intelligence allows us to do. And that means a bunch of things are just hard and we mess up basic cooperation things where, if everyone involved was just a little bit smarter, we would all recognize, oh, this is dumb. Hunting the dodo to extinction, that's just stupid. It doesn't even serve our own interests.
How far do you think we are, then, from solving alignment — building AI in a way such that it’s aligned with human values?
The idea that you build [AI] capabilities and alignment separately is mistaken. That is the approach that some advocate for. And weirdly, by advocating it, I fear they will cause the exact problem that they are trying to avoid. They think you can build an optimizer and then give it a goal that represents a weighted set of everything that matters. The [idea] is you can just swap that goal out to be anything you want.
I don't think that's the right way to think about the problem at all. The optimizers we've built so far are human optimizers and organizations, basically corporations. And those things work in a very different way. They're not arbitrary. You can't point them at arbitrary goals.
I'm pessimistic about this, which is why I don't think the LLM thing is quite on the path to it. It's harder than it looks to build an abstract arbitrary optimizer. I think the successful thing's going to look much more bio-inspired than the current stuff does.
Can you explain that? What does bio-inspired mean?
The way that a brain works in general is that it’s a self-organizing system. There's no global process at all, because where would the global thing be? You can't reach every neuron at once. Things have to be physically in contact to communicate.
So current systems are run with global variables that determine the heat of the system for learning, with full connectivity between layers. And that's just deeply abiological. Nothing about that is how bio stuff works.
If you look at how the human body works, individual cells have goals and operate as little agents. And that's true as you zoom up: your liver acts as an agent. The goals the top-level system has arise dynamically out of the goals of the lower-level systems.
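To make that contrast concrete, here is a minimal, purely illustrative sketch in Python (nothing Shear describes building, and all names and dimensions are hypothetical): a backprop-style update driven by a shared learning rate and an error signal computed outside the layer, next to a local, Hebbian-style rule where each weight changes using only the activity of the two units it connects.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)               # toy input arriving at a layer of 4 units
W = 0.1 * rng.normal(size=(4, 8))    # hypothetical weight matrix

def global_update(W, x, error, lr=0.01):
    """Backprop-style step: a single shared learning rate and a top-down
    error signal are broadcast to every weight in the layer."""
    return W - lr * np.outer(error, x)

def local_update(W, x, lr=0.01):
    """Hebbian/Oja-style step: each weight changes using only the activity
    of the pre- and post-synaptic units it connects; no global error,
    no shared temperature for the whole system."""
    y = np.tanh(W @ x)                                  # post-synaptic activity
    return W + lr * (np.outer(y, x) - (y ** 2)[:, None] * W)

W_after_global = global_update(W, x, error=np.tanh(W @ x) - 0.5)
W_after_local = local_update(W, x)
```

The only point of the contrast is locality: the second rule needs no information beyond what the connected units themselves carry.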
I understand — you’re talking about complex adaptive systems: systems of interconnected elements that adapt and learn from their environment, leading to emergent behavior that is not predictable just from the behaviors of the individual elements. Like the stock market, social media networks, or the human immune system.
Right.
And you think the way we're building AI is nowhere near that?
There's nothing adaptive with the LLM approach. The only goal happens at the very top level. The individual neurons are just this collection of weights. You can't meaningfully look at them as an agent that's trying to do anything.
If you want something to be aligned with humans, at a basic level, you can't tack [alignment] on at the end. It can't be trying to optimize the world into a certain shape and then [you] coincidentally put a bunch of constraints on it to align it with us. It's backwards.
In the ideal world, it tries to understand the world as a bunch of other agents and its goal is to, in some sense, come into alignment with the other agents and [their] goals. The right answer looks more like machine bodhisattva. The problem with trying to build the machine Christ is you might build machine Antichrist, whereas there is no anti-bodhisattva.
On one hand, you're not that worried about what is currently being built (LLMs). On the other hand, in the grander scheme, you are worried, as you said earlier, that “there’s a substantial chance, if we [build AI incorrectly], that it kills us all, and that’s very bad.”
LLMs have proven, beyond a shadow of a doubt, that statistical prediction of the world — compression of the world into a model — is the core of intelligence. And now everyone's looking.
We're going to find it soon. We're not that far. While I'm not super optimistic that you can just scale up the current thing into being an intelligence, that doesn't mean we're not three steps away from finding that.
When we first invented electric light, the first electric lights sucked and it took a lot of work to get to the Edison bulb. But once you know it's possible, you start looking.
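As a rough, hedged illustration of that prediction-as-compression claim (a toy sketch, not anything from the interview): under an ideal code, a symbol the model assigns probability p costs about -log2(p) bits, so a model that predicts the data better also compresses it into fewer bits.

```python
import numpy as np

rng = np.random.default_rng(0)
# A toy "world": a biased coin that comes up heads 80% of the time.
data = rng.random(10_000) < 0.8

def bits_per_symbol(data, p_heads):
    """Average code length if the sequence is encoded with an ideal code
    built from a model that assigns probability p_heads to heads."""
    p = np.where(data, p_heads, 1.0 - p_heads)
    return -np.log2(p).mean()

print(bits_per_symbol(data, 0.5))   # ignorant model: ~1.00 bits per symbol
print(bits_per_symbol(data, 0.8))   # better predictor: ~0.72 bits per symbol
```

The better the statistical model of the data, the shorter the encoding, which is the sense in which prediction and compression are two views of the same thing.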
Do you have timelines of your own?
Yeah, but they're super wide.
I don't believe in point estimates for probabilities. In a market, things don't actually have prices. The idea that a commodity has a single price on an exchange is kind of crazy. A transaction has a price, but a commodity has a bid-ask spread. Your beliefs should follow the same pattern. Confidence intervals are just a better way to represent your mind than point estimates.
Timeline-wise for general AI, if you wanted me to give a 90% confidence interval, I'm like four to 40 years. If someone thinks they have a narrower confidence interval than that, I would like to know why they think they know so much about this thing that no one's built yet.
It's easy to be completely 100% right. If your 99% confidence interval is one day to 10,000 years, you're almost certainly right. But tightening your interval further than that is hard.
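For illustration only (not code Shear proposes), here is one way to write down that kind of wide belief instead of a point estimate: a lognormal distribution over years until general AI, with parameters chosen so its 90% interval roughly brackets the four-to-40-year spread mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pick lognormal parameters so the 90% interval spans roughly 4 to 40 years:
# exp(mu - 1.645 * sigma) ~ 4 and exp(mu + 1.645 * sigma) ~ 40.
mu = (np.log(4) + np.log(40)) / 2
sigma = (np.log(40) - np.log(4)) / (2 * 1.645)

samples = rng.lognormal(mean=mu, sigma=sigma, size=100_000)
low, high = np.percentile(samples, [5, 95])
print(f"median (a point estimate would stop here): {np.exp(mu):.0f} years")
print(f"90% interval: {low:.1f} to {high:.1f} years")   # roughly 4 to 40
```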
"I'm not that into slowing down right now because I don't think we're close enough. Where the pause needs to happen is right around when you're getting close to human-level intelligence."
I know you don't want to give probabilities, but how likely do you think it is that once we reach a turning point — once we get beyond a certain threshold of sufficiently advanced self-improving AI — we're locked into a path toward misaligned AI?
That depends a lot on how we get there. My spread on that is super high — 1 to 50%.
This is an engineering problem, not a math problem. Yudkowsky is fundamentally a philosopher and a mathematician. He believes the way you solve problems is to find the correct answer, the closed-form solution.
But I am an engineer. Management is effectively interpersonal engineering. The way engineers think about things is iterative. The way you learn to make something that works is you build a small thing that works, then you build a bigger one. When you're building a bridge, you'd rather not be building it with a technique you've never used before. Even if this bridge is longer than any bridge you've built before, you have some principles and understanding about how bridges work that hopefully scale up.
And the problem is, as you go superintelligent, Yudkowsky is right: a bunch of the dynamics change in a way that is very unpredictable and a little terrifying. But if we're ever going to get that right, it's going to be because we've learned generalized lessons about how intelligences work — lessons that, as we get closer and closer to the human-level one, allow us to start understanding the science of it at least a little bit.
I'm not that into slowing down right now because I don't think we're close enough. Where the pause needs to happen is right around when you're getting close to human-level intelligence. That's the only place where we can actually learn — spend time near the boundary condition with the best tools, engineering and tinkering with it, and trying to understand the basic principles. Plus, hopefully the AI can help us. A human-level AI with any sense of self-preservation should be right there with us saying, “Wait a second. I also think we shouldn't build the superintelligent one. I'm going to get wiped out too.”
I can write you a bunch of stories where it ends badly. But if it ends well, it's clearly because we engineer something that works correctly that's close enough to right that it self-corrects to right. You could give up on the whole technology, but I don't think we should, and I honestly don't think we can.
I don't think, practically speaking, getting people to give up computers is going to work. If you leave laptops out there, at some point someone's going to find [self-improving AI]. Let's not wait for some random person to find it on their laptop.
It's good that right now it takes giant research agencies. The last thing you want is to have it done unsupervised where there's no reflection and there isn't anyone double-checking the work because some clever Ph.D. wakes up his laptop.
Do you think this is a winner-take-all market once someone stumbles upon human-level AI? Are they — whether it’s OpenAI or someone else — just going to win from there?
This is the takeoff speed question: how fast does self-improvement self-improve? I tend to believe once you get to something that's human-level intelligent, it's going to be very hard for anyone else to catch you. Unless they find [an] approach that's just fundamentally better and they're not that far behind you. But the ability to spin up a hundred thousand trained researchers in parallel and point them back at the problem is a hard-to-beat advantage in general.
Was the dispute between Sam Altman and the board just a dispute between AI factions?
I said this publicly on Twitter. I don't think there's any significant difference between what Sam believes and what the board believes in terms of timelines, danger, or anything like that. I think it had nothing to do with that, personally.
I don't have access to what's going on in anyone's brain directly, but I saw no evidence that that was the case.
Vitalik says that the inability of the board to fire Sam is a sign of the inherent failure of the governance structure of OpenAI. Do you think that's true?
I don't know. Even well-designed systems fail sometimes, so [it’s] unclear. I would say YC gives the same advice to every company about innovation, which is that we welcome innovation on product and marketing and engineering, but generally speaking, try at all costs to avoid innovating on legal or governance. Because innovation on legal and governance is very hard and it goes wrong very easily. And I will say OpenAI innovated a lot on governance and it's not clear to me that it's necessarily a good idea to do a bunch of innovation there.
When you announced your role as the interim CEO, you said that you would drive changes in the organization “...up to and including pushing strongly for significant governance changes if necessary.” What did you mean by that?
I can't really go into any particular detail on that because I never reached the end of that. I will say we wound up with some governance changes: Bret, Larry, and Adam’s mandate is to rebuild the board and revisit that question. I have faith in that team's ability to do a good job.
What’s next for you?
The OpenAI thing was really high adrenaline but I came away realizing I miss working on something that is my primary focus. It was very clarifying to have a singular goal.
I'd been working on AI stuff on the side; I was consulting for Anthropic before this. I want to go work on AI stuff more. I don't want to work on the existing trajectories. I find the questions on the basic research side much more interesting. So hopefully I'll find some sort of basic research gig I can go be part of.
Is there an ideal organization that you want to do that at? Is there someone who's doing that right now that you're excited by?
I'm looking right now. I've talked to a couple of people, but nothing that I would say I've committed to yet that I'd be like, "Oh, this is the thing."
Ideally, it's something that's working on the problem of AI and the idea of using statistical prediction to build better intelligences, but is investigating a very different approach from large language models. And I think for me, probably one that's more biologically inspired. I'm more interested in those approaches. I think they're more likely to produce interesting results.
If someone wants to hire you, what should they do?
Don't worry. Don't call me, I'll call you.
During my career, almost all the very best senior people who I hired, I didn't approach them, they approached me. Because at that level of seniority, it's much more about you wanting to go there. And trying to guess what kind of project would be appealing to someone is hard.
But if someone's working on very different approaches to AI that have some kind of biological background — and they think it could be a pathway to general intelligence but it’s not based on a transformer-based model — I’m all yours. I’d be very curious to speak to people who are doing that because I think that's the thing [where] we're going to learn the most.