
Emmett Shear on AI's culture wars

The former interim CEO of OpenAI demystifies the various factions in AI and the risk it poses to humanity.

Written by Shreeda Segan

Illustrations by Midjourney

On November 17, 2023, the world learned that Sam Altman was fired from OpenAI. The news quickly rocked the faith OpenAI’s biggest fans and investors had in the company, putting its upcoming $86B share sale in jeopardy.

Then on November 19, Emmett Shear, co-founder and former CEO of Twitch, was appointed interim CEO. And within a mere 72 hours, he had negotiated to bring Altman back — a move that was met with widespread praise and has thrust Shear’s voice into the center of the most important conversations in AI.

In part one of this two-part Q&A, Shear talks with Meridian’s Shreeda Segan about the various AI factions — the ideological movements over the role AI should play in humanity’s future and the risk that it could destroy everything dear to us.

~

Walk me through the current landscape of all of the views on AI risk, like effective altruism (EA), effective accelerationism (e/acc), techno-optimism, and so on.

So there are maybe three vectors on which people differ on this topic. The first one is how seriously you take the idea that the current trajectory of technology — transformer-based architectures for large language models — is the pathway toward general intelligence, whether it takes 3 or 15 years.

If you don't think it's a pathway to general intelligence, all these people calling for regulation are insane. [The thinking is], “This is no different from any other technology. Stop freaking out. Don't form a doomsday cult. You're just a bunch of pessimists. Don't be an idiot.”

They’re a set of people who just don’t think it’s that big of a deal.

Then you can further divide them between ones who are like, “...And, therefore we should go fast” and people who are like, “...Oh, therefore we should slow down and regulate it” because they think we should regulate all new technology. Just between those people, that's the typical battleground between pro-technical progress people and anti. And I would almost always say, in that case, I’d be on the pro-tech side.

Got it. And are these people from the first vector the techno-optimists Marc Andreessen introduced in his manifesto?

[Yeah.] Actually, weirdly, I’d call them “techno-pessimists.” They may be optimistic about the impact of technology in general, but they're AI pessimists.

"EA and e/acc are mostly the same people. Their only difference is a value judgment on whether or not humanity getting wiped out is a problem."

What’s the second vector?

Then you have people who think that we are on the path to building a general intelligence — either in this innovation (transformer-based architectures for LLMs) or in some subsequent one that is imminently possible. And they believe that this path to a general intelligence means it's going to become a human-level intelligence soon.

This goes to a further fork in the road — given that it's going to reach a human-level intelligence, in theory, a priori, it should be able to improve itself. We're smart enough to build it and improve it from here; therefore, it should be smart enough — by definition — to improve itself.

And you have people either agree or disagree with that in a variety of ways. There's a set of people who believe humans are near some maximum on the intelligence curve and that you just can't get much smarter than a human. They believe that making things smarter will get more difficult as you get smarter — faster than adding intelligence helps. It's not really about IQ, but let’s use IQ as a gestural direction. Getting from IQ 100 to 300 is exponentially harder than getting from IQ 50 to 100.

Now, I would say to those people that we have no idea. No one's ever tried to do it before. There doesn't seem to be any obvious reason to assume the problem gets either harder or easier as you go up. But, it could be.

And you end up in vaguely the same conclusion as a techno-pessimist at that point. It's a little different. You wind up thinking, “Okay, this is potentially really game-changing. We have to think about how these human-level AIs are going to do stuff. And what does that mean and how should you use them and how should you regulate them?” There's a bunch of questions there. You have to think about it a little bit more seriously.

But still, it's just not that dangerous. We've had humans around for a long time, and, if you assume it's about as smart as a human, it's not that dangerous. Humans aren't that dangerous, so it's going to be fine.

So these people are optimistic about reaching human-level intelligence but pessimistic about iterating beyond that. What’s the last vector?

This group believes that we're going to build a human-level intelligence, that it's going to become self-improving, and that there's no obvious endpoint to that process.

Within that faction, you have two subcategories:

  • [EA] people who believe the orthogonality thesis. [They question whether] a really smart AI intrinsically and automatically cares about cooperating with us: “Could this thing be super smart and also be optimizing for something we truly don't want it to optimize for? Because that would be bad.”
  • [e/acc] people who think that because it's smarter than us, it's basically our child, and it's okay if it turns us all into carbon dust that it uses to build new diamond navigators: “That's not a problem. It's good actually. The machine god is thermodynamically favored.”

The original e/acc thesis was, “Yeah, yeah, they might turn us into carbon and that's good actually.” (But other people have taken the e/acc title so it doesn't mean anything anymore.) The EA thesis is saying, “Oh my god, they're going to turn us into carbon! This is the end of the world.” So EA and e/acc are mostly the same people. Their only difference is a value judgment on whether or not humanity getting wiped out is a problem. Other than that, they're mostly in total agreement.

I posted a two-by-two, right?

Yes, I saw that. I didn't really realize that e/acc and EA — in the sense that e/acc was originally conceived of — were very similar.

[Eliezer] Yudkowsky is the original e/acc. Literally in 2001, he was on the Extropians mailing list talking about how we have to build the machine god, and when we build it, it will make the world better. And then he had this realization where he was like, “If we actually do that, probably it's going to kill everyone.”

And I've watched other e/accs now go on this same journey of realizing, “Wait a second. Oh, oops.” And that's the natural pathway. People mistake the EA people like Yudkowsky for anti-technology people. No, no. These are deep technologists — transhumanists — who realized, “Oh shit, this particular technology, it's not quite what I thought.” They're the same people.

Techno-optimists disagree way more with both e/acc and EA than the latter two disagree with each other. They agree with e/acc on the policy prescription, which is full speed ahead, but that's because they don't think it's going to lead to building anything particularly damaging. They don't think it's going to cause the end of humanity.

Where does Andreessen's techno-optimism sit in your two-by-two?

Techno-optimists are in the “AI is not that big of a deal and therefore we should go fast” [quadrant]. And e/acc is in the “AI is a massive deal and therefore we should go fast” [quadrant]. But they're actually at opposite poles; their positions don't overlap at all, which confuses people.

I would also say certain techno-optimists agree more with the leftist, anti-big tech, pro-AI regulation people. The latter don't think the thing is that big of a deal either, but they want to regulate it for the same reason they want to regulate Facebook, which is that it's a big tech thing that's being used to manipulate people. The techno-optimists are much closer to them in terms of their model of AI, even though their policy prescriptions are the opposite.


And where are you?

I would say I am probably most aligned with the view that AI will become human-level intelligence, and that there's a substantial chance, if we do it wrong, that it kills us all, which would be very bad.

But Yudkowsky — again, one of the leading proponents of that theory — thinks we're pretty close to building a human-level intelligence. Whereas I think we are still substantially farther away. I think LLMs do lead there eventually, but I think it's like, we're easily fooled by things that talk. You know about ELIZA, right?

Yeah.

When people first encountered her, they thought, oh my god, it's a thinking thing. And no, it's not. It's not smart at all. But it talks, and anything that talks we just assume has a mind, because we don't know how to not assume that. We project onto things that talk or communicate. People start thinking their TVs are alive. We are animists at heart.

And so when you look at ChatGPT and the progress there, there's obviously some reasoning going on. You can present it with simple novel situations and it can reason about them. But people overestimate how much actual reasoning is happening. I know the difference between me thinking about a problem and me pattern-matching it blindly to some related thing that I've seen before. I think it's mostly doing the latter.

The AI is missing something. It's learned from entirely passive data and examples. To actually learn, you have to learn in an active loop. Instead, it's trying to predict what the average entity would do, or what it would write as the next word in this thing. It's not trying to achieve a goal. Well, it is trying to achieve a goal. It's just this very weird orthogonal goal that doesn't work like a normal mind does at all.

So why does Garry Tan now have e/acc in his Twitter username? Has e/acc transformed into something else?

The guy behind [coining the term] e/acc is Beff Jezos, who deliberately designed e/acc to be a meme that would appeal to techno-optimist tech people, to try to get all of [those] people on his side. And what's happening right now is prominent investors who put e/acc in their handles — who I'm quite confident do not want to raise a machine god that kills us all — just like technology and are against regulation. I don’t think [these investors have] that strong an opinion about AI one way or the other. They're like, “Oh, a meme, yay, pro-techno-optimism!”

"Labels are rallying flags for tribes. By taking control of the label and redefining it, you actually control what people believe."

At first, I was against it. Like, oh no, they're empowering the e/acc people. Then I realized they are so much more powerful and influential than Beff that what's going to happen is e/acc just turning into [techno-optimism]. e/acc is being overtaken by its own meme: the thing it is on the surface becomes the thing it is underneath.

Like, I'm an e/acc now. Technology's awesome. We should make progress go faster. Hooray.

It's just become techno-optimism, basically?

They are the same thing now, and it's going to confuse people. I'm annoyed with Beff for this. It's really his fault for deliberately creating a misleading meme. This is what happens when someone puts [forth] an unappealing message but wraps it in this wrapper to try to get a bunch of people on board with an idea [that] they would reject if you proposed it directly. You're doing this memetic warfare thing.

Now every discussion on the topic is going to be deeply confused because everyone's going to think e/acc means different things. The memetic commons has been spoiled.

The labels have now become meaningless, to the point where the same thing's happened with effective altruism. Effective altruism meant, basically: hey, maybe take your charity work seriously. Don't just give to whatever random thing sounds good in a soundbite. But that's so obviously the right thing to do that it's kind of meaningless as a movement at this point.

People are still confused about how EA — which they associate with Sam Bankman-Fried — became this AI existential-risk (x-risk) thing.

Inside of EA there's this big split between the x-risk people, who funded a bunch of the x-risk type stuff, and the malaria bed net people. It's almost [a] diametrically opposite [set of] ideas of how to go about effective altruism: We should worry about AI destroying the world versus we need to deworm Africa.

Right, but how do you explain to someone how they fall under the same label?

EA is a name for a cultural movement — like hippies or trads. There's no guarantee they agree on anything.

Don't think of EA as any kind of coherent body. It's not like Marxism. Marxism is some guy's idea, and for people who follow that specific ideology, the label means something. Whereas leftism, being a leftist or even being a liberal, is a big tent. It's not a useful term. It gestures in a very broad direction.

The EA people think it's important to try to do the most good with your effort that you can, bearing in mind that you are of course a finite being who cannot fully know the consequences of your actions and yet one should try. That's sort of the essence of EA, which, honestly, is almost so anodyne that it's hard to argue with.

The only coherent countermovement to effective altruism is what I would call localism, which [says] trying to do the most good is hubris and instead, you should tend to your garden. Make your neighborhood better. Make your city better. When your city is a shining beacon on the hill, you may work on your state. When your state is done, you may work on the country. When the country is done, you may work on the world.

I am, in some ways, more a localist than I am an effective altruist, in the sense that I focus on San Francisco politics more than global ones. Because I think it's hubris to do global stuff.

Why do you think Andreessen, Vitalik Buterin, and others are competing over this space with their own labels?

Labels are rallying flags for tribes. By taking control of the label and redefining it, you actually control what people believe. People decide that they are Democrats or Republicans first and then they figure out what their policy beliefs are later. And they decide they’re e/acc and then they go to look for what their policy goals are.

Which is why I would agree with Paul Graham about this: Keep your identity small. There's a reason I don't have the pause symbol or e/acc in my Twitter handle. I think it poisons your brain. If you care to think clearly, don't let yourself get eaten by the brainworms of a label.


Part two of this Q&A is now live here.
