11 Comments
Nov 28, 2023 · Liked by Packy McCormick

This article is a good reminder to avoid extremism and a useful explanation of the Overton window concept. However, I have two critiques of it.

The first critique is that it's a bad summary of the AI x-risk / EA position. The first people who cared about AI x-risk were radical transhumanists who want aligned superintelligent AGI to create an unimaginably prosperous world. For example, Bostrom's "Letter from Utopia" ( https://nickbostrom.com/utopia ) describes starting with abolishing death and massively enhancing human intelligence. In my experience, the culture in AI x-risk circles remains one where space colonisation, great personal liberty, and near-unlimited prosperity for everyone are taken for granted when talking about what the future should be like. What's changed since the early days is that some people have gotten very pessimistic about the difficulty of the technical AI alignment problem. (My own stance is that, as a civilisation, we have not made much of an effort on technical alignment yet, so it's too early to put massive odds against a solution.)

My second critique is that this article itself seems like an example of one of the main cultural forces standing in the way of growth.

The object-level question is: under what conditions will AI cause human extinction? Experts disagree on this, but very few serious people think the answer is obvious either way. There is a lot of technical work being done to find the answer.

Now what the article does, in effect, is say this: forget the object-level, technical question. The real question is about narratives (i.e. politics and power). And the solution? There's no mention of technical work or science, or even of any tricky object-level question (it's assumed everything is political) -- the message is to look at the positions and assume that the middle position is right. If all discourse consisted of political games along a 1d axis, then yes, eventually there's probably a compromise in the middle (or no solution at all). But in this case, the key updates on what we should do are going to come from future breakthroughs in AI, which in turn depend on the brute facts of what is technically feasible. Everyone's (hopefully) on one team when it comes to thinking that apocalypse is bad. And if I had to draw a line between two sides, the most salient one I see is between those trying to use AI as a case study for their own political battles, and those who are trying to build beneficial tech and carefully wring bits of information out of reality so we know how to do so.

I think a big reason why our civilisation is under-performing on growth is that politicking and narrative-spinning are taking over discourse, which in turn drives policy and culture. Twitter (and, to a lesser but still relevant extent, Substack) selects hard for virality, and narratives are more viral than tech or science. Sometimes an issue should not be shoe-horned into a political narrative, and sometimes you have to stop spinning narratives and engage with the actual tech and the actual science.

Nov 28, 2023 · Liked by Packy McCormick

For another alternative (or synthesis, some might say), check out this piece on Technorealism: https://boydinstitute.org/p/the-technorealist-alternative

Nov 28, 2023 · Liked by Packy McCormick

The part about Malthus is so spot on and is what everyone seems to forget. He statistically modeled that population growth in England was unsustainable, and based on the publicly available data of his time, he was right. But he could never have predicted the Industrial Revolution, which shattered all his models.
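As a rough sketch of the kind of model Malthus had in mind (population compounding geometrically while food supply grows only arithmetically), with purely illustrative numbers rather than historical data:

```python
# Toy Malthusian model: population grows geometrically, food supply grows
# arithmetically, so without an outside shock (like the Industrial Revolution)
# the two curves must eventually cross. All numbers are illustrative, not data.

def years_until_shortfall(population=7.0, food_supply=10.0,
                          pop_growth_rate=1.03, food_added_per_year=0.2):
    """Count the years until population exceeds what the food supply covers."""
    years = 0
    while population <= food_supply:
        population *= pop_growth_rate        # geometric (exponential) growth
        food_supply += food_added_per_year   # arithmetic (linear) growth
        years += 1
    return years

print(years_until_shortfall())  # with these toy numbers, roughly 27 years
```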

History doesn’t repeat itself, but it does rhyme.


Well explained. This rush to the extreme from both ends can be seen across many debates: sometimes it’s hard to believe a person’s stance until you realize they aren’t there to win, but to shift the ‘normal range’ of that middle ground.

Nov 28, 2023 · Liked by Packy McCormick

Great article: very well written and easy to understand while tackling a non-trivial idea.


The cat is out of the bag. As you said, the only solution is somewhere in the middle of the two extremes. The question is how to get there fast, before one of two things happens. Either certain countries, i.e., the US and EU, regulate so heavily that they slow down significantly and lose all the benefits of a human-centric AI that augments rather than replaces us, while other countries keep marching at full speed, reach AGI, and create the extreme risk; or we do nothing at all, and it eventually kills us. We need an independent body to mediate this tug-of-war. Governments are not the correct entity to mediate or regulate, since they have minimal expertise, and most of them are easy to influence and are short-term thinkers.


Linking the "AI risk" meme to "degrowth" is a completely false correlation. It is, however, a recent and brilliant move on the part of the e/acc extremists to spread the false fear that EA seeks to "stop" AI development and wants to stop the economy from growing. By including "degrowth," you tip your own hand and reveal that even in this article, which purports to be aiming only at an overview and simply shining a light on the "middle realm," you are in fact pulling hard for the e/acc team.

The famous letter last spring that called for a temporary six-month halt on releasing frontier models into the wild was just a call to "tap the brakes," not to turn off the engine or dismantle the car. And if you recall, that letter was not taken seriously by any of the media that reported on it. It never mentioned degrowth, nor has degrowth been mentioned by any EA icon. It is a crock. Its sole aim is to pull hard for the e/acc crowd. How did you miss this? You seem to have very good intent.


This post doesn't present the "EA" side accurately. In fact, EA is the wrong name for the anti-e/acc group; I think better names for the two sides are Accelerationists and Doomers (EA is a larger group, consisting of people from both sides and neither, but more aligned with the Doomers overall).

Disclaimer: I'm biased towards the Doomer/"EA" side of the conversation.

The Doomer point of view seems to be: AGI (=smart/powerful AI) is going to either extremely improve the world or kill us all. We don't know which of the two will happen. If the first AGI is unaligned, we *immediately and permanently lose*, no retries. Once we do have an aligned AGI, however, the future is bright and utopian. We're currently moving towards AGI very fast (technological progress). So, we need to hit the brakes on AI *capability progress* and focus on AI *alignment progress*: stop making AI more powerful (stop moving towards AGI), and spend more years on safety research (keep moving towards aligned AI). If it costs us an extra 100 years of technological stagnation before we create our machine god, so be it; the suffering until then is minuscule compared to the happiness of the long-term future.
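A toy expected-value sketch of that trade-off, with entirely made-up probabilities and values (none of these numbers come from the comment above), might look like:

```python
# Illustrative only: made-up numbers showing why, under the Doomer framing,
# a long delay looks cheap if it meaningfully cuts the chance of permanent loss.

VALUE_OF_ALIGNED_AGI_FUTURE = 1_000_000   # arbitrary units of long-term value
COST_OF_100_YEARS_OF_STAGNATION = 100     # what the delay forgoes, same units
P_DOOM_IF_WE_RUSH = 0.20                  # assumed, purely for illustration
P_DOOM_IF_WE_DELAY = 0.05                 # assumed, purely for illustration

ev_rush = (1 - P_DOOM_IF_WE_RUSH) * VALUE_OF_ALIGNED_AGI_FUTURE
ev_delay = (1 - P_DOOM_IF_WE_DELAY) * VALUE_OF_ALIGNED_AGI_FUTURE \
    - COST_OF_100_YEARS_OF_STAGNATION

print(f"EV(rush)  = {ev_rush:,.0f}")    # 800,000
print(f"EV(delay) = {ev_delay:,.0f}")   # 949,900
```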

The Accelerationist point of view seems to be a collection of some of the following "counter-doomer" ideas, agreeing about the high value of the AGI-powered utopia but disagreeing about the process it takes to get there:

1. No chance of doom. AGI will definitely not kill us all, so, no reason to delay a good thing.

2. More than one life. If we get unaligned AGI, we can stop it before it permanently ruins the world, and maybe try again. We can afford mistakes.

3. Decentralization. AGI is more likely to be aligned if it's created and controlled by millions of people around the world, rather than by a few governments/companies/nonprofits.

4. Short-term > long-term. If we don't really care about trillions of potential future lives, AGI doom isn't such a big problem, but AGI utopia is still amazing for everyone currently alive.

5. Selfishness. It would be horrible to die before the AGI singularity makes everyone immortal.

6. A world without aligned AGI is too sad. A small chance of the world ending is a risk that is worth the reward.

(Please let me know if and how I'm misrepresenting the two sides!)

The doomer side of me has good arguments against points 1-3 (I think p(doom)>1%, we only have one chance, and our centralized bodies are good enough right now). I don't think I can argue against points 4-6; they require a different worldview. In fact, I really empathize with point 5 - I'm not a full utilitarian; I care about myself and the people close to me way more than I care about some made-up future pseudohumans.

I also want to highlight the "asymmetry tug-of-war" argument, which claims it's not a fair battle. I actually claim the *inverse*: it is indeed unfair, but in the other direction.

Packy says: Doomers need to get regulation once, and that's it. Accelerationists need the world to keep improving in capabilities and safety until an aligned AGI emerges.

But I'd say the opposite: Accelerationists only need to cause a single mistake, just one rogue AGI, and the world will go up in flames. But Doomers need to work constantly to hold back AI capabilities progress and constantly monitor computation resources, for... years, decades, or centuries, however long it takes to solve the alignment problem.


Thanks for this Packy.

I'm with you on the fearmongering infecting the zeitgeist and never going away, a la nuclear energy fears.

Where I diverge is on your middle-ground thesis. As in, you'd put Horowitz and some of the recent e/acc folks in the extreme camp.

Zooming out for a moment, I think it's useful to return to fundamental principles. What is government for? Where should we insist on state intervention? What is the empirical evidence to guide the amount of regulation in an economy? Plenty of research shows an inverse relationship between regulatory burdens and innovation. No serious policymaker advocates for anarchy, with tainted milk on store shelves and minimal state intervention in the economy, yet beyond those extremes, the natural experiment that is the United States and Europe seems to paint a clear picture.

Insisting that anyone advocating for restricting private enterprise through the coercive power of the state have data to support their restrictions is immensely reasonable. If I said we should ban the color pink, you'd ask why.


Packy and co, thanks for your continued work on pressing for innovation.

I tried writing a definitive rebuttal to the AI doomsayers:

https://zedtarar.medium.com/ai-apocalypse-or-overblown-fear-challenging-the-narrative-of-technological-doom-07b0ec4dbdd6
