Discussion about this post

LO

This article is a good reminder to avoid extremism and a useful explanation of the concept of Overton windows. However, I have two critiques of it.

The first critique is that it's a bad summary of the AI x-risk / EA position. The first people who cared about AI x-risk were radical transhumanists who wanted aligned superintelligent AGI to create an unimaginably prosperous world. For example, Bostrom's "Letter from Utopia" (https://nickbostrom.com/utopia) describes starting with abolishing death and massively enhancing human intelligence. In my experience, the culture in AI x-risk circles remains one where space colonisation, great personal liberty, and near-unlimited prosperity for everyone are taken for granted when talking about what the future should be like. What's changed since the early days is that some people have gotten very pessimistic about the difficulty of the technical AI alignment problem. (My own stance is that as a civilisation we have not yet made much of an effort on technical alignment, so it's too early to put massive odds against a solution.)

My second critique is that this article seems like an example of one of the main cultural forces standing in the way of growth.

The object-level question is: under what conditions will AI cause human extinction? Experts disagree on this, but very few serious people think the answer is obvious either way. There is a lot of technical work being done to find the answer.

Now what the article does, in effect, is say this: forget the object-level, technical question. The real question is about narratives (i.e. politics and power). And the solution? There's no mention of technical work or science, or even of any tricky object-level question (it's assumed everything is political); the message is to look at the positions and assume that the middle position is right. If all discourse consisted of political games along a 1d axis, then yes, eventually there's probably a compromise in the middle (or no solution at all). But in this case, the key updates on what we should do are going to come from future breakthroughs in AI, which in turn depend on the brute facts of what is technically feasible. Everyone's (hopefully) on one team when it comes to thinking that apocalypse is bad. And if I had to draw a line between two sides, the most salient one I see is between those trying to use AI as a case study for their own political battles, and those trying to build beneficial tech and carefully wring bits of information out of reality so we know how to do so.

I think a big reason why our civilisation is under-performing on growth is that politicking and narrative-spinning are taking over discourse, which in turn drives policy and culture. Twitter (and to a lesser but still relevant extent, Substack) selects hard for virality, and narratives are more viral than tech or science. Sometimes an issue should not be shoe-horned into a political narrative, and sometimes you have to stop spinning narratives and engage with the actual tech and the actual science.

Jeff Giesea

For another alternative (or synthesis, some might say), check out this piece on Technorealism: https://boydinstitute.org/p/the-technorealist-alternative

