Discussion about this post

Matt

In the Google example, doesn't the "AGI Doesn't Exist" scenario also result in an infinite downside risk, assuming Page is being honest and willing to bankrupt the company chasing it? This downside risk seems to be the same as in the "AGI Exists" scenario where Google does not invest/is not able to capitalize on it.

JHuff

If we create a digital god, will it be benevolent or wrathful? The analogy here is close, but not quite right, because it ignores the infinite downside scenario of AGI.

We rule the earth because we are smarter than everything else that lives here. The difference between eating burgers and being factory farmed as a burger source is purely brain capacity, and we are about to create something with more brain capacity than we have. We are expecting it to be subservient to us. Are we subservient to any less intelligent thing that we have ever encountered? Can we honestly expect to successfully control a god? Can we guarantee what we summon won't be a demon instead?

I don't know. I do think there's an infinite upside to AGI, if the AGI remains fundamentally devoid of independent will: a benevolent god that does only what we ask it to do. As long as we ask it for the right things, we can use that tool for great benefit. But ask it the wrong things, or imbue it with programming that unintentionally leads to the wrong objectives, and we can potentially annihilate ourselves just as surely as we could with nuclear weapons. Worse yet, develop it such that it begins having its own will and goals, and we will very quickly depart from the top of the food chain. That's the wrathful deity scenario.

So I'd draw the diagram differently, with the "invest in AGI" box split in two: +infinite for Benevolent, -infinite for Wrathful. We don't need the "don't invest in AGI" box; just as with the nuclear bomb, the fact that we have realized it is probably possible means that someone on earth will eventually build it, and we would be smart to do the same and get there first. We don't know whether we are responsible enough to handle it properly, but we do know we'd prefer not to find out whether anyone else is. Unlike the bomb, though, we might not be in control of the big red button when we finish building it.
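
For what it's worth, here's a minimal sketch of that split box in plain arithmetic, with an entirely hypothetical probability of the benevolent outcome. It just shows that once both an infinite upside and an infinite downside sit inside the "invest" branch, a naive expected-value calculation stops giving an answer at all (the result is NaN).

```python
# Sketch of the revised payoff diagram described above. The probability is
# hypothetical; nobody knows this number. The point is only that mixing an
# infinite upside with an infinite downside makes the expected value
# indeterminate in ordinary floating-point arithmetic.
import math

payoffs = {
    "invest, benevolent": math.inf,   # infinite upside
    "invest, wrathful": -math.inf,    # infinite downside
}

p_benevolent = 0.9  # hypothetical probability of the benevolent outcome

expected_value = (
    p_benevolent * payoffs["invest, benevolent"]
    + (1 - p_benevolent) * payoffs["invest, wrathful"]
)

print(expected_value)  # nan: inf + (-inf) is undefined
```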
