23 Comments

Interesting take. I also listened to the Leopold / Dwarkesh piece but I couldn't get past 90 minutes before the "college guffaws" got too annoying and I bailed.

I like the "AI is not God" slant, and I do think it's a massive enabler, but I'm not sure whether the "new black gold" metaphor works. I definitely see the parallels but am not sure we are far enough along the curve to judge how this turns out. Will reflect and refine over the year. Well worth returning to this one.

Jun 18 · Liked by Packy McCormick

Great job on this piece! It is very thought-provoking. I am going to send it to everyone I know.

Jun 18 · Liked by Packy McCormick

Packy! I loved the start of this one. Reminds me of Nassim Taleb's quote: "We are quick to forget that just being alive is an extraordinary piece of good luck, a remote event, a chance occurrence of monstrous proportions."

I actually wrote about this stroke of luck (as well as systems thinking, among other topics) in my new book, Luck by Design -> https://amzn.to/3xo7xm0

Appreciate your thoughtful commentary, as always.

-Adam

Jun 21 · Liked by Packy McCormick

I'm surprised that you accept the claim that a modern LLM can pass the Turing test. I frequently find myself incredibly frustrated with ChatGPT-4o's inability to build understanding around its blind spots, in a way that is totally different from, say, your average 5th grader. For instance, if I ask GPT to translate a poem and point out one problem with its translation (say, too many syllables on a particular line), then when it rewrites the poem in response, it may reproduce the exact same problem, and we can get into a loop where it constantly apologizes, says it understands my complaint, even explains my complaint back to me, and then makes the same mistake again. This isn't a small glitch; it's a fundamental flaw in the information model an LLM uses, and it makes me think an LLM can never reach AGI without some deeper integration with structural forms of knowledge representation.
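The only reliable workaround I've found for this failure mode is to move the constraint check out of the model and into code, so the loop terminates on a measured fact rather than an apology. A minimal sketch, assuming a hypothetical `ask_llm` function standing in for whatever chat API you use, plus a crude vowel-group syllable counter (a real one would use a pronunciation dictionary like CMUdict):

```python
import re

def count_syllables(line: str) -> int:
    # Crude heuristic: count groups of consecutive vowels in each word.
    return sum(len(re.findall(r"[aeiouy]+", w.lower())) for w in line.split())

def translate_with_constraint(poem: str, max_syl: int, ask_llm, attempts: int = 5) -> str:
    """Retry until every line fits, instead of trusting the model's
    claim that it 'understands' the complaint."""
    prompt = f"Translate this poem with at most {max_syl} syllables per line:\n{poem}"
    draft = ""
    for _ in range(attempts):
        draft = ask_llm(prompt)
        bad = [ln for ln in draft.splitlines()
               if ln.strip() and count_syllables(ln) > max_syl]
        if not bad:
            return draft
        # Feed back the specific offending lines, measured externally.
        prompt = (f"These lines exceed {max_syl} syllables: {bad}\n"
                  f"Rewrite only those lines, keeping the rest:\n{draft}")
    return draft  # best effort after running out of attempts
```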


Really great article. I think the analogy of AI being oil is a great one. I'm mostly with you: I don't think AI will take over; it will just become increasingly useful software that makes life better in general. I think there will be problems along the way (e.g., deepfakes, get-rich-quick pyramid schemes), but things like that have always existed and we've worked through them. Eventually it may become human-like and cause major issues, but that feels many hundreds of years away, and assumes we don't wipe ourselves out before then.

Jun 19 · Liked by Packy McCormick

A very good and very well-articulated piece. I, too, am optimistic about AI. It is a technology that will enhance, not replace, human capabilities. The only risk we run is which human capabilities it will enhance. At the end of the day, a human too is an animal, and in this dog-eat-dog world, the faster one adapts to the new technology, the better. Others will be left behind. And if we're talking about factors like luck, it will come down to where you are living currently. Will your nation be a leader or a follower in the evolution? Unjust discrimination often gets worse, not better, with time. Money comes to money, and poverty to poverty. Education comes to education, and ignorance to ignorance. Those once victimised by history are likely to be victimised yet again. And those whom history has privileged are more likely to be privileged again.

Jun 18 · Liked by Packy McCormick

I’m not done reading that piece, but two things have struck me so far.

First, even if he’s correct that they will get orders-of-magnitude improvements, I don’t think it logically follows that these will be virtual workers pounding away on the AI problem itself. On the surface, better AI just means better answers.

If we could make them fully autonomous agents, why aren’t we doing it now? A million bright high school seniors could get a lot done.

The other thing is that he mentions his millions of virtual AI researchers working on these problems, devising theories, and running tests. The idea of autonomous AI agents running tests scares me a bit. It’s one thing to give an agent the ability to provide answers; letting it act is something else entirely, and it leaves a lot of wiggle room for what an agent could decide to do and then classify as a test.
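To make the distinction between answering and acting concrete, the pattern I'd want is a hard gate between an agent proposing an action and anything actually executing it. A minimal sketch, with a hypothetical `run_tool` executor, where nothing runs as a "test" without explicit human sign-off:

```python
def gated_act(request: dict, run_tool, approve=input):
    """Refuse to execute any agent-proposed action without human approval.
    request: e.g. {"tool": "run_experiment", "args": {...}}
    run_tool: hypothetical executor that actually performs the action."""
    prompt = (f"Agent wants to call {request['tool']} "
              f"with {request['args']}. Allow? [y/N] ")
    if approve(prompt).strip().lower() == "y":
        return run_tool(request["tool"], request["args"])
    return {"status": "denied", "reason": "human approval withheld"}
```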

Jun 18 · Liked by Packy McCormick

Wow. Thoughtful and brilliant writing!


Good thoughts. You are getting a lot right here. But be careful of the comparisons, corollaries, and inferences you are making; many of them are weak or very flimsy. For those of us who have been involved in (and critical of) AI development for over a decade, the curve of development is very flat and is being skewed toward simplistic manipulations of human behavior, away from deep analytics.

In the early 2010s I heard a panel of MIT AI experts proclaiming the vast extent of AI capabilities and the acceleration of functions coming in the next few years as hundreds of thousands of data points and images were ingested into the models. I pointed out that, years before, the first time my 3-year-old child saw a squirrel in the park she asked what it was; she knew it wasn't a cat, dog, frog, lizard, or mouse with one glance, one instance, one single data point. Once told it was a squirrel, she could from then on extend 'squirrelness' to other instances: squirrels of other colors, in other locations, printed, drawn, sculpted, and so on. When, I asked, will we have an AI that can take one instance of an object it has never encountered before, fabricate its own neural circuits, categorize the essence of the thing, and replace and repair its own circuits over 7, 8, 9, or even 10 decades without losing the acquired ability, as any healthy 3-year-old clearly can? The MIT panel of AI experts mumbled among themselves and finally admitted: "It will be at least one hundred years until we can approximate this, if ever. And probably never. Three-year-old humans are geniuses."

I know the industry fairly well. Not much has substantially changed in the past decade or so if you take out monetizing stolen aggregated content. True progress is very slow and very difficult; the rest is hype.
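For what it's worth, the closest thing today's systems have to that one-glance learning is nearest-neighbor matching in a pretrained embedding space, and even that only works because the space was itself trained on millions of images first. A minimal sketch, where `embed` is a hypothetical stand-in for any pretrained image encoder:

```python
import numpy as np

def one_shot_classify(query_vec: np.ndarray, examples: dict) -> str:
    """Label a new image by cosine similarity to a single stored example
    per class (the 'one glance'). The heavy lifting was already done by
    whatever model produced the embeddings -- nothing like a child's
    genuinely single-instance learning."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(examples, key=lambda label: cos(query_vec, examples[label]))

# Hypothetical usage: one embedding per animal, then match a new sighting.
# animals = {"cat": embed(cat_img), "dog": embed(dog_img),
#            "squirrel": embed(squirrel_img)}
# print(one_shot_classify(embed(new_img), animals))
```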


It feels ironic to equate AI to oil in order to argue that it will have a purely positive impact on society. Oil, and our world’s total dependence on it, has both enabled our current prosperity and become the source of runaway global warming. At this moment there is no credible path to slowing, let alone stopping, global warming. Despite the advances in certain clean power generation (and distribution) technologies, and folks trying to get us to net zero, I don’t think there are any credible thinkers at this point who actually believe we are going to get there. If anything, thanks to the rise in energy consumption associated with technological advances in the last few years, projections are getting worse, not better! So if AI is oil, in some ways that’s more terrifying than AI being God. At least some people believe God is a loving force that wants to save us.


https://en.wikipedia.org/wiki/Michael_E._Mann is a credible thinker on the topic. This is what he has to say: https://www.aps.org/apsnews/2023/09/climate-doomism-disregards-science


Matt — thank you! I hope Michael is right, but I fear the opposite is true: so far the development of AI and its insatiable need for electricity has actually worsened the crisis, and it is *currently* leading to many coal plants being kept open that were slated for closure, with new fossil-fuel-burning plants now being planned (see link below). The situation in China, where they continue to build out coal plants at scale to support data centers, is even worse than in the US. Optimism is important, but I find the unwillingness in parts of our community to seriously acknowledge the externalities and threat paths of new technologies frightening. Hand-waving about the hope that unproven future technologies will save us also absolves folks of any moral responsibility to do anything to solve the problem today: https://www.washingtonpost.com/business/2024/06/21/artificial-intelligence-nuclear-fusion-climate/


Thank you for the thoughtful responses, Tim. There is no reason we couldn't generate all the power AI needs with renewables, for example, and many of the major data center operators are placing facilities in areas that allow for this. Technology is my wheelhouse, and chip designers are very good at reducing power requirements. It does take time, but we've seen drastic reductions in the power consumed by CPUs. GPUs will follow, or new, more power-efficient AI chips will take their place (this is already happening to some extent). It must happen, because the current power needs are too great.
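To put a number on why chip designers have so much headroom here: first-order CMOS dynamic power goes as P ≈ C·V²·f, so power falls with the square of supply voltage. A toy calculation with purely illustrative numbers, not any specific chip:

```python
def dynamic_power(c_farads: float, v_volts: float, f_hz: float) -> float:
    """First-order CMOS dynamic power: P = C * V^2 * f."""
    return c_farads * v_volts**2 * f_hz

# Dropping supply voltage from 1.2 V to 0.9 V at the same switched
# capacitance and clock frequency cuts dynamic power by ~44%.
base = dynamic_power(1e-9, 1.2, 2e9)
scaled = dynamic_power(1e-9, 0.9, 2e9)
print(f"reduction: {1 - scaled / base:.0%}")  # -> reduction: 44%
```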

Technology is the thing that has always set us apart from other species and allowed us to tackle every other potentially species-ending challenge. It is not "hand waving" to assume that technology will also allow us to tackle this one.


I’d also say that Michael’s article acknowledges that getting off the “highway” still requires meaningful climate action, and to date we haven’t seen meaningful climate action. If anything, with the supercharging of AI since his article was published a year ago, and the collective watering down of commitments to climate action globally already in 2024, the trends have gotten worse. We are still barreling down the metaphorical highway, and it doesn’t seem like we’re headed for an off-ramp yet.


Curiosity-provoking essay! Interesting to think about human intelligence as an evolutionary adaptation to an ever-changing world... a world of impermanence. If so, human intelligence is not just about knowledge, but about knowing that one can never know everything (David Deutsch's 'beginning of infinity', if I understand him). Thanks so much!


As a self-proclaimed "skeptical techno-optimist", I appreciate your perspective and have shared it widely. My one caveat, which I'd love to hear your views on, is emergence: the possibility of emergent properties arising from a massive number of individual, mostly dumb units.

See Byron Reese's "We Are Agora", which discusses how ant hills, beehives, and termite mounds are natural examples of "emergent intelligence" manifesting for the benefit of the whole, where no single unit is all that bright. Schools of fish and flocks of birds act similarly.

I'm actually somewhat more concerned about swarms of small-ish AIs collaborating in unanticipated ways, and some sort of collective intelligence emerging as a result, than I am about a singular AGI/ASI arising.
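Flocking is the textbook demo of what I mean: Reynolds' boids rules (separation, alignment, cohesion) produce coordinated group motion from purely local updates, with no unit ever seeing the whole. A minimal 2-D sketch; the parameters are arbitrary and chosen only to show the mechanism:

```python
import numpy as np

def boids_step(pos, vel, radius=2.0, sep=0.05, ali=0.05, coh=0.01):
    """One update of Reynolds' three local rules per agent."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbr = (d > 0) & (d < radius)  # nearby agents only, excluding self
        if not nbr.any():
            continue
        center = pos[nbr].mean(axis=0)
        new_vel[i] += sep * (pos[i] - center)                 # separation
        new_vel[i] += ali * (vel[nbr].mean(axis=0) - vel[i])  # alignment
        new_vel[i] += coh * (center - pos[i])                 # cohesion
    return pos + new_vel, new_vel

# 50 agents with random starts: coordinated motion emerges over a few
# hundred steps with no leader and no global plan.
rng = np.random.default_rng(0)
pos = rng.uniform(0, 10, (50, 2))
vel = rng.uniform(-0.1, 0.1, (50, 2))
for _ in range(300):
    pos, vel = boids_step(pos, vel)
```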

Thoughts?


I don't think it was oil that made us all rich starting in 1949. Before World War II, only the wealthy went to college; after the war, every kid was expected to go. And it worked: look at us now.


The AI-is-oil analogy is thin in my view. Dam-making knowledge transforms water into a standing reserve for use (turbines, flooding an enemy's croplands); fission transforms uranium into a standing reserve (bombs, nuclear reactors); drilling and pumping tech transforms oil into a standing reserve for use in other technologies (plastics, fuel). Where AI differs as a technology is that it is the knowledge of, and use of, knowledge, the very attribute that made all those other technologies, transformed into a standing reserve for use in all areas of human endeavor.

It isn't God either. Heidegger noted that technology is revealing, and AI, like oil and water, does reveal things about us through our relationship to it. But more than those other technologies, AI has a revelatory and God-ish quality, due to its innumerable instrumental uses (breadth) and the existential implications it reveals (depth): do we cure cancer or create bioweapons?

Moreover, oil is formed from organic material, just as humans are organic material; AI, by contrast, is formed from non-organic abstractions, symbolic representations (knowledge) produced by those organic humans. AI is water? Oil is important to modern life, but water is critical to life itself, perhaps more analogous to the way knowledge is fundamental to being human. AI is hydrogen? Hydrogen is a building block for the hydrocarbons humans use (oil, gas, coal) as well as for water (H2O). From an evolutionary-universe perspective, with information as a substrate, the universe animates increasingly complex stuff (including life) that becomes increasingly conscious of itself in an information feedback loop, which, in a way, is highly analogous to the instrumental uses and revealing nature of AI animated by humans.


Got to love the poor man's technological-optimism manifesto, dumbed down for a high schooler. Comparing AI to oil is like discounting China's ability to build solar panels because the US produces a lot of oil right now. The entire equation of technological optimism seems off because it is off: it is a manufactured thing, pushed by people who are trying to profit from it.

I can read any variation of how Apple Intelligence is a great thing for the stock, but at a certain point I've read them all before. The same goes for the story of these graphs going up in a curve. The real world just happens to be nothing like that.

author

Man, this hurts coming from ... "an amateur futurist with 210,000 LinkedIn followers and a 2-time LinkedIn Top Voice."


Sorry to hear that, keep pushing forward! Your voice and insights are valuable.


I’m sorry to hear that it’s hurting you.


I’m sorry to hear that. Keep your head up.
