Welcome to the 1,861 newly Not Boring people who have joined us since last Monday! If you haven’t subscribed, join 198,862 smart, curious folks by subscribing here to help us hit 200k:
🎧 Audio is back! Listen on Spotify or Apple Podcasts (will be up in the next 30 minutes)
Today’s Not Boring is brought to you by… Secureframe
Easy SOC 2 compliance + AI-based RFP automation
With a streamlined workflow and expert guidance, Secureframe automates the entire compliance process, end-to-end.
What makes Secureframe different?
Get audit-ready and achieve compliance in weeks, not months with built-in remediation guidance and 100+ integrations.
Stay compliant with the latest regulations and requirements including ISO 27001, GDPR, HIPAA, PCI, and other standards.
Automate responses to RFPs and security questionnaires with AI.
Secureframe is trusted by hyper-growth organizations: AngelList, Ramp, Lob, Remote, and thousands of other businesses. Schedule your personalized demo of Secureframe today.
Hi friends 👋,
Happy Monday! If you listened to the All-In pod this weekend, you might be concerned that there’s really no point in starting a startup, learning how to code, or doing … anything anymore.
Don’t quit just yet.
When we have intelligence “too cheap to meter,” history suggests that we’ll find ways to put it all to work, with latent demand left to spare.
BTW, I won’t be offended if you watch SpaceX’s first flight test of the Starship, the largest rocket ever to fly, while you read. You can watch it here.
Let’s get to it.
Intelligence Superabundance
Will AI steal our jobs, let us work less, or force us to do more, better?
Induced demand, Jevons Paradox, the Marchetti constant, and consumer psychology 101 suggest that the increased supply of intelligence will create more demand for tasks that require intelligence, and that we’ll turn gains in intelligence efficiency into ways of doing new things that weren’t previously feasible.
Instead of a zero-sum competition with the bots, we might be on the cusp of intelligence superabundance. We may spend the same amount of time working; we’ll just achieve much more in those hours than we currently do.
Parkinson’s Law – that the amount of work expands to fill the time available for its completion – suggests that when each discrete piece of work takes less time, the amount of work will expand to fill the rest of the time. But that’s just a rule of thumb.
We can examine it a little more rigorously with a mental flip: instead of thinking about AI as something separate from human intelligence, we should think of it as an increase in the overall supply of intelligence.
As internet pioneer David Gelernter wrote in a prescient 2010 piece, Dream Logic, The Internet, and Artificial Thought, “Human intelligence is the most valuable stuff in the cosmos, and we are always running short. A computer-created increase in the world-wide intelligence supply would be welcome, to say the least.” There’s latent demand for brainspace.
Treating intelligence as a resource with a growing supply, we can use economic theories like induced demand and Jevons Paradox, the Marchetti constant, and even observations from Jeff Bezos to explore the idea that with AI, humans will need to do more, better.
Then the important question becomes not “Will I lose my job?” but “What would I do with superabundant intelligence?”
We’ll get there, but first, a little econ.
Induced Demand and Jevons Paradox
Induced demand is one of the funniest concepts in economics, and I think it’s one of the most important to understand in an environment of abundance, be it physical or digital.
Here’s a simple definition: induced demand occurs when an increase in the supply or capacity of a good, service, or resource leads to an increase in its consumption. Supply begets demand.
This isn’t true of everything, of course. 10x’ing soda production wouldn’t increase soda consumption by 10x. Induced demand mainly holds in cases where there’s latent demand, where people would use more of a thing if only more existed, or existed accessibly enough to use.
Traffic is the canonical, super frustrating example. The idea is that adding new roads or lanes does practically nothing to reduce traffic. More cars just hit the road to fill up the new capacity, and the traffic is right back where it started. The result is that more people are stuck in traffic.
Casey Handmer has a good blog post on this. He explains that latent demand for road space in LA is ~30x the current capacity, meaning that ~3% of people who would want to travel actually drive. The other 97% find the experience of driving in LA too painful, so they just don’t do it. When a new lane opens up, some of the people right on the pain border get optimistic that maybe things will be less painful now, so they jump in their car and on the road, and create more pain for themselves and others by doing so. This would be the case until LA builds at least 30x the road capacity.
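Handmer’s argument can be sketched as a toy equilibrium model. The numbers below are illustrative stand-ins, not figures from his post: potential drivers (30x the road’s capacity) have pain tolerances spread uniformly, and each drives only while congestion pain stays below their tolerance.

```python
# Toy equilibrium model of induced demand on a road, in the spirit of
# Handmer's latent-demand argument. All numbers are illustrative.

def equilibrium(capacity: float, population: float = 30.0):
    """Return (drivers, congestion_pain) at equilibrium.

    Pain is modeled as drivers / capacity. Tolerances are uniform on
    [0, 1] and the most pain-tolerant drive first, so the marginal
    driver's tolerance is 1 - drivers / population. Setting the
    marginal tolerance equal to the pain gives:
        drivers = capacity * population / (population + capacity)
    """
    drivers = capacity * population / (population + capacity)
    pain = drivers / capacity
    return drivers, pain

d1, p1 = equilibrium(capacity=1.0)  # ~0.97 lanes' worth of cars, pain ~0.97
d2, p2 = equilibrium(capacity=2.0)  # double the capacity...
# Drivers nearly double (0.968 -> 1.875), but congestion pain barely
# falls (0.968 -> 0.938): new capacity mostly fills with new drivers.
print(d1, p1, d2, p2)
```

With latent demand 30x capacity, doubling the road roughly doubles the cars on it while barely denting the pain of driving, which is exactly why new lanes don’t fix traffic.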
But say you could build enough road to serve all of the latent demand and then some, effectively eliminating traffic. This would be a miracle and take far more money and space than it’s worth, but just go with me. In this world, say the time it takes you to travel from home to work is cut in half. There’s another funny economics quirk that comes into play here, too.
Jevons Paradox, named after economist William Stanley Jevons, who first noticed it in his 1865 book The Coal Question, states that when something becomes more efficient, people consume more of it. Jevons observed that as the steam engine became more efficient, doing more using less coal, coal consumption actually increased. People didn’t just use steam engines to do the same things more cheaply; they started using the steam engine to do more things.
We can project this forward, too. In my favorite piece on the subject, Energy Superabundance, Eli Dourado and Austin Vernon imagine what humans will be able to do with energy nearly too cheap to meter. Their goal “in promoting abundant energy is faster economic growth and higher standards of living.” Among other things, they paint a “vision of the future that includes flying cars, hyperloop, sub-orbital point-to-point travel, electric autonomous trucking, vertical farming, water-from-air condensation, water desalination, and so much more.” We won’t only use cheap, abundant energy to do the same things more cheaply; we’ll use it to do more things, better and faster.
Both Cesare Marchetti and Yacov Zahavi observed a similar phenomenon in travel. Since Neolithic times, people have budgeted roughly one hour per day for commuting – this hour is known as Marchetti’s constant or the “travel time budget.” As transportation options have improved, people have simply traveled farther. Instead of walking to the next village, we drive from the suburb to the city.
Jevons Paradox – that increasing efficiency leads to more demand – is one mechanism by which induced demand can occur. I call it out specifically because it highlights how counterintuitive this stuff can be.
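The counterintuitive part of Jevons Paradox falls out of simple arithmetic. Here’s a minimal sketch with a constant-elasticity demand curve and made-up numbers (the elasticity value and base demand are illustrative assumptions, not historical coal data):

```python
# Toy model of Jevons Paradox: if demand for "engine-work" is elastic
# enough, a more efficient engine increases total coal consumption.

def coal_consumed(efficiency: float, elasticity: float,
                  base_demand: float = 100.0) -> float:
    """Coal burned when engines deliver `efficiency`x work per ton.

    The effective price of work falls by 1/efficiency, so with
    constant-elasticity demand, work demanded scales as
    efficiency ** elasticity. Coal burned = work demanded / efficiency.
    """
    work_demanded = base_demand * efficiency ** elasticity
    return work_demanded / efficiency

before = coal_consumed(efficiency=1.0, elasticity=1.5)
after = coal_consumed(efficiency=2.0, elasticity=1.5)  # engines 2x as efficient
# Coal use rises from 100 to ~141: efficiency gains are swamped by
# all the new uses the cheaper work unlocks.
print(before, after)
```

Whenever the elasticity is above 1, the efficiency gain pays for itself in new demand and then some, which is the paradox in one line of math.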
Create cheap, abundant clean energy, and people will consume a lot more energy to do all sorts of wonderful things. ChatGPT tells me that energy efficient appliances may actually lead to more energy usage at the population level as consumers use their energy-efficient appliances more often (although my ChatGPT editor advised me to mention that this is a debated topic and not a universally accepted fact).
Make healthcare more accessible, and people will consume more healthcare. ChatGPT tells me that Europeans visit the doctor more often than Americans, but in the interest of balance, counters that Americans have higher consumption rates of surgeries. Extrapolating: make doctors more accessible, decrease the need for surgeries, increase health.
Increase the supply of services for which latent demand exists, and people should consume those services more frequently.
The Verdict on Lawyers
A couple weeks ago in Idea Olympics, I imagined a future in which LawBots take human lawyers’ jobs:
The LegalDDoS of 2027 only hastened the LawBots' dominance; the barrage of human-filed lawsuits against them forced the notoriously Luddite courts to employ legions of LawBots themselves. Whoops.
Slow Ventures’ Sam Lessin made a good counter argument on Twitter:
While none of us want to live in a world with 10x the lawsuits, just like none of us want to sit in traffic, I think he’s right. Increase the supply of “lawyers,” induce more demand for legal work. Increase the efficiency of lawyers, paradoxically increase the demand for legal resources.
It will be trivially cheap and easy to turn what would have been a handshake agreement into a legal one, just to be safe. It will be trivially cheap to sue people. In about five minutes, I used ChatGPT to learn how to sue my neighbor for planting trees on my property and to generate a complaint.
This is just basic, out-of-the-box ChatGPT. It is very careful to remind me that it is not a lawyer. There are a bunch of companies building legal-specific AIs that will do a much better and more tailored job than this. Like Harvey, which has signed deals with top law firms and PwC and was recently valued at $150 million by Sequoia. And yet… a simple ChatGPT conversation was really easy, and free.
I probably wouldn’t spend the legal fees to sue my neighbor over planting trees in my yard, but would I file a complaint for free plus the cost of the filing fee? I bet a lot of people would. Rich people sue each other over things all the time because the legal fees aren’t prohibitive to them, particularly in the context of the sizes of the things they’re suing over. Now, average people will be able to sue each other more easily over much lower value issues.
To Sam’s point, this might actually increase the demand for lawyers. My neighbor, John Doe, might pony up for a lawyer to defend himself. And this is just a simple, personal case – when businesses big and small sue each other over littler and littler things, or write more and more complex agreements, they may need to employ both AI and more human lawyers to deal with all of it. This is particularly true if you view lawyers, like consultants, as a kind of CYA expense.
You can apply a similar logic to any number of professions. Certainly, some jobs will be automated away, but I wonder how many professions will actually grow because of AI.
Divinely Discontent Customers Will Always Want More and Better
Another way of looking at this idea is, confusingly, a different kind of demand curve described by Eugene Wei in Invisible Asymptotes: “We speak often of the economics concept of the demand curve, but in product there is another form of demand curve, and that is the contour of the customers' demands of your product or service.” Wei quotes Jeff Bezos’ 2017 letter to shareholders:
One thing I love about customers is that they are divinely discontent. Their expectations are never static – they go up. It’s human nature.
As supply or capacity increases, customers will simply demand more and better products. Latent demand grows over time, as consumers see what’s possible today and demand more and better tomorrow.
This is apparent in the world of atoms, where induced demand was born. When we get cheaper energy, we simply demand more and better energy-consuming products. When we get more lanes, we simply demand more driving.
I suspect the same will be true for intelligence.
Will AI take lawyers’ jobs, or will we simply sue each other more?
Will it take designers’ jobs, or will we demand better design?
Will it take doctors’ jobs, or will we all demand our own concierge doctors?
Will it take engineers’ jobs, or will we demand more personalized and higher-quality software?
Will it replace startups, or will we demand more diverse and better instantiations of founders’ visions, faster?
Will it take writers’ jobs, or will readers just demand higher-quality, more well-researched, more original work?
On the surface, AI looks like an opportunity for humans to do the same work with less effort, and that’s certainly one possibility. The past few weeks’ rise of AutoGPTs – agents like Auto-GPT and Yohei Nakajima’s BabyAGI that act as semi-autonomous workers – lend credence to this camp. Maybe we’ll all be the CEO of our own bot armies, directing them with the occasional prompt as we sip pina coladas on the beach. Maybe we’ll get a three-day workweek in the near future.
Maybe. But I doubt it.
I suspect that the increased supply of work will induce demand for that very human skill of figuring out just what we should be working on. It would be a tragic self-own if we used our new superpowers to endlessly sue each other back and forth.
Will it force us to change how we work? Of course. Will it push us to differentiate? Absolutely.
But that could be a wonderful opportunity. Instead of worrying yourself into paralysis, digging your heels in to do more of the same thing, or planning your next lawsuit, I’d encourage you to take some of your newfound free time to paint a vision of the future you want to build with superabundant intelligence, and then get to work.
Thanks to Dan and Puja for editing!
That’s all for today. We’ll be back in your inbox on Friday AM with the Weekly Dose.
Thanks for reading,
Packy