Social vs. Science Experiments
Or, why progress curves in AI and web3 look different
Welcome to the 1,333 newly Not Boring people who have joined us since last Monday! If you haven’t subscribed, join 156,749 smart, curious folks by subscribing here:
Today’s Not Boring is brought to you by… Secureframe
Secureframe is the leading, all-in-one platform for security and privacy compliance. Secureframe makes it fast and easy to get and stay compliant so you can focus on what matters: Scaling your business, customers, and revenue.
With Secureframe, hypergrowth organizations can:
Get SOC 2 audit ready in weeks, not months
Stay compliant with the latest regulations
Scale compliance with your business to meet ISO 27001, GDPR, HIPAA, PCI, and other requirements
Plus, Secureframe helps sales teams respond to RFPs and security questionnaires quickly and easily with AI so they can close more deals, faster.
Click here to set up a demo. Mention “Not Boring” during your demo to get 20% off your first year of Secureframe. Promotion available through October 31, 2022.
Hi friends 👋,
Happy Monday! I hope you had a great weekend.
We talk a lot about progress here in Not Boring. Progress can be difficult and messy and winding, and importantly, progress moves at different speeds and follows different paths for different kinds of products. Today’s essay is my thinking on two types: science experiments and social experiments.
Successful science experiment products clearly move the world forward; social experiment products are less obviously beneficial and messier in the short-run, but I think they’re every bit as important in the long run, especially in combination with science experiment products.
This is a “thinking out loud post.” I would love to hear your thoughts and feedback.
Let’s get to it.
Science Experiments vs. Social Experiments
Some cutting edge tech products are the result of science experiments, others are the result of social experiments.
Science experiment products face a lot of technical risk early on, and getting a product to market can take a lot of time and capital. But once a product is finally, objectively good enough, it can face less market risk. Launches are often preceded or accompanied by research papers. Think categories like AI, biotech, energy, chips, and space.
Social experiment products face less technical risk and can get to market in months instead of years, but once launched, face the very real risks associated with relying on people as key components of the product. These products enter the market partially formed, and evolve with their users. Think categories like web3, social media, and marketplaces.
This distinction is one of those ideas that feels too obvious to type now that I’ve typed it, but a lot starts to make sense to me when I look at the world through this framework. We often lump everything into the “technology” pile and that causes a bunch of confusion.
That confusion has been top of mind recently as crypto has dipped while AI has soared. Living through both waves, the progress curves feel very different.
For AI, a science experiment, there’s nothing, nothing, nothing… then BOOM. A fully formed butterfly emerges from the chrysalis. For web3, a social experiment, little flints spark adoption, and adoption shapes the products. Nothing, something, more, more, “probably nothing”… then KABOOM. Something unexpected breaks, and it’s back to the drawing board.
Social experiment products are forged in the public eye, with the inevitable ups and downs on display for the world to see. Science experiment products are forged in private, and enter public consciousness with most of the kinks worked out. Of course, the technology isn’t always perfect. Try generating an image with hands or eyes in DALL•E 2.
But we went from radio silence to “holy shit that’s magical” in the blink of an eye, and now that Stability AI has open sourced models like Stable Diffusion and OpenAI followed suit by open sourcing Whisper, it’s up only. There’s something new literally every day. Last week, Google’s DeepMind demonstrated AlphaTensor, an AI that discovers faster algorithms for matrix multiplication. AI making AI.
It’s worth noting that many science experiments can fail once they hit the markets, either because they don’t actually work yet (see: Theranos), because they were too early (see: many, many failed science experiments), or because scaling experiments to commercial levels is difficult (see: hardware is hard). A certain product category may not be technically or economically feasible for a long time, even while researchers work with the expectation that when this cost curve reaches that point, or this input becomes cheaper, everything will snap into place. Some companies get a little overexcited, rushing to market before the science or economics are quite right.
But while individual efforts fail often, the experiments they run push the category forward, one technological innovation building on the next until someone’s product is good enough or the market is finally ready.
Zooming out, progress in science experiment categories like AI looks something like this:
Fitting that line looks something like an S-Curve: Slow Growth, Fast Growth (AI is here now), and then Slow Growth again. It’s a smooth line up that never dips: a new innovation sets a new technological floor that future products build from. Growth slows when all of the juice has been squeezed and a new technology trigger is needed to set off a new curve.
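The shape described above can be sketched in a few lines of code. The logistic function is the textbook S-curve, and summing curves with later midpoints gives the stacked S-curve pattern: progress decelerates between triggers, but never dips. The parameters here are illustrative, not fit to any real data:

```python
import math

def s_curve(t, ceiling=1.0, midpoint=5.0, steepness=1.0):
    """A single logistic S-curve: slow growth, fast growth, slow growth again."""
    return ceiling / (1 + math.exp(-steepness * (t - midpoint)))

def stacked_s_curves(t):
    """Stacked S-curves: a new technology trigger starts a new curve on top of
    the floor the last one set, so the line rises monotonically."""
    return s_curve(t, ceiling=1.0, midpoint=5.0) + s_curve(t, ceiling=2.0, midpoint=15.0)
```

Because each curve only ever adds to the total, the stacked line can flatten but never fall, which is the key difference from a hype cycle.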
People working on science experiments have the benefit of the relative obscurity of the lab. They can tinker, mess up, keep tinkering, progress, take ten steps backwards, and have a breakthrough, all in the privacy of their own four walls. Obviously, private tinkering comes with its own challenges – getting funding might be harder, you can’t point to a million users, the problems are really hard to solve and dead-ends are frequent – but at the end of the day, the biggest question has an objective answer: did you build the thing or not?
People building social experiments have no such luxury. While technology plays an important role – blockchains, token standards, zero-knowledge proofs, and the like are all major technical innovations – people are a necessary component of the product itself. And people are messy, flighty, scammy, fickle, excitable, and all of the other wonderful things that make us people. Plus, most crypto products are money technologies too, and money enhances all of those human attributes.
That means that while people building social experiments can tinker with their tech in private, they can’t simulate the full product experience in a lab and release a perfect product with all of the kinks worked out. They need to release a product into the public lab that is the real world and let the chips fall as they may.
Worse, they often need to build hype ahead of the product’s capabilities because they need to stand out and attract people in a crowded market so that the product can actually do what they think it might be able to do in the first place. Even with the best intentions, they’ll be called useless at best, a scam at worst.
Look at the early days of Twitter (“Why would I want to read what people are eating for lunch?”), all of the commentary around Clubhouse, a small startup whose app got so popular that thousands of grownups cheered its downfall, or practically any web3 product. All of the triumphs and defeats play out on the stage of public opinion, causing violent spikes up and down in entire categories.
This is not meant to excuse the actual scams as fun experiments. Au contraire, the actual scams are doubly destructive, because they harm their own users and they cast a shadow over the well-intentioned projects. Fuck the scammers.
Zooming out, progress in social experiment categories like web3 looks something like this:
Fitting that line looks more like a rollercoaster that trends up but has some stomach-churning drops along the way. People rush in when things look good, and rush out when they don’t, which happens with any technology, but the products themselves get more useful when people rush in and less useful when they rush out. Attaching money and prices amplifies this effect.
That social experiments will be messy and rocky is so obvious that AI already understands it, yet whenever that messiness and rockiness actually play out in reality, we’re surprised anew, wondering if this downturn spells the “end of X.” When I fed a prompt for this essay into GPT-3 yesterday, days after I drew those curves, it accurately predicted the shape of the two curves (GPT-3 output in green):
GPT-3, the result of a science experiment, relies on the unwitting past participation of millions of people who have posted things to the internet that now feed the models. The single-player experience that any user has now that it’s out in the wild, however, is valuable whether anyone else uses it or not. In fact, if I were the only person in the world using GPT-3, it would be even more useful to me: y’all would think I was a super genius robot for churning out superhuman quantities of good material.
On the other hand, if a shadowy super coder built a technically perfect blockchain, one that solved the scalability trilemma, never went down, was easier to build on than web2 infrastructure, all of the good things, but I were the only person using it, it would be exactly useless to me.
Metcalfe’s Law, which states that the value of a network grows with the square of the number of connected users, cuts both ways for social experiments that need to establish network effects. It’s really hard to get the right people to use the right product in the earliest stages, and most network businesses fail because of this Cold Start Problem.
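A toy model (not a valuation formula) makes the Cold Start Problem concrete: Metcalfe-style value tracks the number of possible pairwise connections, which is why early networks feel worthless and big ones feel unstoppable:

```python
def metcalfe_value(n):
    """Possible pairwise connections among n users: n * (n - 1) / 2, ~n^2 growth."""
    return n * (n - 1) // 2

# The cold start problem in numbers: the first users get almost nothing.
# metcalfe_value(10) -> 45 possible connections
# metcalfe_value(1_000) -> 499,500
```

Going from 10 users to 1,000 is a 100x increase in users but a roughly 11,000x increase in possible connections, which is why the same product feels useless early and indispensable later.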
That’s why hype is often a necessary ingredient to a social experiment product, and why progress in social experiment categories looks like a bunch of Hype Cycles strung together.
The y-axis on the Gartner Hype Cycle is Visibility, not Progress, but since social experiment products need users for the products to have any value, there’s a direct correlation between the two. There’s also an indirect relationship, as higher prices attract more people who build more products, as a16z describes in the Crypto Price-Innovation Cycle.
The relationship between Visibility and Progress in science experiment categories is almost entirely indirect, but it exists. Hype and Visibility attract more people and resources, which speeds up the pace of progress and often brings down costs, in a virtuous cycle. That’s the steep part of the S-Curve, and why actual progress in science experiment categories looks less like a bunch of Hype Cycles (progress doesn’t decline, just decelerates), and more like stacked S-Curves:
That’s not to say that Hype doesn’t impact public perception of science experiments. AI was famously always “ten years away” and “just linear regressions with a fancy name”... until it wasn’t. But AI’s success didn’t depend on getting a lot of people to use not-yet-good-enough chatbots; it depended on keeping a relatively small number of nerds excited enough by the progress to keep making progress for the sake of solving really hard and important problems.
One of the coolest things happening in the world right now is that a lot of science experiment categories are blooming simultaneously, often in concert with each other. In addition to AI, techbio, robotics, and renewable energy all seem to be jumping out of the lab and into the real world with products better than people would have expected and cost structures better than experts would have predicted. AI is genuinely useful and fun instead of frustrating. Moderna got an mRNA-based COVID vaccine to market in record time and saved millions of lives. Now, AI and biotech are combining in amazing ways, as exemplified by DeepMind’s AlphaFold and Atomic AI. Solar costs are plummeting and opening up new use cases and demand sources.
After a bunch of tedious, incredibly difficult work in relative privacy, these products are so obviously useful that adopting them is becoming a no-brainer. That’s one of the benefits of quietly doing science experiments in the lab before commercializing.
An observant reader might note that solar benefits from learning curves and Wright’s Law, and that improvements in cost and efficiency are a direct result of more solar panels being produced. So adoption is important. It’s necessary for any business once it becomes a business; it’s the whole point. Once science experiments come out of the lab and into the market, they often benefit from Scale Economies.
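Wright’s Law itself fits in a few lines: each doubling of cumulative production cuts unit cost by a constant fraction. The 20% learning rate below is a commonly cited ballpark for solar, used here purely for illustration:

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    """Unit cost after producing `cumulative_units`, assuming each doubling of
    cumulative production reduces cost by `learning_rate`."""
    doublings = math.log2(cumulative_units)
    return first_unit_cost * (1 - learning_rate) ** doublings

# With a 20% learning rate, unit 2 costs 80% of unit 1, unit 4 costs 64%, and so on.
```

Note that the driver is cumulative production, not time: costs fall because more units get built, which is why adoption feeds the cost curve even though it doesn’t change the underlying science.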
But adoption isn’t the product itself, as it is with social experiments. A billion people buying solar panels brings down costs faster, and makes it possible to get closer to the edge of physical limitations more quickly, but it doesn’t change the science. A billion people believing in Bitcoin, by contrast, directly makes Bitcoin stronger and more useful in ways that it wouldn’t be if only ten people believed in it.
Social experiments live and die by Network Effects, the prize for dealing with all of the human messiness upfront for the few social experiments that make it to the other side. Network effects can become so strong that it doesn’t matter how bad the bits a company produces are; people will stick around (ahem, Facebook).
But true network effects are hard to come by, and in the case of web3, adding money into the mix exacerbates the challenge (but, I suspect, makes true network effects stickier once achieved). Bitcoin and Ethereum are a testament to that. Both took off in a much quieter market environment, in the very small first and second crypto cycles, starting with cores of true believers and expanding outwards.
That’s textbook network effect creation. Start with a small niche, grow the density and connections among participants, and only then start adding on adjacent networks. That’s really hard to do in a bull market, when a bunch of people rush anything that looks even remotely promising, diluting and even poisoning the niche, particularly when web3 products are designed to be open and permissionless.
I keep using Farcaster as an example of what good looks like here because I think they’re doing a bunch of really smart things on a bunch of fronts, and maybe none smarter than this: limiting who can use the product to a tight core of like-minded people. That gives the product the chance to be as much like a science experiment as it’s possible for a social experiment to be, making and fixing mistakes in private with the collective input of people who care about the product. It will be big one day, and maybe there will be a token, but by building in a semi-closed lab, it should have a strong enough core to withstand and repel the mess and grifters that jump on big, token-based projects.
Until writing this essay, I thought that the reason that big crypto projects are built in bear markets was that only the people who really care stick around and there’s less noise and competition. That’s part of it. But I think the bigger thing is that the total lack of interest from the outside world lets social experiments mirror science experiments as closely as possible. Anecdotally, I’m starting to play with private beta versions of products from web3 portfolio companies that avoided the trap of rushing to market during the bull, and they’re genuinely awesome.
Before I went on paternity leave a couple months ago, I worked with David Phelps and jokedao to let random internet strangers submit topics for me to write about and pick a winner. It was a social experiment itself, and as a testament to social experiments’ messiness, this is the topic that won:
“In such a sluggish market, a genesis airdrop project is needed to stimulate the start of the next bull market.”
I guess in a way, this essay is that one, and my conclusion is that the prompt couldn’t be further from the truth. A genesis airdrop – dropping free tokens on a bunch of people for getting into something early – that attracts a bunch of financially motivated, loosely connected people to a very rough, early product is the last fucking thing that will start the next bull market.
The next bull market will start, slowly at first, when the social experiments being cooked up in quieter labs become useful enough to small but growing groups of people to withstand the inevitable onslaught of the kinds of people who are attracted to genesis airdrops, to withstand the hype.
No one loves the Gartner Hype Cycle more than I do. It describes what it describes well and it’s useful for writing about technology trends, but it’s too crude to be particularly useful for either entrepreneurs or investors.
I hope that breaking frontier technologies into science experiments and social experiments, and looking at their respective curves, adds the right level of granularity to evaluate products and trends against the right criteria. This essay is just a jumping off point – we haven’t discussed what metrics to look for when – but at the very least, we can stop making apples-to-oranges comparisons between technologies like AI and web3, and start focusing on the more interesting pieces, like where the curves might overlap and accelerate each other.
We may be hitting that point right now with AI, starting to blur the lines between science experiment and social experiment, particularly with Stability AI’s decision to open source its models. Ostensibly, Stability will benefit from platform network effects in a way that many science experiment products don’t.
I also have a thesis that AI will be web3’s ultimate best use case. People will need a way to own, permission, and benefit from their data as it feeds increasingly powerful models, and if open AI wins, decentralized ownership and governance of those models will be critically important. Looking back in a decade or so, I wouldn’t be surprised if we viewed all of web3’s early stumbles and mistakes as necessary experiments for the main event: governing AI in such a way that it doesn’t turn us all into paperclips, owning enough of it that we benefit from its use of our data, and rewarding open source contributors for their work.
While science experiments often seem more straightforwardly and obviously good – a new life-saving medicine or cheaper electricity solves a clear problem – humanity’s uniqueness, and our ability to solve hard scientific problems in the first place, comes from our ability to coordinate and work together as social creatures. Democracy was a bloody, messy social experiment.
What makes us messy also makes us magical. Social experiments are every bit as important as scientific ones, even if they make for a bumpy ride.
Thanks to Dan for editing, and to Chris, Bruno, Nat, Nathan, Jb, Lucas, Tom, Gabriel, 4484, Nathan, Ben, Alexander, and Jonny from Farcaster for ideas and resources!
That’s all for today. See you back here Friday for your Weekly Dose of Optimism. Have a great week.
Thanks for reading,