Weekly Dose of Optimism #104
HIV Prevention, AI Tsunami, Statecraft, GLP-1 Again, Olympic Exoskeleton
Today’s Not Boring is brought to you by… Write of Passage
Taking Write of Passage in 2019 was a turning point in my life. It’s where Not Boring began, and I wouldn’t be writing this to you right now if I hadn’t seen a random tweet from David Perell and decided to give it a shot.
At the time, I didn’t want to be a full-time writer. I just wanted to be a better writer, because I thought it would help me at my job, help me find a new job, and give me an excuse to clarify my thoughts about a lot of things I was interested in and read about. It did all of that, but it also led to all of this.
David is the writing teacher you secretly wish you had growing up. He’s insanely passionate, obsessed with quality, and has a way of making it a ton of fun to improve your writing. Plus, there’s no better way to force yourself to improve your writing than being (virtually) around a group of people trying to do the same thing every week.
Write of Passage is electric. I spoke on the last night of the course a couple cohorts ago, and it was so cool to see just how much people loved the experience they had. Whether you want to write professionally, meet smart, curious people online, or just want to get better at shaping a narrative, there’s nothing I could recommend more highly than Write of Passage.
Hi friends 👋,
Happy Friday and welcome back to our 104th Weekly Dose of Optimism.
Packy here. Dan is OOO drinking Spritzes on the Italian coast (the creatine business is good!) so I’m filling in to deliver your dose.
Let’s have ourselves a Friday, shall we? Start with a little optimism, get a couple hours of work in, then kick back and watch the world’s best athletes do their thing in Paris. It’s Olympics season baby!
Let’s get to it.
(1) A twice-yearly injection offers 100% protection against HIV, study suggests
Associated Press h/t @InterIntellectus
Twice-yearly shots used to treat AIDS were 100% effective in preventing new infections in women, according to study results published Wednesday.

100%. That’s everyone. “There were no infections in the young women and girls that got the shots in a study of about 5,000 in South Africa and Uganda, researchers reported.” In a group that took daily pills, “roughly 2% ended up catching HIV from infected sex partners.” The shots worked so well that the study was stopped early so that everyone could get them.
Despite the progress made in preventing HIV, 1.3 million people are still infected and 600,000 die from the disease each year, mostly in Africa.
The shots, Gilead’s Sunlenca, had been approved previously in the US, Canada, and Europe as a treatment for HIV, but now that its preventative strength has been shown, the race is on to make it available (and affordable) for preventative care in the countries most impacted by HIV.
Imagine showing this article to someone in the 1980s. HIV is now practically preventable. What a time to be alive.
(2) Big Week for (Open) AI
Last year, Llama 2 was only comparable to an older generation of models behind the frontier. This year, Llama 3 is competitive with the most advanced models and leading in some areas. Starting next year, we expect future Llama models to become the most advanced in the industry.
The artificial intelligences are getting smarter.
On Tuesday, Meta released Llama 3.1 405B, a monster 405 billion parameter model trained on over 15 trillion tokens using 16,000 NVIDIA H100 GPUs. The elegantly named Llama 3.1 405B is competitive with best-in-class closed-source models from companies like OpenAI and Anthropic, coming in first in Instruction Following and second in Math (behind Claude 3.5 Sonnet and ahead of GPT-4o) according to Scale’s SEAL Leaderboards.
As expected, they open sourced the whole thing. Along with the release, Zuck penned a letter titled Open Source is the Path Forward, in which he makes the case that open source is good for developers, for Meta, and for the world. Of note, he calls out Apple:
One of my formative experiences has been building our services constrained by what Apple will let us build on their platforms. Between the way they tax developers, the arbitrary rules they apply, and all the product innovations they block from shipping, it’s clear that Meta and many other companies would be freed up to build much better services for people if we could build the best versions of our products and competitors were not able to constrain what we could build.
And he argues that open models are both safer against unintentional harm, because they’ll have more eyes on them, and intentional harm, because China is probably stealing the closed model weights anyway. Zuck is really coming into his own.
On Wednesday, Mistral released Mistral Large 2, which it says beats Llama 3.1 405B on Coding and Math with just 123 billion parameters. The Scale SEAL Leaderboard has yet to be updated, but in either case, we have some powerful open models to work with.
Then yesterday, the closed models came roaring back.
Yesterday morning, Google DeepMind announced that its AlphaProof and AlphaGeometry 2 models achieved silver-medal standard on International Mathematical Olympiad problems. I continue to think that while Google’s consumer-facing AI products have been hilariously poorly executed, the company keeps pumping out some of the coolest and most useful results in categories like bio and now math. The IMO “is the largest and most prestigious competition for young mathematicians, held annually since 1959,” and it’s really hard. AI still occasionally struggles with really simple math, so this seems like a big breakthrough, and one step closer to using AI to help solve the really big problems.
And finally, yesterday afternoon, OpenAI, not to be left out, shared SearchGPT, an AI-powered search engine that looks a lot like Perplexity. Sam Altman also tweeted that the company will begin alpha rollouts of its voice product next week.
It’s interesting to see the big foundation model companies carving out slightly different niches around models that seem to perform at roughly the same level: OpenAI is focused on product (although I can’t wait to see GPT-5), Anthropic is also focused on product, building more around my favorite model (Claude 3.5 Sonnet), Meta and Mistral are focused on open source, and Google DeepMind is focused on really hard math and science problems.
Bring on the competition. I’m trying to hang out with Dan in Italy while the AIs do my job.
(3) AI models collapse when trained on recursively generated data
Shumailov, Shumaylov, Zhao, Papernot, Anderson, and Gal in Nature
Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of LLM-generated content in data crawled from the Internet.
Just kidding. There will be a use for us humans after all.
In the midst of all the new releases, researchers published a paper in Nature showing that the more AI-generated data models are trained on, the worse they perform.
Here’s the argument. Current LLMs are trained on data mostly created by humans. As LLMs become more common, they'll start generating a lot of content that ends up on the internet. Future LLMs might then be trained on data that includes a mix of human-created and AI-generated content. The researchers found that when this happens repeatedly (they call it "learning with generational data"), the AI models start to lose some important information about the original, human-created data. They call this problem "model collapse."
Model collapse happens in two stages:
Early stage: The AI forgets about rare or uncommon things in the data.
Late stage: The AI's outputs become less varied and more repetitive.
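You can see the flavor of the dynamic with a toy simulation (my own sketch, not from the paper): fit a simple Gaussian model to data, then train each new "generation" only on samples drawn from the previous generation's fitted model. Because every generation estimates its parameters from a finite sample, rare tail events get undersampled, and the fitted variance tends to drift toward zero over many generations, a crude analog of model collapse.

```python
import random
import statistics

def fit(samples):
    """'Train' a model: estimate mean and stddev from a finite sample."""
    return statistics.fmean(samples), statistics.stdev(samples)

def generate(mu, sigma, n, rng):
    """Generate n synthetic data points from the current fitted model."""
    return [rng.gauss(mu, sigma) for _ in range(n)]

rng = random.Random(0)
mu, sigma = 0.0, 1.0   # the original "human" data distribution
n = 20                 # small finite training set per generation
stddevs = []
for generation in range(500):
    data = generate(mu, sigma, n, rng)  # train only on synthetic data
    mu, sigma = fit(data)
    stddevs.append(sigma)

# Early on, extreme (rare) values stop being sampled; over many
# generations the fitted spread shrinks and diversity is lost.
print(f"generation 1 stddev:   {stddevs[0]:.3f}")
print(f"generation 500 stddev: {stddevs[-1]:.3f}")
```

Real LLM training is vastly more complicated than fitting a Gaussian, but the mechanism is the same in spirit: each generation can only reproduce what it actually saw, and finite sampling quietly trims the tails first.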
Researchers are smart and there’s a lot of money on the line, so I’m sure each of the labs is already working on ways around the data wall that don’t simply involve dumping AI-generated content into training sets. But I read this as another sign that we won’t see a “fast takeoff” in which AI gets so smart, powerful, and humanlike so quickly that we don’t have time to react and are all useless pets.
We’re going to end up in a Goldilocks Zone, with super powerful tools that do what we tell them to do (and that need to keep us alive to keep creating that sweet, fresh, handcrafted data).
(4) How to Invest Federal Funds Like a VC
Santi Ruiz and Dr. Sandeep Patel in Statecraft
“Pound for pound, BARDA has easily been the most effective federal agency in our national COVID response.”
Statecraft is one of the most fascinating newsletters on the internet. Every week, Santi Ruiz interviews one person who’s worked inside the government machine to understand how they achieved a specific policy goal. He and guests have covered topics like How to Stop Losing 17,500 Kidneys, How to Buy Stuff Like DARPA Does, How to Run a CIA Base in Afghanistan, and How to Calm Oil Markets.
This week, Santi interviewed Dr. Sandeep Patel, the former Director of the Biomedical Advanced Research and Development Authority’s (BARDA) Division of Research, Innovation, and Ventures (DRIVe). BARDA’s mission is to develop medical countermeasures through public-private partnerships for a variety of public health emergencies. DRIVe is its venture capital arm.
After I wrote about the government’s challenges and capitalism’s beauty on Tuesday, a few people sent me this interview, and for good reason:
All of the problems that BARDA cares about, like emerging infectious diseases, chemical attacks, and nuclear radiation exposure, are not commercially viable areas. Industry alone is not going to fund the development of medical countermeasures for those things.
For example, BARDA funded the majority of COVID vaccine, therapeutic, and diagnostic work.
The interview is an important reminder that there are talented, mission-driven people working behind the scenes throughout the government making sure that Americans are safe, healthy, and prosperous. There are certainly things that industry can’t solve alone. Things work better when both government and industry are operating at full capacity.
The people making everything run smoothly on the government side — despite very real challenges working within a large, intentionally conservative organization — rarely get the spotlight. Props to Santi for highlighting their work.
Score one for the Deep State (laudatory).
(5) The benefits of GLP-1 drugs beyond obesity
Daniel Drucker for Science
The initial chapter of GLP-1 innovation focused on glucose control, and later, weight loss. Subsequent waves seem likely to improve health outcomes in people with a range of chronic disorders. Dozens of new molecules are being interrogated in the clinic, with some likely to target new mechanisms and achieve greater benefits in multiple disorders beyond simply more effective glucose control and weight loss.
Is that? Bai gahhhh it’s GLP-1 again!
This is getting out of hand. Practically every other week, we’re back with something else that GLP-1s can do. Often, these new benefits seem to be the result of weight loss: people lose weight and they get healthier in a lot of ways. But this article suggests that it’s not just the weight loss.
The article highlights a lot of things we’ve covered. GLP-1s might reduce the risk of heart attacks and strokes, help protect the kidneys, and improve liver health. It also cites some more recent studies with fascinating findings:
The SELECT cardiovascular outcome trial showed that semaglutide reduced heart attacks, strokes, and cardiovascular death in people with obesity, and interestingly, this benefit wasn't directly correlated with the amount of weight loss.
Recent analysis of real-world data showed lower rates of suicidal ideation in people taking semaglutide compared to those on other glucose-lowering or weight loss medications.
Analysis of electronic health records showed lower rates of cannabinoid use disorder in people taking GLP-1 medications.
Aside from the weight loss mechanisms, the author suggests that GLP-1’s anti-inflammatory effects might be neuroprotective, fighting Alzheimer’s, Parkinson’s, and strokes, and aiding against substance abuse disorders and compulsive behaviors.
A lot of this is in the trial phase, so don’t get too excited just yet, but researchers are hopeful that next-generation therapies that combine GLP-1 with other hormones or receptor targets will potentially achieve even greater benefits.
Guess our addiction to writing about GLP-1s is going to be hard to kick.
Bonus: French Paralympian Carries Torch Using Exoskeleton
The Paris Olympics kick off this afternoon with the Opening Ceremony at 1:30pm ET, but the Olympic optimism is already pouring in.
On Tuesday, this video of French Paralympian Kevin Piette went viral. Piette, who has been paraplegic since an accident at age 11, carried the Olympic Torch through Paris wearing the Atalante X exoskeleton made by French medical robotics company Wandercraft. The exoskeleton uses AI to balance itself, letting people who were previously unable to walk do so while keeping their upper body free for things like carrying the Olympic Torch through the streets of Paris.
Miraculous stuff ahead of two straight weeks of peak human performance.
Humans rock.
Have a great weekend y’all. Wrap up your work, slip into something more comfortable, and jump on (but do not f!) your couch for a weekend full of Olympics.
Thanks to Write of Passage for sponsoring! We’ll be back in your inbox on Tuesday.
Thanks for reading,
Packy + Dan