The Electric Slide
The history, 99% decline, and future of the Electric Stack with Sam D'Amico
Welcome to the 1,269 newly Not Boring people who have joined us since our last essay! Join 248,515 smart, curious folks by subscribing here:
Hi friends 👋 ,
Happy Tuesday! Been a minute.
A month ago, I caught up with Sam D’Amico, the founder of Impulse Labs. We got to talking, and decided to write something together on the modern electric tech stack.
It would be simple, I thought: a few graphs showing how much the cost of the stack has declined over the last few decades (99%, it turns out), a few stories about demand for certain products driving those cost reductions, bada bing, bada boom.
One month, over 100 hours of research and writing, and 40,000 words later, it’s done.
Sam has been evangelizing the Electric Stack, and it’s resonating. Just yesterday, Ryan McEntush at a16z published a great piece on the Electro-Industrial Stack that discusses some of the topics we talk about here. We need more: that there are hundreds of books, thousands of essays, and millions of tweets on (and by) AI and so few on this topic is a reflection of American priorities that will need to change.
This essay is my contribution to the conversation. It goes deep into the history of the Electric Stack, introduces the Electric Slide, wrestles with the strategic logic of betting the future on AI, and ends with cautious optimism about America’s ability to rebuild and win in the Electric Era.
If you want to jump straight to the online version: Read The Electric Slide Online
If you want it as a PDF, you can download it here: Electric Slide PDF
Let’s get to it.
Today’s Not Boring is brought to you by… Vanta
Vanta helps growing companies achieve compliance quickly and painlessly by automating 35+ frameworks, including SOC 2, ISO 27001, HIPAA, and more.
Start with Vanta’s Compliance for Startups Bundle, with key resources to accelerate your journey: step-by-step compliance checklists, case studies from fast-growing startups, and on-demand videos with industry leaders.
The Electric Slide
One of the more interesting developments in AI is that while the American AI companies are mainly focused on closed-weight models, China is building open-weight models.
That raises the question: why?
On its face, the bet is simple: if China doesn’t have access to leading-edge chips, then open-weights are the best way to encourage both adoption and ecosystem development.
I think they’re making a different bet.
A couple of years ago, Isaiah Taylor, the founder of nuclear company Valar Atomics, told me something that’s stuck in my head ever since:
There are only really three pillars to anything around us, as far as consumable goods. We've got energy, intelligence, and dexterity.
I would generalize “dexterity” to “action.” Everything we see around us, and will see around us in the future, is the result of the potential to do work (energy), the capacity to decide what to do and how (intelligence), and the ability to manipulate matter (action).
In economic terms, energy, intelligence, and action are strong complements in the production of anything.
And in the immortal words of Joel Spolsky, “Smart companies try to commoditize their products’ complements.”
America is, implicitly or explicitly, making a bet that whoever wins intelligence, in the form of AI, wins the future.
China is making a different bet: that for intelligence to truly matter, it needs energy and action.
If you control energy and action, making intelligence abundant strengthens your position.
After catching up to America in electricity generation as recently as 2010, China now generates 2.5x as much electricity as we do.
It also dominates the technologies that turn electricity into action: the Electric Stack.
Lithium-Ion Batteries
Magnets and Electric Motors
Power Electronics
Embedded Compute
Today, China produces 75% of lithium-ion batteries globally and manufactures 90% of the neodymium magnets that make motors spin. In power electronics and embedded compute, it’s rapidly gaining ground.
That means that China controls the means of producing electric vehicles (EVs), drones, robots, and all of the other electric products that are replacing the combustion-driven machines on which America built its might.
As we speak, everything that moves, heats, lights up, computes, or converts energy is being rebuilt around electric technology to perform better, faster, cheaper, quieter, and, as a freebie, cleaner.
Simply put: anything that can go electric will.
Or rather, anything that can go electric economically will.
Every year, the number of things that can economically go electric increases as their components get cheaper and more performant. Every year, China grows its Electric Stack capabilities relative to the West. Taken together, that means that more of the physical cutting edge will be Made in China.
And as humanity infuses machines with intelligence, more of those intelligent machines will be Chinese.
This is why China is happy commoditizing AI. They believe that action is the much harder, and therefore more valuable, piece of the future to own.
To understand why China is making this bet, and why they're probably right, you need to understand what the Electric Stack is, how incredibly cheap it's gotten, and who controls each layer.
The deeper you go, and the more you appreciate how all of this stuff works and fits together, the more ridiculous it feels to believe that if we simply build the best models, we will win economic and military dominance. You need to feel, in your bones, that research and great ideas, without the manufacturing might to turn them into scaled products, are no moat.
What follows is a surprisingly thrilling 40,000-word story of Western invention and Eastern manufacturing, of GM selling our future for $70 million, of conferences in Pittsburgh and assembly lines in Fukushima, of exactly how it is that drones fly, and of cost curves.
The details matter because the details make the curve, and the curves are destiny.
Everyone reading this is likely familiar with Moore’s Law. Many of you are likely familiar with scaling laws in AI, and Rich Sutton’s associated Bitter Lesson. Somehow, as if by magic, these digital curves become self-fulfilling, their audacity attracting talent and capital that make them come true.
These curves apply to the physical world, too.
We are all familiar with the solar panel cost curves, which have seen solar energy drop from $130.70 per watt in 1975 to $0.31 per watt today.
Those curves are about electricity generation: turning some fuel – oil, coal, sunlight, wind, natural gas, uranium, or water – into electrons that can be used to power practically anything.
We are less familiar with another set of curves: those that make up the Electric Stack. These are the LEGO blocks that snap together to make the products that consume electricity and turn it into action.
While a similar curve exists for batteries through 2018, none exist for the other layers of the Electric Stack, nor is there a single Electric Stack curve. So we built them, in the hopes that they attract people to build on the Electric Stack like Moore’s Law has attracted people to build on computers.
Each curve tells a story.
Since Sony started rolling out lithium-ion batteries in 1991, battery packs have gotten 98.7% cheaper, for an annual decline of 12.5%.
Since hard-disk drive motors began incorporating Magnequench and Sumitomo neodymium magnets in the late 1980s, the cost of electromagnetic actuation has dropped 98.8% from $204/kW to $5/kW, for an annual decline of 12.5%:1
Since industrial companies began using variable frequency drives (VFD) using B. Jayant Baliga’s insulated gate bipolar transistors (IGBT) in the late 1980s, VFD inverters have gotten 99.5% cheaper, for an annual decline of 14.5%:
And since Texas Instruments commercialized microcontrollers (MCUs) and digital signal processors (DSPs) in calculators and kids’ toys in the late 1970s, the cost to run a million instructions per second (Dhrystone MIPS, or DMIPS) has fallen 99.9%, for an annual decline of 20% over the past 35 years:
To the best of our knowledge, no one has ever built a composite of these curves: weighting each component of the Electric Stack equally to understand how much less it costs to build an electric product today than it did in the past.
We built it, and we call it: the Electric Slide.
It shows that the cost of the Electric Stack has fallen 99% since 1990, or 12.6% per year with an equal-weighted stack.
No product’s bill of materials (BOM) actually has equally weighted component costs, though, and no two BOMs are the same. A Tesla Model 3 might spend 60% of its BOM on batteries. A DJI Mavic 3 drone might devote 40% of its BOM to compute. So we cooked up an interactive Electric Slide that you can play with here:
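If you prefer code to sliders, here’s a minimal sketch of the same idea, using the per-layer annual declines from the curves above. The BOM weights are illustrative, and because start years and underlying data differ, it lands in the ballpark of the headline figures rather than reproducing them exactly:

```python
# Minimal sketch of the Electric Slide composite. Annual decline rates are
# the ones quoted for each curve above; everything else is illustrative.
ANNUAL_DECLINE = {
    "batteries": 0.125,
    "magnets_motors": 0.125,
    "power_electronics": 0.145,
    "embedded_compute": 0.20,
}

def stack_cost_index(years: int, weights: dict) -> float:
    """Cost of a weighted basket of the four layers, relative to 1.0 at year zero."""
    return sum(w * (1 - ANNUAL_DECLINE[layer]) ** years
               for layer, w in weights.items())

equal = {layer: 0.25 for layer in ANNUAL_DECLINE}
print(f"Equal-weight stack, 34 years: {stack_cost_index(34, equal):.3f}")  # ~0.007, i.e. ~99% cheaper

# A battery-heavy BOM (EV-like) slides more slowly than a compute-heavy one (drone-like):
ev_like = {"batteries": 0.6, "magnets_motors": 0.2,
           "power_electronics": 0.15, "embedded_compute": 0.05}
drone_like = {"batteries": 0.25, "magnets_motors": 0.2,
              "power_electronics": 0.15, "embedded_compute": 0.4}
print(f"EV-like BOM, 34 years:    {stack_cost_index(34, ev_like):.3f}")
print(f"Drone-like BOM, 34 years: {stack_cost_index(34, drone_like):.3f}")
```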
That today China owns two layers of the Electric Stack almost entirely was not inevitable, or even likely.
The four key Electric Stack technologies were invented at various points between the 1960s and 1990s in America, Japan, and the UK, and reached critical maturity around the same time in the 1990s.
Then, in many cases, we sold the future. GM sold its neodymium magnet division, Magnequench, to China for $70 million. A123 Systems, which pioneered the Lithium Iron Phosphate (LFP) battery, went bankrupt and was sold to Wanxiang for $257 million in 2013.
Thanks to shortsighted Western errors and farsighted Chinese industrial policy, in the commercialization phase, the Electric Stack center of gravity has moved from America and Japan to China, which dominates the stack. By controlling these four technologies, China has become the world leader in everything from EVs to drones to electric bikes to robots.
A giant piece of this is that mastery of the stack applies across domains, allowing market leaders like BYD to make everything from cars, to home energy products, to iPads, to much of the world’s drones. Across the whole sector, the components, software, and expertise largely transfer, meaning that mastery of one product on the stack enables success in scaling others. Advantages compound. The result is that China has the best “LEGO set” in the world for building on this stack.
Conrad Bastable calls this LEGO set the Electric Platform. For a sobering in-depth read on this topic, read his essay, Forsaking Industrialism. This essay owes Conrad an enormous debt for both that piece, and the conversation we had on Hyperlegible. I will reference both, explicitly or implicitly, throughout.
Put another way, over the past half-century, the technologies in the Electric Stack have gotten so cheap and so powerful that new entrants can build better-performing products than incumbents. This is true for companies - it’s a key enabler of my Vertical Integrators thesis - and it’s also true for countries.
As the Electric Stack technologies ride down their learning curves, China can better produce more of what the world wants.
Far from just making cheap components, Chinese companies like BYD, DJI, and Huawei have put themselves among the world’s most innovative integrators. In Q4 2024, BYD passed Tesla in sales. Per the IEA, “China continues to be the world’s EV manufacturing hub and is responsible for more than 70% of global production.”
Conversely, America is systematically overestimating the role AI will play in the future and underestimating the role electrification will play.
As I wrote in Base Chapter 2: “While intelligence gets all of the attention, I’m increasingly convinced that what we’re entering is the Electric Era. Cars, robots, flying cars, drones, appliances, boats – anything that can go electric is going electric, because electric performs better. Even intelligence is reliant on access to electricity.”
More broadly, America’s implicit stance is: we will specialize in high-value creative work like software, chip design, and biotech research, while other countries, mainly Taiwan and China, handle the low-margin manufacturing. This is an outdated stance.
Manufacturing and design are inextricably linked. When you make things, you learn how to make them better. You learn which parts of the underlying stack need to be improved, improve them, and make better products. This is a theme that comes up over and over again in our Electric Stack story.
Texas Instruments won the Calculator Wars because it made its own microcontrollers. Sony took lithium-ion batteries from bench-scale to mass-market scale and improved their energy density by 50%. BYD made a lot of batteries, then it started making cars, and the deep knowledge of both allowed it to both bet on LFP early and develop the Blade Battery that it’s ridden to EV dominance. Not for nothing, DeepSeek was able to basically match OpenAI on less advanced chips because it went deeper into the guts of NVIDIA’s software than anyone else.
Ashlee Vance, who literally wrote the book on Elon Musk, recently wrote that, “The two biggest U.S. manufacturing success stories of the last twenty years are Tesla and SpaceX. And this is a problem.” It is not surprising that the person who sleeps on the factory floor and claims that the factory is the product, not to mention the entrepreneur who most heartily embraced vertical integration, is the one running both of those companies.
As we will see, the innovations that enabled the Model 3 came from a deep knowledge of all of the components that make up a Tesla, and how they work together. This same knowledge helped Tesla thrive through the COVID chip shortage.
Even worse: if America’s advantage is in high-value creative work, AI capable enough to confer economic and military supremacy packages up and commoditizes that very advantage.
As I pointed out in my conversation with Conrad, “If you believe that America's advantage is the IP and the design and all of that, the fact that we're racing to make that a commoditized good is actually super ironic in a way that I hadn't appreciated before.”
This, again, is why China is open sourcing intelligence.
Per Clayton Christensen’s Law of Conservation of Attractive Profits, if intelligence becomes commoditized, profits will move to adjacent layers of the value chain, like the electric product in which that intelligence lives. A robot with a commoditized brain may capture more of the profits in the value chain than the brains themselves, particularly if open source models get good enough.
To be blunt: in the Electric Era, maintaining design leadership without manufacturing leadership is not a coherent strategic position, and it gets less coherent the more capable you believe AI will get.
And the Electric Era is coming, because electric products are simply better, and because they will keep getting better.
As part of my job, I get to see a little bit into the future, and from what I see, electric products will win the coming decades.
A few months ago, I went for a ride in Arc’s electric boat, the ArcSport, on Lake Austin. It was faster, sharper-turning, and quieter than any speedboat I’ve ever been on, and it docked itself.
A few weeks ago, I went to Zipline’s test site in California and watched, with my own eyes, as the drones held perfect position thanks to electric motors that can adjust thrust hundreds of times per second with perfect synchronization across multiple rotors, enabled by precise power electronics and real-time compute.
MRI machines, like those used to detect cancer early, depend entirely on powerful, precisely controlled electromagnets. Residential batteries, like Base Power Company’s, can’t back up homes or balance the grid without the Electric Stack. Robots, whether modern industrial robots, surgical robots, Matic’s floor-cleaning robot, or futuristic humanoids, are made up of servomotors, sophisticated power electronics, lithium batteries, and advanced compute.
These products all exist today. As the cost of the Electric Stack continues to decline and performance continues to improve, new products enter the feasibility window.
Astro Mechanica can build efficient supersonic planes, for example, because electric motors have gotten powerful enough for their weight. Electric planes will begin to make sense as batteries get more dense and motors more efficient. So too will flying cars. And if we want affordable humanoid robots, even more than smarter intelligence, we need better batteries, motors, inverters, and microcontrollers.
In fact, if we want AI to play a role in any of our physical products, they need to be rebuilt on the Electric Stack first, so that they can speak AI’s language.
One of the reasons, I suspect, that America is so excited about AI and pays relatively little attention to electrification (which, somehow, has been politicized) is that the Electric Stack is messy. It exists in the real world.
It is easy to imagine that a magical digital technology we don’t quite understand but seem to be the best at will miraculously fix what ails us. It is harder to imagine how in the world we might rebuild, and improve upon, such an interconnected, physical stack of technologies, a stack that has taken decades of research, luck, market forces, genius, and elbow grease to get to this point, and one which China so thoroughly dominates.
The time for wishful thinking is over. There is a world to be rebuilt.
Whoever owns the Electric Stack owns the right to rebuild it in their image.
The learning curves are an incredibly useful guide: they tell us where we’ve been, and more importantly, where we’re going.
But the curves are too smooth for real understanding. That smoothness masks a ton of complexity: the research, the failures, the coincidences, the business deals, the products, the industrial planning, and the ingenuity that somehow, almost impossibly, came together to create the world we live in.
So we will need to cover both, and we will need to go incredibly deep, so you get a real feel for the thing, for the magnitude of both the miracle and the challenge.
If America wants to win the future – not because we’re itching for a fight, but because we want to maintain our role as the world’s largest and most innovative economy – we will need to vertically integrate. We need to have the ability to manufacture every part of the Electric Stack, the understanding of how to build the best products that comes from that ability, and the ability to scale.
Getting this mojo back won’t be easy. It seems almost impossible. It will take industrial policy, innovation, government support, consumer demand, and a little free market magic to pull it off. Currently, we are shooting ourselves in the steel-toed foot.
One thing we’ll learn in studying the history is that demand for electric products drives scale, cost declines, and performance improvements.
Unfortunately, electrification has become unnecessarily politicized in America, with cultural associations obscuring the hard strategic realities. While the current administration has shown understanding of electricity generation through nuclear EOs and of component importance through investments like the Pentagon’s $400 million in MP Materials, the rollback of demand-side incentives undermines the very learning curves that could restore American manufacturing dominance.
If I were President, I would make sure that every American man, woman, and child has an EV, heat pump, drone, induction stove, and robot. American demand is perhaps the most powerful motor in the world, and our best shot at economically onshoring enough of the Electric Stack to remain competitive in the Electric Era.
My hope is that by better understanding how we got here, we can avoid mistakes like this and better determine where to go next.
The Roadmap
This is not a short essay, nor a light one, even by Not Boring’s standards. There are a few reasons for that.
First and foremost, given what I believe to be the importance of the Electric Stack to the modern world and its future, we are woefully unfamiliar with its history and details.
When I say “we,” I certainly mean me. As you might be able to tell, the process of writing this essay involved a lot of “OH! That’s how that works.” Just as we hand-wave that AI will fix everything, we handwave about the components of the Electric Stack based on a headline we read in the WSJ or a tweet we scrolled by in our feed.
“It's critical that America not rely on China for batteries!” Yes, but why, and at which level should we integrate? “China controls 90% of rare earth magnet production!” Yes, OK, but what is a rare earth magnet, what does it do, why is it important that we manufacture them here, and how might we be able to do that? I hope that by diving into an insane level of historical detail, we can have more nuanced conversations about the future.
Second, because I find this stuff fascinating. The tale is full of hidden legends and business case studies - triumphs and fumbles - that shape how the world moves.
Any hope of getting our efforts right this time requires an understanding of where they’ve gone wrong (or right) in the past.
Finally, because the curves themselves tell stories, predict the future, and in so doing, bring that future into existence.
The components that make up modern electric products have gotten so cheap and so performant that entirely new things become possible and economical each year. And while these curves feel less likely to continue because they have to contend with the physical world, somehow, they do. People keep underestimating how cheap solar is going to be; the same should be true for most things built on the Electric Stack.
That’s another reason to write the stories in addition to simply showing the curves. Every story shows a technological tree that seems to have hit its last branch, before someone, somewhere tries a new chemistry, adds a new element, or configures the LEGO blocks in just such a way that, suddenly, it just kind of works, works enough, at least, for a specific product that really needs it to work just that way… and the rest is history. The curves continue.
There’s a risk in an essay like this of painting one of two pictures: either that we must do everything in our power to beat China or that we are so hopelessly behind that there’s no point, that we should just focus on the game we can win, even if it’s not the most important game.
I think the curves tell an optimistic story: that, thanks in large part to Western science and Eastern manufacturing, these technologies have now reached a point at which it is becoming increasingly possible to build an Electric world, economically.
This is a long piece, but it’s one that I’ve loved researching and writing over the past month, and I think it’s the most comprehensive resource for those who want to understand how we got here and where we might go in the Electric Era.
We will cover:
A Brief History of Electromagnetism: The force driving the Electric Era.
How Electric Motors Work: Establishing the fundamentals.
Lithium-Ion Batteries: Li-Ion, NMC, NCA, Tesla, BYD, CATL, and LFP.
Magnets and Electric Motors: The neodymium magnets that spin the world.
The Century of Semiconductors: The birth of compute.
Power Electronics and Control Systems: Controlling electricity with itself.
Embedded Compute: Microcontrollers, DSPs, ARM, and RISC-V.
Lessons and Takeaways: What can we learn from the history of the Electric Stack?
Rebuilding the Electric Stack: What it will take to build the future.
To be clear upfront: rebuilding the Electric Stack in the West will be hard, but possible. America still owns the greatest demand and entrepreneurial engines the world has ever seen. We will need to run both at about a million RPMs.
So without further ado, let’s begin. And we must begin, of course, all the way at the bottom, with some fundamental physics.
A Brief History of Electromagnetism
The two most important physics discoveries of the nineteenth and early twentieth centuries took the same form: these two things you thought were different are actually the same.
In 1905, Einstein first published the formula that is now the most famous in physics: E = mc². Mass can become energy; energy can become mass. Thus did the atomic bomb and nuclear energy become possible.
Forty years earlier, the Scottish physicist James Clerk Maxwell, expanding on the work of Faraday, Weber, and Kohlrausch, derived a less-quoted but no less important equivalence: c = 1/√(μ₀ε₀). Electricity and magnetism are one field; light is its wave.
Maxwell showed how electric and magnetic fields interact as continuous media. In the 1890s, Dutch physicist Hendrik Antoon Lorentz introduced the electron model of matter, demonstrating how Maxwell’s equations predict the electrical, magnetic, and optical behavior of real materials, from the conductivity of metals and gases to the fine details of their spectra.
Lorentz postulated the electron because his mathematics needed tiny charge carriers to make Maxwell’s equations fit the behavior of real materials. In 1897, English physicist Sir J.J. Thomson experimentally proved their existence.
While building his electron theory, Lorentz formulated the force on a charge q moving with velocity v in fields E (electric) and B (magnetic): F = q(E + v×B).
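That equation is simple enough to play with yourself. A minimal sketch, with arbitrary field and velocity values chosen just to show the calculation:

```python
import numpy as np

# F = q(E + v x B): the force on a charge q moving at velocity v
# through electric field E and magnetic field B. Values are illustrative.
q = 1.602e-19                     # charge of a single proton, in coulombs
E = np.array([0.0, 0.0, 1.0e3])   # electric field, volts per meter
B = np.array([0.0, 0.5, 0.0])     # magnetic field, tesla
v = np.array([1.0e5, 0.0, 0.0])   # velocity, meters per second

F = q * (E + np.cross(v, B))
print(F)  # force vector in newtons; the v x B term is what spins motors
```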
Thus did the electric motor, the drone, and the EV become possible.
Electric motors are one type of electric actuator (devices that convert electrical energy into mechanical motion). Actuators themselves are a subset of transducers or energy-conversion devices, which convert energy from one form to another.
The Electric Stack converts electricity into motion, but also into heat (induction stoves), into magnetic fields (MRI machines), into light (LED displays), into sound (speakers), and back into electricity (regenerative braking). This versatility, the ability to transform electrical energy into any form of useful work with precise control, is what makes the Electric Stack so powerful.
Combustion converts chemical energy into heat and motion with significant losses and poor control. Electromagnetic systems can convert energy between forms with remarkable efficiency and precision.
This is why almost everything can go electric: Maxwell's unified field theory provided the mathematical foundation for controlling energy, and the Lorentz force equation governs every interaction between electricity, magnetism, and matter. With electromagnetic control, you master energy conversion. Master energy conversion, and you can remake practically everything.
But “practically everything” is a big ocean to boil (even with an induction stove), so to understand the principles of the Electric Stack, we’ll break down how an electric motor works.
How Electric Motors Work
If you wanted to, right now, you could go build a really slow motor with just three permanent magnets. A permanent magnet is just the normal magnet you’re thinking of when I say magnet. It’s “permanent” because it’s always magnetic; you can’t switch it on and off.
Put a hole in the middle of one of them, between its north and south poles, and stick a shaft through it. This is the rotor.
Take the other two magnets, and put one on each side of the rotor, each with opposite poles facing the center. So on the left, put the north pole towards center, and on the right, put the south pole towards center. These two magnets are the stators.
Because opposite poles attract and like poles repel, the S stator will attract the rotor’s N pole while the N stator repels it, spinning the rotor towards the S stator.
As your rotor approaches this alignment, quickly flip the magnets in your hand, so that the poles facing in switch. Now the N stator will repel the N pole and attract the S pole of the rotor.
Do this over and over and over again, with just the right timing, and you have a motor that spins your rod, to which you can attach things like gears that can direct the energy from the spinning in different directions.
Note that you are not doing the spinning. Your energy is not going into the spinning. You’re just moving the magnets.
The magnetic force is doing the spinning. Everything else is about getting the magnets in the right place with the right polarity; nature does the rest.
This is critical, so we’ll say it again. A combustion engine carries its fuel, which it burns, fighting thermodynamics by converting heat to motion, so it is less efficient. An electric motor simply directs electromagnetic forces that want to move towards equilibrium.
Rory Sutherland explains electric’s advantages colorfully:
For cars and any manner of products that turn energy into useful work, the cost physics are on the side of the Electric Stack.
The cost physics of this manual motor, of course, are dogshit, worse than having no motor at all. It would be easier to just spin the shaft yourself (insert joke about how much practice you have here). But it’s a good toy model, because the Electric Stack replaces each of the pieces above with modern technology to make the motor spin faster and more consistently, to get ever-better at directing electromagnetic forces that want to move towards equilibrium.
Electromagnets replace the permanent magnet stators. Rather than physically moving permanent magnets, controllable electromagnets (coils of wire with current running through them) create rotating magnetic fields that can be adjusted for optimal torque and speed.
Neodymium-Iron-Boron (NdFeB) magnets replace your off-the-shelf ferrite magnets as the rotors, giving you a much better size-to-strength ratio, maximum energy product ((BH)max), remanence (magnetic field strength), and coercivity (how hard it is to demagnetize).
Lithium-ion batteries replace the energy you put into switching the stators’ polarity. They send currents through coils of wire, which, because of what Maxwell and Lorentz discovered, turns the coils into magnets: electromagnets.
Power electronics replace your hands for flipping the switches. Instead of manually flipping magnets, sophisticated semiconductor switches flip the current thousands of times per second, turning electromagnets on and off with perfect precision.
Compute replaces your eyeballs and decision-making. Sensors detect exactly where the rotor is, and microprocessors calculate the perfect timing for each switch, optimizing for efficiency, speed, and smoothness in real-time.
Together, these four technologies transform your impractical manual motor into something that can spin 20,000+ RPM reliably for years, deliver precise torque control, and convert electrical energy to mechanical motion with over 95% efficiency.
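To make “compute replaces your eyeballs” concrete, here is a toy sketch in the spirit of classic six-step commutation, the simplest scheme for driving a brushless motor. Everything here is simplified and illustrative:

```python
# Toy six-step commutation: read where the rotor is, then tell the power
# electronics which coils to energize so the magnetic field always leads
# the rotor. A real controller does this thousands of times per second
# from Hall sensor or encoder readings.

DRIVE_TABLE = [           # which phase pair gets current, per 60-degree sector
    ("A+", "B-"), ("A+", "C-"), ("B+", "C-"),
    ("B+", "A-"), ("C+", "A-"), ("C+", "B-"),
]

def drive_state(rotor_angle_deg: float) -> tuple:
    """Map the rotor's position to the coil pair that should be energized."""
    sector = int((rotor_angle_deg % 360) // 60)
    return DRIVE_TABLE[sector]

for angle in range(0, 360, 60):   # one full electrical revolution
    print(f"rotor at {angle:3d} deg -> energize {drive_state(angle)}")
```

Real controllers run smoother schemes like field-oriented control rather than this bang-bang version, but the loop is the same: sense position, compute, switch.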
Understanding how this stack works is so crucial to everything we’ll talk about next that we won’t be offended if the hand-drawings don’t do the trick. This video has 21 million views and is a good basic primer:
And if you want to understand how a modern brushless DC motor with a neodymium magnet works, or even make one yourself (!), this one is also great:
All products that incorporate the Electric Stack use the same building blocks and concepts in different ways.
An induction stove, for example, uses power electronics, compute, batteries (in the case of Impulse), and electromagnetic coils, but no permanent neo magnets; the metal pan itself acts as the “rotor” and the changing magnetic field induces electrical currents directly in the pan, producing heat.
Impulse Stove X-Ray
The LEGO kit nature of the Electric Stack means that improvements in one part of the stack benefit practically all of the products built on the stack, and that demand for one product built on the stack might drive cost and performance improvements for all of the others.
Remember: they’re all geared towards better directing electromagnetic forces that want to move towards equilibrium. (Say it with us!)
The fact that we are in the middle of an Electric Renaissance, then, is the result of the improbable and downright miraculous histories of each individual layer of the stack. We will now turn to those stories, starting with the heart of the Electric Stack: the lithium-ion battery.
Lithium-Ion Batteries
In August 2022, our friend Noah Smith declared that this would be the Decade of the Battery.
“The fundamental reason that batteries are becoming so important,” he wrote, “is simply that the technology has improved by leaps and bounds.”
Since 1991, the cost of lithium-ion batteries in $/kWh has decreased by 98.8%, at an average of 12.5% per year.
How’d that happen? Batteries’ descent down the curve, and ascent in importance, began five decades ago in the belly of the old energy beast: at Exxon.
On October 6, 1973, on Yom Kippur, Judaism’s holiest day, Egyptian President Anwar Sadat and Syrian President Hafez al-Assad launched a coordinated surprise attack on Israel. Eleven days later, on October 17th, King Faisal of Saudi Arabia, a cautious monarch who had resisted using oil as a weapon for years, announced that OPEC would cut oil production by 5% each month until Israel withdrew from occupied territories. More dramatically, Arab oil producers declared a complete embargo on oil shipments to countries supporting Israel, primarily the United States and the Netherlands.

Oil prices quadrupled practically overnight, from $3 to $12 per barrel. Gas stations ran dry. Americans sat in mile-long lines for hours, sometimes fighting over fuel. President Nixon imposed a national speed limit of 55 mph. Christmas lights were banned in many cities to save electricity.
Oil companies were flush with profits but short on energy security, and so, at Exxon’s corporate lab in Linden, New Jersey, Stan Whittingham, a 30-year-old Brit with an Oxford doctorate and a Stanford postdoc behind him, was one of an army (in the corporate sense) of scientists tasked with determining just what the oil giant might do when the oil didn’t flow so freely.
Battery Research
Whittingham, given free rein “to work on anything energy-related, provided it was not petroleum or chemicals,” began experimenting with a process called intercalation: stuffing lithium ions between layers of titanium disulfide.
In 1976, he created the first rechargeable lithium battery, for which he would eventually share the 2019 Nobel Prize in Chemistry.
To understand both why lithium was a Nobel-worthy innovation and why Whittingham had to share the Prize, we should take a moment and go over how batteries work (for Packy’s sake; I know you know all of this already).
Batteries work on the basic idea that some elements want to give up electrons and others want to get electrons. Put them near each other and it creates electrical potential: the electrons must flow! The whole history of battery development has been about getting electrons to flow while optimizing for energy density, power density, safety, cycle life, and cost.
Every battery has an anode on one side and a cathode on the other with electrolytes in between.
The anode is the negative terminal of the battery. Its active material (e.g. lithium) has too many (negatively charged) electrons and wants to give them up. The cathode is the positive terminal of the battery. Its active material (e.g. lithium cobalt oxide) has too few electrons and wants to get more. So the anode’s lithium splits into an electron and a lithium-ion (specifically, a cation, which has fewer electrons than protons) so that the electron can seek out its slot in the cathode’s lithium cobalt oxide.
But wait! In between the anode and the cathode is the electrolyte, made up of dissolved salts that create free-floating ions that can carry electrical charge. The electrolyte lets ions (Li+) through but rejects electrons, forcing them to take the long way. The battery is hooked up to a circuit, so the electrons flow through the external wire to power your device, or, say, create an electromagnetic force in coils of copper wire.
When charging, the process happens basically in reverse. This is how Whittingham, and many others since, have been able to make rechargeable lithium-ion batteries. If you can’t recharge, the economics of an EV or a drone get blown up.
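To put rough numbers on why this matters for products, a cell’s stored energy is its capacity times its voltage, and runtime is energy divided by the device’s power draw. A minimal sketch with hypothetical, early-’90s-flavored numbers:

```python
# Back-of-envelope battery math; all numbers are hypothetical.
capacity_ah = 1.2   # cell capacity in amp-hours
voltage_v = 3.6     # nominal cell voltage (early Li-ion cells ran ~3.6 V)
load_w = 2.2        # a camcorder-ish power draw, in watts

energy_wh = capacity_ah * voltage_v   # ~4.3 Wh stored in the cell
runtime_h = energy_wh / load_w        # ~2 hours of recording
print(f"{energy_wh:.1f} Wh -> {runtime_h:.1f} h at {load_w} W")
```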
Lithium, Whittingham realized, had ideal electrochemical properties for an anode. It is the lightest metal (good for battery weight), and it has one lonely electron in its outer shell, desperately seeking to break free.
But, he discovered, lithium had some challenges, too. As in so many of us, its greatest strength was its greatest weakness. Lithium’s double-edged sword was its reactivity.
Which brings us back to why Whittingham shared the Nobel Prize: his batteries occasionally blew up.
As lithium ions return to the lithium anode, the metal can accumulate in the form of wispy, needlelike structures known as dendrites. If they grow long enough, they can reach from one electrode to the other and short-circuit the battery. If that happens, the battery can overheat quickly, build up pressure, and, possibly, explode, which is why the International Air Transport Association, presumably full of smart people who are familiar with Whittingham’s work but not that of his successors, asks you to please remove all lithium-ion batteries from your checked bags.
Whittingham’s battery was good – 2.2 V on discharge, ~70 Wh/kg at the cell level, and rechargeable! – but it wasn’t good enough.
Luckily, across the pond at his alma mater, 57-year-old Oxford physicist John B. Goodenough read the Exxon scientist’s papers and wondered whether you might be able to swap Whittingham’s soft chalcogenide for a layered oxide with stronger M-O bonds. In 1980, Goodenough’s group synthesized lithium cobalt oxide (LCO) and used it as the active material in the cathode. Its red-blue ceramic slots lithium like cards in a deck while holding a rock-solid lattice. Voltage leapt past 4 V and specific energy exploded overnight to ~150 Wh/kg.
That was the problem though, wasn’t it? The explosions? Goodenough’s batteries exploded, too. He hadn’t gotten rid of the pure lithium in the anode.
The voltage increases were good enough that Goodenough shared in the 2019 Nobel Prize, but not good enough for commercial use.
But this was the ’80s, baby, and Japan was both ascendant in consumer electronics and, as a small island nation, even more dependent on foreign oil than America. Japan, too, had been looking for energy alternatives since the 1973 Oil Embargo, in part through the government-funded Sunshine Program, which meant its researchers were familiar with the battery work going on in the West, and as the consumer electronics king, Japan was also looking towards the future.
There was a lot of R&D on portable electronics in the 1980s, and so small and lightweight batteries, with high energy density and rechargeability were also needed. But nobody really knew what kind of rechargeable battery was going to be needed. The big buzzword at first was “portable,” soon joined by “cordless” and “wireless.” I just sort of sniffed out the direction that trends were moving. You could say I had a good sense of smell.
That sense of smell, and the whole quote, belonged to our third 2019 Nobel Prize in Chemistry winner, Asahi Kasei researcher Dr. Akira Yoshino.
Asahi Kasei was a diversified chemicals and materials shop, and within it, Yoshino’s job was evaluating the brand-new conducting polymer, polyacetylene. This was a classic case of “a technology in search of a problem,” but this time, it found one.
See, Goodenough’s work had caused a bit of a sensation in the small but global battery community, and even attracted visiting scholars from Japan, one of whom joined Asahi Kasei’s battery program and handed Yoshino photocopies of Goodenough’s LiCoO₂ data.
Yoshino thought that polyacetylene might make a good lithium-metal-less anode to pair with the Brit’s LiCoO₂ (LCO) cathode, and when he built test cells, it did, with a few issues. The polymer was airy and chemically fragile, so its capacity was poor and the electrode degraded in air. Fortunately, though, Asahi Kasei was a diversified chemicals and materials shop, and its partnership with Nippon Mining on the Mizushima petrochemical complex included a calcined-petroleum-coke plant.
Yoshino married Goodenough’s high-voltage cathode to a petroleum-coke anode, a carbon anode that merely hosts Li⁺ rather than being lithium metal. It worked. In lab tests, the “rocking-chair” cell survived crush, nail, and overcharge tests without blowing up. Yoshino patented the concept in 1987, but even with the right chemistry, his employer lacked the production muscle to commercialize it.
Sony brought that muscle. The Japanese electronics company practically was the 1980s.
Sony Manufactures the Battery
The Walkman. The Betacam and Handycam. The Trinitron. In the last year of the decade, Sony’s revenue hit 2.88 trillion yen (roughly $20 billion USD at contemporary exchange rates). The year before, it had even bought CBS Records from the Americans for $2 billion at a time when the label’s roster included Michael Jackson, Billy Joel, Bruce Springsteen, Barbra Streisand, Bob Dylan, Journey, Mariah Carey, New Kids on the Block, and Gloria Estefan. Yeesh.
This was a decade in which Japan seemed destined to surpass America, with Sony as the supreme symbol of Japanese innovation and quality. And Sony, more than anyone, needed better batteries to keep the dream alive. The company was betting its future on making everything from professional video cameras to eventually laptops truly portable, and traditional nickel-cadmium batteries made their devices heavier, shorter-lived, and more expensive than their vision and customers demanded.
In 1989, Sony released its CCD-TR55 Handycam to a chorus of customer complaints: the nickel-cadmium batteries were heavy, suffered from memory effect, and died after 30-45 minutes of recording. So when Yoshio Nishi, who was running Sony’s battery development team, heard through the small but dense Japanese battery community grapevine that Yoshino had developed a lithium-ion battery that didn’t explode, he was more than intrigued.
Initially, Nishi and Sony tried to develop their own version in-house, but found that Yoshino’s patents were as bulletproof as his batteries were nailproof.
Fortunately for Sony, though, Asahi Kasei had no intention of manufacturing batteries themselves; they were just a diversified chemicals and materials shop. They quickly signed a licensing agreement.
Sony got:
Exclusive manufacturing rights for certain products,
Access to Yoshino’s patents and know-how,
Asahi Kasei’s help in sourcing materials, and
A head-start over rivals Matsushita, Sanyo, and Toshiba.
Asahi Kasei got:
Upfront licensing fees (reported to be around ¥500 million),
Ongoing royalties (supposedly 3-5% of battery sales),
Built-in demand for their precursors, and, crucially,
Sony’s manufacturing expertise.
Sony would benefit from this expertise, too. Its legal department structured the deal brilliantly: the company kept rights to improvements and derivatives, which meant as Sony's engineers improved the technology, they didn't have to go back to Asahi Kasei. This became crucial as Sony increased energy density by 50% within three years by manufacturing a lot of batteries, learning, and improving.
Because it actually had to put batteries in products, Sony had to care about things, like energy density, that researchers didn’t have to. And while cost was pretty much a non-issue during nearly two decades of research, once lithium-ion batteries jumped into production, it became a critical one.
The Lithium-Ion Battery Alpha Product: The Handycam
Gunpei Yokoi, the Head of R&D at fellow Japanese giant Nintendo at the time, described his design philosophy in the now-famous phrase: 枯れた技術の水平思考.
For the non-Japanese-speakers in the audience, that’s “Lateral Thinking With Withered Technology.” Instead of inventing everything from scratch, take proven, reliable technology and find novel applications for it.
As strong as Sony was heading into the ‘90s, it wasn’t invincible. It made big bets (and built out corresponding manufacturing capacity) on technologies like Betamax, floppy disks, computer data tapes, and Digital Audio Tape (DAT), which either flopped or slowly became less relevant. Betting on things with tape meant that Sony had a lot of a very specific type of withered technology that would prove useful with a little lateral thinking.
With kilometers of empty tape-coating lines sitting idle, Sony engineer Nobukazu Nishi (a different Nishi), who had pulled together a “tiger team” to tackle battery manufacturing, had a realization: those coaters could spread cathode and anode slurries onto metal foils at scale. His team used them, and made further improvements to solve production challenges: a better electrolyte boosted energy density and cycle life; a tri-layer separator reduced the risk of thermal runaway.
For packaging, they repurposed flashlight-battery tooling into the now-canonical 18650 steel can, which was easy to wind on cassette mandrels.
The upshot is this: by the end of 1990, the pilot plant cranked out 1,000 battery cells per day. From bench scale to almost-commercial scale in under a year. And the early cells already beat nickel-cadmium on energy density and cycle times by 2-3x. It was time to scale.
One of the main premises of this essay is that as new technologies improve, they unlock new use cases, the demand from which necessitates the scale that drives further performance improvements and unlocks new use cases, and the cycle continues. This is how learning curves work.
For each layer of the Electric Stack, there was an alpha product, the product that provided the push down the curve.
For lithium-ion batteries, the alpha product was the Handycam.
In 1991, Sony released the CCD-TR1 8mm camcorder, the Handycam. It was lighter, with no bulky external battery. It lasted longer, up to two hours of filming. The battery made the Handycam. An August 1991 advertisement in GQ touted “two hours of recording time.”
And the Handycam made the battery.
Those first pilot lithium-ion cells in 1991 cost roughly $7,500 per kWh in 2024 dollars. As Handycams flew off the shelves, costs continued to plummet. By 1999, a kWh cost just $2,010 in 2024 dollars, a 73% decline in under a decade.2
But Handycams weren’t the whole picture. Other video camera competitors joined in, releasing their own Li-ion powered cameras, beginning in 1994. And as costs fell, performance and safety improved, and new product-specific capabilities were added (thermal management, packs, cell formats, etc…), new products began incorporating, or were enabled by, lithium-ion batteries.
In 1995, Apple released the first lithium-ion powered laptop, the PowerBook 5300, but quickly recalled it after two packs caught fire and re-issued it with Ni-MH. IBM took the crown a year later with the ThinkPad 560, with its optional Li-ion pack 46H4206. Business users were happy to pay a premium for a four-pound laptop.
Nokia launched the first mass market Li-ion-powered cell phone, the 8210, in 1999, a successor to the higher-end Motorola StarTAC, which had Li-ion options in the mid-1990s. Lithium powered Apple’s iPod in 2001, and then its iPhone in 2007. In 2008, Tesla released the sleek but expensive Roadster and BYD its much uglier plug-in F3DM. The Nissan Leaf became the first mainstream battery electric vehicle (BEV) when it launched in Japan and America in 2010.
Cars needed a lot more batteries than video cameras and cell phones, which meant that innovation shifted from the cell level to the pack level.
Enter Tesla, whose batteries were ultimately just a lot of smaller 18650-type cells packed together. This assembly of smaller batteries is called the pack. The challenge was: while each cell had become very reliable, the Roadster used so many 18650 cells that the probability of any one failing and bringing down the whole pack was shockingly high.
Compounding the challenge was the fact that the cells were tiny. Tiny cells were fine for laptops and camcorders, but if you ran thousands of cells in series, your battery pack would be so high-voltage that you couldn’t use semiconductors in any of the power electronics; it would be above their rated voltages!
So Tesla had to wire cells in parallel to make them behave collectively like a larger cell, which presented a huge problem: if a battery cell failed and short-circuited, all of the other cells in the pack would drain through the failed cell, causing the pack to die AND possibly setting the whole vehicle on fire in the process.
The statistics here are utterly brutal: if one in 200K cells is bad (in 2005, the failure rate for 18650 lithium-ion cells was roughly 1 in 200,000, according to Polarium), a vehicle with 6,831 cells like the Tesla Roadster sees a 1−(0.999995)^6831 ≈ 3.4% failure rate per vehicle! That would have meant roughly 84 mobile-app-boom era technology brothers’ Roadsters could have caught fire out of the ~2,500 total Roadsters produced.
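You can sanity-check that math, and the voltage constraint from a few paragraphs up, in a few lines (the per-cell rate is the Polarium figure; the 69-cells-in-parallel layout is discussed just below):

```python
# Roadster pack math. Per-cell failure rate: ~1 in 200,000 (Polarium, 2005).
p_bad = 1 / 200_000
n_cells = 6_831
p_any_failure = 1 - (1 - p_bad) ** n_cells
print(f"{p_any_failure:.2%}")  # ~3.4% of packs would contain at least one bad cell

# Why parallel groups? 6,831 cells = 69 in parallel x 99 groups in series.
# All 6,831 in series would sit near 24,600 V, far beyond affordable
# semiconductor ratings; 99 series groups at ~3.6 V nominal keeps the
# pack near a manageable ~356 V.
series_groups = n_cells // 69       # 99
pack_voltage = series_groups * 3.6  # ~356 V nominal
print(f"{series_groups} series groups -> ~{pack_voltage:.0f} V pack")
```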
Tesla had to learn to live with individual cells failing without lighting the car on fire, just like in your house, you want a short circuit to blow a fuse or pop a breaker, not cause the device or the wires in your walls to catch fire. That meant fusing every single cell in the battery pack somehow. But the way that laptop batteries were assembled, you would have strips of metal spot-welded to the cell directly – not exactly the sort of joint that would work as a fuse! In a laptop, this was okay, as a failed cell would be in series with the other cells; the current would be limited by the rest of the system.
But Tesla was putting 69 cells in parallel (nice!), which meant that all 68 adjacent cells would DUMP power into the one failed cell, turning it into an incendiary device.
So they pulled technology from an adjacent space… power electronics!
Tesla used wire bonding to connect each cell to a big plate with a thin wire; if the cell failed, the wire would be under extremely high current and act like a fuse, disconnecting the cell from the rest of the battery pack and allowing the 68 remaining cells to still work.
Wire bonding was a key technology required to make small, high-energy battery cells viable in the electric vehicle space, one which Tesla scaled massively through the Model S and Model 3 programs in subsequent years.
But wire bonding was still a band-aid on a fundamental problem: they were using batteries designed for Handycams and laptops in cars. Fortunately, by 2013, Tesla had enough clout (and demand from the Model S) to convince Panasonic to build custom cells just for them.
Thus the 2170 cell (21mm x 70mm) was born: slightly larger than the 18650, but still following the "many small cells" philosophy. More importantly, Tesla convinced Panasonic to build these in Nevada at the Gigafactory, which was announced in 2014 and began production in 2017. This marked the first time battery production was co-located with vehicle assembly at massive scale.
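Cell names encode geometry, diameter then length in millimeters, so the jump from 18650 to 2170 is easy to quantify. A rough sketch (volume only; chemistry and packaging improved too):

```python
import math

# 18650 = 18 mm diameter x 65 mm long; 2170 (also written 21700) = 21 mm x 70 mm.
def cylinder_volume_cm3(diameter_mm: float, length_mm: float) -> float:
    radius_cm = diameter_mm / 2 / 10
    return math.pi * radius_cm ** 2 * (length_mm / 10)

v_18650 = cylinder_volume_cm3(18, 65)   # ~16.5 cm^3
v_2170 = cylinder_volume_cm3(21, 70)    # ~24.2 cm^3
print(f"{v_2170 / v_18650:.2f}x the volume per cell")  # ~1.47x: fewer cells and wire bonds per pack
```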
Tesla’s small cell approach had a few key advantages:
Easier thermal management (more surface area per volume)
Gradual degradation (individual cells could fail without killing the pack)
But… it required complex assembly and thousands of wire bonds
But it wasn’t the only approach, and perhaps not the best for truly scaled battery manufacturing. Enter the Dragon.
China Does Batteries
In October 2012, A123 Systems, the MIT spinoff that pioneered Lithium Iron Phosphate (LFP) batteries, went bankrupt, despite a prior $249 million US government grant.
Chinese auto parts giant Wanxiang Group, which had offered a $465 million rescue deal that unraveled before the bankruptcy, bought A123 out of bankruptcy for $257 million, a fire sale for a company once valued at $2.4 billion.
Through A123, the Chinese supply chain got LFP technology and Western engineering talent, which accelerated its efforts. To say that China simply acquired battery dominance with A123, however, would be a gross oversimplification.
Those efforts were already over a decade in the making.
China had been patiently building up its battery capabilities from the low-end since its 2001-2005 Five-Year Plan emphasized “the commercialization of PV and wind technologies, diesel and battery systems, bioelectricity, geothermal energy, and fuel forests in rural areas.”
It picked a good time to get into batteries, from a leap-frogging perspective.
When we left our battery chemistry story, Sony, and then the world, had settled on Goodenough’s LCO cathode paired with Yoshino’s carbon/graphite anode. This chemistry powered the ‘90s. But it, too, had some issues, namely: cobalt was expensive, the battery was thermally unstable above a 60% charge, and it had a limited cycle life (you couldn’t charge it that many times before it pooped out).
Around the turn of the millennium, three groups were working on three separate cathode chemistries to address those issues:
In the United States, at Argonne National Lab, a team co-led by Michael Thackeray, who did his postdoc work at Oxford with Goodenough, invented the lithium-rich nickel-manganese-cobalt (NMC) cathode. NMC used less cobalt, so was cheaper than LCO, had better thermal stability, and slightly higher energy density (150-220 Wh/kg for NMC vs. 150 Wh/kg for LCO).
Companies like GM licensed NMC for the Chevy Volt and Bolt, and it would become the West’s most popular lithium-ion cathode.
In Japan, Sumitomo Corp (of whom much, much more later, don’t you worry) developed the lithium-nickel-cobalt-aluminum-oxide (NCA) cathode. It had a higher energy density than NMC (~200-260 Wh/kg), but was less thermally stable.
Tesla switched from LCO, which it used from 2008-2012 for the Roadster, to NCA for the Model S and X lines. Elon is comfortable making component-level trade-offs and making up for them at the system level. In exchange for higher energy density, Tesla added roughly 100 pounds of cooling system weight, dynamic power-limiting software, and a redesigned battery management system (BMS) that monitored the temperature and voltage of each individual cell.
Today, NCA and NMC still make up the vast majority of EV li-ion chemistries in the West:
But what’s that green peeking up there? And why did we spoiler alert the left side of the graph?
Earlier, we told you that John B. Goodenough’s LCO battery wasn’t quite good enough. Turns out, Goodenough believed it wasn’t good enough, either.
Now at the University of Texas at Austin, Goodenough and his team were fixin’ to hook ‘emselves a cathode material that was cheap, safe, and abundant.
They looked at the olivines, common minerals, and specifically at lithium iron phosphate (LiFePO₄, or LFP). Both iron and phosphate were dirt cheap. Neither was toxic. It was perfect.
Except for the fact that it didn’t work at all. Almost zero electrical conductivity. Lithium ions practically crawled through it. Even theoretically, it sucked, with a maximum specific capacity of 170 mAh/g vs. 274 for LCO.
But it had two redeeming characteristics: it was so stable that it couldn’t catch fire, and voltage held steady at 3.2V throughout discharge. So in 1997, Goodenough and his collaborators, Akshaya Padhi and K.S. Nanjundaswamy, published Phospho‐olivines as Positive‐Electrode Materials for Rechargeable Lithium Batteries.
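That steady 3.2 V matters because, at the material level, specific energy is roughly specific capacity times voltage, since 1 mAh/g × 1 V = 1 Wh/kg. A back-of-envelope comparison (theoretical cathode-material numbers only; finished cells land far lower) shows why skeptics scoffed:

```python
# Specific energy (Wh/kg) ~ specific capacity (mAh/g) x average voltage (V).
# Theoretical, cathode-material-level numbers; voltages are approximate.
chemistries = {
    "LCO, theoretical": (274, 3.9),   # (mAh/g, volts)
    "LFP, theoretical": (170, 3.2),
}
for name, (capacity, volts) in chemistries.items():
    print(f"{name}: ~{capacity * volts:.0f} Wh/kg at the cathode-material level")
```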
Most scientists were confused, thought that ol’ Goodenough had lost a step. What good was a battery with lower energy density and awful conductivity?
But Michel Armand at the University of Montreal figured out that coating the particles with carbon (also cheap and abundant) solved the conductivity problem. And Yet-Ming Chiang at MIT developed methods to make nano-scale, carbon-coated LFP particles, which sped up lithium-ion diffusion rates – and led to more patent wars, which we won’t get into, but they seemed brutal.
LFP was viable, after all.
Let’s give it up once more for Goodenough, who passed away two years ago at age 100. (Long cycle life.)
Now we return to A123, which Yet-Ming Chiang founded to commercialize his carbon-coating and nano-scale LFP technology.
Chiang founded A123 in 2001, licensed his nano-phosphate patents from MIT, and began demoing cells in 2003 and 2004. By 2006, it signed a collaboration agreement with the US Advanced Battery Consortium (Detroit’s Big Three plus the DOE). In 2008, it inked a deal to provide batteries to the Norwegian EV maker TH!NK; by 2009, Chrysler, Shanghai Automotive Industry Corp, and, fatefully, Fisker Automotive followed suit. Gas prices were high and A123, with an alternative that was now relatively cheaper, was riding high, too.
A123 went public in September 2009 under the ticker AONE and closed the first day of trading with a $1.2 billion market cap. It went bankrupt three years later, scooped up by Wanxiang.
There were a couple of problems, as Qnovo sees it: first, LFPs weren’t ready for EVs yet. They were low energy-density, and therefore low-range. Second, and this is important vis-à-vis vertical integration:
A123 Systems was fundamentally a battery materials company. That’s where its innovation lay. As such, it focused primarily on improving the design and manufacturing of its battery materials. Yet, the battery pack in an electric vehicle was a complex integrated system that brought together both the battery and its materials along with a sophisticated battery management system (BMS), i.e., the electronics and software that control the battery’s performance and reliability. A123 Systems largely left the design of the BMS to its customers. That meant that the overall battery pack system could not be fully optimized as long as its key ingredient subsystems were designed by different parties.
Meanwhile, in China, the company that would become the world’s most vertically integrated automaker was on the path to doing everything itself.
Build Your Dreams
Shenzhen was a small fishing village of about 30,000 people across the border from Hong Kong before Deng Xiaoping, as part of his “reform and opening” policy, designated it the country’s first Special Economic Zone in May 1980.
Throughout the 1980s and into the 1990s, China’s quasi-capitalist city exploded. Cheap labor poured in from across China to fill the demand for hands: Shenzhen had quickly become a hub for assembly and low-end manufacturing. Soon, dozens of contract manufacturers were making toys, watches, and all manner of cheap electronic devices.
By the time Wang Chuanfu founded BYD Company Limited in 1995, with just twenty employees and 2.5 million Chinese yuan (roughly $350k today), the city had grown to 10 million people. Perfect for his arbitrage.
The arbitrage was: reverse engineer Japanese battery manufacturing processes and replace all of the expensive automation with Shenzhen’s cheap, abundant labor.
“Reverse engineer, improve, make it yourself.” This was Wang’s stated strategy.
BYD produced batteries, first nickel-cadmium, then lithium-ion, at a fraction of the cost, but roughly equivalent quality, of the Japanese. Motorola started buying its batteries in 2000, Nokia in 2002. BYD went public on the Hong Kong Stock Exchange that year.
Most people at the helm of a newly public company would have played it safe. Not Wang. The next year, he bought a struggling automaker being sold off by an industrial conglomerate. When the market found out BYD owned Qinchuan Automobile, its stock dropped 20%. For good measure, he also bought a Beijing factory that made molds and set up an auto R&D department in Shanghai. Then he got down to business: ripping apart cars to reverse engineer them. At first, BYD made shitty Toyota and Honda knock-offs, but Wang had a bigger vision: cars would become electric, and a company that could make both batteries and cars would have a huge advantage.
In 2006, he signed an agreement to license John B. Goodenough’s LFP patents from Hydro-Québec, which co-owned them with UT Austin. LFP wasn’t good enough for cars yet, but Wang had a vision.
The smartest investors saw his vision before it came true. In his excellent Colossus profile of Neil Mehta, Jeremy Stern paints a picture of the would-be Greenoaks founder’s 2007 visit:
Later that night he is in Shenzhen, to visit the offices of BYD. In 2007 BYD was not yet a $600 billion conglomerate vaporizing the German auto industry. It was a seemingly fraudulent goat rodeo of a company making batteries for frivolous little ‘electric cars’, in a dank warehouse in a city that was still in the process of transforming from a fishing village. When Mehta walks into BYD he notices liquid dripping down from the ceiling onto the floor, which isn’t the kind of thing you prefer to see in a place manufacturing lithium iron phosphate batteries.
The place is just a mess, but it doesn’t really bother Mehta. He loves it, in fact. He’s obsessed with this idea that electric vehicles are going to be a big thing, and he loves the energy of the place. He especially likes talking to the people there about the company’s Founder, Wang Chuanfu. Mehta loves hearing about Wang.
The next year, in 2008, BYD rolled out its first plug-in hybrid car, and the world’s first mass-produced plug-in hybrid electric car, the F3DM. It used LFP. It wasn’t very good. It was expensive and there was practically no charging infrastructure.
That was fine. It was a good year anyway. That year, Warren Buffett and Charlie Munger took a 10% stake in BYD.
Of Wang, Munger said: “This guy, Wang Chuanfu, is a combination of Thomas Edison and Jack Welch – something like Edison in solving technical problems, and something like Welch in getting done what he needs to do. I’ve never seen anything like it.”
That same year, Beijing hosted the Olympics. To show its environmental bona fides, the city installed LFP batteries in its buses. These weren’t BYD buses or batteries, but after their success, the government threw its support behind LFP, and Wang was in a position to capitalize.
BYD launched its K9 electric bus in 2010. "This will be a major growth area for us next year," BYD spokesperson Paul Lin told Asia One, which reported that "the selling price for a BYD electric bus will be 2-3 million yuan (S$395,000-S$592,600)," while China's government subsidies could cover up to 1.2 million yuan.
Government subsidies were one reason electric buses were a good business. Longer-term, though, buses had a number of qualities that made them very attractive for a company that wanted to make LFP batteries for EVs.
Buses didn’t need high energy density; they had plenty of room for big packs. Buses didn’t run into the same charging issues cars did; they ran the same routes and parked in the same place every night. They prioritized safety, durability, and low cost-per-mile; LFP shone in all three areas.
Those buses, too, were rolling battery laboratories, largely on the government’s dime. BYD learned about pack assembly, thermal behavior in extreme weather, and manufacturing at scale. They discovered that LFP's flat voltage curve made battery management simpler; they didn’t need Tesla’s complex cell-level monitoring.
Importantly, instead of waiting for LFP to get good enough for cars, BYD found an ideal alpha product.
They took what they learned and improved the chemistry. They optimized particle size for faster charging. They improved carbon coating for cold weather performance. Then they scaled. The K9 was an international success. In 2013, they even opened a manufacturing facility in California.
At the same time, the company was planning to try its hand at cars again, too. In 2010, it rolled out the e6 in a pilot with forty taxis in Shenzhen. Taxis were a perfect intermediate step. They shared important characteristics with buses - predictable routes, central charging, high utilization - but they had the form factor of a regular car. The e6 was a modest success: it proved that LFP could work in a car, without all of the space afforded by a bus.
BYD was proving the value of vertical integration.
Meanwhile, Robin Zeng was going horizontal.
In 1999, Zeng co-founded Amperex Technology Limited (ATL) to make batteries for laptops and MP3 players. The company was based in Hong Kong to serve foreign customers, licensed battery technology from the United States to offer the most cutting-edge stuff, and manufactured in Dongguan, China, to take advantage of cheap labor. In just two years, it was producing batteries for one million devices. In 2005, the Japanese electronics company TDK acquired ATL and pushed it into smartphones (it would become a key supplier to Apple on the iPhone) and electric vehicles.
Because EV batteries were so different, Zeng and Vice Chairman Huang Shilin established a separate department for EV battery R&D. It was actually ATL’s batteries that powered the Beijing bus fleet during the 2008 Olympics. Due in part to the buses’ success, the Chinese government began subsidizing EVs in 2009.
China was getting serious about EVs. In 2011, China's 12th Five-Year Plan (2011-2015) officially designated new energy vehicles (NEVs) as a "strategic emerging industry." Perfect timing for ATL, except… Now that NEVs were strategic, new rules limiting foreign control of automotive battery companies were coming into effect.
Zeng saw that China would need a domestic battery champion that could work with both Chinese and foreign automakers. BYD couldn't be that champion: it competed with other carmakers and was married to LFP anyway, while many EV customers wanted NMC. ATL couldn't be that champion: it was Japanese-owned.
Zeng, under a slightly new banner, could be that champion.

In 2011, Zeng and Huang negotiated a spin-off with TDK. A group of Chinese investors led by Zeng and Huang would own 85%; TDK would own 15%. The company was called Contemporary Amperex Technology Co., Limited, or CATL.
CATL came out of the gate with a bang. BMW contracted the company to make NMC batteries for its i3 EV. Soon after, Chinese companies, subsidized (or even owned, in the case of BAIC and SAIC) by the government, bought NMC batteries from CATL, to offer Tesla-like range. For almost its entire first decade, the vast majority of CATL’s sales were NMC.
This was a deliberate choice, and it shows how government policy (in both West and East) and vertical integration impact decisions like which cell chemistries to use and where to find them.
Why did BMW want to make an electric car? As Conrad colorfully recounts in Forsaking Industrialism, “So they [European bureaucrats] take Mr. BMW and they sit him down and they tell him that if he sells any non-electric cars they are going to shoot him. He protests. He asks for some time to comply (15 years). “Who the f-ck makes batteries and how do we get three million of them a year?”, Mr. BMW asks his senior executives.”
CATL makes batteries, and it can make lots of them, whatever kinds its customers need.
The main reason CATL focused on NMC is that its higher energy density increased range, for a price. Foreign automakers like BMW wanted range and had customers who were willing to pay for it. Chinese automakers wanted range because the government was willing to subsidize it. As Jason Li explains in a 2019 report for the Center for Strategic & International Studies, “By incentivizing increased battery energy density and vehicle range, Beijing promoted a rapid transition from lithium ferrophosphate (LFP) batteries to ternary lithium (NMC) batteries.” Beijing’s 2017-2019 NEV subsidy design favored NMC, so companies installed NMC.
What about BYD? If the government incentivized NMC, why did it stick with LFP?
BYD didn’t just make batteries. It made batteries, buses, cars, and electronics in-house. For BYD, safety and overall vehicle cost, which favored LFP, trumped energy density and range, even with the subsidies. Wang, a trained chemist, believed that stability and manufacturability would matter as much as density in the long run, and that given the company’s expertise, it could continue to improve LFP density. Wang was right.
Early e6 taxi LFP cells achieved ~130 Wh/kg, and packs were so inefficient that they dropped to 90 Wh/kg at the pack level. BYD took its learnings from taxis and buses, devised better carbon coating techniques, found the particle size sweet spot, and increased cell energy density to 160 Wh/kg and pack-level density to ~120 Wh/kg, compared to ~180-200 Wh/kg at the pack level for NMC.
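Pack-level density is just cell-level density times a pack efficiency factor; running the numbers above shows how much overhead BYD squeezed out:

```python
# Pack efficiency = pack-level Wh/kg divided by cell-level Wh/kg.
early_cell, early_pack = 130, 90     # e6-era LFP, Wh/kg
later_cell, later_pack = 160, 120    # after the taxi/bus learnings, Wh/kg

print(f"early pack efficiency: {early_pack / early_cell:.0%}")  # 69%
print(f"later pack efficiency: {later_pack / later_cell:.0%}")  # 75%
```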
Then in June 2019, the CCP changed its NEV subsidy policy. Average vehicle subsidies were cut in half, the subsidy differential for higher density mostly disappeared, and it promised to phase out subsidies by the end of 2020.
While the new policy practically eliminated NMC’s subsidy advantage, it was good for no one in the Chinese EV or EV battery industry, short-term. BYD’s profits shrank dramatically in 2019, and Wang called it the company’s “darkest moment.”
But it is always darkest before the dawn. Without subsidies, BYD and CATL had to innovate.
And instead of copying Tesla's small-cell approach, BYD and CATL went the opposite direction: make the cells huge.
In 2019, CATL, which had run a small LFP business, mainly for buses, introduced its Cell-to-Pack (CTP) technology. CTP eliminated modules entirely: instead of wire bonding thousands of cells, CATL laser-welds a few dozen large prismatic cells directly to the pack structure. Per Autocar Professional, “Using cell-to-pack (CTP) technology, which includes more than 70 core patents, CATL is able to increase mass-energy density by 10-15 percent, improve volume utilisation efficiency by 15-20 percent and reduce the amount of parts for battery packs by 40 percent.”
CTP is a pack-level engineering architecture that applies to both LFP and NMC, but because LFP suffered from low density, the boost it received from CTP was relatively more consequential. NMC density went from “high” to “higher.” For LFP, the increase in pack-level density (remember, pack losses had been 30% previously) meant that it could meet 300-400km range targets. It took LFP from “cheap and safe but short-range” to “cheap, safe, and good enough.”
The following year, in 2020, BYD introduced the Blade Battery: essentially, one cell that runs the entire width of the car.
Blade was the same idea as CTP, but with even fewer parts and designed specifically for LFP. Blade boosted volume utilization from ~40% → ~60%+, and allowed LFPs to reach ~160 Wh/kg at the pack-level, closing the gap with NMC packs (~180–200 Wh/kg at the time).
Blade also became a selling point and differentiator. It was safe, but it was also long-life, high-tech, and even kind of sexy. The company got its swagger back. Vindicated, it wrote in a blog post, “BYD’s Blade Battery aims to bring battery safety back to the forefront, a redirection from the industry’s tenuous focus on this crucial aspect.”
This wasn’t your grandmother’s safety, though, as the company made clear with a newly-designed Han EV:
In short, the Chinese large cell approach meant:
Simpler assembly (fewer connections)
Lower cost (less complexity = cheaper manufacturing)
Structural integration (the battery becomes part of the chassis)
But… harder thermal management and catastrophic consequences if a cell fails
It also meant the LFP takeover could begin in earnest.
By 2023, Tesla had adopted LFP chemistry for standard range vehicles while keeping cylindrical cells, and CATL had started making cylindrical cells for customers who wanted them. The approaches converged because each solved a different part of the cost equation:
Tesla solved thermal management and reliability
China solved manufacturing simplicity and cost
Honestly, though, Tesla adopting LFP was a big win for BYD and CATL. BYD made a weird bet, and it hit. And Tesla was forced to buy LFP cells from CATL.
Today, CATL produces a stunning 37.9% of the world’s EV batteries, followed by BYD at 17.8%.
And while Tesla’s approach works primarily for Tesla, China’s focus on manufacturing simplicity and cost scales across an entire industry: the US, UK, and Japan spent decades developing lithium-ion battery technology, and China has spent the past decade coming to dominate the growing LFP market. That’s the other side of our graph, the spoiler alert:
Almost 80% of Chinese EVs now use LFP batteries, and because of China’s large share of the market, almost half of the world’s EVs do, too.
Stunningly, as of 2023, Tesla has been shipping Model Ys produced in the Berlin Gigafactory with BYD’s Blade LFP batteries!
China bet on the right chemistry, and bet early to own the entire stack. It is the battery behemoth.
China currently produces over 80% of the world’s lithium-ion battery cells, and produces an even higher share of each of the battery’s components. Lithium isn’t particularly scarce, and China only mines 7% of the world’s lithium, but it “controls more than 60% of battery-grade global lithium refining” according to S&P Global.
Its share is expected to grow. The International Energy Agency projects that “Chinese dominance of the cathode and anode active material manufacturing capacity will not see any significant reduction by 2030,” and it doesn’t see the US winning more than 15% of the lithium-ion market by 2030, even if all announced projects materialize.
So did the A123 purchase make the Chinese LFP industry? It helped, for sure. The acquisition, plus hiring of A123's engineers, gave Chinese companies insights into nano-phosphate LFP technology. But BYD and CATL mainly innovated on their own. They had become genuinely innovative companies, innovative at the systems level, innovative in a way you can only be if you’re integrated. Owning the A123 patents did mean that the US couldn’t block their LFP development, though.
We will return to China’s dominance, and whether there’s anything the US can do about it, later in the essay, but I gotta say… China’s performance in batteries has been a great thing for the world.
Lithium-ion batteries are much cheaper because of Chinese innovation and scale: average battery prices have fallen 90% since 2010. China is actually producing too many batteries. As of 2024, Chinese battery production was already higher than global EV demand, and along with falling raw-materials prices, overproduction tanked prices to all-time lows of just above $50/kWh at the cell level for Chinese EV batteries, and squeezed margins along with them:3
Because of cheap, performant batteries, drones fly, EVs drive, and robots might walk. It’s becoming trivially cheap to add storage to an otherwise-variable solar farm. Base Power Company can offer cheaper home backup and grid reliability with cheaper batteries. Robots will become affordable more quickly with cheaper batteries. If this is the Decade of the Battery, it’s in large part thanks to China’s ability to scale. And as it stands, Chinese battery manufacturers seem to be eating that benefit in the form of much lower margins.
The fact remains, however, that batteries are a critical, maybe the critical, layer of the Electric Stack, and ceding full control to another country is a geopolitical risk, a supply chain risk, and a bigger, harder-to-pin-down risk: that without the ability to make batteries ourselves, we won’t be able to innovate the way BYD and CATL have.
Good or bad, this is a theme that continues to come up: the West innovates and China scales its way to dominance across the Electric Stack.
Nowhere is this more acutely and urgently true than in magnets.
Magnets and Electric Motors
An electric motor is, at its core, magnets spinning magnets. This simple fact explains why America is concerned about China's control of 90% of global neodymium magnet production: without magnets, nothing in the Electric Stack moves.
In a motor, batteries send current through copper wires, creating electromagnets (stators). These electromagnets' shifting poles attract and repel the permanent magnet (rotor), creating spin. It’s magnets all the way down.
Motors sit at the heart of the Electric Stack. They're where all four layers converge. Batteries provide power, power electronics shape it, compute controls it, and magnets turn it all into motion.
Thanks to improvements across the stack, since 1990, the cost of electric motors in $/kW has decreased by 98.8%, at an average of 12.5% per year.
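Those two numbers are consistent with each other – a quick sanity check, assuming the decline compounds steadily over a roughly 34-year window (1990 to 2024):

```python
# A steady annual decline r over n years compounds to 1 - (1 - r)**n total.
r, n = 0.125, 34
print(f"total decline: {1 - (1 - r) ** n:.1%}")  # ~98.9%, in line with 98.8%
```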
This increase in the amount of power a motor generates per dollar comes from improvements across all layers of the stack, but the fact that the curve exists at all is due to the simultaneous 1983 discovery, on opposite sides of the world, of neodymium magnets, which delivered 10x the magnetic energy of previous materials, enabling smaller, more powerful motors.
Without strong, heat-resistant magnets, there are no modern motors. Drones are too heavy to fly. EVs are too inefficient to compete.
Without powerful magnets, we'd be stuck in the 1970s. With them, thanks to two centuries of old fashioned experimentation and cutting-edge materials science, the modern world spins.
A Brief History of Magnets and Motors
Michael Faraday created the first electric motor in the basement laboratory of the Royal Institution in London on September 3, 1821. It came just a year after Hans Christian Ørsted first demonstrated that electricity creates magnetism in a lecture at the University of Copenhagen in April 1820, and months after André-Marie Ampère wrote down the mathematical laws of the new science at the Académie des Sciences in Paris.
Playing with those new laws, Faraday suspended a wire in a cup of mercury with a permanent magnet in the middle. When he ran current through the wire, it rotated around the magnet continuously. His motor was both revolutionary, the grandpère of modern electric motors, and totally useless for any practical purpose, in part because the permanent magnet he used was magnetite (lodestone), basically a magnetic rock with incredibly weak field strength, maybe 0.1 MGOe if you're being generous.
MGOe stands for MegaGauss-Oersteds, a fitting tribute to Ørsted that measures how much magnetic energy you can pack into a given volume. The BHMax is the peak value of B (magnetic flux density measured in Gauss) times H (magnetic field strength measured in Oersteds) and is a good proxy for progress in permanent magnets. BHMax is the property, and MGOe is the unit it’s expressed in. The roughly equivalent measure for electromagnets is the Tesla, which measures magnetic field strength, or flux density.
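One MGOe works out to roughly 7.96 kJ/m³ of stored magnetic energy, if you prefer SI units. A quick conversion for the magnets in this section (the kJ/m³ numbers are just unit conversions of the MGOe figures, nothing new):

```python
# 1 MGOe ~= 7.96 kJ/m^3. Converting the BHMax figures that appear below:
for name, mgoe in [("Faraday's lodestone", 0.1),
                   ("Alnico 8 (1955)", 10),
                   ("sintered NdFeB (1983)", 36)]:
    print(f"{name}: {mgoe} MGOe ~= {mgoe * 7.96:.1f} kJ/m^3")
```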
Anyway, Faraday’s motor couldn’t do useful work, but it was useful in inspiring others to better it.
In 1831, at the Albany Academy, Joseph Henry discovered that you could make incredibly powerful electromagnets by wrapping insulated wire around iron cores. His electromagnets could generate field strengths of 1-2 Tesla when powered, strong enough to lift 750 pounds, orders of magnitude stronger than any permanent magnets available at the time.
Forty years later, in 1871, Zénobe Gramme built the first commercially successful generator. It used electromagnets for the field magnets, powered by some of the current it generated itself in a process called self-excitation. This became the template for practically all industrial motors and generators through the 1920s. Looking at the picture, you can understand why we didn’t have drones in the 1920s.
Electromagnets dominated the early electric motor landscape for a few reasons:
Steel permanent magnets lost magnetism from vibration and heat
You could control electromagnet strength by varying current
Iron was cheap; good permanent magnets weren't
Big motors needed big magnetic fields, which were only possible with electromagnets
Then in 1917, the Japanese birthed the modern permanent magnet when Honda and Takei discovered that adding cobalt to steel created permanent magnets that kept their magnetism. With their KS magnet steel’s higher coercivity, permanent magnets suddenly made sense for small motors, and the booming auto and telephone industries got to work figuring out applications.
Delco used permanent magnets for starter motors in 1919, Trico used them for windshield wiper motors in 1921, and Western Electric used them to make phones ring in 1923. Still, though, KS steel was expensive because of the cobalt, and at 1 MGOe, you needed a lot of it, which limited its applications.
Accidentally, in 1931, T. Mishima fixed the problem. The man just wanted to make better cutting tools, so he sprinkled some nickel and some aluminum into steel. He ended up with a permanent magnet that had double the energy of KS steel, which he called MK steel.
In 1935 (in a pattern that would repeat itself half a century later), on the other side of the world, a scientist at General Electric named W.E. Ruder independently discovered that aluminum-nickel-iron alloys made excellent permanent magnets. GE started adding cobalt and called it "Alnico" (Aluminum-Nickel-Cobalt).
Neither Mishima nor Ruder nor anyone in the world knew why Alnico worked. It turned out to be spinodal decomposition creating a nanoscale structure of magnetic needles in a non-magnetic matrix. They were doing proto-nanotechnology in 1935! What they did know was that it worked, and soon, permanent magnets were everywhere.
With the onset of World War II, 4 MGOe Alnico was used in military field telephones and radar systems. Later, Gibson’s Les Paul guitars would use Alnico for the P.A.F. humbuckers that Eric Clapton made famous.
By Alnico 8 in 1955, Alnico had an MGOe of 10, 100x that of Faraday’s lodestone. Alnico drove bicycle dynamos, car alternators, and aircraft generators. Every TV, radio, and hi-fi speaker used Alnico magnets, as did microphones and tape recorders. Every car had dozens of Alnico motors for fans, wipers, windows, and seats. General Motors alone used 10,000 tons of Alnico magnets per year. And the IBM 350 disk drive used Alnico in its read/write heads.
Meanwhile, while the applied scientists had been busy mixing and sprinkling and generally FAFO’ing, theoretical scientists had been hard at work figuring out just why magnets behaved the way that they did.
In the 1920s, Heisenberg and others used the new quantum mechanics to explain how electron spins align within magnetic domains via Exchange Interaction Theory. In the 1930s and 1940s, Domain Wall Theory explained how magnets magnetize and demagnetize. And in the 1940s and 1950s, Single Domain Particle Theory showed that tiny particles could be much stronger magnets.
With the theory to direct their research, applied scientists developed even better magnets.
In 1952, Philips in the Netherlands discovered barium ferrite magnets. These magnets were weaker than Alnico, topping out around 4 MGOe, but because they were made of iron rust and barium, they were dirt cheap. Cheaper magnets meant more use cases: for example, now every refrigerator could have magnetic door seals.
In 1966, Karl Strnat at the Wright-Patterson Air Force Base in Ohio, applied single-domain theory to rare earth elements, creating 18 MGOe Samarium-Cobalt (SmCo) magnets that blew Alnico away. If you’re a Jesse Michels fan like I am, Wright Pat should ring a bell: that’s the Air Force Base where rumor has it debris from the 1947 Roswell crash was taken for reverse engineering. Then we get, by far, the strongest magnets ever made… it’s just strange is all I’m saying!
Alnico was getting squeezed from both ends. Ferrites were 10x cheaper for simple applications. SmCo was 2x stronger for demanding applications. And the late-1970s cobalt crisis that emerged from conflict in Zaire octupled the price of cobalt, incentivizing companies to discover alternatives.
Which set the stage for what has to be the most exciting conference in magnetic history.
Enter the Neodymium Magnet
November 8-11, 1983. Pittsburgh, PA. Hilton Hotel. 29th Annual Conference on Magnetism and Magnetic Materials.
The Steel City was an appropriate venue to host the world’s leading magneticists. Steel is an alloy made from iron; just add carbon. And in 1983, there seemed to be a growing consensus that the next generation of permanent magnets was going to be iron-based. The question was: just add what?
Iron (Fe) is great for magnets because it’s cheap and abundant, and because it provides a high saturation magnetization, the maximum possible field strength. But iron alone is magnetically soft - it magnetizes easily but also demagnetizes just as readily. Without something to "pin" its magnetization in place, pure iron makes a terrible permanent magnet. It lacks the crucial magnetocrystalline anisotropy needed to resist demagnetization.
Researchers had a hunch that the answer would be found in the rare earth elements, or lanthanides (highlighted in light blue in the periodic table below), specifically in the light rare earths (the first seven, from La to Eu), and more specifically in either Praseodymium (Pr) or Neodymium (Nd).
Why the first seven? Everything else was too rare and expensive to make both strong and cheap magnets. Even Samarium (the Sm in SmCo) was rare enough that it contributed to SmCo’s high cost and limited it to higher-value applications. So really, researchers were looking at the first four: La, Ce, Pr, and Nd. These rare earth elements defied the name; they aren’t actually that rare.
Why Pr or Nd? Lanthanum and Cerium, the first two light rare earths, could be eliminated: lanthanum has no 4f electrons (the source of magnetism), and cerium’s lone 4f electron is too loosely bound to contribute a useful magnetic moment.
Praseodymium and Neodymium, on the other hand, had two and three 4f electrons, respectively, which give both a high magnetic moment (strength of the magnet) and strong coercivity (resistance to demagnetization).
So those in the know knew that others in the know would probably be looking at iron with a little Pr or Nd mixed in. But no one knew for sure, and to the best of their knowledge, no one had actually gotten one of these lanthanide-iron magnets to work. The compounds that formed naturally when you mixed neodymium and iron, like Nd₂Fe₁₇, disappointed, magnetically speaking. Their crystals had the wrong shape, poor anisotropy, and unstable phases that would decompose.
But at Sumitomo in Japan, Masato Sagawa discovered that adding in a little boron (B) solved those problems.
Sumitomo was already making SmCo magnets through a process called sintering. Take the powders, mix them, press them into shape under massive pressure, and heat them until they almost melt (sinter) so the particles fuse together. Out comes a solid, crystalline magnet with perfectly aligned magnetic domains.
Sagawa applied Sumitomo’s sintering process and tried “many, many additive elements” to give coercivity to his NdFeB alloy, “almost throughout the periodic table,” and nothing worked, until “at last, I found a good additive element. It’s not another element—it’s neodymium itself.”
By 1982, he had it. As Sagawa later told IEEE, “I succeeded in developing a neodymium-boron sintered magnet with record-high BH maximum [a measure of the maximum magnetic energy that can be stored in a magnet] in the world.”
He thought he was the only one, and when he presented at the 1983 MMM Conference in Pittsburgh on the third day of the conference, November 10, 1983, he presented under that belief.
The belief stood for maybe an hour, until General Motors’ John Croat made his presentation.
General Motors had the same problem and the same realization as everyone else. Cobalt was too expensive; lanthanide-iron magnets seemed to make sense. But GM scientist John Croat saw a slightly different problem, and therefore, devised a different solution. As he recalled to IEEE:
Both of us set out with the intention of making a rare-earth iron permanent magnet from neodymium or praseodymium. The problem was that there was no intermetallic compounds available. Unlike in this rare-earth cobalt phase diagram—there was lots of interesting intermetallic compounds—the rare-earth-iron phase diagrams do not contain suitable usable intermetallic compounds.
What is an intermetallic compound? Essentially, it’s an ordered crystal phase that holds the magnetic moment firmly in place. It provides the structure, stability, and reproducibility you would need – if you were GM, and wanted to make a lot of magnets – to make a lot of magnets. Rare-earth-cobalt systems were full of them; rare-earth-iron systems were deficient.
And look, if I had an editor, they’d be yelling at me about the depth here, but I don’t, and I want you to grasp how this happens, how one little thing here, a chance paper read there, leads to the modern world.
Because Art Clark at the Naval Surface Weapons Laboratory had published a paper in 1976 showing that a rare-earth-iron compound, terbium iron 2 (TbFe₂), had almost no coercivity as prepared, but that when he kept annealing (heating) it at higher and higher temperatures, somehow, at “350 °C, the coercivity shot up to about 3.5 kilo oersted.” (Oersted strikes again - kilo Oersted measures magnetic field strength, and specifically coercivity here, whereas MGOe measures energy density, or BHMax.)
Croat and his team had read this paper, and they started melt-spinning materials. Melt-spinning is a different process than sintering. In this case, melt the same ingredients into liquid metal and spray that liquid onto a spinning copper wheel cooled with water. The metal cools so fast (a million degrees per second!) that it forms tiny magnetic crystals in ribbon form. Grind up the ribbons, mix with polymer resin, and mold like plastic. You end up with magnets.
Melt-spinning worked, kind of. “I had developed very high coercivities right away,” Croat remembered. “The problem with these materials was that they were all unstable. I started to heat them up at about 450 °C, and they would decompose into their equilibrium structure, and the coercivity would go away.” So he sprinkled, too, adding different elements, melt-spinning, testing. One day, he too added boron. And boron, for Croat too, was the solution.
So on the same day that Sagawa presented Sumitomo’s Nd₂Fe₁₄B compound, Croat presented GM’s exact same Nd₂Fe₁₄B compound!
Conference attendees – I mean imagine that you’re at a magnet conference in Pittsburgh, Pennsylvania in 1983 and no one has come up with a better magnet than SmCo in almost two decades and then, on the same day, two people, from opposite sides of the world, with no knowledge of each other’s work, do, and it’s the exact same thing – were stunned.
Materials science is weird, though, and although Croat and Sagawa each landed on the same exact compound, the process mattered.
Sagawa’s sintered Nd₂Fe₁₄B magnets were much stronger at 36 MGOe, but brittle and harder to manufacture and cut into specific shapes. Croat’s melt-spun Nd₂Fe₁₄B magnets were weaker at 14 MGOe but easier to manufacture and injection-moldable into any shape needed, including the complex geometries in motors.
In the 162 years since Faraday’s motor, magnetic performance as measured by BHMax improved by 360x, and magnet makers offered customers options. Pay more for something stronger, or less for something weaker but flexible and strong enough for many uses.
Understandably, given the known importance of high-performance permanent magnets, a patent war between GM and Sumitomo ensued. Ultimately, Sumitomo’s patents held in Japan, and GM’s in the US, and the two companies realized that their magnets were different enough that they agreed to cross-license to each other. Sumitomo would own high-performance magnets; GM’s Magnequench would own bonded magnets.
Both would find customers who needed their specific type of NdFeB magnet for different products. They’d even work together on the product that brought NdFeB scale, and drove it down its own learning curve.
The NdFeB Alpha Product: The Hard-Disk Drive
A hard-disk drive works by magnetizing spots on a disk using two heads: a write head and a read head. When you save a file, the write head creates a magnetic field that aligns domains in the disk's magnetic coating. When you open it, the read head detects these magnetic orientations as 1s and 0s.
Hard-disk drives store information on computers, everything from operating systems to photos. Before hard drives, computers used punch cards, then magnetic tape, then floppy disks. Hard drives let computers store much more data, and for the first time, install software and keep it on your computer. That’s not how it always was: I remember loading Oregon Trail into the computer via floppy disk in my lower school computer lab.
Hard drives, like many technologies, started big and expensive and got progressively smaller and cheaper. Magnets played a major role in that progression: stronger magnets meant write heads could make smaller spots which meant computers could store more data in smaller areas.
IBM released the first commercial hard drive in 1956: the RAMAC (Random Access Method of Accounting and Control) 305 Magnetic Drum Data Processing Machine.
It used Alnico magnets, weighed a ton, stored 5MB (IBM touted the system as being able to store “the equivalent of 64,000 punch cards”), and cost $10,000/month to lease (about $118,000 in 2025 dollars).
Over the next two decades, researchers at IBM and elsewhere (but mostly at IBM) made a number of improvements to hard drives to increase their storage, speed, and reliability: flying heads (1961), thin-film memory (1962), semiconductor read-only memory arrays (1965), voice coil actuator (1965), ferrite heads (1966), magneto-optical storage (1967), semiconductor DRAM (1970), floppy disks (1971), track-following servo (1971). The history is fascinating, but honestly, you gotta draw the line somewhere, and if you want all of the details, we recommend going to the Computer History Museum’s website starting here and clicking “Next Milestone” until you get to the present.
By 1973, when IBM introduced the washing machine-sized Winchester, with its planned dual 30MB data modules (named after the .30-30 Winchester rifle cartridge because IBM project manager Kenneth Haughton said, “If it's a 30-30, then it must be a Winchester”), ferrite magnets were starting to reach their physical limits.
Engineers couldn't make the magnetic domains on the disk any smaller because ferrite heads couldn't generate strong enough fields to write to them or sensitive enough fields to read them.
That was an issue, because of course IBM’s customers wanted more storage in less space, but not a huge issue, because the companies and governments who didn’t get fired for buying IBM were used to paying a lot of money for computers and giving them plenty of space in the office.
It became a huge issue in 1981, when IBM introduced the IBM PC (Model 5150), its first personal computer.
The IBM PC created massive demand for storage, but without stronger magnets, there was no physical way for drives to shrink. Moore’s Law was doing its thing, making computers smarter, but storage was stuck.
Which is why IBM and others sent engineers to Pittsburgh in November 1983, hoping to find something yummy at the MMM, which, as you know, they did.
The implications of Sagawa and Croat’s neo magnets were immediately obvious to the engineers. It was just math. At the existing ferrite BHMax of 4, magnets could write domains on the order of 10 micrometers wide. At Sagawa’s BHMax of 36, with 9x the energy density, they could write domains roughly 3x smaller in each dimension, providing ~10x the storage density from magnets alone.
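The back-of-the-envelope version, in code – this assumes written-domain width scales with the inverse square root of BHMax, a simplification rather than real micromagnetics:

```python
ferrite_bhmax, neo_bhmax = 4, 36                     # MGOe
linear_shrink = (neo_bhmax / ferrite_bhmax) ** 0.5   # ~3x smaller per dimension
areal_gain = linear_shrink ** 2                      # ~9x more domains per area
print(linear_shrink, areal_gain)   # 3.0 9.0 -> "~10x from magnets alone"
```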
After the conference, the computer industry engineers flocked to both Michigan and Osaka, because they needed both sintered and bonded neo magnets.
As Sagawa explained, “this is because of the invention of neodymium-boron sintered magnet which is used in the actuator motor. And also, the bonded-magnet neodymium is used in the spindle motor to rotate the hard disk.”
Sagawa’s magnets powered the voice coil motor that moves the arm with speed and precision. The arm needs maximum field strength for precise movement and raw power to move the arm thousands of times per second, and its simple curved shape could be cut from sintered blocks.
Croat’s magnets powered the spindle motor that spins the disk. The spindle can use weaker magnets for constant-speed rotation, but it needs complex rings with multiple poles that are injection-moldable into the precise geometries needed for smooth rotation.
And boom, as they say, goes the neodymium.
The first commercial neo magnets were produced in the mid-1980s, and used in both audio equipment and hard-disk drives. The earliest specific product that used neo magnets that we could find was actually the Electro-Voice N/DYM microphone in 1986. Neo magnets made the mics better in a bunch of ways: stronger electrical signal from the same diaphragm movement meant better signal-to-noise; more powerful and uniform magnetic field extended high and low frequency response; lighter weight magnet assembly meant lighter portable mics.
The killer use case, though, the alpha product, was the 3.5” hard-disk drive, and the story is such a classic that Clayton Christensen used it as a case study in The Innovator’s Dilemma.
In 1984, according to the Computer History Museum, Finis Conner, one of the co-founders of hard drive manufacturer Seagate Technology and its Vice Chairman, announced plans for a 3.5 inch hard-disk drive (HDD) to succeed the company’s popular 5.25” model. Three members of Seagate’s senior management shelved the plan, deciding to focus on existing 5.25” models because existing customers cared about storage capacity per drive and cost per MB, both of which favored the larger HDD.
Conner saw where the puck was going, though, and in 1985, he left Seagate to found Conner Peripherals and develop the 3.5” HDD himself.
Actually, a Scottish company called Rodime had already released a 3.5” model in 1984, using ferrite magnets, to target the emerging notebook and portable computer market, where capacity was less critical than form factor. The Rodime HDDs were worse than the 5.25” HDDs on the things desktop customers cared about, but they had advantages in the smaller emerging portables market: size, weight, and ruggedness. Long-time Not Boring readers will be familiar with this chart:
Timing is everything, and Conner’s timing could not have been better. While 3.5” HDDs originally underperformed their larger 5.25” competitors, a number of component improvements changed that quickly: thinner disks, better heads, higher areal density, and, crucially, neodymium magnets. In 1986, Conner introduced the 3.5” CP340 HDD.
The second version, the CP341, began shipping with the Compaq Portable III in 1987.
The text in Compaq’s ad is small, but it says, “The greatest limiting factor to total system performance is the speed of the disk drives. The fixed disk drives in the COMPAQ PORTABLE III are three times faster than most others, with an average access time of less than 30 milliseconds.” It had, in large part, neodymium magnets to thank for that speed.
That year, Compaq made up 90% of Conner’s revenue. By 1990, Conner did an elite $1.337 billion in revenue, making it the fastest-growing manufacturing startup in United States history and leading Fortune to name it America’s Fastest Growing Company.
Soon, everyone was using neo and going small. By 1988, the major OEMs, including Quantum, PrairieTek, and even Seagate, were integrating NdFeB magnets into their voice coil actuators, and by 1992, nearly 100% of new desktop and portable computer HDDs used neo. The 5.25” HDDs were relegated to high-capacity servers and data centers.
You know how the story goes next. As the demand for 3.5” (and even 2.5”) HDDs drove scaled neo magnet production, prices dropped, performance improved, and existing products got better while new ones were unlocked.
Neo Everything
As with lithium-ion batteries, the alpha product was far from the only product. New products began incorporating, or were enabled by, motors built around neo magnets.
Above, I show the products as a function of neo magnet strength (BHMax) and cost ($/kg), because with neo magnets, strength unlocks new use cases. It’s instructive to note that when China restricted rare earth exports in 2010 and 2011, innovation slowed.
There is another curve you can draw, though, that shows these products’ launches as a function of motors’ price-for-performance.
The true answer likely lies somewhere in between: certain products became possible because of increasing magnet strength and coercivity, and others became economical because of the overall system cost reflected in the motor.
You’ll notice that many of the products on this chart also showed up on our lithium-ion battery chart earlier – that’s not a mistake! Most of these products use many or all of the Electric Stack technologies in different combinations. EVs need big batteries and powerful electric motors (and compute, and power electronics and control systems). Robots are basically a bunch of servo actuators powered by a big battery.
And as with batteries, early applications of neo magnets were a US / Japan story. While the US focused on desktop and laptop computers, Japanese companies incorporated neo magnets into consumer electronics: Sony released its flagship audiophile MDR-R10 headphones using NdFeB drivers in 1990, Makita rolled out the 6012HD cordless drill in 1992, Sony put neo in its Handycam TR-series the same year, and Kyosho put its Le Mans 480 Gold high-performance 1/10-scale RC motor in its Optima Mid and Lazer ZX off-road buggies in 1995.
All of these fell into the “existing products, but better” category. Then in 1997, Toyota released its first Prius hybrid electric vehicle in Japan – the first mass-produced hybrid car with neo rotor magnets – and rolled it out globally in 2000. Three years later, in 2003, Enercon unveiled its E-40/E-70 direct drive wind turbines, which required hundreds of kilograms of neo magnets.
Cars and wind turbines needed more magnets than computers and consumer electronics did, and they needed those magnets to perform under more extreme conditions. Specifically, they needed higher coercivity: resistance to demagnetization at high temperatures. Customers knew this – hybrid vehicle research teams at Toyota and Honda were demanding higher coercivity magnets – and so researchers got back to work.
In the late 1980s and through the 1990s, they proved that adding heavy rare earth elements – dysprosium (Dy) or terbium (Tb) – to the Nd₂Fe₁₄B crystal lattice increases anisotropy, which raises the magnet’s intrinsic coercivity. The upshot is that magnets that demagnetized above 80–100 °C could now withstand temperatures above 150 °C, like those produced in traction motors and direct-drive wind generators.
Research creates demand. Demand drives scale, lowers prices, and creates new demand. The new demand creates new challenges which invite new research. And on and on.
“Dy-doping,” as adding dysprosium is called, actually reduces BHMax a bit and increases prices (Dysprosium really is pretty rare), but the increase in coercivity was worth it. Now bigger things, things that generate a lot of heat, could be powered by electric motors using NdFeB.
Cars: first the Prius, later Tesla and BYD. Wind turbines: along with traditional gearbox-based turbines, direct-drive turbines (the ones that use neo magnets) have grown from 1% of capacity in 2005 to 29% of capacity (~316 GW) in 2024.
Small things still use neo magnets to perform better: your iPhone uses them to pack louder, clearer audio, haptics, image stabilizing cameras, sensors, and wireless charging into a small form factor.
But entirely new things are possible with neo magnets that weren’t possible before, particularly things that move, where weight is the critical factor. Drones are the prime example.
The key metric for a drone is its thrust-to-weight (T/W) ratio: the total thrust available from all of its motors divided by the total weight of the drone, including battery and payload. T/W governs everything about how a drone flies: its payload capacity, agility, and efficiency. At a T/W of 1.0, a drone can just barely hover; it can’t do anything else. Most drones aim for a T/W over 2.0.
Unlike machines that don’t have to fight gravity, drones care enormously about both torque and weight. If you add a lot of heavy magnets, your thrust goes up, but so does your weight, bringing the ratio back down. Without neo magnets, motors would either be too heavy or too weak to get the T/W required for stable flight and quick maneuvers at a reasonable battery life.
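A toy calculation shows the sensitivity – every number here is a made-up but plausible assumption, not a spec of any real drone:

```python
# Hypothetical quadcopter; all figures are illustrative assumptions.
thrust_per_motor_g = 800          # grams-force of thrust per motor
motors = 4
airframe_battery_g = 1100         # frame, motors, battery
payload_g = 400                   # camera, sensors

t_w = (thrust_per_motor_g * motors) / (airframe_battery_g + payload_g)
print(round(t_w, 2))              # 2.13 -> agile flight

# Same thrust from motors that each weigh 200 g more (weaker magnets):
heavier_total = airframe_battery_g + payload_g + 200 * motors
print(round(thrust_per_motor_g * motors / heavier_total, 2))  # 1.39 -> barely flies
```

Same thrust, 800 extra grams of motor, and the drone goes from agile to barely airworthy. That’s the gram-for-gram game neo magnets win.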
As far back as the early 2000s, hobbyists were putting neo-powered RC plane motors in DIY drones. Then China’s DJI professionalized the process: its Phantom series brought consumer quadcopters to the mass market in 2013, using four outrunner4 brushless DC motors, which you can see in this image.
Drones are everywhere now. Most weddings you go to have drone photography. Zipline is making deliveries in Texas with drones (quadcopter with fixed wing for range). And thanks to the war in Ukraine, drones are widely recognized as a key piece of the military arsenal of the future. They’re only possible because of neo magnets.
The future will need a lot more neo magnets. Drones will proliferate. Flying cars need neo magnets. So will supersonic planes: as I wrote before I even really knew what NdFeB stood for, the unlock for Astro Mechanica’s turboelectric adaptive jet engine is that “electric motors can deliver enough Watts per kilogram to make their inclusion worth their weight.” Right there in the chart from the piece is the reason: those triangles are NdFeB.
And then there are the robots. Humanoids use actuators like humans use joints. Tesla’s Optimus robot uses a total of 40 electromechanical actuators, 28 structural actuators for the body (torso, legs, arms, neck) and 12 actuators for the hands (6 per hand). All of them use neo magnets; of Optimus’ 58-73 kg weight, a full 3.5 kg is neo magnets.
Unfortunately for America, if the future requires neo magnets, China owns the future.
Neo China
The story of the research, development, and commercialization of neo magnets, like the story of the research, development, and commercialization of li-ion batteries, is an American and Japanese story. And like the li-ion battery story, modern scale and production dominance of neo magnets belongs to China.
Today, China mines roughly 70% of global rare earths, processes 85-90% of them, and produces 90%+ of the world’s NdFeB magnets. China, relatedly, makes at least half of the world’s electric motors.
How did this happen?
Longsightedness on the part of China; shortsightedness on the part of the West.
Deng Xiaoping, on a 1992 tour of the Bayan Obo mine in Inner Mongolia, laid out a vision for his country’s future: “The Middle East has oil. China has rare earths.”
At the time, China mined and processed rare earths, and shipped them off to countries like the US and Japan to be turned into components, like neo magnets. The US and Japan held the intellectual property, both legally and practically. As part of the deal, the liberalizing China sent scientists and engineers to the US and Japan for training in the production of NdFeB magnets.
One of the companies to which China sent its engineers was Magnequench, which GM had spun out in 1986 after Croat’s discovery of bonded neo magnets. Magnequench made bonded neo magnets for use in GM cars, but also for other customers, including the US Department of Defense. The company, for example, supplied 85 percent of the neodymium magnets used in servo motors for precision guided munitions (PGMs), including cruise missiles and JDAM smart bombs.
But it was the Reagan era eighties, baby, and GM, once a mere auto manufacturer, was becoming a diversified conglomerate under CEO Roger Smith. In 1984, Smith oversaw General Motors' acquisition of Electronic Data Systems from Ross Perot for $2.55 billion, an effort to broaden out of its manufacturing base and into technology and services. He also acquired Hughes Aircraft and moved GMAC into the mortgage business. During the same period, GM invested $90 billion over 10 years in a quest to create "lights-out" factories that could run without people, attempting to reduce labor costs. Smith went out on top, just as GM was about to crumble.
You know this story. It’s the Boeing story. It’s the story of once-great American incumbents.
In 1991, GM sustained an $8.5 billion loss in its North American automotive operations. That December, GM announced it was closing 21 more plants across North America, eliminating 74,000 jobs in what became known as the "General Motors Christmas Massacre." By the mid-1990s, GM had cut both its hourly and salaried work forces to half the size they were in 1985, when the company had 500,000 hourly workers and 142,000 salaried employees.
GM was forced to go back to its roots: make and sell cars, get rid of everything else.
Everything else included Magnequench, 80% of which GM put up for sale for just $70 million in 1995. The buyer was a consortium, led by American Archibald Cox Jr. (son of the Watergate prosecutor), but really two PRC-controlled companies in a trenchcoat: China National Nonferrous Metals Import and Export Corporation (CNIEC) and San Huan New Material High-Tech Inc. In a stunning coincidence, the heads of both Chinese companies were sons-in-law of China's paramount leader Deng Xiaoping.
There have been reports that the Chinese government pressured GM into selling Magnequench to Chinese interests as a condition for approving GM's bid to open an automotive production line in Shanghai. Consistent with its goal to focus on selling more cars, GM wanted access to China's growing auto market.
Because the PRC was essentially buying an American supplier of components used in US military applications, the Committee on Foreign Investment in the United States (CFIUS) had to approve the deal. CFIUS reached a "mitigating agreement" with Magnequench's new owners: the Chinese companies could not remove Magnequench's production equipment or jobs from the U.S. for at least five years.
Shockingly, during those five years, Magnequench’s new owners duplicated the company’s Anderson, Indiana production line in Tianjin, China, and practically on the day the five-year agreement ended, Magnequench closed up shop in the US.
As of 2003, China controlled everything from mining to magnet production, and in the ensuing years, they would come to dominate motors as well, through companies including DJI, BYD, and T-Motors, among others. From practically nothing in the late 1980s, China owned neo magnets by the early 2000s. And they took advantage of it.
In 2002, Chinese miners, supported by the government, undercut global rare earth miners on price (and environmental considerations), forcing America’s sole rare earths mine, Mountain Pass, to shut down.
In 2010, after half a decade of restricting rare earth oxide exports in order to control more of the value chain at home, a Chinese fishing trawler collided with Japanese Coast Guard vessels near the disputed Senkaku/Diaoyu Islands. The Japanese detained the captain. China responded by halting rare earth exports to Japan for about two months, crunching Japan’s magnet makers (Hitachi Metals, Shin-Etsu) and electronics firms (Panasonic, Toyota, etc.). In January 2011, China announced another 35% cut in export quotas for the first half of the year compared to 2010 and set up stricter export licensing, which made it harder for foreign companies to source rare earths unless they set up operations in China.
As seen in the chart above, that sent prices soaring. Neodymium oxide rose from roughly $19/kg in early 2010 to over $300/kg by mid-2011. Dysprosium oxide went from ~$170/kg to over $2,000/kg in the same period.
It was a mini-wake up call. Molycorp reopened Mountain Pass in 2012 before shutting it down again. But the call wasn’t loud enough and the damage had been done.
Which is why China dominates rare earths and their manufactured products more strongly today than ever before, giving China a big bargaining stick at a time when tensions between the US and China are high, and an economic trump card at a time when the demand for neo magnets is set to explode.
Rebuilding the Electric Stack in America will require reversing this trend, which we will discuss later in this piece and in future ones. For now, let’s continue down the stack.
Batteries send currents through wires, creating electromagnets that spin permanent neo magnets, turning electricity into motion.
But creating that smooth, fast spin requires electromagnets to flip polarity hundreds or thousands of times per second, with extreme precision. That’s where the rest of the Electric Stack comes in: power electronics to switch the current, and compute to orchestrate it all.
The Century of the Semiconductor
If you already know the full history from relays to semiconductors, you can skip this.
Semiconductors play two broad roles in the modern world.
One we are extremely familiar with: compute gives humans the ability to control information with electricity. Think CPUs and GPUs. Run electrons through increasingly tiny gates representing 0s and 1s, etched with light right into a silicon substrate, at increasingly fast speeds, quintillions of times per second, and out pop answers, images, videos, and even proto-intelligence.
The second is less famous but equally important: power electronics give humans the ability to control motion with electricity. Think IGBTs and inverters. Switch massive currents on and off through semiconductor gates, converting battery DC to precise AC waveforms, thousands of times per second, to make motors spin at precise speeds keeping drones hovering, EVs accelerating, and robots, one day, grabbing you a drink.
In both cases, semiconductors let electricity control itself.
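Here’s the second, less famous role in miniature – a toy Python sketch (not any real motor controller’s firmware) of sine-triangle PWM, the standard trick an inverter uses to fake an AC waveform out of a DC battery bus. All parameter values are illustrative:

```python
import math

def pwm_output(t, f_sine=50, f_carrier=5_000, v_dc=400):
    """Compare the sine you want to a fast triangle carrier; switch the
    DC bus to +V or -V accordingly."""
    ref = math.sin(2 * math.pi * f_sine * t)   # desired AC waveform, in [-1, 1]
    phase = (t * f_carrier) % 1.0              # position within the carrier cycle
    tri = 4 * abs(phase - 0.5) - 1             # triangle carrier, in [-1, 1]
    return v_dc if ref > tri else -v_dc        # the switching decision

samples = [pwm_output(i / 100_000) for i in range(200)]  # 2 ms of output
```

Averaged over each carrier period, the output approximates a 50 Hz sine; switch faster, and the approximation gets smoother. Everything that follows is, in a sense, the story of building a switch fast and reliable enough to do this.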
Imagine a simple circuit. A battery connected to a wire connected to a lightbulb connected back to the battery. If the circuit is complete, current flows from the battery to the bulb, and your light turns on. If you cut the wire, the light turns off.
Cutting and reconnecting a wire every time you want to turn the light off and on is impractical, so humans invented the switch. When it’s up, the circuit is broken, and the light is off. When it’s down, the circuit is complete, and the light turns on.
You can do one more thing with our simple circuit. Flip the direction of the battery, and the current flows in the opposite direction. For a lightbulb, that doesn’t matter. As long as the battery is charged and the switch is closed, the light will turn on.
For electromagnets, however, like the ones in our motor, the direction of the current does matter. Send the current through the copper coil in one direction, and its magnetic field points north; reverse it, and the field flips south.
Until 1835, if you wanted to control electricity, you did it like we explained in our toy models: by hand. You had to physically connect and disconnect wires.
For early telegraphs, people ran wires for miles with manual connections at each end. Or you had to manually throw a knife switch, like they did in old Frankenstein movies when they wanted to show you that they were about to run some serious current.
People tried all sorts of mechanical switches - clockwork mechanisms, float switches, governor switches - but until 1835, there was no way to control electricity with electricity. A human had to physically throw every switch, and the bigger the current, the beefier the switch. Amplification wasn’t possible, nor was remote control, nor automation. There had to be a better way!
Re-enter Joseph Henry. The American Faraday. Honestly, I don’t know why there aren’t statues of this guy.5 Remember him from earlier? In 1831, at the Albany Academy, Henry developed the first practical electromagnetic motor; the next year, he moved to Princeton.

In 1835, as one did back then, Henry wanted to ring a bell using electricity from a battery, but his wire was too long. Over the distance, electrical resistance ate up the current until there wasn't even enough remaining to move the bell's clapper. BUT what if instead of trying to send enough current through the wire to ring the bell directly, he thought, I send a smaller current to activate an electromagnet (which he pretty much invented) which would pull a mechanical switch closed, connecting a local battery to the bell. Ringggg.
It worked. This being Joseph Henry, of course it did.
The weak signal controlled the strong signal. The mechanical relay was born, and in the coming decades, it would be applied to the long-distance telegraph (by Samuel Morse), railroad signaling systems (1860s), telephone exchanges (1870s), elevator controls (1880s), and the first electric motor starters (1890s).
But the mechanical relay had a problem: it was still physically flipping a switch! Every connect and disconnect arcs across the contacts, slowly damaging them, which limits how many times you can switch. And any mechanical system eventually wears out.
That was fine for home lighting, which was switched a few times a day, but what if you needed to switch thousands of times a second?
For nearly half a century, the best scientists could come up with were vacuum tubes, essentially modified lightbulbs that enabled early 20th Century electronics: radio, TV, telephones, the first computers, even the code breakers that won World War II.
It’s an interesting story, but we have places to go, and this isn’t a story about vacuum tubes. The large devices they necessitated consumed tons of power, their bulbs burnt out (and got tripped up by literal bugs, which is where the programming term comes from), and by today’s performance standards, they were super slow at actually “flipping” the switch. If you want that story, you can read it in the Vacuum Tube Appendix.
With no better solution than vacuum tubes, most AC motors were locked to the frequency of the power supply. On the 60 Hz US grid, a motor’s speed was set by the supply frequency and its winding’s pole count (for example, a 4-pole motor always ran near 1800 RPM), and changing it required costly, bulky mechanical or electromechanical systems. As a result, most industrial machines, tools, and home appliances had only on–off control, not smooth variable speed. To get a sense for what that was like, go into your kitchen. Your refrigerator’s compressor still runs at one fixed speed, cycling on and off, which is why you only hear the annoying hum some of the time.
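To make that lock-in concrete: an AC motor’s synchronous speed is just 120 × supply frequency ÷ pole count. A minimal sketch (this is the standard textbook formula, nothing specific to any one source):

```python
def synchronous_speed_rpm(supply_hz: float, poles: int) -> float:
    """Synchronous speed of an AC motor: 120 * frequency / pole count."""
    return 120 * supply_hz / poles

# On a fixed 60 Hz grid, speed is locked by pole count alone:
for poles in (2, 4, 6, 8):
    print(f"{poles}-pole motor: {synchronous_speed_rpm(60, poles):.0f} RPM")
# 2-pole: 3600 RPM, 4-pole: 1800 RPM, 6-pole: 1200 RPM, 8-pole: 900 RPM
```

The only knobs are the supply frequency and the winding. Change neither, and the motor simply cannot run at a different speed.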
But a better solution did come about, because vacuum tubes were used as switches and amplifiers for telephone networks, and no one had a more vast telephone network, or more of an incentive to find better switches, than AT&T. And no one had a better collection of talent to find them than AT&T had at Bell Labs.
In the spring of 1945, right after the War that vacuum tubes helped win, Bell Labs research director Mervin Kelly directed William Shockley, John Bardeen, and Walter Brattain to create a “semiconductor amplifier which worked on the principle of the field effect, a principle which theorizes that a strong electrical field could cause electricity to flow through a nearby semiconductor.”
Bell Labs knew that semiconductors (materials whose conductivity could be tuned) could in theory amplify signals without heat, if they could control the electron flow. In early attempts, they could not.
Then, at a remote Bell Labs facility in Murray Hill, NJ, while Shockley was elsewhere, Brattain and Bardeen set up a sliver of ultra-pure germanium with two closely spaced gold contacts pressed against its surface, wired to a circuit. When they fed a small input signal into one gold contact, the output from the other contact was amplified, not by vacuum electron flow, but by movement of charge carriers in solid germanium. This was the point-contact transistor: the first working solid-state amplifier in history.
Shockley was a bit of an asshole, though, which is why the “Traitorous Eight” would eventually leave Shockley Semiconductor to found Fairchild Semiconductor, and he was not happy that his teammates made the discovery without him.
So he hid himself in a hotel room in Chicago where he designed the junction, or sandwich, transistor, which “allowed an electric current to enter the transistor base, which was missing electrons, and amplify the signal being sent from the collector layer to the emitter layer.” Shockley’s was easier to manufacture, and more robust.
On July 1, 1948, Bell Labs announced the discovery of the transistor, “An amazingly simple device, capable of performing efficiently nearly all of the functions of an ordinary vacuum tube.” All three men shared the 1956 Nobel Prize in Physics for the invention.
Transistors were great for what AT&T needed them to be great at – signals – and they are the foundation of the modern world. But when transistors were tasked with handling real industrial power, they melted.
Over the next three and a half decades, researchers and engineers at corporate laboratories in the United States raced to explore the trade-off space in the pursuit of practically useful power electronics.
In 1957, two General Electric engineers, Gordon Hall and Frank W. Gutzwiller, drawing on Shockley’s 1950s work on four-layer diodes, invented the Silicon Controlled Rectifier (SCR), also known as a thyristor. The thyristor could switch and control high voltage (electrical pressure pushing charges forward) and high current (flow rate of electric charge) electronically, and was put to use in DC motor drives for industrial uses like steel mills and paper machines, and even elevators.
But SCRs had one big challenge: they couldn’t turn off on command. They had to wait for the current to naturally fall to zero each AC half-cycle. The search continued.
Throughout the 1950s, researchers and engineers worked to improve Bell Labs’ transistor to handle larger currents. The first transistors were tiny devices that could handle 10-50 milliamps before overheating and failing. In 1956, GE and RCA pioneered the alloy junction transistor, whose larger junction area and better thermal contact could handle 1-2 amps. In 1959, the year after one of its engineers, Jack Kilby, invented the integrated circuit, Texas Instruments introduced the Mesa Transistor, which had better frequency response and could be made more consistently, allowing engineers to push current ratings higher through better quality control. Later that year, and through 1960, two big breakthroughs occurred.
First, Fairchild (founded by the engineers who ditched Shockley) and TI swapped germanium, which failed above 75°C, for silicon, which could operate at 150-200°C. Silicon transistors could handle much more power without failing. Next, Jean Hoerni at Fairchild invented the planar process, which hid the junctions beneath a protective silicon dioxide layer, dramatically improving reliability and yield, and making larger power devices practical.
Using these innovations, and some power-specific ones including larger die sizes, interdigitated structures, better packaging, and triple diffusion, companies began rolling out commercial Bipolar Junction Transistors (BJT) in the early 1960s. In 1962, RCA released the 2N3055 silicon transistor, which could handle 15 amps and 60 volts.
Power BJTs took off. They replaced tubes in audio amplifiers, controlled electronic ignition systems in cars, powered variable speed drives in motors, and played a role in industrial automation. But they, too, had limitations: BJTs had slow switching speeds and needed a constant supply of current to keep conducting.
Meanwhile, back at Bell Labs, an Egyptian-American named Mohamed M. Atalla was taking a new approach to an old problem. See, back in 1945, Shockley’s original idea wasn’t the bipolar transistor, but a field-effect device in which an electric field would control current flow. What made Shockley think that was possible? That’s how vacuum tubes work!
For over a decade, every attempt to make a field-effect device failed miserably, because the semiconductor surface was electrically "dirty," full of traps, defects, and contamination that made consistent behavior impossible.
Most scientists viewed this as a device problem; Atalla viewed it as a chemistry problem. Atalla was right. In 1958, he came up with the “thermal oxidation process,” which others would name the “Atalla process,” and demonstrated that his oxidized silicon surfaces had 100-1000 times fewer surface states than anything achieved before. Early the next year, he teamed up with a new Bell Labs recruit, the Korean-American Dawon Kahng. Together, they built test structures using the thermal oxide as an insulator. Eventually, they tested one with:
A silicon substrate (the body)
A thermally grown SiO₂ layer (the insulator)
Aluminum evaporated on top of the oxide (the gate)
Two diffused regions on either side of the gate (source and drain)
On November 17, 1959 they achieved clear field-effect transistor action. By applying voltage to the aluminum gate, they could control current flow between source and drain. The thermal oxide was thin enough to allow field penetration but robust enough to prevent current leakage and maintain stability. Their device was called the Metal-Oxide-Semiconductor Field-Effect Transistor, or MOSFET.
And man, history doesn’t repeat itself, but it does rhyme.
The duo presented their work in June 1960 at the Solid-State Device Research Conference in… Pittsburgh. The researchers at the conference were excited, but this time, Bell Labs wasn’t… BJTs were finally working, and the first MOSFETs were actually inferior to BJTs in many ways: slower, noisier, and with unstable threshold voltages. The MOSFET’s advantages - zero DC gate current, simplicity of manufacture at scale - weren’t obvious at the outset. And Bell Labs was already deeply invested in BJT technology. It was classic Innovator’s Dilemma.
And to be fair to Bell Labs, early MOSFETs did have issues, and Atalla spent three years addressing them. Then, slowly, companies started using MOSFETs. 1963: Fairchild's Frank Wanlass invented CMOS (Complementary MOS), showing how P and N-channel MOSFETs could create logic circuits with virtually zero static power consumption. 1964: General Microelectronics used MOSFETs to create the first MOS-based calculator chip. 1967: IBM’s Robert Dennard invented the DRAM cell using a single MOSFET and capacitor. 1968: Federico Faggin (coincidentally, the author of one of my favorite books on consciousness) at Fairchild developed the silicon gate process, replacing aluminum gates with polysilicon, enabling self-aligned gates and much better performance.
When Faggin left Fairchild for Intel in 1970, he was hired to lead the MCS-4 project, which would include Intel’s new 4004 chip. Faggin brought his silicon gate technology with him, and used silicon gate MOSFETs to create the 4004. The chip, into which he etched his initials “F.F.”, contained 2,300 MOSFETs and kickstarted the microprocessor revolution.
While Bardeen, Brattain, and Shockley won the Nobel Prize in 1956, Atalla and Kahng received no such honor. Which seems absurd, because MOSFETs constitute 99.9% of all transistors manufactured today. The silicon/silicon-dioxide interface they perfected remains the foundation of the entire semiconductor industry, and every microprocessor, memory chip, and digital device depends on them.
The MOSFET structure Atalla and Kahng created has shrunk from 25 micrometers to under 3 nanometers. 3 nanometers sound familiar? That’s right, the naming convention for advanced semiconductors - i.e. a “3nm chip” - refers to the physical gate length of the MOSFET (although marketing has taken over, and the smallest feature in modern chips is something like ~10nm).
By some estimates, humanity has manufactured over 13 sextillion (13,000,000,000,000,000,000,000) MOSFETs, making it not just the most manufactured device in history, but quite possibly the most manufactured human artifact of any kind!
Let’s give it up for Atalla and Kahng and their MOSFET one more time.
This is where our story branches: Compute goes one way (toward ever-smaller MOSFETs), Power Electronics goes another (toward handling massive currents). Let’s follow the Power Electronics branch before returning to Compute.
Power Electronics
If batteries are the heart of the Electric Stack, and magnets are its muscles, power electronics are its peripheral nervous system. They shape raw electricity into exactly the right form, in exactly the right direction, at exactly the right time, to make motors spin, batteries charge, LEDs glow, and grids hum.
Recall our simple circuit. Now imagine a switch that not only turns current on and off, but also reverses it, much faster than you can by hand, thousands of times per second. Imagine that switch could also decide exactly how much current to send, and in what waveform. That’s what modern power electronics do. Power electronics let electricity control itself.
These systems, too, have gotten much cheaper and more powerful. Since 1990, the cost of power electronics like motor inverters and power semiconductor systems has fallen from $400/kW to roughly $2/kW today, for an annual improvement of 14.5%.
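That rate is easy to sanity-check as a compound annual decline (a quick sketch; I’m assuming “today” means roughly 2024, and the exact figure shifts a bit with the endpoint year):

```python
start_cost, end_cost = 400.0, 2.0  # $/kW in 1990 vs. ~today
years = 2024 - 1990                # assuming "today" ~= 2024

annual_decline = 1 - (end_cost / start_cost) ** (1 / years)
print(f"Annual improvement: {annual_decline:.1%}")  # ~14.4% per year
```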
At this point in our semiconductor story, those who would like to switch current in larger devices, like industrial motors, are left with two sub-optimal options:
BJTs: can handle tens of amps, but have slow switching speeds and require continuous drive current to stay on.
MOSFETs: switch quickly with simple voltage-based gate drive (no continuous current needed), but have resistance that increases dramatically with voltage rating, making them inefficient and expensive for high-voltage, high-current applications.
Throughout the late 1960s and 1970s, American and Japanese engineers at GE, RCA, Hitachi, Siliconix, and International Rectifier tried to make MOSFETs work for power electronics (the power MOSFET), but they kept running into a fundamental problem: to block high voltage, you need a thick, lightly-doped drift region.6
By the late 1970s, those teams had converged on a similar idea: what if we could get the BJT's excellent current-carrying ability (where the device floods itself with charge carriers to reduce resistance) but control it with just a voltage like a MOSFET (no constant current needed)?
Early attempts failed spectacularly, all on the same issue. In engineering terms: the devices latch up and lose gate control. In English: these devices get “stuck on.” Imagine a light switch that won't turn off no matter how many times you flip it. Once they start conducting, they refuse to stop. The gate becomes useless, and the device keeps conducting until you cut the main power.
The field, appropriately, was itself stuck, until another immigrant, the Indian-born B. Jayant Baliga, cracked the code.
The Insulated Gate Bipolar Transistor (IGBT)
The researchers and engineers in our tale are unsung heroes. We don’t give them nicknames, like we give athletes. But if we did, there’s no doubt we’d call B. Jayant Baliga The GATEway of India.
Born in Jalahalli, a small village near Bangalore, in 1948, the GATEway received his B.Tech in Electrical Engineering from the prestigious IIT Madras, in the same town where the transcendent mathematician Ramanujan would live and, eventually, die. Baliga moved to the US at age twenty, where he received his MS and PhD from Rensselaer Polytechnic Institute.
Upon graduation, Baliga wanted to work where the action was: in computers. Specifically, he wanted a job at IBM or Bell Labs, but with his student visa, he couldn’t even get an interview. A friend told him about a job working on power electronics at GE in nearby Niskayuna, New York, but Baliga didn’t come all the way to America to work on power devices.
“Why would I want a job working on power devices?” he recalls asking. “Power devices have been worked on since the 1950s. Everything that’s interesting has already been done.”
But he didn’t want to go all the way back to India, either, so he applied and got the job. And as it turns out, he was wrong. He was about to do something interesting.
Post-Oil Crisis, GE was looking for ways to save energy in the many electric motors that drove its products. Most of GE’s motors were induction motors, which had no good way to slow down; they were either on or off. So GE engineers wanted to try something new: just drive the motor at the desired speed.
But that meant “adding electronics to send power to a motor’s windings at variable frequencies, which wasn’t easy to do back then,” according to IEEE, because as we now know, voltages were too high for MOSFETs to work efficiently, and BJTs “required bulky and expensive control and protection circuits.”
According to Baliga, a GE manager in charge of drives for air conditioners told the company’s researchers, “You guys need to give me some innovation that will make my business succeed,” which sounds exactly like something a GE manager in the era of Revenge of the Nerds would tell the company’s nerdy researchers. So he did.
“Baliga came up with the design for a thyristor-like device that combined the best attributes of MOSFETs and bipolar transistors,” wrote IEEE, “which at that point were considered separate semiconductor universes.”
Baliga realized that the key is to decouple the input from the output. Use a MOSFET to control the gate, but a BJT structure to carry the current. GE CEO Jack Welch came up to the research lab to check it out, and gave it the green light. By the next year, GE was producing Insulated-Gate Bipolar Transistor (IGBT) wafers, and modern power electronics were born.
Still, when GE commercialized the IGBT in 1982, the early versions’ switching speed was too slow. Baliga came up with a process - electron irradiation - and all of a sudden, “I could control the speed all the way from low frequencies to as high as I wanted,” remembered Baliga.
The IGBT allowed fast, electronic switching of large amounts of current for the very first time. By 1985, “everybody went bananas with this—now they had fast devices. They could start running with it. It just took off.”
Even though we don’t think about IGBTs very often, it is hard to overstate their impact.
Without IGBTs, you can't efficiently convert battery DC to motor AC. Without IGBTs, you can't do regenerative braking. Without IGBTs, motors are stuck at fixed speeds. No IGBT, no Electric Stack. No Electric Stack, no modern world.
IGBTs have saved more energy than any invention in history. When Baliga won the Global Energy Prize in 2015, the committee wrote, “In the last 25 years, IGBTs saved over 73,000 TWh of energy, over 1.48 trillion gas gallons, and decreased CO2 emissions by 49.5 billion metric tons. The customer’s savings exceeded $23.7 trillion!”
Today, IGBTs are in everything electric: every EV, wind turbine, solar inverter, and every variable frequency drive (VFD) in every factory. Those VFDs were, in fact, power electronics’ alpha product.
The Power Electronics Alpha Product: Variable Frequency Drives
As much as GE wanted to save energy, Japan needed to save energy.
While GE invented the IGBT, its early devices were fragile and expensive, and in 1988, Jack Welch actually decided to sell off GE’s entire semiconductor business.
“Neutron Jack,” a corporate hero during his reign, has come to be the poster child for his era’s love of financialization over real growth. Legacy aside, the story is familiar: developed here, manufactured elsewhere.
In the mid-’80s, two Japanese conglomerates, Toshiba and Mitsubishi, refined the IGBT manufacturing process, while two others, Yaskawa and Fuji Electric, rapidly productized them. The Japanese were ready. Just read the foreword from this 1985 Fuji report on the Prospect of Transistor Inverter, then read the whole thing if you’re really interested.
The Japanese were ready for the learning curve: “History of transistor inverters is repetitions of capacity increasing and getting higher performance and making them lower in cost.”
They were ready to avoid the Innovator’s Dilemma: “It is not that the inverters of present phase have attained 100% satisfaction of all, but we can say that in the context of cost and performance, they have reached now to the time for a wide diffusion acquiring sufficient esteem for their qualities.”
The Japanese knew what was coming, what had to be done, and what it all meant.
Once again, the Japanese applied processes from previous products to the new thing.
They leveraged their experience in power MOSFET manufacturing, the process control for which Japanese semiconductor fabs were already famous, and robust packaging technology from their work on power modules. Perhaps most importantly, they were vertically integrated, so they could iterate their way to improvements quickly.
Take Mitsubishi. Mitsubishi Semiconductors made the IGBTs. Mitsubishi Electric made motor drives and industrial equipment. Mitsubishi Heavy Industries was a massive internal customer for products like factory automation tools and HVAC systems. When Electric’s drive engineers found that early IGBTs were failing due to overcurrent conditions during motor starts, they could walk down the hall and tell the Semiconductor team. Within weeks, not months, they'd have new test devices with different current ratings or improved short-circuit withstand times.
Fuji’s feedback loop was even tighter. Their Matsumoto factory had IGBT fabrication on one side and drive assembly on the other.
The upshot is that the Japanese were the first to commercialize IGBTs, and in the process of increasing capacity, achieve higher performance for lower cost.
In 1985 and 1986, Toshiba and Mitsubishi made IGBTs that could handle 600V and 1200V reliably. In 1988, Yaskawa, working directly with Mitsubishi’s Semiconductor division, released the Low Noise Inverter, likely the first mass-produced IGBT drive.
Wait, what’s a drive?
Man, I’m sorry to do this, but if you’re unclear about the relationship between IGBTs, inverters, and variable frequency drives (I know I was), I wrote an explanation in this appendix. Here is the short version:
The IGBT is the semiconductor switch itself.
The rectifier contains at least six diodes and turns your wall’s three-phase, 60 Hz AC into rough DC.
The DC Bus uses capacitors to clean up the rough DC into a steady voltage that can be turned back into AC.
The inverter contains at least six IGBTs to make a power conversion circuit that converts DC (say, from the DC bus or directly from a battery) to AC, which can be precisely controlled.
The variable frequency drive (VFD) is the complete system with inverters, rectifiers, a DC bus, control logic, a user interface, and protections.
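If you’re wondering how six switches conjure a controllable AC frequency out of steady DC, here’s a minimal sketch of the standard trick, sine-triangle PWM: compare a reference sine wave at the frequency you want against a much faster triangle wave, and switch each inverter leg based on which is higher. (Illustrative only, and it assumes ideal switches; real drives add dead-time, protection, and fancier modulation schemes.)

```python
import math

def pwm_gate_states(t: float, f_out: float, f_carrier: float, mod_index: float = 0.9):
    """Sine-triangle PWM: on/off commands for the three high-side switches
    of a three-phase inverter at time t. Low-side switches are the inverse."""
    # Triangle carrier oscillating in [-1, 1] at the switching frequency
    phase = (t * f_carrier) % 1.0
    carrier = 4 * abs(phase - 0.5) - 1
    gates = []
    for k in range(3):  # three phases, 120 degrees apart
        ref = mod_index * math.sin(2 * math.pi * f_out * t - 2 * math.pi * k / 3)
        gates.append(ref > carrier)  # switch on when the reference beats the carrier
    return gates

# Ask for a 30 Hz output using a 5 kHz carrier: the motor now runs at the
# frequency you command, not the frequency the wall supplies.
print(pwm_gate_states(t=0.001, f_out=30.0, f_carrier=5000.0))
```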
Here’s the “so what”: Before VFDs, running a motor at half speed meant literally wasting half your power as heat through resistors or mechanical brakes, like driving with one foot on the gas and one on the brake. VFDs let you just... use less gas. It’s what the GE engineers had wanted.
By controlling frequency, VFDs made industrial motors much more efficient, and a whole world of use cases opened up.
Most industrial processes don't need full power all the time. A ventilation fan doesn't need to run at 100% at 3 AM. A pump doesn't need full flow when demand is low. Before VFDs, these motors ran full speed 24/7, wasting enormous energy. VFDs let them run at exactly the speed needed.
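The reason the savings were so dramatic for fans and pumps specifically is the cube law: for centrifugal loads, power draw scales roughly with the cube of speed. A back-of-envelope sketch (idealized affinity laws, ignoring drive and motor losses):

```python
def fan_power_fraction(speed_fraction: float) -> float:
    """Idealized affinity law for centrifugal fans and pumps: power ~ speed^3."""
    return speed_fraction ** 3

for speed in (1.0, 0.8, 0.5):
    print(f"{speed:.0%} speed -> {fan_power_fraction(speed):.0%} of full power")
# 100% -> 100%, 80% -> 51%, 50% -> 12%
```

Slow a ventilation fan down 20% overnight and you’ve cut its power draw roughly in half.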
Companies like Yaskawa and Fuji Electric could go to customers and say, “If you install our VFD, you will save money on energy immediately. You probably don’t even have to retrofit, just wire the VFD between your wall power and the motor, program a few basic parameters, and start saving.”
Early VFD salespeople would actually guarantee energy savings. They’d offer to measure current energy usage, install the VFD for a 30-day trial, and if you didn’t save at least 30% on your energy bill, they’d remove it for free. They rarely removed them.
This 1990s case study from Emerson Electric provides a real example:
R.A. Miller was told to expect annual savings of $29,088 against an install cost, after rebate, of $27,726, for an 11-month payback period. In reality, they saved $40,000 a year, for a payback of just over eight months! Plus, “R.A. Miller realized real payback in improved product quality and a more precise flow pattern that resulted in much tighter quality control in the drying process. This enhancement in tighter control resulted in improved humidity control which in turn raised the quality of the produced hardwood.”
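The payback math in that case study checks out, by the way. Using Emerson’s reported numbers:

```python
install_cost_after_rebate = 27_726  # dollars
projected_savings = 29_088          # dollars per year
actual_savings = 40_000             # dollars per year

print(f"Projected payback: {12 * install_cost_after_rebate / projected_savings:.1f} months")
print(f"Actual payback:    {12 * install_cost_after_rebate / actual_savings:.1f} months")
# Projected: ~11.4 months. Actual: ~8.3 months, "just over eight months."
```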
R.A. Miller was impressive, but not a total outlier. In 2002, the DOE wrote a report assessing electric motor systems market opportunities, in which they discussed certain projects around the country that made efficiency improvements to their motorized systems. While not all of the savings came from VFDs, the DOE wrote that “Nearly two-thirds of all potential savings derive from system improvements, such as the substitution of adjustable speed drives for throttling valves or bypass loops in pumping systems or fixing leaks in compressed air systems.”
In short, VFDs were a no-brainer!
After Yaskawa, Fuji Electric released its FRENIC series in 1989, targeting Japanese textile mills, machine tools, and HVAC systems. In 1990, the West jumped back in. ABB’s ACS 500 series targeted European heavy industry, particularly paper mills in Scandinavia where energy costs were critical. In 1991, Siemens, which had resisted because they’d made a big bet on thyristors with their SIMOVERT series, rushed the SIMOVERT P VFD to market. In 1992, Allen-Bradley partnered with Powerex (a Mitsubishi/GE JV) to launch the 1336 Plus.
Over this period, as Fuji Electric predicted, prices dropped as volumes soared. VFDs cost roughly $400/kW in 1985 (nominal). By 1995, their cost had fallen to $130/kW. These and the following power electronics cost numbers are rough estimates, because power electronics come in different shapes and sizes (full VFDs versus bare inverters, 600V versus 1200V classes, and so on), but they are directionally correct.
Sales data from this period is hard to come by, but this 2011 IEA paper says that “Sales of inverters in Japan began in the 1980s and by the mid‐1990s accounted for about 75% of the sales volume of electric motors, which implies that a high proportion of applications were using inverters. Prices of inverters dropped by 60% from 1990 to 2002.” By 2008, Yaskawa, the world’s largest drive maker, shipped its ten-millionth VFD, and claimed that it accounted for 14% of the world market.
By the mid-1990s, VFDs came home and swept America. In 1993, HVAC manufacturer Carrier announced that all their large chillers would offer optional VFDs using IGBTs. Trane and York followed within months.
VFDs offered a competitive advantage, as evidenced by the competitive response: a pioneer would introduce them, and its competitors would follow. That same year, Otis introduced the Elevonic 411M-C system with Mitsubishi IGBTs, which allowed for regenerative braking: feeding energy back to the building when the elevator descended. KONE responded with their EcoDisc using ABB drives in 1996 (see Appendix).
Elevators are one of the earliest products “unlocked” by better power electronics. Of course, many of these products show up on multiple charts; better batteries, magnets, and power electronics are all required to make them work. An EV motor wouldn’t spin so smoothly, and the EV wouldn’t really work at all, without the ability to control the current’s frequency!
By the late 90s, all sorts of things got IGBT VFDs: the DBAG Class 128 train got them for traction conversion, the Toyota Prius used newly developed automotive-grade IGBTs for motor and generator inverters, and RCA started putting them in cathode ray tube TVs.
The hits kept on coming in the new millennium. In 2001, LG introduced the first refrigerator with Inverter Linear Compressor to save energy, lower noise, and control temperature. The Korean company also released direct-drive inverter washing machines around that time. Panasonic put inverters in its microwave ovens. Electrolux used IGBTs in its (shitty, basic, primitive) induction stoves. In 2006, Siemens used IGBT in its MICROMASTER industrial HVAC systems.
When Google opened its first large-scale data center in The Dalles, Oregon in 2006, VFD-controlled cooling systems were essential to achieving their unprecedented PUE (Power Usage Effectiveness) of 1.25, compared to the industry average of 2.0 at the time. Without VFDs making cooling 40-50% more efficient, AI data centers would be even hotter and more expensive.
In 2008, Enphase introduced the solar industry’s first micro-inverter system, putting a micro-inverter on each panel, as opposed to the whole system, to generate “energy harvest increases of 5-25 percent and balance-of-system (BOS) savings of 13-15 percent compared to systems designed with traditional inverters,” according to the company. While traditional string inverters converted the DC from an entire array in one central box, Enphase’s micro-inverters took DC right from each panel and converted it on the spot. Beautifully, the Electric Stack helped harvest more electrons to ultimately feed the Electric Stack.
Still, solar inverters were stuck at 96% efficiency, because every time an IGBT switches, it loses energy during the transition from on to off. These "switching losses" turn into heat.
Want to switch faster for smoother motor control or smaller components? More heat. Want to handle higher voltages for electric vehicles or solar inverters? More heat. Silicon IGBTs were much better in terms of efficiency than their predecessors, but not quite good enough for the use cases knocking on their door.
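The math here is brutally simple: if each on/off transition wastes a roughly fixed amount of energy, switching heat grows linearly with switching frequency, P ≈ (E_on + E_off) × f_switch. A sketch with made-up per-event energies (real values come from device datasheets and vary with voltage, current, and temperature):

```python
def switching_loss_watts(e_on_mj: float, e_off_mj: float, f_switch_hz: float) -> float:
    """Heat from switching alone: energy lost per on/off cycle times frequency."""
    return (e_on_mj + e_off_mj) * 1e-3 * f_switch_hz

# Illustrative per-event energies in millijoules, not from any datasheet
for f_switch in (5_000, 20_000, 100_000):
    watts = switching_loss_watts(2.0, 1.5, f_switch)
    print(f"{f_switch / 1000:.0f} kHz: {watts:.0f} W of switching heat")
# 5 kHz: ~18 W. 20 kHz: ~70 W. 100 kHz: ~350 W. Switch faster, cook harder.
```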
That’s exactly the problem Tesla ran into with the Roadster that same year. To get the performance they wanted, they had to use a massive cooling system that added weight, complexity, and cost. The inverter alone needed liquid cooling with radiators, pumps, and coolant, because silicon IGBTs turned about 3% of the battery's energy into waste heat.
The problem is silicon’s bandgap. Bandgap is the energy difference between electrons that are stuck in place (valence band) and electrons that are free to move and conduct electricity (conduction band). Silicon has a relatively small bandgap of 1.1 eV: it’s easier to get electrons moving, but breaks down at lower voltages and temperatures than materials with higher bandgaps. Higher bandgap materials need more energy to get going, but once they do, they can handle more power, switch faster, and need less cooling.
Silicon IGBTs were a miracle, but it was time to stack a wider bandgap miracle on top.
SiC, or the Return of the GATEway
This is another way learning curves happen.
New technologies enable new applications, which drive demand, which drives performance and cost improvements. Then even newer applications become almost possible. Engineers try to make them work with the new-now-kind-of-old technology, and it kind of works, but not quite, and that creates the demand for the next iteration of the technology.
So remember The GATEway of India, B. Jayant Baliga? When Jack Welch sold GE’s semiconductor division, Baliga didn’t want to move into a boring management role elsewhere in the company, so he made his way down to North Carolina State University, where he knew department head Nino Masnari, and where, to use the clean room at the Microelectronics Center of North Carolina (MCNC) without using state funds, he set up his own center, NC State’s Power Semiconductor Research Center (PSRC).
Well, what Baliga doesn’t want you to forget, as he told an interviewer for an oral history, is that “People forget that I’ve done many other things. In fact, we forgot something that I’ve done, which is probably equally important as the IGBT, which is to create wide-band semiconductor-based power devices as technology.”
I fucking love this guy. He did a lot, and he knows it.
Like for example, while he was developing the silicon IGBT, he got to thinking, what other materials might work even better? “I had an intellectual curiosity: what can I do if I change the material?” he asked himself, “So I derived this equation, and that equation is now named after me. It’s called Baliga’s Figure of Merit.”
Baliga’s Figure of Merit did something very useful. It lets you “assess how well another semiconductor material will perform compared to silicon using that equation.”
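The equation itself is compact: BFOM = ε_r · μ · E_c³, where ε_r is the material’s dielectric constant, μ its electron mobility, and E_c its critical (breakdown) field. The critical field enters cubed, so materials that tolerate big fields win enormously. Here’s a sketch with rough textbook-ish property values (my approximations, not Baliga’s numbers; published ratios vary widely precisely because E_c estimates vary and get cubed):

```python
# Rough material properties: (relative permittivity, electron mobility in
# cm^2/V*s, critical field in MV/cm). Approximate values, for illustration only.
MATERIALS = {
    "Si":     (11.7, 1350, 0.3),
    "GaAs":   (12.9, 8500, 0.4),
    "4H-SiC": (9.7,   900, 2.5),
    "GaN":    (9.0,  1200, 3.3),
}

def bfom(eps_r: float, mobility: float, e_crit: float) -> float:
    """Baliga's Figure of Merit: eps_r * mu * E_c^3 (relative units)."""
    return eps_r * mobility * e_crit ** 3

silicon = bfom(*MATERIALS["Si"])
for name, props in MATERIALS.items():
    print(f"{name}: ~{bfom(*props) / silicon:.0f}x silicon")
# Si: 1x, GaAs: ~16x, 4H-SiC: ~320x, GaN: ~910x with these inputs
```

With these inputs, GaAs lands in the mid-teens and SiC in the hundreds; the exact multiples move around with the assumptions, but the shape of the conclusion doesn’t.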
From there, it was simple math. He predicted a 13x performance improvement from gallium arsenide (GaAs). GE was already using GaAs in lasers and LEDs, so they gave him a team, and they developed the first functional GaAs power devices. Then Baliga realized that by switching to silicon carbide (SiC), he could probably get even better performance. But GE didn’t do much with SiC. GE Labs had only tiny slivers, and they didn’t know much about them.
Coincidentally, providentially, would you guess who did know much about SiC? North Carolina State University. The same NC State Baliga went to basically so he could get clean room access! Come on!
The year before Baliga moved down to what I consider to be potentially the greatest state in America, NC State materials science students of Professor Robert Davis had spun out a company named Cree (later named Wolfspeed, after the school’s mascot, the same Wolfspeed that just filed for bankruptcy in June to restructure $6.5 billion in debt).
When Baliga talked to Davis, Davis told him his former students were starting to figure out how to grow SiC, and that Baliga might be able to purchase some from them. He did.
Before testing, Baliga expected that SiC “might be 100 times better than silicon.” He was, for once, wrong. “Then my students and I did these careful measurements, and we found that it’s actually 1,000 to 2,000 times better. Incredible.”
SiC had almost magical properties compared to silicon:
10x higher electric field strength – could handle 10x the voltage in the same thickness
3x higher thermal conductivity – heat flowed out 3x faster
3x higher bandgap – could operate at 300°C vs silicon's 150°C
Up to 100x faster switching – could switch at MHz instead of kHz
But SiC was a nightmare to manufacture. Growing high-quality SiC crystals was like trying to grow diamonds. It required temperatures over 2000°C and took weeks to produce a single wafer. In the 1990s, a 2-inch SiC wafer cost $2,000, compared to $20 for silicon.
Cree spent the 1990s perfecting the crystal growth process for LEDs, unknowingly laying the groundwork for a power electronics revolution. In 2002, they made their first SiC Schottky diode for power applications. It was expensive and small, but it worked.
For years, SiC remained a niche technology for military radars and satellite power supplies; applications where cost didn't matter but weight and performance did.
This - military purchases for capability over cost - is another way that learning curves kickstart.
Then, in 2015, Japan Rail’s N700A Shinkansen bullet train became the first commercial high-speed train in service with full-SiC MOSFET modules in its main traction converters.
Wait a second… did I just say MOSFET? I thought those were just for computers now?
You thought wrong.
Remember, the point of the IGBT was to combine the MOSFET's easy gate control (voltage-driven) with the BJT’s low conduction losses at high voltage. But SiC MOSFETs can handle such high electric fields that they achieve low conduction losses without needing the bipolar structure. Essentially, SiC MOSFETs are good enough that you don't need the IGBT's complexity.
See, SiC has a 3x wider bandgap than plain old Si at 3.3 eV. The wide bandgap enables thin drift regions, which enables low resistance, which makes MOSFETs practical at high voltages, which makes IGBTs unnecessary, which enables faster switching, which enables smaller components…
It's the Electric Stack story in miniature: one fundamental improvement (bandgap) cascades through the entire system. Without the wide bandgap, none of the other advantages matter. With it, they all matter a lot.
SiC's 3x wider bandgap enabled 10x higher electric field strength, which meant devices could be 10x thinner with 100x lower resistance. Suddenly, MOSFETs could do what only IGBTs could do before, but switching 100x faster.
Still, they were expensive, 3-5x more expensive than silicon IGBTs. Most large potential customers, specifically those at EV companies, were waiting for SiC to come down the curve.
You’d have to be crazy to put SiC MOSFETs in a consumer EV!
Tesla Model 3
One of the fun things that Elon Musk does is that he promises things that are currently untrue and then makes his team figure out how to make them true, or closer to true than they would have been if he’d never made the promise in the first place.
At the Model 3 launch event in 2016, Elon promised that Tesla's Model 3 would cost $35,000 with a range of at least 215 miles, which forced the team to get creative.
Like: most companies wouldn’t use a component that’s 3-5x more expensive than alternatives in order to reduce costs. But Tesla did some multivariate crunching and realized that while SiC MOSFETs were more expensive than silicon IGBTs, a system freshly designed around SiC MOSFETs’ advantages could actually be cheaper and more performant.
We’re going to go deep here and study how it all fits together, because the Model 3 is one of the best examples of what can happen when you design a vertically integrated product using cutting-edge components from each layer of the stack.
Recall that SiC has 100x lower resistance than silicon. Practically, that means SiC devices generate 70% less heat per kW, conduct heat 3x better, and tolerate higher temperatures than silicon IGBTs (300°C vs 150°C), which has all sorts of design implications.
To deal with the heat, for example, the Model S required IGBTs housed in sealed module packages. These modules then sat on thermal paste, which connected to a cold plate, through which liquid coolant flowed to the radiator. Each of these thermal interfaces added both mass and resistance. The modules also needed separate boards for gate drivers, isolated from the IGBTs for thermal protection lest they melt.
With SiC MOSFETs, Tesla could mount the bare SiC dies directly to the aluminum heat sink (via just a thermal pad), eliminating the entire module packaging layer that silicon IGBTs required. This direct connection meant heat flowed straight from the chips through the thermal pad into the aluminum casting, then dissipated through air cooling out to the car's radiator.
The SiC MOSFET inverter became a single, potted module the size of a laptop that could handle 300kW. It was more than 50% smaller, used simpler cooling, and was 4% more efficient, adding 20 miles of range basically for free. All of this meant that while the SiC components were more expensive, the total system cost was lower for better performance.
That’s a lot of words. Pictures do a better job highlighting the improvement:
That is an astounding difference, but it’s just part of the story. The Model 3 is one of the best representations of the integration of the various Electric Stack technologies compounding for decades and coming together in one expertly-integrated package.
For example, do you notice the shape of the Model 3 SiC MOSFET drive inverter? It’s designed that way so it can be mounted directly adjacent to or even integrated with the motor assembly. By making the inverter small and efficient enough to be air-cooled, Tesla could shape it to fit snugly against the motor, creating a more compact, integrated drive unit. This close integration shortens the high-current connections between the inverter and motor, reducing electrical losses and electromagnetic interference. The Model S inverter, being so large and liquid-cooled, had to be mounted separately with longer cable runs to the motor.
The motor itself was new for Tesla, too: the Model 3 featured the first Internal Permanent Magnet Synchronous Reluctance Motor (IPM-SynRM) used in an electric vehicle.
IPM-SynRM, which was developed by ABB in the early 2010s for industrial motors, generates torque from two distinct sources: permanent magnets and magnetic reluctance.
The insight is that a normal permanent magnet (PM) motor is good at normal speeds and bad at high speeds; a synchronous reluctance motor (SynRM) is harder to get going but better at high speeds. By combining both - cutting slots in the rotor and inserting permanent magnets - you get the best of both worlds.
If you’re in too deep and really want to understand how it all works, watch this video.
For our purposes, what we need to understand is this: SiC MOSFET inverters transform IPM-SynRM motors from “possible but not great" to “superior to conventional motors.”
IPM-SynRMs need to actively weaken the magnetic field at high speeds to prevent back-EMF from exceeding the battery voltage. SiC's faster switching (up to 100 kHz capability vs 5-10 kHz for IGBTs) enables much more precise current control during these critical high-speed operations. Without this precision, the motor either loses efficiency or simply can't reach higher speeds.
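To see the constraint with numbers: a PM motor’s back-EMF rises linearly with speed, and once it approaches the DC bus voltage, the inverter can’t push current into the windings anymore; the fix is injecting current that opposes, and weakens, the magnet field. A toy sketch with made-up motor constants, purely illustrative:

```python
def back_emf_volts(k_e: float, rpm: float) -> float:
    """Back-EMF of a permanent magnet motor rises linearly with speed."""
    return k_e * rpm

K_E = 0.04         # volts per RPM: a made-up motor constant
BUS_VOLTAGE = 350  # illustrative pack voltage

for rpm in (4_000, 8_000, 12_000):
    emf = back_emf_volts(K_E, rpm)
    needs_weakening = emf >= 0.9 * BUS_VOLTAGE  # leave headroom for current control
    print(f"{rpm} RPM: back-EMF {emf:.0f} V, field weakening needed: {needs_weakening}")
# Past ~8,000 RPM this toy motor must weaken its field, and the finer the
# current control (SiC's fast switching), the less efficiency that costs.
```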
The magnets themselves are grain-boundary-diffused neodymium magnets, which means they use dysprosium to increase coercivity (raising the temperature at which they demagnetize, important in a hot motor), but only at the grain boundaries, the weak points at which magnets typically fail at high temperatures. With grain-boundary diffusion, a technique borrowed from Honda, Tesla got double the coercivity with 70% less dysprosium, which is important because dysprosium costs 10x more than neodymium.
Between the motor design and the improved magnets, Tesla got 97% motor efficiency (vs 93% for the Model S induction motor). With SiC MOSFETs, Tesla got 98% inverter efficiency (vs. 94% for the Model S).
Together, those gains added up to an 8% drivetrain efficiency improvement, which translated to 8% greater range, all else equal. They also made Tesla’s super fast Plaid mode possible, as the thermals for high performance now worked out.
That efficiency, and better switching, also meant that Tesla could get more range out of fewer batteries, because reduced switching losses meant more of the battery’s energy reached the wheels.
Recall that 2017 is exactly when Tesla’s first Gigafactory with Panasonic came online, rolling out newer, larger 2170 cells instead of 18650s. These cells featured silicon-doped anodes, which increased energy density ~15%, and vertical integration cut costs by ~35%.
Instead of the Model S’ 7,104 18650 cells, the Model 3 used 4,416 2170 cells. Fewer cells meant less packaging, connections, and structural weight. The fact that each cell was bigger meant more surface area, which meant better heat dissipation. All told, the battery pack was 140 pounds lighter in the Model 3 than in the Model S.
Overall, thanks to the improvements at all levels of the Electric Stack, the Model 3 Long Range, the version delivered in 2017, weighed 833 pounds less than the Model S and delivered better performance overall for $35,000 less!
This is why you see that motor cost drop in 2018!
In our electric motor cost curve, we use a Munro Tesla 2018 Model 3 teardown to get our best-in-class $/kW motor performance for that year. Munro and team estimated the car’s inverter cost at $810, and with a power rating of 210 kW, it achieved a $/kW of $3.85. We give the motor the credit in our analysis, but as you can see here, it’s a whole system win.
Compounding is crazy. But as I hope you appreciate at this point, it doesn’t just happen on its own. It takes a million little ingenious improvements and a lot of scale.
And if you’re at all familiar with the Tesla story, or have come to understand the shape of the sections in this essay, I bet you can guess what happens next.
Chinese Soft Power (Electronics)
OK, you might not be able to fully guess. Power electronics remain a much closer race than batteries or motors and magnets.
That, though, is remarkable given that China basically entered modern power electronics in the past two decades.
The country has understood the importance of semiconductors since the 1950s, but early efforts were primitive (simple diodes and transistors instead of integrated circuits) and were disrupted by the Cultural Revolution (1966-1976).
Beginning under Deng Xiaoping in 1978, China redoubled its chip efforts. According to the United States International Trade Commission, by 1985, state-owned factories had imported 24 secondhand semiconductor manufacturing lines at a cost of 1.3 billion RMB (~$480 million). Still, “one American researcher visiting a Shanghai factory in the mid-1980s found it was producing chips that were 10–15 years out of date on wafers with yields as low as 20 to 40 percent.” Failed JVs with western companies in the 1990s left China not much closer, but undeterred.
“In 2005, China’s State Council issued a National Medium- and Long-Term Science and Technology Development Plan Outline for 2006–20 (MLP)... recognized the importance of semiconductors as an enabling hardware and ‘core technology’ for future advances,” per the USITC. Interestingly, one of the plan’s supporting documents promoted the concept of IDAR: “Introducing, Digesting, Absorbing, and Re-innovating intellectual property and technologies as a means of industrial catch-up.” It explicitly called for targeted acquisitions of foreign technology.
CRRC, China’s state-owned rolling stock company, didn’t wait long to execute. In 2008, it bought the British semiconductor maker Dynex to access its IGBT capabilities, in part for military technology, and in part for trains.
That same year, China opened its first high-speed rail, the Beijing-Tianjin intercity railway.
Early IGBTs for the high-speed trains were imported from Germany (Infineon) and Japan (ROHM), but with the help of Dynex management, CRRC got to work setting up a manufacturing facility in Zhuzhou, Hunan province to meet the stable, practically government-guaranteed demand for its IGBT devices. By 2015, CRRC Zhuzhou Institute’s newer production line was turning out hundreds of thousands of IGBT chips on 8-inch wafers, at a scale and quality level high enough to begin exporting them.
Today, China is the world’s dominant consumer of IGBTs, at 40% of global demand, although supply market share is harder to come by. At the cutting edge, anyhow, the world has moved on.
In 2017, Tesla demonstrated the advantages of SiC MOSFETs, and Chinese EV OEMs were quick to adopt them. By 2020, BYD rolled out the Han sedan with its own self-developed SiC MOSFET module.
BYD’s timing was interesting, and understanding how the company’s EVs went from fairly undesirable among Chinese consumers to the world’s bestselling EV company in 2024 highlights another way China competes besides acquisitions and pure tech transfer.
The year after Tesla launched the Model 3, “officials in Shanghai allowed Tesla to become the first foreign automaker to establish a manufacturing plant in the country without a local partner,” per Apple in China. Musk proposed opening the factory in two years. The mayor of Shanghai convinced him to fast-track it to twelve months with land and tax incentives.
“It was probably the fastest and most efficient factory ever built in the car industry, let alone EVs, in the world,” Harsh Parikh, the head of global supply management for CapEx at Tesla, told Apple in China author Patrick McGee.
As McGee explains, China tried for decades, and failed repeatedly, to build internal combustion cars. As The Economist put it, “For all its manufacturing might, China never mastered internal-combustion engines, which have hundreds of moving parts and are tricky to assemble.”
Electric, though, was different. China was already good at manufacturing the components: they’d surpassed Japan in li-ion battery production in 2015, and owned neo magnet production “from mine to magnet” by 2003. The government had been incentivizing EV production and consumption since 2001.
“Despite all of this support,” McGee writes, “EVs and plug-ins accounted for just 4.8% of the new car market in 2019.” Chinese companies just weren’t very good at designing cars that Chinese consumers wanted. They didn’t want EVs. Until Tesla came.
In 2020, the Tesla Model 3 was China’s bestselling EV. According to Tesla’s Parikh, “As soon as Tesla came, there was a paradigm shift from consumers, and that’s something the Chinese government saw. This was an opportunity to have an entire EV industry in China compete with, and learn from, Tesla.” As McGee explains:
In China, this phenomenon has been called the catfish effect. The idea is based on the fact that when sardines are caught at sea and placed in a tank for their journey back to shore, they become sedentary and die. But sardines kept alive tend to have better flavor and texture, and thus fetch more money. The story goes that a Norwegian fisherman figured out that if he threw a catfish into the tank, the sardines would keep swimming and fight for survival. The presence of a single catfish causes the whole tank of sardines to better themselves. Beijing, it’s often said, wanted Tesla to play the role of the catfish for the EV industry.
In this telling, BYD and other Chinese auto OEMs saw what Tesla was building - including its SiC MOSFET drive inverters - and upped their games.
“The theory,” however, “is partly misleading,” McGee writes. “It implies that Tesla just inspires competitors to do better, but Tesla works intimately with and improves its third-party vendors, who then supply the local EV brands such as BYD.”
For drive inverters specifically, BYD’s quick adoption may have indeed been the catfish effect. Tesla sourced its SiC MOSFET power modules from STMicro, a Swiss company, not a local Chinese vendor, and BYD actually produced its own SiC MOSFET power modules through BYD Semiconductor after years of investment in the technology, according to this excellent breakdown of BYD Semiconductor.
And Tesla seems to have catfished the entire EV industry. Other Chinese EV OEMs still import and integrate foreign SiC power modules: Xiaomi's SU7 uses Infineon, Nio uses onsemi, and Geely uses STMicro. Globally, Toyota, GM, Volkswagen, Renault-Nissan-Mitsubishi, Mercedes, Volvo, and others have also adopted SiC MOSFET inverters. This seems to be a case of: once Tesla showed how SiC MOSFETs improved the entire system, everyone quickly followed suit.
Meanwhile, BYD has started to move upstream, investing in TYSic, a maker of SiC wafers, in 2022. It’s continued to innovate on the power modules, increasing voltages and building new products around their new capabilities. On March 17, 2025, BYD launched its Super E platform, which lets drivers charge a 100 kWh battery in six minutes. Per Nomad Semi, “This achievement was made possible by BYD Semiconductor breakthrough in high-power 1,500V SiC chips. It marks the first large-scale application of 1500V SiC chips in the global automotive industry.” He adds that “According to tphuang (very good source for BYD), BYD Semiconductor is working on 1700V SiC module which will enable >1000V platform.”
Underestimating Chinese companies as copycats is a mistake, particularly BYD. Among all Chinese electric companies, BYD is the most vertically integrated, and innovates on both the components and at the system level.
“It is vertically integrated across the supply chain from the epitaxy growth to the final module,” Nomad Semi writes. “Unlike global players, Chinese companies are mostly involved in only 1 to 3 steps of the SiC manufacturing process.”
SiC is just one example of this vertical integration. And it’s working. As of last year, the student has become the master. The result is this: in 2024, Tesla did $97.7 billion in revenue; BYD did $107 billion.
It is likely that the other auto OEMs will follow BYD’s lead. Indeed, China’s 2021 Five Year Plan targets wide-gap semiconductors as a national priority, and vertical integration works in other industries that China has come to lead.
Unsurprisingly, for example, China is absolutely dominating in solar PV inverters. In 2024, Huawei and Sungrow ranked #1 and #2 for the tenth year in a row. China is the world’s dominant manufacturer of solar panels; that they also dominate important components shows how skilled they are at integrating the stack.
As it stands today, within the SiC supply chain, there is split leadership between the US and China. Unlike batteries and magnets, power electronics are not yet lopsided.
Substrates: The US has a lead in substrates, with Wolfspeed leading the market at 33.7% share.
Epitaxy: China leads in epitaxy, the next layer up the stack (literally, it’s a crystalline layer on top of the substrate with precisely controlled electrical properties impossible through SiC crystal growth alone), with China’s Epiworld controlling 30% of the market.
Devices (chips): The West actually kind of dominates here, with Switzerland’s STMicro at 32.6%, driven by its Tesla relationship, followed by America’s onsemi at 23.6%. China doesn’t crack the top 5.
Modules (packaged assemblies): Data is harder to come by here. Certainly, BYD excels, as does Tesla, while Infineon, onsemi, and others sell full modules to companies like NIO and Xiaomi.

In short, the US still holds a narrow lead in SiC MOSFETs, but China is closing fast. With Wolfspeed - the world’s leading substrate producer - emerging from bankruptcy, keeping that capability alive and scaling it is a strategic imperative.
That may be difficult. Here’s an interesting tidbit I picked up spelunking in the Semiecosystem Substack: “But with the exception of China and a few other nations, EV sales have been disappointing. Thus, the demand for SiC devices is weak. And today, the SiC device market is a terrible business.”
If China continues to incentivize EVs while America pulls support for them, that would favor China’s future position in the market. As we’ve come to appreciate, demand drives capabilities. If most of the demand for EVs comes from China, it may be the only country incentivized to continue to build out SiC capabilities.
SiC is not the only front in the power electronics struggle, though. The next high-stakes contest is over another wide-bandgap semiconductor: gallium nitride (GaN).
The US and China are competitive here, too, and the stakes may be even higher. While SiC is perfect for high-voltage systems, GaN is taking over high-frequency, high-efficiency applications ranging from the quotidian, like the increasingly ubiquitous USB-C charger, to the critical, like data centers and drones.
IGBTs, SiC, and GaN. As things stand, the future of power electronics will rely on all three:
Silicon IGBTs for high-power, cost-sensitive applications, like grid infrastructure, trains, industrial motor drives, and wind turbines
SiC MOSFETs for high-voltage, high-efficiency applications, like EV traction inverters, fast DC chargers, solar inverters, industrial power supplies, and robots.
GaN for high-frequency, compact applications, like phone/laptop fast chargers, data center power supplies, 5G base stations, lidar/radar systems, and drones.
Many products will use both SiC and GaN. An EV, for example, might use SiC for its drive inverters but GaN for its in-car charging ports.
Specifically, SiC will handle applications with high voltages and lower switching frequency needs. GaN will handle applications with lower voltages and higher switching frequency.
Currently, there are two main types of GaN chips:
GaN-on-Si is the most common for power electronics, like your phone charger, and is cheaper to produce because of silicon’s abundance and humanity’s abundant experience putting it on chips. GaN-on-Si power devices are also used to make data centers more power efficient, further increasing demand in the race to add gigawatts of data center capacity in the face of limited power supply.
GaN-on-SiC is a premium solution for radiofrequency (RF) and high-power applications. It has better thermal management and higher frequency capability, and offers superior performance for a high cost. In both the US and China, the military is a major buyer of GaN-on-SiC for advanced radar systems; they are the ultimate high-cost-for-high-performance buyers.
Because of its control of the SiC substrate market, America currently has the advantage in GaN-on-SiC. American companies Wolfspeed, Qorvo, and MACOM, and European companies NXP and Infineon, dominate here.
Because silicon is widely available, the competition in GaN-on-Si exists further up the stack - in epitaxy, device design, and packaging - all areas where China can catch up faster. Here, we seem to be facing the same issue we have faced before: China may be the last country standing in GaN-on-Si manufacturing.
TSMC recently announced that it was exiting the GaN market, forcing fabless GaN chip companies to find new foundries. In March, STMicroelectronics announced a GaN technology and manufacturing agreement with China's Innoscience. In his analysis, Semiecosystem’s Mark LaPedus wrote:
As stated, Innoscience as well as other China-based companies are competing in the GaN device market. In addition, several China-based foundries are accelerating their GaN efforts, including AMSfab, Sanan IC, and CR Microelectronic's Runxin Micro, according to TechInsights.
Here’s the worst-case scenario: China will likely compete on price and turn the entire GaN market upside down.
Given this shift and China’s advantage in small electronics, its victory in GaN-on-Si seems almost a foregone conclusion.
In fact, in its 2025 Wide Bandgap Power Electronics Strategic Framework report, the US DOE notes that China accounts for 98% of the world supply of raw gallium and writes, “Chinese efforts to develop Ga-based chips are aimed at leapfrogging the United States, Europe, and Japan. As the semiconductor industry shifts from Si to Ga, China is preparing to take the lead position accounting for over 40% of all related patent applications between 2019 and 2020 (compared to 23% for the United States).”
But there’s hope! GaN-on-GaN is the industry’s holy grail. It’s still in the materials science phase, and the typical first half of our Electric Stack R&D arc is playing out. Japan and America currently lead in GaN-on-GaN research, although it is early.
Theoretically, GaN-on-GaN is superior to GaN-on-Si, GaN-on-SiC, and SiC MOSFETs because it combines gallium nitride's superior material properties with a vertical device architecture.
Success in GaN-on-GaN would be a do-over, a chance for America and Japan to lead in next-generation power electronics, and, with foresight, not give away the lead. If you’re interested, you can read more in the GaN Appendix.
By shaping electricity, power electronics have helped shape the modern world. Theirs is a familiar story: developed in the US, commercialized in the US, Europe, and Japan, and increasingly dominated by China, which has the manufacturing know-how, government support, local Electric Stack supply chain, raw materials, and electric demand to run away with yet another layer of the stack.
The question now is: who will win the cutting-edge? That’s a battle the US and Japan have shown a repeated ability to win.
Then, the next question will be: how will they proceed if they do?
The answer, I think, will depend on how much of the Electric Stack - specifically batteries, magnets, and motors - we’re able to manufacture in the West.
Because on their own, power electronics are a small part of the bill-of-materials (BOM) cost, and therefore a smaller market than that for batteries, magnets, and motors.
But they are critical to making batteries, magnets, and motors perform. As we’ve seen, power electronics have an outsized impact on system costs and performance.
That dynamic - low cost, limited standalone value, but enormous impact on the rest of the Stack - also describes our last layer, the only one which America clearly dominates today, the brains behind everything we’ve discussed so far: compute.
Embedded Compute
A note and a spoiler: this section will be a little bit shorter, because it’s the layer of the Electric Stack in which America and the West face the least existential risk. While China is gaining ground thanks to the open source RISC-V architecture, their gain is not our loss. Barring tremendous error, we will have the embedded compute we need assuming we get the rest of the Electric Stack in line.
Instead, the point of this section is this: not all compute is created equal. While everyone focuses on GPUs, they are the last piece of the puzzle when it comes to electrification. Without MCUs, AI won’t be able to make machines move.
At last, we enter familiar territory. Compute. Chips. Moore’s Law.
You can’t turn on the news without hearing about chips. NVIDIA, the world’s leading GPU-maker, is the most valuable company in the world at $4.4 trillion. Taiwan Semiconductor, the foundry that fabs the chips for NVIDIA and others, is the world’s 9th most valuable ($1.2 trillion) and its location, on an island off the coast of China, is a key factor in the US-China saber-rattling.
Chips are the brains of the modern world, and of the Electric Stack. Batteries provide power, magnets create motion, and power electronics switch currents; compute decides when and how everything happens.
Think back to our toy model, with the hand manually flipping magnets. How does that hand even know when to flip? Sensors - visual sensors in the eyes, tactile sensors in the hands - take in information, and send it to the brain. The brain does a bunch of intuitive calculations and tells the hand when to flip. Embedded compute plays the same role in the Electric Stack.
Improving compute has been the primary economic focus of the West for the past half-century, and as a result, the cost of embedded compute has fallen more dramatically - 23.0% per year since 1995, on top of 30 years of similar declines before that - than any other technology in the stack. Not as fast as Moore’s Law, but fast.
“Moore’s Law,” Byrne Hobart and Tobias Huber write, “is perhaps the most compelling and enduring example of a two-sided bubble, wherein the expectation of progress in one domain spurs progress in another, which then propels growth in the first.”
People expect chips to get smaller, better, faster, and cheaper, so they build applications for smaller, better, faster, and cheaper chips, which creates demand, which allows for the investment in making the chips smaller, better, faster, and cheaper.
When the semiconductor story branched in the 1970s, one branch, Power Electronics, took Atalla and Kahng’s MOSFET, combined it with BJTs, and produced the IGBT, then the SiC MOSFET, then GaN FETs.
We left the other, Compute, cut off at Federico Faggin’s Intel 4004 Microprocessor, and that is where we will pick up. That is, in fact, where most charts tracking Moore’s Law begin.
The Intel 4004 had 2,300 transistors. Remember, a transistor is a very small electronic version of the mechanical relay that Joseph Henry rigged up at Princeton two centuries ago: apply a small voltage to one terminal, and you can control a much larger current flowing between two others.
Today, an NVIDIA B100 GPU has 208 billion transistors on a single chip. In this light, it makes sense that we’ve produced 13 sextillion MOSFETs. That number is probably outdated now; it keeps growing, even as I write this sentence.
But while this march towards more transistors in smaller areas has continued unimpeded, it’s not the only metric for which engineers have optimized.
In fact, when we talk about compute in terms of the Electric Stack, we’re actually talking about two branches that don’t follow Moore’s Law, or at least don’t ride its bleeding edge.
For products that turn electricity into work – drones, EVs, robots, induction stoves, etc… – AI is useless without compute that turns its general instructions into reliable, specific instructions. This is why you see Tesla use teleoperated robots before fully autonomous ones, and why drones flew years before ChatGPT.
The two branches of compute we’re talking about here, the ones on which electrified products depend, are reliable and specific: microcontrollers (or MCUs) and digital signal processors (DSPs).
The best way to understand how all of this fits together is to think about a drone.
How to Fly a Drone
Drones are useful models because they combine cutting-edge Electric Stack technology in a relatively small and simple package.
As Martin Feldmann and Gene Keselman write in Factories First: Winning the Drone War Before It Starts, “A fully optimized first-person view drone consists of roughly ten parts (including the motor subsystem), a rounding error next to the 30,000 components in a passenger car of which the United States produces roughly 10 million a year.”

Here’s how it all works together:
When you power on the battery - typically a LiPo (lithium polymer) battery, a form of li-ion with a gel polymer electrolyte - it sends its raw DC voltage to the power distribution board (PDB). The PDB is necessary here because the drone has four motors: it simply routes the DC to each of the four arms’ electronic speed controllers (ESC).
The ESCs are the power electronics of the drone. They use MOSFETs to convert the DC into a precisely timed 3-phase AC waveform that runs the motors.
The waveforms determine how fast the motors spin the propellers, which determines how fast, high, and at which angle the drones fly. But which waveforms?
Drones constantly take in a ton of information: from the pilot and from its sensors.
On the ground, the pilot moves the sticks on their remote control (transmitter) to specify throttle, yaw, pitch, and roll, and those commands travel via radio waves to the receiver on the drone, which passes them along to the flight controller.
Meanwhile, sensors provide a constant stream of data. The GPS sensor provides position and velocity data. The inertial measurement unit (IMU), typically in the flight controller, is an accelerometer and gyroscope that reports orientation, tilt, and angular velocity. Hall sensors on the motors, or the ESCs themselves, provide their rotor position and speed. Current and voltage readings from the ESC are constantly fed back in. There may also be barometers for altitude, magnetometers for compass heading, and increasingly, microphones and cameras for visual and audio data.
This data can’t feed the ESC directly. Power electronics are muscle, not intelligence.
Instead, the pilot and sensor data feed into the flight controller, where a microcontroller unit (MCU) and digital signal processor (DSP) sit plugged into a printed circuit board (PCB). Sometimes, for high-end military use cases, these are separate chips, but increasingly, they’re separate logic in either one MCU with DSP extensions, or in a System-on-Chip (SoC). Let’s consider them as separate for now.
The MCU serves as the hub. It collects all of the raw sensor data, and runs supervisory logic, the high-level decision-making and safety coordination that tells the low-level control loops what to do, and when, like “Target on motor 4 is 10,000 RPM” or “Keep temperature at 150°C.”
For more complex data where the math is heavier, like field-oriented motor control or multi-sensor fusion, the MCU delegates the math to a DSP, which runs calculations like filtering (cleaning noisy signals) and control calculations (figuring out the exact currents or angles needed to keep motors and the system stable) thousands or millions of times per second.
The DSP sends the output of its calculations – required phase currents, torque request, or stabilized position estimate - back to the MCU, which integrates the information into its overall control decisions.
The MCU then sends its final current/phase commands as pulse width modulation (PWM) or digital protocol signals to the power electronics. The power electronics take the PWM signals and push the specified amps into the coils, which creates the precise electromagnetic forces that spin the motors at the precise speeds to spin the propellers at the precise speeds to make the drone fly precisely the way the pilot wants.
All of this happens thousands of times each second, over and over and over again.
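To make that division of labor concrete, here’s a minimal sketch of one iteration of the loop, written in C, the lingua franca of embedded firmware. Everything in it is illustrative: the function names, gains, and loop rate are hypothetical stand-ins, and a real flight controller runs far more elaborate logic across all three axes.

```c
#include <stdio.h>

/* A deliberately simplified, illustrative sketch: one iteration of a
   flight-control loop for a single axis (roll). All names, gains, and
   numbers are hypothetical stand-ins, not real firmware. */

typedef struct {
    float kp, ki, kd;   /* tuning gains */
    float integral;     /* accumulated error */
    float prev_error;   /* error from the previous iteration */
} pid_ctrl_t;

/* One step of a textbook PID controller: the deterministic, fixed-rate
   math the MCU (or its DSP logic) runs thousands of times per second. */
static float pid_update(pid_ctrl_t *pid, float setpoint, float measured, float dt)
{
    float error = setpoint - measured;
    pid->integral += error * dt;
    float derivative = (error - pid->prev_error) / dt;
    pid->prev_error = error;
    return pid->kp * error + pid->ki * pid->integral + pid->kd * derivative;
}

/* Stand-ins for the radio receiver and the DSP-filtered IMU estimate. */
static float read_pilot_roll_target(void) { return 5.0f; }  /* degrees */
static float read_filtered_roll(void)     { return 3.2f; }  /* degrees */

int main(void)
{
    pid_ctrl_t roll_pid = { .kp = 0.8f, .ki = 0.1f, .kd = 0.05f };
    const float dt = 0.001f;  /* a 1 kHz loop: 1,000 iterations per second */

    /* Supervisory logic: compare what the pilot wants to what the
       sensors report, and compute a correction... */
    float correction = pid_update(&roll_pid, read_pilot_roll_target(),
                                  read_filtered_roll(), dt);

    /* ...then fold the correction into per-motor PWM duty cycles and
       hand them to the ESCs, which push real amps into the coils. */
    float base_throttle = 0.50f;  /* 50% duty cycle */
    float motor_left  = base_throttle + correction;
    float motor_right = base_throttle - correction;

    printf("PWM duty cycles: left=%.3f right=%.3f\n", motor_left, motor_right);
    return 0;
}
```

The point of the sketch is the shape, not the numbers: deterministic math, on a fixed clock, standing between the sensors and the power electronics.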
Where does AI come into all of this?
The drone can and does fly without AI. The intelligence flying many modern drones looks like this:
AI plays largely the same role as the human pilot. It takes in even more sensor data – from cameras, for example – runs it through GPUs and NPUs (neural processing units), and issues the types of commands a human pilot would. The drone treats those essentially like the signals it receives from the remote control, and throws them into the mix for the MCU to turn into specific instructions. The benefit is: if the “pilot” is onboard, it’s harder to jam the signals.
But, and this is the crucial part, a drone needs the Electric Stack - batteries, magnets and motors, power electronics, and embedded compute - to fly. It doesn’t need AI. Put another way, AI cannot fly a drone without the rest of the Electric Stack in place. To make drones, or any electric product, smart, you first need the Electric Stack.
While one branch of computing has taken us towards general intelligence that can loosely be applied to a lot of digital things, and eventually, with the Electric Stack in place, to physical things, those two other branches - MCUs and DSPs - have led us towards the deterministic control and mathematical efficiency necessary to turn electrons into action in the first place.
And when it comes to MCUs and DSPs, the next big thing did actually start out looking like a toy.
The Compute Alpha Products: From Calculators to Speak & Spell
Let’s go back to 1969, the nicest year of the 20th century.
In January, Richard Nixon becomes President. In March, the Concorde flies its first test flight. In July, Neil Armstrong takes one small step for man. In August, the Manson Family murders Sharon Tate and others just one week before Woodstock takes over upstate New York. And in October, just two days before Halloween, ARPANET sends its first message between UCLA and Stanford.
In April of that eventful year, “the Nippon Calculating Machine Corporation approached Intel to design 12 custom chips for its new Busicom 141-PF printing calculator.” Intel’s engineers thought they could do Nippon eight better: they proposed a new design with just four chips, one of which could be programmed.
“That programmable chip, later known as the Intel 4004, became the first general-purpose microprocessor.”7
The 141-PF had just four chips: the 4004 CPU + 4001 ROM + 4002 RAM + 4003 Shift Register. It cost a then-affordable-for-a-calculator $395. That same 4004 microprocessor (MPU), and its successors, could and would be mixed and matched and put to use in any manner of computers that, thanks to the chip, became smaller, lighter, and more affordable.
But if you’re just making a calculator, the 4004 chipset was overpowered and overpriced. And Texas Instruments (TI) had recently gotten into the business of making calculators.
Remember Jack Kilby? In 1958, the young TI engineer invented the integrated circuit. Seven years later, the company was still looking for something to do with it, consumer-wise.
In 1965, TI President Pat Haggerty challenged Jack Kilby and his team to make a handheld calculator using integrated circuits. They did: the project, codenamed “Cal-Tech,” became the world’s first handheld electronic calculator.

TI didn’t yet make calculators - it made chips - so it partnered with the Japanese company Canon on 1970’s Pocketronic.

This partnership pattern that we’ve become so familiar with – the US makes the technology, Japan consumerizes it - was on crystal-clear display with calculators at the turn of the 1970s.
“Japanese calculator vendors Sharp, Canon, and Busicom connected with various US semiconductor vendors to develop custom chips for their calculators,” Steven Leibson wrote for Electronic Engineering Journal. “Sharp partnered with Rockwell, Canon partnered with TI, and Busicom partnered with Mostek and Intel for different calculator models.”
Consumer electronics companies around the world wanted TI to develop 4-, 5-, and 6-chip sets for their calculators, which meant that a few unlucky TI engineers flew all over the world making small tweaks to new chipsets that looked an awful lot like the previous ones. One of those engineers was Gary Boone, and so “Gary Boone, who worked in the Texas Instruments MOS (Metal-Oxide Semiconductor) Department, devised the first chip that can be called a microcontroller because he was becoming bored with his job and in trouble with his family.”
What if, instead of flying all over the place to make little tweaks in the hardware, you could make just one chip, make it programmable, and sell it to all of those same customers?
Boone and TI MOS Marketing Manager Daniel Baudouin set out to answer that question. They compiled a matrix with customer requirements on one side, and “functional blocks or chunks of circuitry” on the other, and tried to figure out how to handle all of the requirements with the fewest chunks. This required a technical insight: general purpose transistors aren’t the most efficient use of space on a purpose-built chip. They give space to all sorts of things that aren’t necessary for doing one set of things very well. So Boone and Baudouin “quickly veered towards architectures that made heavy use of memory – RAM and ROM – because these structures are extremely efficient and easier to route on an IC. Memories promised better silicon utilization efficiency by a factor of 40x or 50x.”
The chip they designed, the TMS1802NC, was both the first single-chip calculator, and the first microcontroller. It put a processor, memory (RAM and ROM), and I/O (input/output) on one chip.
As Leibson explained in EEJ:
It has a CPU consisting of a program counter (PC), an instruction register (IR), an instruction decoder (Control Decoders), and a 4-bit ALU. It has a RAM to store numeric data and a ROM to store a program that defines the chip’s operation. Finally, at the bottom, you can see specialized I/O circuits for scanning a matrix keyboard, driving display digits, and driving the seven segments in each display digit. The I/O in this design may be specialized, but this diagram clearly depicts a microcontroller.
What he means by this last line is that this particular chip has calculator-specific I/O: it was set up to take input from the calculator keypad and output to the LED or LCD display, rather than providing general I/O blocks like modern microcontrollers.
Boone, though, quickly started thinking bigger. “Calculator chip patent 4,074,351,” Leibson wrote, “described additional target applications including taxi meters, digital voltmeters, event counters, automobile odometers, and measuring scales.”
Putting everything on one, simplified chip had a number of advantages. “Pitch-matching” components - lining them up more closely - meant minimizing wiring distances, reducing capacitance and resistance, allowing for more regular layouts, and simplifying manufacturing. Using simpler 4-bit processors, to handle a simpler, more specific set of calculations, meant smaller, cheaper CPUs. Taken together, it all meant cheaper, smaller chips, and therefore cheaper, smaller calculators.
At first, TI kept up business as usual. It sold chips to companies that made calculators. TI announced the TMS1802NC single-chip calculator IC on September 17, 1971 and priced it at less than $20. Intel, by contrast, sold the 4004 for $60 at volume at the time, and the total chipset, with the 4001, 4002, and 4003, could cost $300-500.
As my mother always told me, though, if you have the best chip, make your own calculators. In April 1972, TI announced its first calculator, the TI-2500 Datamath. Priced at just $149.95, it was much cheaper than competitors’ and truly portable.
Later that year, in September, TI announced the TMS0100 single-chip calculator family, including the TMS1802NC, which it rebranded as the TMS0102. In 1974, TI announced its TMS1000 general-purpose 4-bit MCU family, priced at just $2 per chip in volume. (That may have been marketing - Microcomputer Digest listed them for $10 in 10,000 piece quantities in 1975, but still. By 1979, they really were down to $2-3 per chip in volume.)
With the Datamath, TI began its assault on the calculator industry that would come to be known as the Calculator Wars. By 1976, its TI-30 scientific calculator, with much more functionality than the four-function Datamath, launched at just $25.
Lower cost drove volume, and volume drove lower costs. Per Dataquest Microcontroller Historical Shipment Data, 1975-1989, TI shipped 50,000 TMS1000 microcontrollers in 1975. By 1979, it shipped 26.1 million.
The calculator was the MCU’s alpha product, and in the years to follow, many other American and Japanese companies entered the fray. By 1984, fourteen companies were shipping a combined 193.4 million 4-bit microcontrollers. By 1989, fifteen companies were shipping 672.9 million of them annually.
By 1989, 4-bit microcontrollers cost well under $1 at high volume, and there were plenty of high-volume applications to be had; microcontrollers made their way into remote controls, toys, games, small appliances, simple instrumentation, calculators, handheld computers, and basic automotive electronics.
Perhaps unsurprisingly, Japan had come to dominate the market by this point. In 1989, TI shipped just 10.4 million microcontrollers, America just 44 million. Japan shipped 535 million.
TI’s diminishing role in the 4-bit market was in large part a choice. The chips were becoming commoditized, and TI had bigger signals to process.
The Speak & Spell and DSPs
It is a time-honored tradition for teachers and parents to freak out that a new technology will ruin their children’s education. Today, it’s AI. In 1976, it was calculators.
As cheap calculators proliferated in the early 1970s, educators considered their use cheating. TI, which had very quickly grown a very big calculator business, needed to change their minds.
The company tasked a team including Paul Breedlove and Gene Frantz with creating a solution. In 1976, they released the Little Professor, a “calculator-with-a-twist” that “presented equations lacking their solutions; students had three chances to type in the correct answer.” It was a hit. Teachers backed off. I still fondly remember playing Snake on my TI-83.
Coincidentally, while he was making The Little Professor, Breedlove’s daughter was learning to spell. According to IEEE Spectrum, “it occurred to him that there was a basic similarity between teaching math and teaching spelling. With math, you present an equation and the student has to solve it; with spelling, you enunciate a word and the student has to spell it.”
There was one big difference: to enunciate the word, the spelling toy would need to speak.
Breedlove had also worked in TI’s speech research lab, and he knew they were close to creating affordable speech-synthesis tech, so he pulled together a team. He brought in Frantz to be the system designer and Richard Wiggins to do speech. Wiggins pulled in his friend Larry Brantingham as the integrated circuit architect.
The toy that the quartet built would become the most advanced piece of consumer technology on the market, with three new chips: a controller, “huge” ROMs, and a single-chip speech synthesizer, which did not then exist.
The huge ROMs took the form of two 128kb ROM chips. The state-of-the-art previously was 16kb.
The real jewel, however, was the speech synthesizer.
In the 1960s, researchers at Bell Labs in the US and at Nagoya University and NTT in Japan devised a technique for coding speech called linear predictive coding (LPC), which “let the device generate the speech signal for an entire word from a relatively small amount of data, much smaller than what would have been required to simply store digitized recordings of the words themselves.” Wiggins was familiar with the work and incorporated it into the spelling device.
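The core idea, loosely sketched: LPC models each speech sample as a weighted combination of the previous few samples, so the device only needs to store the weights (plus a small excitation signal) rather than the waveform itself. For a predictor of order $p$ with stored coefficients $a_k$:

$$\hat{s}[n] = \sum_{k=1}^{p} a_k \, s[n-k]$$

Store the handful of $a_k$ values per chunk of speech, regenerate $\hat{s}[n]$ on the fly, and you’ve compressed a word into a tiny fraction of its raw bits.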
The synthesizer chip the team designed ran digital signal processing logic but was not a digital signal processor (DSP). That said, the implementation of DSP logic was an important enough breakthrough that the product the team designed was named an IEEE Milestone in 2009.
That product was the Speak & Spell. It launched in 1978, and moved over half a million units in its first holiday season. Speech synthesis in a handheld device was alien technology… in 1982, ET used it along with a record player, coat hanger, and circular saw blade to “phone home.”
Just as TI started with a calculator chip and generalized to the MCU, it started with a speech synthesis chip (TMC0280/TMS5100) and generalized to the DSP.
Before TI officially entered the DSP race, NEC released the μPD7720 and AT&T/Western Electric released DSP‑1, both in 1980. But, thanks to the popularity of the Speak & Spell, TI’s was the first widely adopted, general‑purpose, programmable DSP family: the TMS320 line, beginning with TMS32010. Introduced in 1982, the TMS32010 was named “Product of the Year” by Electronic Products magazine that same year. And you gotta imagine Electronic Products knew a thing or two about electronic products.
DSPs excel at doing a specific thing: they take in messy real-world signals (like sound, images, or sensor data), do fast math to clean and transform them, and spit out precise instructions or compressed information. That specific thing is useful for a ton of applications.
The sounds of my childhood were orchestrated by DSPs.
They made modems possible (bweeee-eee… krrrk-krk-baning-baNING): the DSP runs the math to modulate digital 1s and 0s into analog tones that can ride over a phone line, then demodulates those tones back into bits on the other side, while filtering noise and correcting errors so the message makes it through intact. You’ve got mail!
We have DSPs to thank for mp3s: the DSP slices audio into frequency bands with algorithms like the Fast Fourier Transform, models what the human ear can’t hear, throws that data away, and encodes the rest so the same music takes a fraction of the bits.
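For reference, the workhorse behind that slicing is the discrete Fourier transform, which maps $N$ time-domain samples $x[n]$ into $N$ frequency bins:

$$X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-2\pi i k n / N}, \qquad k = 0, 1, \dots, N-1$$

The FFT is just a fast way to compute this, cutting the work from $O(N^2)$ to $O(N \log N)$ operations, which is what made real-time audio math feasible on a cheap DSP.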
When my dad called home on his carphone on the way home from work, I had DSPs to thank for understanding what he was saying: the DSP cleaned up his voice in real time, compressing it for transmission, filtering out engine and road noise, and decoding the messy analog signal coming back so a conversation could ride over a weak, noisy cellular channel. The same applied to cell phones.
The same applied to images. In 1992, the year after the release of the li-ion-powered Handycam, Sony began selling Hi8 format handycams with DSP. JPEGs and MPEGs were compressed with DSP. Digital cameras used them to turn raw sensor data into usable images. Early video conferencing was made possible by DSP.
DSPs were in our devices – modems, computers, cameras, and phones - but they also supported the networks themselves: DSPs worked in the telecom switches and base stations carrying the world’s calls.
By the mid-1990s, TI basically was the DSP market. Unlike MCUs, it didn’t cede leadership to competitors; it controlled over half the market, as synonymous with DSPs as Intel was with CPUs. In a 1999 Technical Brief, the company categorized all of the applications for its TMS320 family of DSPs:
Even back then, DSPs were used in physical applications like robotics, power tools, motor control, and power line monitoring. The military was an important early customer of cutting-edge DSP for radar, RF, navigation, and missile guidance.
In short: if a messy signal needed to be mathed in order to be actionable, DSPs handled it.
More precisely, if a messy signal needed to be mathed in real-time, in a precise amount of time, DSPs handled it. “Every signal-processing task operating on a DSP must be deterministic. That is, the time it requires to finish must be determined exactly, or it runs the risk of breaking up the signal processing,” wrote Speak & Spell co-creator Gene Frantz in a report, Digital Signal Processor Trends.
Progress in DSPs, then, can be measured in millions of instructions per second (MIPS), or in dollars per MIPS. On that front, DSPs showed remarkable progress. Based on numbers in that report, between 1980 and 2010, the cost per MIPS decreased 10 million times, from $30 to $0.000003.
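To put that in annual terms (my back-of-envelope, not the report’s):

$$\left(10^{-7}\right)^{1/30} \approx 0.585$$

A 10-million-fold decline over 30 years works out to the cost per MIPS falling roughly 40% per year, every year, for three straight decades.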
At the same time, mW/MIPS, named “Gene’s Law” after Frantz, followed a Moore’s Law-like trajectory: power dissipation per MIPS halved every 18 months.
The data gets spotty for DSPs after the early 2000s, for a simple reason: MCUs largely ate DSPs.
When we left MCUs in 1989, the world produced over 600 million 4-bit chips and sold them for about $1 each. By 2010, according to IC Insights, 14 billion MCUs - 4-bit, 8-bit, 16-bit, and 32-bit - sold for an average of $1.08 each. The cost per unit was about the same as in 1989; the units just got a lot more powerful.
As both MCUs and DSPs got smaller and more powerful, and more applications - from smartphones to EV motors - needed both real-time control and signal processing, chipmakers began fitting the logic of MCUs and DSPs on the same chip, cutting system costs and complexity.
In the late 1990s, Texas Instruments introduced some of the first hybrid architectures with their TMS320C2000 series, which combined DSP capabilities with microcontroller peripherals specifically for motor control and industrial applications. In the early 2000s, Analog Devices and Freescale/Motorola followed suit.
Then a little British company strong-ARMed its way into the market.
ARMing Everyone
Strong-ARM is more of a pun than an accurate description of history. It’s more accurate to say that ARM got clever, cheeky, generous even.
The year was 1983. The pivotal year in our Electric Stack story. The year that Sagawa and Croat presented their dueling neo magnets. And the year that two engineers at Acorn Computers - Steve Furber and Sophie Wilson - started writing the instruction set for what would become ARM.
ARM (originally Acorn RISC Machine, later Advanced RISC Machines) emerged from Acorn Computers' need for a better processor for the successor to their successful BBC Micro computer (BBC because, charmingly, they made it in partnership with the British Broadcasting Corporation). Everything they looked at was either too expensive, too weak, or consumed too much power, so they decided to stick with the 8-bit MOS Technology 6502.
When they visited Western Design Center in Arizona to see the 6502 processor being made, they were shocked. “Expecting to find another large office building populated with hundreds of engineers,” Substacker Babbage writes, “they instead found a suburban bungalow and that Mensch was employing students using Apple II computers to help with the designs.”
If those guys could do it, why couldn’t they?

They were only half-joking. Earlier that year, Acorn director Andy Hopper had given them a paper by UC Berkeley computer scientist David Patterson and Bell Labs’ David Ditzel that made The Case for the Reduced Instruction Set Computer, then another by Patterson and Carlo Sequin, RISC I: A Reduced Instruction Set VLSI Computer. They figured they could build it, pretty quickly and with a lean team.
Wilson wrote the instruction set design in just 808 lines of BBC BASIC code. Furber designed the hardware architecture. After 6 engineer-years of design effort, the VLSI foundry in California delivered the first ARM processor prototype on April 26, 1985, while General Motors and Sumitomo were still fighting over neo magnet patents.
It worked on the first try, which never happens in semiconductors.
More remarkably, when they powered it on without connecting the power supply, it still worked, running entirely off the leakage current from the I/O pins.
What they designed was radically different from Intel's x86 architecture. Where Intel's Complex Instruction Set Computing (CISC) tried to do everything – hundreds of instructions, some taking dozens of clock cycles – ARM's Reduced Instruction Set Computing (RISC) did less, but did it efficiently. Fewer transistors. Less power. Simpler design.
This seemed like a terrible idea in 1985. Desktop computers needed power, not efficiency, because… you could just plug the computer into the wall.
Just as the hard disk drive industry initially dismissed 3.5" drives because desktop users valued capacity over size, just as Bell Labs dismissed MOSFETs because BJTs worked just fine, the processor industry dismissed ARM because desktop users valued performance over efficiency.
Mobile device manufacturers, though, cared a lot about efficiency, and Apple was trying to become a mobile device manufacturer. Per Apple Insider, “On November 27, 1990, Apple, Acorn and VLSI Technologies jointly formed a new firm called Advanced RISC Machines Limited. Apple invested $3 million to own 43% of the company. That investment was specifically to fund the design and development of the ARM processor for what would become the Apple Newton MessagePad.”
The Newton was a major flop, but the ARM was not, at least on one key metric: its processors could run for hours on AA batteries. It was efficient.
With no customer, though, and very little money, what next? “Real men have fabs,” AMD founder Jerry Sanders once said, and while that may have been true, ARM didn’t have the money to be a real man. Unlike Intel and AMD, which built massive fabs and sold finished chips, ARM was forced to sell something else: intellectual property. Designs.
For a while, that seemed silly, or at least quaint. By the turn of the millennium, Intel was worth hundreds of billions of dollars. ARM was worth hundreds of millions. Intel CEO Craig Barrett dismissed ARM as a "tiny niche player."
Then mobile happened.
Nokia needed chips for phones that could last days on a single charge. Texas Instruments licensed ARM designs and built them. Qualcomm needed processors for CDMA phones. They licensed ARM. Samsung, MediaTek, Broadcom, Marvell… everyone making chips for mobile devices licensed ARM.
By 2007, when Steve Jobs stood on stage and announced the iPhone, he didn't mention that it ran on an ARM processor manufactured by Samsung. But it did. So did every Android phone that followed. And the iPad. And practically every smartphone and tablet on Earth.
Until this point, ARM offered one line of cores: ARM1 (1985) through ARM11 (2002) were essentially a linear progression of increasingly capable general-purpose, but power-efficient, processors. Apple launched the iPhone on the ARM11 architecture in 2007, but already by the mid-2000s, it was clear that ARM was, or soon would be, in a jack-of-all-trades-master-of-none situation.
8-bit MCUs from Microchip PIC and Atmel AVR still dominated embedded, with Arduino launching in 2005 to go after hobbyists. DSPs and DSCs (Digital Signal Controllers, which combined DSP computation power with MCU control efficiencies) from TI and Motorola/Freescale were eating motor control and signal processing. Intel was trying to push into mobile with XScale, its own ARM-based line (but it ultimately failed and sold XScale to Marvell in 2006).
So in 2004 and 2005, ARM split its processors into three Cortex lines:
Cortex-A: Application processor for high-performance apps and operating systems to focus on the emerging smartphone market.
Cortex-R: Real-time processor for deterministic control with hard latency guarantees, traditionally the domain of DSPs and DSCs.
Cortex-M: Microcontrollers with ultra-low power for embedded use cases to compete with MCUs.
With this split, combined with its licensing strategy, ARM turned its competing-on-all-fronts weakness into an advantage. It could offer chip architectures that addressed specific needs, while maintaining one instruction set architecture (ISA) and one software ecosystem across all of them. So developers, chipmakers, and OEMs could reuse knowledge and code across a range of different products, or within the same chip.
ARM offered two types of licenses. With Core Licenses, companies got the complete design files for a specific core but couldn’t modify it; they had to integrate it as-is. With Architectural Licenses, which were rare, expensive, and reserved for big customers like Apple, Qualcomm, and Samsung, companies got the ISA and could design their own core from scratch to run ARM code.
Apple still designs custom ARM-based chips, and something like 95% of mobile systems-on-a-chip (SoCs) use ARM. There’s a whole lot we could go into on the mobile phone side, but we’re here to talk about the Electric Stack. And for the Electric Stack, the Cortex-M series has come to dominate, and for most use cases, to eliminate the need for separate MCUs and DSPs.
In the 1990s and 2000s, customers used separate MCUs and DSPs, adding complexity, cost, and power consumption, and eating precious space on the circuit board. In 2010, ARM rolled out the Cortex-M4, which was specifically designed to eliminate the MCU + DSP combination by combining:
Single-cycle MAC (Multiply-Accumulate) operations
SIMD (single instruction multiple data) instructions for parallel processing
Optional FPU (floating point unit) for floating-point math
Saturation arithmetic
All while maintaining the simple MCU programming model
The upshot was that the Cortex-M4 could handle the main responsibilities of a DSC in a single core. For motor control use cases, which previously would have combined an MCU with something like a TMS320C2000 DSP, a single M4 could make washing machines more efficient, give power tools variable speed control, simplify EVs’ motor controllers, and put a drone’s motor control and sensor fusion on a single chip.
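To see what that consolidation means in practice, here’s a minimal sketch (illustrative coefficients, not a tuned design) of an FIR filter, the bread-and-butter DSP operation, in plain C. On a Cortex-M4, the multiply-accumulate in the inner loop compiles down to the single-cycle MAC instructions listed above; no second chip required.

```c
#include <stdio.h>

/* Illustrative sketch: a 4-tap FIR (finite impulse response) filter in
   plain C. The multiply-accumulate in the inner loop is exactly the
   operation a Cortex-M4 executes in a single cycle. The coefficients
   here form a simple moving average; real filters use tuned designs. */
#define NUM_TAPS 4

static const float coeffs[NUM_TAPS] = { 0.25f, 0.25f, 0.25f, 0.25f };

static float fir_step(float history[NUM_TAPS], float new_sample)
{
    /* Shift the newest sample into the delay line. */
    for (int i = NUM_TAPS - 1; i > 0; i--)
        history[i] = history[i - 1];
    history[0] = new_sample;

    /* Multiply-accumulate across the taps: the MAC. */
    float acc = 0.0f;
    for (int i = 0; i < NUM_TAPS; i++)
        acc += coeffs[i] * history[i];
    return acc;
}

int main(void)
{
    float history[NUM_TAPS] = { 0.0f };
    const float noisy[] = { 1.0f, 0.2f, 0.9f, 0.1f, 1.1f, 0.0f };

    /* Each output is a smoothed version of the jittery input: the kind
       of cleanup a motor-control loop does before acting on a sensor. */
    for (int i = 0; i < 6; i++)
        printf("in=%.2f  out=%.2f\n", noisy[i], fir_step(history, noisy[i]));
    return 0;
}
```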
In the drone example we went through, then, the flight controller could replace two chips with one. And indeed, the Cortex-M is incredibly popular in drones. DJI uses the STM32F4 Series in its flight controllers, an STMicro chip family built on the Cortex-M4. Something like 70% of the drone market uses STM32 in flight controllers, a consequence of STMicro’s early adoption of the Cortex-M series. If you want more detail on the why, this Reddit thread goes deep.
And so the takeaway is this: ARM-based chips have largely replaced separate MCUs and DSPs and have comfortably dominated the microcontroller market. The STM32 instantiations specifically are everywhere, with low-end versions costing $0.50 and high-end versions running $30. Drones use them, as do 3D printers, industrial motors, ventilators, and cars (for their secondary, non-safety-critical controllers).
Safety-critical systems use Cortex-Ms or Cortex-Rs in lockstep: two identical CPUs run the same code, compare notes, and go into safety mode if there’s a mismatch. Higher-end needs, like ADAS (Advanced Driver Assistance Systems) and FSD, use higher-end Cortex-A-based chips with a combination of GPUs, NPUs, other MCUs for actuation, and a Cortex-R (or competitor) for safety monitoring.
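Conceptually, and only conceptually (real lockstep comparison happens in silicon, cycle by cycle, not in application code), the pattern looks something like this sketch:

```c
#include <stdio.h>
#include <stdlib.h>

/* Conceptual sketch only: in real safety MCUs, lockstep comparison
   happens in hardware, every clock cycle, between two physical cores.
   Modeling it as two function calls just illustrates the pattern. */

static int compute_torque_command(int sensor_reading)
{
    return sensor_reading * 3 + 7;  /* a stand-in control law */
}

static void enter_safe_state(void)
{
    /* A real system would de-energize actuators, not just exit. */
    fprintf(stderr, "Lockstep mismatch detected: disabling outputs.\n");
    exit(EXIT_FAILURE);
}

int main(void)
{
    int sensor = 42;

    /* "Two identical CPUs run the same code..." */
    int result_core_a = compute_torque_command(sensor);
    int result_core_b = compute_torque_command(sensor);

    /* "...compare notes, and go into safety mode if there's a mismatch." */
    if (result_core_a != result_core_b)
        enter_safe_state();

    printf("Cores agree; torque command = %d\n", result_core_a);
    return 0;
}
```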
A modern EV has north of 100 MCUs managing the battery management system (BMS), motor control, on-board charger, body control (i.e. doors), infotainment systems, connectivity, climate control, safety, and suspension. (Tesla actually has far fewer MCUs - 50-70 - because it rewrote its firmware and designed its own MCUs during the COVID chip shortage, which it was able to do thanks to its vertical integration.)
Simply put, ARM’s Cortex bet has paid off handsomely.
In 2000, ARM's partners shipped 367 million chips. By 2010: 6 billion. By 2020: 25 billion.
In 2016, Masa Son and SoftBank bought ARM for $32 billion. In 2020, NVIDIA tried to buy it for $40 billion. In September 2023, ARM went public at a $54.5 billion market cap. Today, it’s valued at $149 billion; Intel, at a measly $103 billion. Big niche, huh?
Coincidentally, just last week, SoftBank announced a $2 billion investment in the “beleaguered chip giant” while the Trump administration considers buying a 10% stake in the same.
To date, over 280 billion ARM-based chips have been shipped cumulatively. They're in 95% of smartphones, 90% of hard drives (remember those controller chips?), 65% of cameras, and increasingly, in laptops and servers.
Now, however, ARM’s dominance is being tested, and after twenty years, the company is deprecating the Cortex brand in favor of its new integrated Corstone sub-systems. It wants to move up the stack to sell pre-integrated CPUs + GPUs + NPUs + interconnect. There are a number of reasons for this, from the mundane (like naming conventions getting confusing) to the strategic (ARM is starting to design custom chips for customers, like Meta, and pissing off other customers, so why not just design standardized chips for everyone).
But the biggest risk to ARM, and RISC, is RISC-V. The free, open source instruction set is also China’s toehold into the one layer of the Electric Stack that it hasn’t mastered.
The RISC-V Risk
RISC-V is to ARM what Linux was to Unix: a free, open alternative that starts at the low end and gradually moves up.
Designed by Krste Asanović and David Patterson (the RISC creator whose papers we linked to above) to have something simple with which to teach their UC Berkeley students, RISC-V is:
Completely free: No licensing fees, ever
Simple: Base ISA has only 47 instructions
Extensible: Anyone can add custom instructions as needed
Frozen: Base ISA will never change (unlike ARM's constant evolution)
Simpler, cleaner, and free, with the library benefits of open source eating away at ARM’s software lock-in, RISC-V has been embraced by a wide range of chipmakers, including ARM’s customers and China.
Google’s Android now supports RISC-V, and the company is making custom AI accelerators with RISC-V (although Linux creator Linus Torvalds called Google’s RISC-V code “garbage” and said “it makes the world actively a worse place to live.”) After legal battles with ARM, Qualcomm is investing heavily in RISC-V, especially for its Wear platform. In 2023, a group of ARM customers - Infineon, NXP Semiconductors, Nordic Semiconductor, Qualcomm, Bosch, and STMicro - formed the automotive RISC-V alliance, Quintauris, to accelerate RISC-V adoption and adopt standards to avoid fragmentation.
For China, RISC-V is an opportunity to gain semiconductor independence.
After the 2018–2020 U.S. sanctions on ZTE and Huawei, Beijing recognized that reliance on Western IP (like ARM and x86) was a national security vulnerability. Even though ARM is UK-based, its IP contains U.S. technology, which means it falls under U.S. export controls. When Huawei was cut off from ARM support in 2019, it proved how fragile China’s position was.
RISC-V is “sanctions-proof.” It’s open source, royalty-free, and no single Western company can cut China off. And as a result, both Chinese companies and the CCP have embraced it.
Already, China accounts for more than 50% of global RISC-V shipments, mainly for MCUs and low-end embedded products (comparable to the Cortex-M line), which are exactly the chips China needs to further control the Electric Stack.
Huawei pivoted to RISC-V after losing ARM access: its Little HiSilicon has launched RISC-V chips “from MCUs to AI edge SoCs.”
Alibaba’s T-HEAD released RISC-V cores (Xuantie E902/E906/E907) starting in 2019, along with toolchains and dev kits. By 2023, T-HEAD claimed 2.5B cumulative shipments. Chester Lam points out that its Xuantie C910, designed for high performance applications that would have been served by Cortex-A (AI accelerators, edge servers, industrial control, and ADAS), has an unbalanced design, weak memory subsystem, and performance bottlenecks that overshadow its strengths, limit its usefulness for a number of applications, and make it hard to realize the potential of its advanced features.
Of course, as we’ve learned, dismissing China because of early stumbles is foolish. The country is all-in on RISC-V, MCUs don’t require cutting-edge 3nm fabs, and China has both the electronics ecosystem to rapidly iterate and the electronics demand to justify large investments.
When Dr. Teresa Cervero attended the 5th RISC-V Summit in Shanghai, she “witnessed firsthand the sheer scale and unwavering resolve with which China is strategically investing in and developing its domestic RISC-V ecosystem.”
She identified a “China Playbook”: “The operational philosophy I observed champions rapid innovation and a fast ‘time to market’ strategy, which involves creating minimal viable products and testing them in real-world environments to gather genuine customer feedback.”
“It will be very interesting to see if this agile approach, which has proven successful in consumer electronics,” she wondered, “will be extended to high-performance computing (HPC) systems.”
I think the assumption, based on the history of the other Electric Stack layers, has to be that it will work, and I think that’s OK.
Unlike the other layers, embedded compute doesn’t represent a chokepoint for the West. If China gets good at MCUs, and continues to get better at more advanced AI chips, it doesn’t threaten America’s ability to access the chips it needs.
But as we turn away from the history and present and towards the future, China’s reaction to restrictions is instructive: when America restricted access to chips, it went all-in on building them and has gotten “good enough” fast.
That suggests that restricting Chinese access - completely impractical in certain categories already, given China’s skill and access to resources - is unlikely to succeed.
And it suggests a path forward for America: if you can only build modern technology at the pleasure of a country that might cut you off, you need to get serious about building your own capabilities, even if it’s expensive and slow to start.
Lessons from the History of the Electric Stack
If you’ve made it this far, congrats: you now know more about the Electric Stack than 99% of the world. So what have we learned?
There are eight key lessons.
Lesson 1: the fact that the world has reduced the cost of the components that turn electricity into motion (and all sorts of other useful work) by 99% in three and a half decades is one of the defining miracles of our time.
At no point in our story was it obvious that these curves would continue. Dead ends were hit and routed around, again and again, across the world.
While it’s unclear exactly how the Electric Slide will continue its descent, what should be abundantly clear is that it will. Products that are impossible to build today will become possible. Products that are expensive to build today will become cheap.
Lesson 2: as these products become possible and cheap, demand for them will continue to drive new scientific, engineering, and manufacturing breakthroughs that continue to drive them further down the curve.
Alpha products - camcorders, calculators, Speak & Spells, VFDs, and hard-disk drives - ignited demand that precipitated further improvements. Interestingly, aside from the Speak & Spell, each of these was a product that existed before the invention of the new technology, but with flaws (even if unknown at the time) that limited its reach, flaws the new technology solved.
Creating demand for electric products - through a combination of making products people want, occasionally subsidizing them, and government programs that purchase advanced capabilities even before they’re economical for consumers and businesses - is key to owning the Electric Stack.
Lesson 3: there’s typically a decade or two gap between scientific breakthrough and scale.
Whittingham invented the lithium-ion battery in the 1970s; Sony commercialized it in 1991. Goodenough published on LFP in 1997; BYD released the Blade in 2020. Atalla and Kahng demonstrated MOSFETs in 1959; Intel’s 4004 didn't arrive until 1971. Baliga demonstrated SiC's 1000-2000x superiority over silicon in the early 1990s; Tesla didn't put them in the Model 3 until 2017. Baliga invented IGBTs at GE in 1980-1982; Yaskawa and Fuji didn't commercialize VFDs at scale until 1988-1989.
Science isn’t enough. For the research to lead to impact, manufacturing it into products is necessary.
Lesson 4: talent is destiny, and that talent can come from anywhere.
Throughout our story, we see the same names pop up over and over. Joseph Henry. B. Jayant Baliga. John B. Goodenough. Mohamed M. Atalla. Some of these people are home-grown talent, and some came to America from abroad. It’s important to remember that while the curves look smooth and inevitable in retrospect, they have relied to a large degree on incredibly talented individuals who worked the same problem for decades.
Lesson 5: American innovation in the Electric Stack consistently came through partnerships with other countries.
The lithium-ion battery was the result of research done in America, then the UK, then Japan, and its scale, low-cost, and energy density came largely from Japanese and Chinese manufacturing. Tesla partners with a Japanese company, Panasonic, on its Gigafactories. The microcontroller, designed in America, first found a home in Japanese calculators, and the profusion of modern microcontrollers is thanks to Japanese manufacturing, UK chip design, Taiwanese foundries, and now, American open source software. An American and a Japanese researcher simultaneously discovered the neodymium magnet, and both of their methods were necessary to make 3.5” hard-disk drives spin. And while IGBTs and MOSFETs were invented here, it was Japanese companies who first commercialized VFDs at scale, Chinese companies that made solar PV inverters cheap, and an American EV company that realized SiC MOSFET inverters could drive down system costs while driving up performance.
Any plan to reestablish American Electric Stack capabilities would do well to keep this international partnership in mind.
Lesson 6: what looks smart in the short-term ends up looking dumb in the long-term.
This lesson comes in two flavors: the Innovator’s Dilemma and outsourcing and financialization.
A new technology’s new capabilities are likely to be underestimated by the market, and even by the companies that invent them. Seagate didn’t want to make smaller drives. ARM’s chips seemed woefully underpowered. Bell Labs preferred the BJTs it had bet on to better MOSFETs.
As Fuji wrote in that 1985 report, “It is not that the inverters of present phase have attained 100% satisfaction of all, but we can say that in the context of cost and performance, they have reached now to the time for a wide diffusion acquiring sufficient esteem for their qualities.”
And while outsourcing capabilities may seem prudent at the level of the individual company in the short-term, the bill for the economy as a whole eventually comes due.
GM sold its neodymium magnets division, Magnequench, to Deng Xiaoping’s sons-in-law for $70 million. GE’s Jack Welch decided he didn’t want to use Baliga’s IGBTs and sold off the company’s entire semiconductor division. Dynex sold its own IGBT know-how to CRRC. And A123 sold itself in bankruptcy to Wanxiang.
Relatedly, companies from Tesla to GM to Apple have set up shop in China to access both its large consumer market and manufacturing skill, either directly transferring technology to Chinese companies, or catfishing them into developing their own.
Lesson 7: China has followed a specific playbook for a long time
It is easy to write off China’s strength in the Electric Stack to some combination of IP theft and cheap labor, but that would miss the patient strategy at play. The government has identified important technologies and incentivized demand for the products that use them. Companies have identified transitional products, like buses before cars, to perfect them in production. The government, typically a local government, often serves as the first customer. Then they iterate rapidly within vertically or horizontally integrated companies, and within integrated ecosystems.
The real story is messier than that, but the key takeaway is this: China has set itself up to rapidly iterate at the system level in key areas until it has products that are good enough to compete, and sometimes better than that.
Of course, China certainly has its own issues.
One of the main reasons it’s become so dominant in refining is that its environmental standards are so lax. It disappears entrepreneurs when they get too successful. Conrad told me a story of a Chinese minibus entrepreneur who was one of the richest men in China until he argued with a local governor over the location of a new factory, at which point the government simply took his company; it is now the company with which BMW partners in China (and that governor, a rival of Xi Jinping, is now in jail).
And then there’s involution. Rob L’Heureux recently wrote a fascinating essay, Surviving Involution, on the causes and effects. Essentially, Chinese Electric Stack companies are competing so intensely with each other that they're driving margins to zero through overproduction and price wars, with battery manufacturers now selling at near-loss despite technological leadership. Hypercompetition drives rapid innovation but may prove unsustainable as companies exhaust themselves fighting for market share rather than capturing profits or investing in next-generation R&D.
Noah Smith put it more bluntly in China’s industrial policy has an unprofitability problem: “China is paying its national champions to fight each other to the death.”
And while China’s manufacturing innovation is impressive, looking back at the story of each layer of the Electric Stack, it’s hard to miss this fact: none of the initial breakthroughs were Made in China.
All of which is to say: China’s skill should not be underestimated, but the inevitability of its continued dominance should not be overestimated, either.
Lesson 8: integration drives innovation.
This is true at the company level, and at the national level.
At the company level, vertical integration creates tight feedback loops.
Mitsubishi and Fuji perfected and scaled IGBTs because their engineers were putting them into products right down the hall from the scientists who were researching how to improve them. Tesla made the impossible Model 3 possible by reengineering the entire vehicle to take advantage of SiC MOSFETs’ capabilities. BYD bet on LFP, and then willed the bet into the money, because it made cars, not just batteries; the Blade battery it made for itself became so successful that Tesla has started using it.
But CATL demonstrates that horizontal specialization can work when embedded in a dense, collaborative ecosystem. CATL doesn't need to make cars because Chinese automakers are literally down the street, iterating together. They can serve multiple customers (including competitors like Tesla and BYD) while capturing learning from all of them.
The contrast with A123 is telling: A123 was a specialized battery company in an ecosystem that didn't have the rest of the stack. CATL is a specialized battery company in Shenzhen/Ningde, surrounded by motor manufacturers, inverter makers, and automakers.
This has implications for America's path forward. We might not need every company to be vertically integrated like Tesla if we can rebuild dense industrial clusters where specialized companies rapidly iterate together.
Silicon Valley worked this way for chips. Detroit once worked this way for cars. Shenzhen works this way. At the component scale, North Carolina seems to work this way for power electronics; perhaps it would be a good place to cluster.
Clustering, though, is just one small piece of a very big and jagged puzzle. The truth is that while there are clear lessons that emerge from the History of the Electric Stack, there is no one clear conclusion. As Conrad ended Forsaking Industrialism:
What’s the best “industrial policy”?
Do all the things that matter and none of the things that don’t.
Good luck.
And look, I’m just a newsletter writer who learned most of this over the past month, so I’m certainly not going to figure out what all and none of those things are on my own.
I wrote this piece so that a lot of smart people will help figure it out.
But it would be deeply unsatisfying to end this without some thoughts on where we go from here, wouldn’t it?
Rebuilding the Electric Stack
We began this essay by trying to understand the bet that China is making on open-weight AI models.
Our thesis was that if China owns energy and action, they should want to commoditize intelligence.
But after 40,000 words of history, the reality is even more damning: China has become the best in the world at taking intelligent ideas from anywhere—Oxford, Princeton, Tokyo, North Carolina, Bell Labs, even Pittsburgh—and turning them into scaled physical realities. They've built the manufacturing capacity and capability to absorb breakthrough research and make products with it.
China’s bet is that in a world where intelligence is free, whoever can turn ideas into things fastest and cheapest wins. And currently, they’re the best in the world at turning ideas into things.
America’s current bet, on the other hand, is that winning AI is the most important thing.
Make really smart computers, something something, win.
It assumes that there will be a significant and meaningful difference between the very best model and all of the rest, an assumption, thus far, not borne out by reality.
In January, just weeks after OpenAI released its o1 reasoning model, the Chinese company DeepSeek released R1, a more cheaply trained, open-weight model that matched o1 and other frontier models on benchmarks.
OpenAI’s recent GPT-5 release has underwhelmed. It is a better product, and reasons longer and harder, but seems no closer to omniscience.
When I recently asked Twitter why LLMs weren’t going to cap out at the level of the best human-generated data, the best answer I got was that near-human-level intelligence that can think for a long time, very quickly, across enormous amounts of data will be indistinguishable from superintelligence. That may be true, but it seems a less moated path than pre-training, and one that China, master of scale, is at least as well-suited as the US to win. It suggests that the handful of companies with the resources to compete will be able to do so as long as they are willing to keep committing those resources: no fast take-off, and no fast pull-away.
It seems to me that AI will not be winner-take-all-of-the-future.
But let’s say, for argument’s sake, that it is true, that America wins AI by a fairly wide margin.
Here, there is a second flaw: intelligence without action is insufficient.
We have learned this lesson throughout the history of the Electric Stack.
After over a decade of research on lithium-ion batteries, Sony increased their energy density by 50% by actually manufacturing them. The first to commercialize LFP, with one of its inventors as CEO, A123 failed in part because it was just a battery materials company; it didn’t control battery pack manufacturing, and couldn’t optimize the entire system.
Thinkers from Alexander Hamilton to Paul Kennedy have argued that manufacturing capacity is the best predictor of economic and military strength. As Hamilton wrote in his Report on the Subject of Manufactures in 1791:
Not only the wealth; but the independence and security of a Country, appear to be materially connected with the prosperity of manufactures. Every nation, with a view to those great objects, ought to endeavor to possess within itself all the essentials of national supply.
They have yet to be proven wrong, and any argument that that will change now faces a steep burden of proof.
“Winning” by replacing the laptop class with bots while ignoring physical capacity feels almost like a technological re-run of the hyperfinancialization that got us into this mess in the first place. Better margins, weakened capabilities, sneaky fragility.
But, to keep the argument going, let’s assume that we can develop an AI that is smart enough to figure out how to manufacture things, to marshal energy and execute action.
Finally, the American bet that intelligence will solve energy and action is flawed for the same reason that China’s bet is strong.
By building such a model, we would be commoditizing our differentiation and putting it up for sale.
So the first thing we need to do if we want to rebuild the Electric Stack is to realize that we need to rebuild the Electric Stack.
Or, we need to understand that AI won’t solve this for us, that in order to maximize the potential of artificial intelligence, we need to pair it with electric action.
That is why I wanted to write this piece. I hope it contributes to that understanding.
None of this, however, is to say that AI isn’t valuable. In fact, the opportunities created by the potential to infuse machines with intelligence, along with the dramatic improvements across each curve in the Electric Stack, are what make me hopeful that we will be able to figure this out.
In our conversation, Conrad pointed out that America is the best in the world “at taking 10 bucks and turning it into 100 bucks.”
That investing in the Electric Stack was not the highest ROI opportunity is one of the reasons, he believes, that we didn’t build up our capacity. Frankly, for many years, investing in the Electric Stack wasn’t the best way to turn $10 into $100.
Today, it is.
For the same reasons that I believe we’re entering a new Techno-Economic Paradigm and that Vertical Integrators are best positioned to capture value, it now makes both strategic and economic sense for companies to invest in the Electric Stack, and for the government to support their investment.
The curves might be hitting just the right spot where the free market machine can turn its attention back to the physical world. This is, no doubt, one of the reasons that so many startups are starting to build Vertical Integrators.
This is happening already. Many of the companies that I’ve written about and invested in over the past couple of years are building better products on the Electric Stack, even if they’re currently forced to use Chinese components. In doing so, they’re creating the demand that has always served as a signal to produce supply.
Demand is the key: we need to turn America’s insatiable demand cannons onto electric products. This is why I wrote that “If I were President, I would make sure that every American man, woman, and child has an EV, heat pump, drone, induction stove, and robot.” Throw in flying cars and portable MRIs, while we’re at it.
This isn’t science fiction. It is simply what becomes possible, soon, if you follow the curves. And it’s how you create the incentive to bring enough of the Electric Stack back to America and its allies’ shores that we control our own destiny.
Sam D’Amico, who I worked with on this piece, runs Impulse Labs. They make induction stoves with batteries, magnets, power electronics, and embedded compute, but because the Stack is the core of any electric product, they’re also partnering with other home appliance OEMs to make their products electric. The reason, he says, is that:
Ironically, these higher-friction, established markets are the ones that can drive insane volumes for the Electric Stack. They predate software: iteration cycles are long, and regulatory requirements are often frustratingly high (for instance, Impulse took 20 months to certify its power electronics to the UL858 standard for stoves). If fundamentally better technology enables fundamentally better products, there’s an entirely open playing field for American companies, largely shielded from China’s production superiority.
This is a lesson we’ve learned, too: alpha products are often fundamentally better versions of established products, enabled by fundamentally better technology. A lighter camcorder with better battery life. A smaller hard-disk drive with more storage.
And it is a moment when smart, focused government investment in rebuilding capabilities (not necessarily to be the world’s #1 in each component, but to build enough capacity to innovate across the Stack) can have a multiplicative impact on the American economy.
Demand drives supply, innovation, and scale. The more Impulses and Base Power Companies and Ziplines and Arc Boats, not to mention defense companies, demanding high-quality, affordable, supply-chain-resilient Electric Stack components, the more the numbers start to make sense for the would-be American horizontal Electric Stack Suppliers.
Those are coming, too.
Vulcan Elements recently announced a $65 million Series A to build fully decoupled neodymium magnet manufacturing in Durham, North Carolina, less than 30 minutes away from NC State, where Baliga is still the Progress Energy Distinguished University Emeritus Professor of Electrical and Computer Engineering. They have a roadmap to producing cheaper neo magnets than China with better chemistry, modern manufacturing, and yes, a little AI.
Companies like Ouros are making higher energy-density batteries, at first to serve underserved markets for which energy density is key. Larger auto OEMs like Ford and GM are working to manufacture batteries in the US and friendly countries. These efforts will be challenging, as Northvolt’s recent bankruptcy shows, but as these processes become more automatable, there’s no physical reason we can’t make batteries here.
I even just invested in a stealth company making electric motors.
These horizontal Electric Stack suppliers make more economic sense now than they have in the past because their components have gotten cheap and performant enough that Vertical Integrators are building electric products with them.
Businesses big and small prefer locally-made magnets, batteries, and chips more than ever before after COVID demonstrated the brittleness of global supply chains. They are economic animals, and the numbers need to work, but the numbers are finally beginning to take into account some of the hidden externalities that were absent from the calculations before.
Rebuilding the Electric Stack will largely be a free market endeavor, like the rapid buildout of AI has been, but the government can play a supporting role here, too, as it has with AI.
Most straightforwardly, it can be a buyer, as it always has been. The 2021 and 2023 National Defense Authorization Acts, as one example, ban the use of Chinese rare earth magnets in new military equipment beginning in 2027. The government is willing to pay a premium in the short term in exchange for security, resiliency, and new capabilities, and in so doing, it can provide the initial demand that pushes American suppliers down the curve.
It can also use tax credits and subsidies to incentivize demand for Made in America electric products built on an American Electric Stack.
In order for that to happen, though, it needs to stop thinking of electric products as lib-coded and start thinking about them for what they are: better performing products that will continue to get better, and the key to bringing back manufacturing and the good jobs that come with it.
Electric products perform better today, and unlike combustion products, they will continue to improve. Hanging on to the gas-powered car and stove as a matter of national pride is like Napoleon Dynamite’s Uncle Rico bragging that he used to be able to throw a football over that mountain.
Of course, the government can also make it easier to build. It should not be the case that Elon Musk can build a Gigafactory even faster than he believed possible in China, but not here.
We should also reform the National Environmental Policy Act, so that it becomes easier (or possible at all) for American companies to build the products that will help the environment. I’m all for Mike Solana’s idea to create Special Economic Zones where we can process rare earth metals and manufacture electric components. Rare earths aren’t rare; refining capacity is.
Building Electric Stack manufacturing capacity is good because we need to have the capacity to make things, but it’s also good because it allows us to do what we do best, better.
We need to continue to bet on breakthroughs: new chemistries, materials, and technologies. It’s the thing we’ve proven best at, again and again. GaN-on-GaN FETs. Solid-state batteries. Switched reluctance motors. Things that don’t look like they make sense today eventually will.
Rebuilding the ability to make the current iteration of the Electric Stack will both give us the hands-on experience to iterate towards those breakthroughs faster, and ensure that we have the installed capacity to manufacture the breakthroughs here, when they inevitably arrive.
If history is a guide, we should team up with our friends on both research and manufacturing. I will leave it to the experts to work out the details. Korea is good at batteries. Japan is leading in GaN-on-GaN research. Germany makes many of the industrial machines we will need. Mexico is right next door with a labor force ready to build.
Finally, America is the best in the world at building software. Our labs’ frontier models top the leaderboards.
A world built on the Electric Stack is a software-defined world. Physical products can get faster and smarter with an over-the-air software update. Factories can run more efficiently with software that speaks the language of their machines. While the story of China’s recent Electric Stack dominance is not simply a labor story, it is a fact that we will have more success reshoring products whose manufacturing is amenable to automation.
Battery cell manufacturing is becoming increasingly automated, as is sintering. Semiconductors, packaging, and PCB assembly are not labor intensive.
Like I said, this is incredibly complex and I don’t have the answers. The people building these things will have a better idea of what makes sense and what doesn’t than I do. That’s the beauty of free markets.
I do know that if we want to win the future, we’re not going to do it with AI alone.
After a half-century of digital dominance, the cutting-edge is once again physical. In a physical world, you have to make things to understand how to make things better.
America doesn’t need to “beat China.” We don’t need to make all of the world’s batteries and magnets. We just need to make enough of them that we can build the products that use them, and that we understand, down to the bare metal, how to keep making those products better.
The path forward is not to beat China at its own game of state-directed industrial policy. It is to play our own. America's unmatched superpower is its combination of breakthrough invention and insatiable consumer demand. For decades, that engine was pointed at software and bits.
The curves now tell a different story.
The two most important physics discoveries of the nineteenth and early twentieth centuries took the same form: these two things you thought were different are actually the same.
Mass and energy. Electricity and magnetism. On those, America built the modern world.
The most important economic opportunity of the 21st century follows the same logic: these two things you thought were different are actually the same.
Bits and atoms. Software and hardware. Intelligence and action.
If the past half-century viewed these two things as largely separate endeavors, the Electric Era will prove that each is more powerful with the other.
Once again, the world will be rebuilt on this insight.
Everything that can go electric economically will, and more can go electric economically the longer the curves run. The cost physics are on electric’s side. Performance is on electric’s side. The environment is on electric’s side.
That the physical world will be rebuilt is an inevitability written in the curves.
The quadrillion dollar question is: who will rebuild it?
Huge thanks to Sam D’Amico for working on this monster with me, and to Conrad Bastable for many of the original ideas! Everyone go buy an Impulse stove.
That’s all for today. We’ll be back in your inbox on Friday with a Weekly Dose.
Thanks for reading,
Packy
You have no idea how hard it was to find this data. It doesn’t exist in one place. It’s everywhere, in old catalogs, YouTube teardowns, government reports. The data isn’t clean: old hard-disk drives have higher $/kW in part because they needed so few kilowatts; the denominator was tiny. Comparing that to a modern EV motor feels a little absurd, but it’s also how the progress actually happened, so we’re sticking with it. We used Claude and ChatGPT to help find all of the data, and you can find a list of sources here. This graph does its best to normalize across and account for a bunch of different industries that use electromagnetic motors, although in some cases, certain use cases achieved better $/kW performance. The Tesla Model 3, for example, got below $5/kW. It is directionally correct, and while specific years may be off, the takeaway is the same: a 98.8% reduction over the past 35 years.
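If you want to see that denominator effect concretely, here’s a toy calculation. The dollar and wattage figures are made up for illustration (only the Model 3’s sub-$5/kW figure above comes from our research); the logic is the point:

```python
# Toy illustration of why low-power devices inflate $/kW.
# All numbers below are hypothetical, not from our dataset.

def cost_per_kw(motor_cost_usd: float, rated_power_kw: float) -> float:
    """Dollars per kilowatt of rated motor output."""
    return motor_cost_usd / rated_power_kw

# A hard-disk spindle motor: cheap in absolute terms, but only ~10W out.
print(cost_per_kw(5.00, 0.010))     # 500.0 -> $500/kW

# An EV traction motor: far more expensive, but delivers 200kW.
print(cost_per_kw(1000.00, 200.0))  # 5.0 -> $5/kW
```

The cheap motor looks 100x worse on $/kW purely because its denominator is tiny.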
Note that these are cell costs and our Li-Ion Cost Curve shows pack costs, which are higher.
For those paying attention, this is less than half the cost in our chart because the 2024 BNEF number we use for the chart is global (not just China), includes other battery use cases (not just EVs), and is at the pack level (the cell makes up ~72% of the cost of the pack).
“Outrunner” refers to the fact that the neo magnet rotor is on the outside and spins around a fixed electromagnetic stator in the center. Because the spinning outer can has a larger radius, outrunners give you more torque for the same magnetic force (torque = force x radius). Because the outer can moves air over the stator, it keeps the motor cool under high load. And because of low rotational inertia and high torque from the NdFeB magnets, drones can accelerate and decelerate rapidly. No gearbox required.
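Here’s a quick sketch of that torque math. The force and radii are illustrative, not measurements of any particular motor:

```python
# Torque = force x radius: the same tangential magnetic force produces
# more torque when it acts at the larger radius of an outrunner's can.
# All numbers are hypothetical.

def torque_nm(tangential_force_n: float, rotor_radius_m: float) -> float:
    return tangential_force_n * rotor_radius_m

force = 20.0  # newtons of tangential force at the magnets

inrunner  = torque_nm(force, 0.010)  # 10mm rotor radius -> 0.20 N*m
outrunner = torque_nm(force, 0.016)  # 16mm outer can    -> 0.32 N*m

print(f"{outrunner / inrunner:.1f}x the torque for the same force")  # 1.6x
```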
And thanks to the Electric Stack, quadcopter drones replace the complex, mechanical swashplates of helicopters with four fixed-pitch rotors, relying on MEMS (Micro-Electro-Mechanical Systems) gyroscopes and microcontrollers to do in software what used to require delicate and fragile hardware. Software (AI or otherwise) is critical to modern electric products, but software can only act on these products because they’re built on the Electric Stack.
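For a feel of what “doing it in software” means, here is a deliberately stripped-down sketch of the core loop: a proportional controller reads the gyro and mixes corrections into four fixed-pitch motors. Real flight controllers like Betaflight or PX4 add integral and derivative terms, filtering, and attitude estimation; the gains, sign conventions, and structure here are illustrative only.

```python
# Minimal quadcopter rate loop: what replaces the swashplate.
# Assumed conventions: positive roll = roll right, positive pitch = nose up;
# front-right/rear-left props spin one way, front-left/rear-right the other.

def mix(throttle, roll_cmd, pitch_cmd, yaw_cmd, gyro, k=0.01):
    """Return [FR, FL, RR, RL] motor commands for an X-frame quad."""
    # Proportional correction: commanded body rate minus measured gyro rate.
    roll  = k * (roll_cmd  - gyro["roll"])
    pitch = k * (pitch_cmd - gyro["pitch"])
    yaw   = k * (yaw_cmd   - gyro["yaw"])

    front_right = throttle - roll + pitch - yaw
    front_left  = throttle + roll + pitch + yaw
    rear_right  = throttle - roll - pitch + yaw
    rear_left   = throttle + roll - pitch - yaw
    return [front_right, front_left, rear_right, rear_left]

# Hovering, but the gyro reports an unwanted 5 deg/s roll to the right:
# the right-side motors speed up and the left side slows to level out.
print(mix(0.5, 0, 0, 0, gyro={"roll": 5.0, "pitch": 0.0, "yaw": 0.0}))
# [0.55, 0.45, 0.55, 0.45]
```

A few dozen lines of control math, run a few thousand times per second on a microcontroller, do what a helicopter needs precision-machined linkages to do.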
Claude tells me he was terrible at publishing, compared to Faraday, who published everything immediately, and that in the 1830s, American science didn’t have the prestige of European science, and that he was too nice (he didn’t fight for his money when Morse made millions on the telegraph using his relay), and that he decided to become the first Secretary of the Smithsonian Institution instead of pursuing personal glory.
This same region creates massive resistance when conducting. Early power MOSFETs rated at 100V have 10-100x higher conduction losses than comparable BJTs. Engineers keep tweaking, but they keep facing the same problem: the power MOSFETs lose too much energy as heat. Conducting 10A through an early power MOSFET with, say, 10Ω of on-resistance means a 100V drop and 1000W of heat; a 10A BJT drops only 1-2V (10-20W of loss).
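To put numbers on it, here’s the arithmetic behind those figures. The 10Ω on-resistance is an assumption consistent with the 100V/1000W example above; real early parts varied:

```python
# Conduction losses: MOSFETs dissipate I^2 * R through on-resistance;
# BJTs dissipate I * Vce through a roughly fixed saturation voltage.
# The 10-ohm figure is an assumption that matches the example above.

def mosfet_loss_w(current_a: float, r_ds_on_ohm: float) -> float:
    return current_a ** 2 * r_ds_on_ohm

def bjt_loss_w(current_a: float, vce_sat_v: float) -> float:
    return current_a * vce_sat_v

print(mosfet_loss_w(10, 10.0))  # 1000.0 -> 1000W of heat at 10A
print(bjt_loss_w(10, 1.5))      # 15.0   -> 15W at the same 10A
```

Because MOSFET loss scales with the square of current, shrinking on-resistance was the whole game; until it shrank, BJTs won on conduction.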
Here’s a little computer history sliding doors moment for you: “Ironically, Busicom — who owned the exclusive rights to the world-changing device — was unhappy with it,” the Intel website practically cackles. “The bottom had fallen out of the desktop calculator market during the 4004’s development, so Busicom wanted to renegotiate the price for the product. In May 1971, at the urging of the 4004’s design team, Intel CEO Robert Noyce repurchased rights to the chip for everything but calculators in exchange for returning Busicom’s $60,000 investment in its development.”
“Busicom wound up going bankrupt in 1974,” the Intel website sneers. “Intel went on to become a global powerhouse with nearly $60 billion in annual revenue in 2017, all of which stemmed to some extent from the microprocessor industry the company established and the modern age that came with it.” For more on what became of Intel and what might become of it, read Contrary Research’s Building an American TSMC.