Why Design Just Became AI's Most Expensive Ingredient
December 16, 2025 · Anthony Franco

Earlier this year, OpenAI spent $6.5 billion to acquire a design company.
Not a model company. Not a chip company. Not a data company. A design company. Jony Ive's io Products, the hardware studio founded by the man who designed the iPhone. OpenAI made him their creative head and told the world they were building "the coolest piece of technology the world will have ever seen."
This is the most important signal in AI right now. And most people missed it because they were arguing about benchmarks.
The Graveyard Next Door
The same year OpenAI bet $6.5 billion on design, the AI hardware graveyard kept filling up.
The Humane AI Pin ($699) was a disaster. The Rabbit R1 ($199) was a paperweight. Both had incredible technology. Both had massive hype. And both failed for the same reason: they were technology in search of a problem.
OpenAI looked at that graveyard, looked at their own chatbot interface, and wrote the largest check in the company's history. Not for better AI. For better design.
That should tell you something.
The Interface Gap
I spent a decade building one of the country's first major UX firms (EffectiveUI). We worked with 40% of the Fortune 100 during the mobile revolution. I saw the same pattern then that I see now.
Engineers build what is possible. Designers build what is necessary.
Right now, AI is in its "Command Line" era. We're typing text into a blank box and hoping for the best. It's powerful, yes. But it's high-friction. It demands that the user be an expert in the machine's language (prompt engineering) rather than the machine understanding the user's intent.
Jony Ive didn't make the iPhone successful because it had a better processor. He made it successful because he hid the processor behind a piece of glass that my grandmother could use.
He made the technology invisible.
Design is Not Decoration
Most corporate leaders think design is "making it pretty." They think it's the coat of paint you apply at the end.
This is wrong. Design is the architecture of the interaction.
In the context of AI, design is the answer to three questions:
- Prediction: How does the system know what I want before I ask?
- Correction: How do I tell it when it's wrong without starting over?
- Trust: How does it prove it did the work?
The Rabbit R1 failed because it ignored the Correction loop. When it hallucinated, you were stuck. The Humane Pin failed because it ignored the Trust loop. It spoke answers into the air with no way to verify them.
Look at what we know about the OpenAI device. Codenamed "Sweetpea." A screenless, voice-first wearable. No display. No traditional interface at all. If that product gets even one of those three questions wrong, it's a $6.5 billion Humane Pin.
Ive knows this. It's why the device was just delayed to 2027. Reports cite "privacy, compute, and personality issues." Translated into design language: Trust isn't solved. Prediction isn't fast enough. And the Correction loop for a screenless device is a problem nobody has cracked yet. They're not late because the technology isn't ready. They're late because the design isn't.
Why Observation Comes Before Building
These failures (the Pins, the Rabbits, and likely the first wave of whatever comes next) share a root cause: the teams built from technical capability instead of user experience. They asked "What can our AI do?" instead of "What does a person actually need in this moment?"
This is what AI First Principle #6 gets at: Design systems from lived experience, not distant observation. The people wrestling with system failures are the ones qualified to design system futures. Not the engineers in the lab. Not the executives in the boardroom. The people doing the work.
The Humane team spent years perfecting a wearable projector. If they had spent two weeks watching people use phones in real contexts, they would have noticed something obvious: people don't trust information they can't re-read. Audio-only answers from a chest-mounted device fail the most basic usability test.
Observation first. Technology second. Every time.
The $6.5 Billion Lesson for Your Business
You probably aren't building consumer hardware. But you are building AI workflows for your team or your customers.
And you are probably making the same mistake Rabbit made.
You're focusing 90% of your budget on the "Model": which LLM to use, how to fine-tune it, where to host it. And 10% on the "Interface": how the human actually interacts with it.
Flip that ratio.
The model is a commodity. GPT-4, Claude, Gemini: all brilliant commodities. The difference between a successful implementation and a failed one is design.
- Don't give your employees a chatbot. Give them a button that says "Draft Contract."
- Don't ask them to "prompt." Ask them to fill out a form, then let the AI generate the prompt.
- Don't show them the raw output. Show them a diff they can approve.
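The last two patterns are concrete enough to sketch. Here's a minimal Python illustration of form-to-prompt and diff-approval, assuming a hypothetical contract-drafting workflow; the `ContractForm` fields are invented for the example, and the actual model call is left out so you can plug in whatever client you use.

```python
import difflib
from dataclasses import dataclass

@dataclass
class ContractForm:
    """A structured form the user fills out, instead of a blank prompt box.
    These fields are hypothetical; use whatever your workflow needs."""
    counterparty: str
    term_months: int
    payment_terms: str

def build_prompt(form: ContractForm) -> str:
    """The system generates the prompt from the form.
    The user never writes (or sees) prompt-engineering syntax."""
    return (
        "Draft a services contract.\n"
        f"Counterparty: {form.counterparty}\n"
        f"Term: {form.term_months} months\n"
        f"Payment terms: {form.payment_terms}\n"
    )

def review_as_diff(current: str, proposed: str) -> list[str]:
    """Show the user a diff to approve, not raw model output."""
    return list(difflib.unified_diff(
        current.splitlines(),
        proposed.splitlines(),
        fromfile="current",
        tofile="proposed",
        lineterm="",
    ))
```

The point isn't the twenty lines of code. It's that the interface absorbs the prompt engineering (the form), and the interface carries the trust and correction loops (the diff): the user approves a change they can read, instead of pasting raw output and hoping.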
OpenAI could have spent $6.5 billion on more GPUs. They could have bought another AI lab. Instead, they bought a design studio and made Jony Ive the most expensive hire in tech history. If the company that built the model thinks design is the bottleneck, what does that tell you about your own implementation?
The "Invisible" Standard
The ultimate goal of AI design is to make the AI disappear.
When you use Spotify, you don't say, "Wow, what a great recommendation algorithm." You just say, "I love this song." When you use Uber, you don't say, "The routing optimization is impressive." You just say, "My car is here."
That's what Ive is trying to build. Not a "better AI device." A device where you forget you're using AI at all. He calls it "calm computing." The industry calls it a moonshot. And even with the best designer alive and $6.5 billion, they just pushed the launch back a year because they haven't figured it out yet.
If your users have to "think about the AI," you haven't finished designing it. And if you think that's easy, consider that the people who made the iPhone think it's hard enough to delay.