Steve Jobs died in 2011 — before large language models existed, before the smartphone had fully transformed consumer behavior, and before AI had become every startup's go-to positioning statement. But his philosophy about technology, design, and what separates good products from great ones gives us a fairly clear picture of what he'd say. Most of it is uncomfortable.
He Would Hate 90% of AI Products
Jobs famously said: "Design is not just what it looks like and feels like. Design is how it works." He applied this ruthlessly: removing the physical keyboard from the first iPhone, shipping the 1998 iMac without a floppy drive, insisting the original Macintosh ship without cursor keys to force developers to design proper mouse-driven interfaces.
The overwhelming majority of AI products in 2026 fail this test. They bolt a chatbot onto existing workflows and call it "AI-powered." They expose raw model outputs without designing the experience around what users actually need. They add AI as a feature rather than rethinking the product around what AI makes possible.
Jobs would look at most AI-enhanced SaaS products and say exactly what he said about Microsoft: "They have absolutely no taste." Not because the underlying technology is bad. Because nobody asked the hard question of what a product truly designed around AI capability would look like if built from scratch.
He Would Be Obsessed With the Interaction Model
Jobs spent months on the original iPhone scroll physics. The precise deceleration curve. The rubber-band effect at the end of a list. These details felt like magic because they matched how human hands actually move. That physical intuition built trust — and trust is what makes people adopt new behavior.
AI interfaces in 2026 mostly fail at this. Chat is the default because it's the path of least resistance for engineers. But most people don't think in sentences. They think in outcomes. They have challenges they can't fully articulate. They want guidance, not dialogue.
Jobs would push hard on the fundamental question no AI company is adequately answering: what is the right interaction model for this specific intelligence, for this specific person, in this specific context? The answer is almost certainly not a text box that says "Ask me anything."
He Would Fear the Mediocrity Loop
One of Jobs' consistent arguments was that giving people what they say they want produces mediocre products. Customers couldn't have told him they needed the iPhone; they were satisfied with their Nokia and BlackBerry devices. The insight required seeing what was possible, designing backward from that possibility, and then building the thing that would make people realize what they'd been missing.
The AI industry's training data problem is a version of this. Models trained to predict human preference tend to produce the average of human preference — which is, by definition, average. The outputs are competent, inoffensive, and forgettable. Jobs would recognize this as the mediocrity loop: the technology is sophisticated enough to give people what they ask for, but not audacious enough to give them what they didn't know to ask for.
Breaking this loop requires the same thing it always has: deep conviction about what matters, willingness to make something that seems wrong before it seems right, and enough taste to know the difference.
He Would See the Real Opportunity Clearly
Jobs was direct about what computers were for. He called the computer "a bicycle for our minds": a tool that amplifies human capability without replacing human judgment. He believed the human remained essential: the creativity, the taste, the values, the decision about what to build and why.
The opportunity he would see in AI is exactly this amplification at scale. Not AI as replacement for human thinking, but AI as compression of the distance between a person's best intention and their best execution. You know what the right strategy is. You don't know how to communicate it perfectly. AI closes that gap. You understand the customer's problem. You struggle to articulate the solution at the right level of detail. AI closes that gap too.
The companies that will win with AI are the ones that use it to amplify human capability rather than substitute for it. That was Jobs' argument about personal computers in 1984. It's still the right argument.
What He'd Say About Elon Musk
Jobs and Musk are often compared because both take extreme positions and demand extraordinary output from their teams. But their fundamental orientations are different. Jobs believed technology should serve human experience. Musk believes technology should expand human possibility, often at the expense of current experience.
Jobs would probably have respected the engineering ambition behind Tesla and SpaceX while staying skeptical of the product execution. He wanted to put a dent in the universe, not in the user. He'd likely see xAI's Grok as a product that prioritizes capability demonstration over user experience: technically impressive, designed for engineers rather than for everyday people.
That said, both would agree on one thing: most AI companies are building features, not products. And features don't change the world.
Ask Them Directly
The ideas above are grounded in Jobs' documented philosophy — his speeches, interviews, and the decisions he made at Apple. But the most useful thing is to bring your actual problem to him and see what he'd say about it specifically.
Grand Mentors' Steve Jobs AI is trained on his philosophy, his interviews, and his documented decision-making approach. Ask him about your AI product strategy. Ask him what he'd cut from your feature list. Ask him why your product doesn't feel like it has a soul yet.
Or try Elon Musk for the contrasting perspective — the first principles approach that asks not "what's the best version of what exists" but "what should exist if we were starting from scratch."
The tension between those two views is where the best products come from.