The kind of problem Excel was built for
There’s a Seinfeld episode, “The Bottle Deposit,” where Kramer and Newman realize the deposit refund is 10 cents per bottle in Michigan instead of 5 cents in New York. Naturally, they decide to arbitrage it. The entire episode revolves around whether the idea actually works. They go back and forth on the logistics, the costs, the volume, and whether the effort would ever pay off.
What makes it funny is that it sounds just far enough out there to raise an eyebrow, but not so far that you can dismiss it outright. It sits in that uncomfortable middle ground where it might actually work, which makes you want to model it, even informally, just to see if the intuition holds up.
That is exactly the kind of situation where Excel is at its best.
In that mode, you don’t open a spreadsheet because you already have the answer, but because you’re trying to understand what the answer even is. Laying out assumptions, linking them, and watching how changes ripple through the model isn’t just a step along the way. It is the work.
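To see what that looks like, here is the informal model in miniature. Every number below is a made-up assumption (truckload size, fuel, tolls, labor), and it is sketched in Python only for compactness; the same five inputs in five linked cells is the classic Excel version:

```python
# Back-of-the-envelope model of the bottle-deposit arbitrage.
# Every input is a made-up assumption; the point is that changing
# any one of them ripples through to the answer.

bottles = 60_000        # assumed bottles per truckload
spread = 0.05           # 10-cent MI deposit minus 5-cent NY deposit
fuel_cost = 1_800.00    # assumed round-trip fuel
tolls = 300.00          # assumed tolls
labor = 1_500.00        # assumed loading/unloading labor

revenue = bottles * spread
costs = fuel_cost + tolls + labor

print(f"Revenue: ${revenue:,.2f}")          # $3,000.00
print(f"Costs:   ${costs:,.2f}")            # $3,600.00
print(f"Profit:  ${revenue - costs:,.2f}")  # -$600.00: the overhead eats the spread
```

With these invented numbers the run loses money, which is roughly where Kramer lands in the episode, until Newman’s mail truck makes the transport free. Zero out fuel_cost and tolls and the answer flips. That one-line what-if is the whole point.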
What AI is getting very good at
Now compare that to the kinds of examples we are starting to see with Copilot.
Microsoft recently published a demo showing how Copilot can generate a full tournament bracket directly in Excel. You provide a prompt, and the tool builds the structure, fills in the logic, and presents something that looks complete almost immediately.
At a glance, this is impressive. It is also useful, in a narrow sense. If you need a bracket, and you need it quickly, this is a reasonable way to get one. But it is worth pausing on what kind of problem that actually is.
A bracket is a known structure. It has been built countless times before. There is no ambiguity about what it should look like or how it should function. The task is not to discover anything new, but simply to produce a familiar artifact.
In that context, AI is operating exactly where it is strongest. It is recognizing a pattern and recreating it efficiently.
And to be honest, this is also where Excel has always been somewhat overextended. A bracket is arguably better suited to a small web application or purpose-built tool. The fact that we historically used Excel for these kinds of tasks says more about convenience than about fit.
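To make “known structure” concrete: the core pairing rule for a seeded single-elimination field fits in a few lines. This is a minimal sketch, not Copilot’s actual output, but it shows how little there is to discover:

```python
# First-round pairings for a seeded single-elimination bracket:
# the top seed plays the bottom seed, the second plays the
# second-to-last, and so on.
def first_round(seeds):
    n = len(seeds)
    return [(seeds[i], seeds[n - 1 - i]) for i in range(n // 2)]

print(first_round([1, 2, 3, 4, 5, 6, 7, 8]))
# [(1, 8), (2, 7), (3, 6), (4, 5)]
```

Later rounds are just as mechanical, which is exactly why this plays to AI’s pattern-matching strengths, and why a purpose-built tool handles it more naturally than a grid of cells.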
So yes, Copilot can generate a bracket. But in that case, it is just as fair to ask whether Excel is even the right medium at all.
Where things start to get uncomfortable
The bracket example is clean and harmless. It makes for a nice demo.
Things get murkier when the same framing gets applied to messier, more human problems.
There’s an ad circulating where someone uses Copilot to generate negotiation points in real time and then reads them off. With each prompt, she literally rises higher in her chair. It’s not subtle.
Which, on one level, makes it easy to dismiss. The visual is so over the top that you can immediately write it off as not real.
But even if the visual itself is exaggerated, the takeaway doesn’t necessarily land that way. Someone watching casually could come away thinking this is more or less how the tool works in practice, not just helping you structure your thinking, but actively guiding what you should say in a live, context-heavy situation like a negotiation. And that’s a bold claim.
It also raises a more basic question that somehow gets skipped over entirely: how do we even know the numbers or suggestions being generated are accurate? Not long ago, the standard caveat was “Copilot may make mistakes, please validate.” Now the tone feels closer to “just read the numbers off the spreadsheet.”
Negotiation isn’t a templated problem. It depends on context, incentives, relationships, and timing. You can generate language and numbers that sound plausible, but that’s not the same as actually understanding the situation.
Why the process matters more than the output
Excel has never really been just about the finished spreadsheet. The real value lives in building it: not only the final output, but the process that gets you there.
When you model something out, you’re making your assumptions visible. You’re deciding what matters, what connects to what, and where the weak spots are. You start to notice when something feels off, often before you can fully explain why. That sense comes from working through the logic yourself, not from being handed a result.
And a lot of that judgment is tacit. It’s not neatly written down anywhere, and it’s not something you can assume is sitting inside a model’s training data. It comes from context, experience, and familiarity with the specific situation you’re in.
That kind of understanding doesn’t come from a finished output. It comes from the act of constructing it.
So while it’s tempting to frame all of this in terms of speed, the tradeoff isn’t just time. It’s depth. It’s the difference between seeing an answer and actually knowing where it came from.
The auditability problem
There is also a more practical issue that starts to show up once you move past the demos and actually have to live with the generated workbooks.
Spreadsheets have always been a little awkward to audit. The logic is spread out across cells, references jump around, and it is not always obvious how one part connects to another. Even when you build the model yourself, it can take time to trace through it and convince yourself everything is behaving the way you think it is.
Now imagine dropping into a workbook that was generated for you in one pass.
You did not decide how it was structured. You did not lay out the relationships. You were not there when the assumptions were made. So instead of following your own thinking, you are trying to reconstruct someone else’s, after the fact, inside a format that was already a bit opaque to begin with.
And to be fair, you can try to get ahead of this. You can give detailed instructions upfront and specify how things should be structured, what tables to use, and how the logic should flow. But good model design rarely happens all at once like that. Especially for problems that are still taking shape, it is very hard to lay everything out correctly on the first pass.
Which makes the “one-shot” idea a bit shaky in this context.
In a lot of other environments, especially something like a small HTML or code-based app, this problem has actually gotten easier over time. You can compare versions, see exactly what changed, track the logic in one place, and reason about it more directly. Even basic version control gives you a clear sense of how something evolved.
Excel does not really work that way. There are tools and workarounds, but they are not the natural mode of working. So when you generate something in one shot inside a spreadsheet, you are putting yourself in one of the hardest possible positions: a dense, cell-based model, no clear history, and no real visibility into how it came together.
Which is why, of all places, Excel is probably the least forgiving environment for this kind of one-shot generation.
When Excel is the wrong tool
If all you’re really trying to do is ship something clean-looking and “good enough” as fast as possible, Excel probably isn’t where you need to start.
A small app or even a simple HTML interface is usually a better fit now. The barrier to entry there used to be a real constraint. It was easier to open Excel and start building than to spin up anything resembling an app. That’s changed. With AI, getting something basic up and running in HTML or code is not nearly as out of reach as it used to be.
So if the goal is just a clean, directional output, you might as well use the tool that’s built for that. The logic lives in one place, changes are easier to track, and you can version, compare, and deploy without digging through a grid of cells to figure out what moved.
Excel was always doing two jobs at once. One was lowering the barrier to entry. The other was giving you a place to think. The first one matters less now. The second one still matters a lot.
Whither Excel?
Excel only fades if it becomes a place where people just accept generated outputs instead of working through the logic themselves. And that is not something AI decides; it is a choice people make to stop thinking.
If people keep using Excel to test ideas, challenge assumptions, and understand what is going on, it stays relevant. If they do not, then it becomes just another surface for results, and there are already better tools for that.
Conclusion
What makes the Seinfeld example stick is not the scheme itself, but the way Kramer and Newman keep working it: coming back to the numbers, adjusting assumptions, pushing on the weak spots, looking for some version of the idea that might hold together.
The humor comes from how far they take it, but the instinct behind it is familiar. For them, it is not hypothetical. It is a real, emerging business case, at least in their minds, and they need to lay the numbers out, tweak them, and stress-test the idea in real time to figure out if it holds up.
That is exactly what Excel is built to do, and it does it in a way very few other tools can. It gives you a place to make the logic visible, to change assumptions on the fly, and to see immediately how those changes play out. It is also the part that is easiest to lose if everything starts arriving fully formed.
If you are thinking about how this plays out in real workflows, training, or your own team’s use of Excel and AI, you can learn more about how I approach this work here:
