As an Excel trainer, I regularly encounter a particular type of question that shows up in almost every tool-focused session. It usually takes the form of:
“Why don’t we just use [another tool] instead?”
This came up again recently during a webinar I was running on AI workflows in Excel for finance teams. During the Q&A, one attendee asked why we would not simply use Python for data cleaning, and another asked why the same work would not be better handled in Power BI.

On the surface, these are reasonable questions. Python and Power BI are both powerful tools. I use them myself and teach them in other sessions. Anyone working in data benefits from having some awareness of the broader ecosystem.
The issue is not the question itself, but the timing and what it does to the session once it is introduced.
Why this question appears so often
This pattern shows up in almost every type of technical training. The tools change, but the behavior is consistent. In Excel sessions, people bring up Python or Power BI. In Power BI sessions, people bring up Tableau. You get the drift.
Participants are often trying to connect ideas, compare approaches, or identify what they believe is the most efficient path forward. Sometimes there is also a subtle desire to signal familiarity with other tools.
That broader perspective is useful in the right setting. But once the conversation shifts into comparison mode, the session starts to drift away from its purpose.
What gets lost when the frame shifts
A training session is meant to build practical skill within a defined environment. When that environment is repeatedly questioned, the focus becomes less concrete and more abstract. You can see the shift pretty clearly:
| Intended outcome | What actually happens |
|---|---|
| Learn how to execute a workflow | Debate which tool is “better” |
| Build confidence in a specific environment | Compare hypothetical alternatives |
| Leave with something usable | Leave with more questions than answers |
Participants end up with surface-level awareness across several tools, but without enough depth to apply any of them effectively.
Why boundaries are necessary for learning
Every effective training session operates within a set of constraints. These constraints are not there to limit thinking; they are there to make focused learning possible. If the topic is AI workflows in Excel for finance teams, then the session is built around a simple and realistic assumption: we are working within Excel, in environments where Excel is already the primary tool.
That assumption reflects how many organizations actually operate, especially in finance:
- Data is stored and exchanged in Excel files
- Reporting processes are built around spreadsheets
- Teams are already trained in Excel
- Stakeholders expect Excel-based outputs
Within that context, improving Excel workflows is not theoretical. It is immediately useful.
Once those boundaries are removed, however, the session becomes difficult to sustain. Every example can be replaced with an alternative, and the discussion never settles long enough to build depth.
A useful analogy: suspending disbelief
When watching a film, you accept a certain premise so the story can unfold. If you spend the entire time questioning that premise, the experience starts to break down.
You can imagine the running commentary:
- Why didn’t they just do this differently?
- Why didn’t they use another approach?
- Why didn’t they avoid the situation entirely?
At some point, the story never really gets a chance to develop.
Training sessions work in much the same way. For the session to be useful, you need to stay within the frame long enough to see what is possible. This is less about agreement and more about focus.
Tools exist within context, not in isolation
One of the underlying issues with “Why not X?” questions is the assumption that tools can be evaluated independently of their environment. In reality, tools are embedded in a broader system that includes processes, team capabilities, constraints, and expectations.
A tool that is technically more powerful is not necessarily more effective if it introduces friction into that system.
| Theoretical view | Practical reality |
|---|---|
| Best tool = most powerful | Best tool = best fit for the environment |
| Focus on capability | Focus on usability and adoption |
| Optimize for ideal scenarios | Optimize for real workflows |
In many finance teams, Excel remains the center of that system. That does not make it perfect, but it does make it highly relevant.
A more productive way to approach these questions
Instead of stepping outside the session to ask why another tool is not being used, a more useful question is: how far can we push this tool within the context we are working in?
That shift leads to a different kind of engagement. Participants move from debating tools to exploring possibilities, from comparing abstractions to building something concrete. Over time, this produces better judgment. Depth within a tool makes it much easier to recognize when it is sufficient and when it is not.
Conclusion
Questions about other tools usually come from a good place, but in the middle of a training session they can work against the learning process. When the focus shifts too quickly to comparison, it becomes harder to build real skill in any one environment.
A better approach is to stay with the premise long enough to understand what can actually be accomplished within it. That depth of engagement makes it easier to evaluate tools afterward with a clearer sense of where each one fits.
If you’re thinking about how this applies to your team, you can check out my “How I Work” page to get a better sense of how I approach training and consulting with organizations:
