Mike Aron
9 min read · Finance AI

Practical AI: The Only Answer to a Disruption Curve That Isn't Going to Slow Down

New models, new capabilities, new tools every few weeks. Most leaders are reacting wrong — chasing, waiting, or picking winners. The only position that compounds is practical AI: hands on keys, honest about limitations, focused on application. The model is the commodity. The application is the moat.

Pick a month. In the last stretch alone, Anthropic shipped Claude Design and Claude Routines. Claude Code keeps crossing usability thresholds on a steady cadence. OpenAI has pushed multiple rounds of agent-tool improvements. Google keeps extending context windows. A handful of open-source models have clawed closer to frontier performance. New protocols for tool use, memory, and multi-agent coordination keep maturing.

Pick the vendor. The pattern is the same. Every few weeks, something meaningfully shifts about what AI can actually do.

This is not a phase. This is the new operating condition.

And most leaders are responding to it wrong.

The three wrong reflexes.

Walk into any leadership conversation about AI and you will see one of three defaults.

Chase the latest. Every release produces a reshuffle — which model to use, which platform to consolidate on, which capability now changes the roadmap. The org lives in a state of permanent re-planning, spends more time evaluating than building, and still ends up behind because the vendors are always one release ahead. By the time the committee has blessed the model, there is a new one on the table.

Pick a winner. Commit to a single vendor, standardize, move forward. Clean on paper. Brutal in practice. No vendor is uniformly best across use cases, and the leaderboards shift every few weeks. A bet today becomes a constraint tomorrow. The orgs that went all-in on a single stack two years ago spent most of the last two years trying to get out of it.

Wait for it to settle. The most expensive move on the board, because it is not going to settle. Not on the timeline that matters to your business. "We'll wait until the dust settles" is an admission that you've opted out of the period of largest advantage available in your career, in exchange for a stable steady state that is not coming.

Each of these is a reflex from a previous technology cycle — cloud, mobile, SaaS, data. None of those are good analogies for what is happening now. The release cadence is faster, the capability delta per release is larger, and the surface area of what AI can do expands every quarter. The playbooks that worked for previous waves do not apply.

The only move that holds up is the one almost nobody is optimizing for.

Practical AI.

The single biggest misunderstanding in the market right now is the assumption that the tech itself is where the value lives. It is not. The tech is becoming a commodity faster than most organizations realize. A frontier model today is a mid-tier commodity in eighteen months. The tooling around it compresses the same way.

Where the value actually lives is in the application. What you do with it. How you deploy it inside your business. The problems you solve that no vendor will solve for you. The way you wire it into how your people work, how your processes run, how your data flows.

The value in AI is not in the latest tech. The value is in the practice of applying it.

I call this practical AI, and it is the brand I want to be known for.

Everyone says they are practical. That is not the same as being practical.

Here is the uncomfortable part of this argument, including for me: "practical" is a cheap word. Every consultant, every vendor, every AI strategist claims to be practical. The label is free. The evidence is not.

What makes practical AI actually practical is a pattern I would argue is non-negotiable: you have to be building.

Not necessarily building products. Not necessarily building at startup scale. But actively, hands-on, using the current tools to solve real problems in your own work or life. Because the only way to have a defensible point of view on AI today is to have recent, personal, real-world contact with what it can and cannot do. Without that, your opinions are borrowed, and in a field where the capability set changes every six weeks, borrowed opinions are stale opinions.

I build because that is how I stay honest. Every time I ship something real, I learn things no amount of reading, briefing, or vendor demo would have taught me. The limitations are different than the deck said. The failure modes are different than the whitepaper said. The wins show up in places no analyst flagged.

When I sit across from a CFO and talk about what AI can do for Finance, the reason what I say carries weight is that I have spent last night, last weekend, last quarter building with the same tools I am recommending. That is not a marketing posture. That is the only kind of credibility I trust, including in myself.

The tension: do not chase, but stay current.

There is a real tension inside practical AI that is worth naming rather than papering over.

On one side: organizations should not chase models. Chasing is a losing move, for all the reasons above.

On the other side: people who want to credibly advise on AI, build on AI, or lead AI programs have to stay current with what is actually shipping. Because the capability frontier is moving.

These two positions are not in contradiction. They are audience-specific.

If you are running an enterprise AI program, the right posture is stability of direction with flexibility of implementation. Pick the problems you are solving, pick the architecture you are solving them with, and let the underlying models and tools evolve inside that architecture. You do not need to reshuffle your strategy every time a new model ships. You do need an architecture that lets you swap them cleanly when it is time.
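One way to make "stability of direction with flexibility of implementation" concrete is a thin routing seam between your workflows and the models that serve them. The sketch below is illustrative, not any specific product's API; the vendor adapters and workflow names are made up, and in practice each adapter would wrap a real vendor SDK call:

```python
from dataclasses import dataclass
from typing import Callable, Dict

# The deliberately thin seam: workflows depend on this signature,
# never on a specific vendor SDK.
CompleteFn = Callable[[str], str]

# Hypothetical adapters -- stand-ins for real vendor SDK calls.
def vendor_a_complete(prompt: str) -> str:
    return f"[vendor-a] {prompt}"

def vendor_b_complete(prompt: str) -> str:
    return f"[vendor-b] {prompt}"

@dataclass
class ModelRouter:
    """Maps named workflows to whichever model currently serves them best."""
    routes: Dict[str, CompleteFn]

    def complete(self, workflow: str, prompt: str) -> str:
        return self.routes[workflow](prompt)

router = ModelRouter(routes={"variance-commentary": vendor_a_complete})

# A new release ships: only the routing table changes, not the workflow code.
router.routes["variance-commentary"] = vendor_b_complete
print(router.complete("variance-commentary", "Explain the Q3 gross margin variance."))
```

The point of the design is that a model swap is a one-line routing change, so a new release never forces a strategy reshuffle.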

If you are the person advising, building, or personally leveraging AI, staying current is table stakes. Not because the latest model will change your advice — it probably will not — but because hands-on use is how you keep your instincts calibrated to the actual frontier instead of the frontier as it existed eight months ago. This is the part most advisors skip. It shows.

The limitations are where the real learning lives.

The other thing practical AI teaches you — and this is the half I think almost nobody is writing about — is what AI is bad at right now.

Most takes on AI are bullish. The builders I trust are more measured, because they run into walls every day.

A few of mine, from recent building work across very different domains:

  • Sustained multi-step autonomous workflows at high accuracy are still hard. Agents are improving fast. They are not yet reliable enough for unattended production use on high-stakes tasks without meaningful scaffolding. Most of the agent demos you see in vendor pitches are running in conditions that do not look much like your business.

  • Confidence calibration is still a serious problem. Models confidently produce wrong answers. In Finance, where accuracy and precision are non-negotiable, this is not a minor footnote — it is a primary design constraint. Generic AI does not know what it does not know. Your architecture has to.

  • Deep domain judgment without an ontology underneath is fragile. This is the argument I made in a recent piece on the finance ontology: generic AI understands generic finance. It does not understand your business. The gap between a plausible answer and a usable one is almost always a domain layer the model does not have.
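One way to make "your architecture has to know what it does not know" concrete is a confidence gate: AI-drafted output below a threshold goes to human review instead of out the door. A minimal sketch, with an illustrative threshold and made-up examples; in a real system the confidence score would come from something like self-consistency sampling or a verifier pass, and the threshold from observed error rates:

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.85  # illustrative; calibrate from observed error rates

@dataclass
class DraftAnswer:
    text: str
    confidence: float  # e.g. from self-consistency sampling or a verifier model

def route(answer: DraftAnswer) -> str:
    """Decide whether an AI-drafted number or narrative can ship unreviewed."""
    if answer.confidence >= REVIEW_THRESHOLD:
        return "auto-publish"
    return "human-review"

print(route(DraftAnswer("Gross margin fell 120 bps on mix shift.", 0.91)))  # auto-publish
print(route(DraftAnswer("Revenue grew 40% in EMEA.", 0.55)))                # human-review
```

The gate does not make the model better calibrated; it makes the system honest about the calibration the model actually has.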

None of this is a reason to slow down. It is a reason to be honest about where the walls are and to architect around them instead of pretending they do not exist. The organizations that are going to win over the next few years are the ones whose leaders know both the capabilities and the current limitations, cold.

Why this matters in Finance specifically.

Finance is where the gap between theoretical AI and practical AI is widest.

Finance does not tolerate a plausible answer. Every number has consequences. Every narrative has to tie back to the driver. Every decision has to be defensible to an auditor, a board, a regulator, a CEO. Theoretical AI — AI as a concept, AI as a vendor pitch, AI as a PowerPoint slide — produces programs that die in this environment, every time.

Practical AI — AI that has been built into the way Finance actually runs, grounded in the business's own ontology, matched honestly to the limitations of the current tech — is the only version that survives the encounter with a real Finance function.

This is the work. Not picking models. Not waiting out the cycle. Not riding the disruption curve as a passenger. Building it into how the business operates, the way only hands-on practice can teach you.

What I hope this piece lands with.

If you take one thing from this, take this: the disruption curve is not slowing down, and the reflex to chase it or wait it out is going to leave you behind. The only position that compounds is practical. Hands on keys. Honest about limitations. Focused on application.

I would rather be known for what I have actually shipped than for what I have theorized. A lot of what is written about AI right now comes from people who do not use it to do real work. That is a problem for everyone who has to act on their advice.

It is also why I build. Not to show off, and not because I need another project. Because I am not willing to have an opinion about AI that I have not tested against reality. And because the only way to help the leaders I work with is to know what I am talking about — because I have done it, not because I have read about someone else doing it.

Practical AI. That is the work. Everything else is decoration.