Mike Aron

The Finance Ontology: The Missing Layer Between Generic AI and a Senior Practitioner

The most valuable asset in your Finance function isn't on any balance sheet — it's the logic that lives in the heads of your senior practitioners. The finance ontology is how you get that logic out, and it's the dividing line between AI that sounds competent and AI that actually runs your business.

There's a test I put to Finance leaders.

Think about the senior controller or FP&A lead on your team. The one everyone escalates to. The one who, when a number looks off, can tell you in thirty seconds which driver slipped, which edge case got triggered, which intercompany move hasn't posted yet. The one your new joiners shadow for a month before they can do anything real on their own.

Now ask: if that person left tomorrow, how much of what they know is actually written down?

Most of it isn't. That's not a criticism of your documentation. It's just how Finance functions actually work. The patterns, the heuristics, the thresholds, the language, the "we don't care about that variance but we do care about this one" judgments — those live in heads. Almost always. Your real operational moat as a Finance organization is the logic trapped in the heads of your most experienced practitioners.

And that is exactly the moat that AI cannot cross on its own.

What the finance ontology actually is.

The finance ontology is how you get that logic out of your practitioners' heads and into a form AI can use.

The broader concept of ontology as enterprise infrastructure isn't new. It has been developing across the enterprise data and AI space for a while, and it has shaped how a lot of us think about applying AI inside large organizations. What I'm doing here is making it concrete for Finance, combining that broader thinking with what I've seen up close — because Finance is where the gap between "generic AI" and "a specialist who actually knows your business" is widest, and where the cost of getting it wrong is highest.

Think of it like onboarding a new hire.

You've hired someone brilliant. They passed the interviews, they know the textbook, they can talk about GAAP, FP&A structure, and the close cycle in the abstract. You are excited to have them. And in the first week, you also realize: they do not know your business. They don't know your chart of accounts. They don't know which flux explanations your CFO actually reads. They don't know which entities consolidate up and which ones sit out. They need guidance — policies, structures, thresholds, narrative conventions — before they can operate at the level you need them to.

AI is that new hire. Generic AI understands finance in general. It does not understand your finance. The finance ontology is the onboarding — structured, explicit, reusable — that turns a generalist into a specialist.

Why generic AI cannot bridge this gap on its own.

Most Finance AI today is generic AI pointed at Finance. And generic AI does one thing well that looks dangerously like competence: it infers.

Ask a generic model to walk you through an income statement variance. It will produce a structured, confident answer. It will look right. But it got there by inferring — guessing at your structure, assuming an industry-standard shape, filling in blanks from training data. Finance does not tolerate inference. Finance demands accuracy and precision, every single time, because the numbers have consequences.

Take a small, concrete example: the GL account structure.

Generic AI does not understand your GL account structure. It does not understand how your sub-ledgers are set up. It does not understand which accounts roll up into which lines of your income statement, which ones get eliminated on consolidation, which dimensions carry meaning and which are vestigial. It can guess reasonably. It cannot know.

That's a manageable problem when AI is summarizing a document. It's a much bigger problem when AI is doing anything that matters — running a flux analysis, preparing commentary, feeding a forecast, moving through a close. At best, inference produces a plausible answer that's quietly wrong. At scale, it produces plausible-but-wrong answers at operational velocity. Neither is what Finance is looking for.

The finance ontology exists so that AI doesn't have to infer. It can look up, reference, calculate against, and reason from an explicit representation of your business.
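To make "look up, not infer" concrete, here is a minimal sketch in Python. The account codes, line names, and the `GL_ROLLUP` map are hypothetical, invented for illustration; the point is that the mapping is explicit, and an unknown account fails loudly instead of being guessed at.

```python
# A sketch of an explicit GL rollup map. Account codes and line names
# are hypothetical placeholders, not a standard chart of accounts.
GL_ROLLUP = {
    "4000": "Product revenue",
    "4100": "Services revenue",
    "5000": "Cost of goods sold",
    "6200": "Sales & marketing",
}

def income_statement_line(account: str) -> str:
    """Resolve a GL account to its income statement line, or fail loudly.

    Failing loudly is the point: an unknown account is a gap in the
    ontology to be fixed, never something for the AI to guess at.
    """
    try:
        return GL_ROLLUP[account]
    except KeyError:
        raise LookupError(f"Account {account} is not in the ontology; refusing to infer")
```

The design choice worth noticing is the refusal path: a generic model would fill the gap with a plausible answer, while an ontology-backed lookup surfaces the gap as a defect to be repaired.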

The four layers.

The way I think about the finance ontology is in four layers, each of which solves for a different kind of logic trapped in your team's heads.

Layer 1: The semantic foundation.

This is the business architecture — your driver models, your entity and dimensional structures, the connective tissue of how your business actually fits together. Which products drive which revenue streams, which cost centers roll into which P&L lines, which legal entities consolidate into which reporting entities, which leading indicators actually move the lagging ones.

The semantic foundation is what lets AI understand, at a structural level, what your business is. Without it, every question AI tries to answer starts from zero.

Layer 2: Calculation and logic.

This is where your KPIs, dimensions, and calculation rules live — the single source of truth for how every meaningful metric is defined and computed. What counts as revenue for this purpose and not that one. How margin is calculated for this segment. What "adjusted EBITDA" actually means in your company's dictionary. How a scenario has to cascade through the business when one driver moves.

Without this layer, every AI-generated answer is quietly arguing with your own Finance team about the numbers. You cannot scale that.
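A minimal sketch of what this layer looks like when it is explicit rather than tribal. The metric names, formulas, and add-back list are hypothetical placeholders; the point is that each definition lives in one place, and an undefined metric is refused, not improvised:

```python
# KPI definitions as explicit, named formulas rather than prose.
# Metric names and the adjustment list are hypothetical examples.
KPI_DEFINITIONS = {
    "gross_margin_pct": lambda f: (f["revenue"] - f["cogs"]) / f["revenue"] * 100,
    # "Adjusted EBITDA" means whatever *your* dictionary says it means:
    # here, EBITDA plus a named, finite list of add-backs.
    "adjusted_ebitda": lambda f: f["ebitda"] + f["stock_comp"] + f["restructuring"],
}

def compute(metric: str, financials: dict[str, float]) -> float:
    """Compute a metric from the single source of truth, or refuse."""
    if metric not in KPI_DEFINITIONS:
        raise LookupError(f"{metric} has no agreed definition in the ontology")
    return KPI_DEFINITIONS[metric](financials)
```

Every AI-generated number then traces back to one definition, which is exactly what ends the quiet argument with your own Finance team.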

Layer 3: Interpretation.

This is the layer most organizations don't realize they have, because it lives almost entirely in senior practitioners' heads. Materiality thresholds — what's big enough to call out and what isn't. Vocabulary — the specific terms and phrases your organization uses, and the meaning attached to them. Narrative patterns — what a flux explanation is supposed to sound like, how a month-end commentary reads, what tone a CFO memo expects.

This is the layer that separates an AI-generated summary from one a controller would actually sign. Everything above this layer gets you to a technically correct answer. This layer is what makes the answer usable.
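The same idea applies to interpretation: thresholds and narrative conventions can be written down as data. Everything in this sketch (the line items, the dollar and percentage thresholds, the house phrasing) is a hypothetical placeholder:

```python
# Materiality rules made explicit: flag a variance if EITHER the absolute
# or the percentage threshold is breached. Values are hypothetical.
MATERIALITY = {
    "opex": {"abs_usd": 50_000, "pct": 5.0},
    "revenue": {"abs_usd": 250_000, "pct": 2.0},
}

def is_material(line: str, variance_usd: float, variance_pct: float) -> bool:
    rule = MATERIALITY[line]
    return abs(variance_usd) >= rule["abs_usd"] or abs(variance_pct) >= rule["pct"]

def flux_comment(line: str, variance_usd: float, variance_pct: float, driver: str) -> str:
    """Render commentary in the house style, or stay silent below threshold."""
    if not is_material(line, variance_usd, variance_pct):
        # Below threshold: the ontology says a controller would not call this out.
        return ""
    direction = "increased" if variance_usd > 0 else "decreased"
    return f"{line} {direction} ${abs(variance_usd):,.0f} ({variance_pct:+.1f}%), driven by {driver}."
```

The silence is as important as the sentence: knowing what not to mention is precisely the judgment that normally lives only in a senior practitioner's head.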

Layer 4: Governance and security.

The fourth layer is what keeps the other three safe. It's how AI routes to the right data, respects data ownership and permissioning, and honors the controls your function already has in place.

This layer is getting more important, fast. As AI agents move from summarizers to actors — running close tasks, executing treasury moves, drafting statements — security cannot be bolted on at the tool layer. It has to be built into the ontology itself, so that every time AI touches your data, the governance travels with the data. Ontology-level governance is where most of the serious control problems in agentic Finance AI will get solved over the next few years. The orgs that treat this as a tool question instead of a foundational one are going to learn an expensive lesson.
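A toy sketch of governance traveling with the data: the policy is part of the ontology, and any actor, human or agent, is checked against the same table before touching anything. The roles, datasets, and actions below are hypothetical:

```python
# Access policy attached to the data itself, not bolted onto each tool.
# Dataset -> action -> roles allowed to perform it. All names hypothetical.
ACCESS_POLICY = {
    "payroll_ledger": {"read": {"controller", "fpa_lead"}, "post": {"controller"}},
    "treasury_positions": {"read": {"treasurer"}, "execute": {"treasurer"}},
}

def authorize(role: str, dataset: str, action: str) -> bool:
    """Grant access only if the ontology explicitly allows it (deny by default)."""
    return role in ACCESS_POLICY.get(dataset, {}).get(action, set())

def agent_act(role: str, dataset: str, action: str) -> str:
    """Gate every agent action through the same policy check."""
    if not authorize(role, dataset, action):
        raise PermissionError(f"{role} may not {action} {dataset}")
    return f"{action} on {dataset} permitted for {role}"
```

Because the check lives in one shared layer, adding a new tool or agent does not mean re-implementing (and re-auditing) the controls inside its walls.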

Why this argument lands now.

The reason this matters in 2026 specifically, and not two years ago, is that Finance AI has crossed a threshold.

Two years ago, Finance AI was mostly summarization. Write this commentary. Summarize this board pack. Reconcile this schedule. Ontology gaps showed up as weaker summaries — annoying, but recoverable, because a human was always reading the output.

Finance AI in 2026 is moving toward action. Agentic systems that run multi-step workflows. Tools that call other tools. Agents that execute inside your ERP, your planning platform, your treasury system. The promise is enormous. The risk profile is fundamentally different.

When AI summarizes, an ontology gap produces a mediocre paragraph. When AI acts, an ontology gap produces a wrong entry, a miscategorized accrual, a flawed treasury move, a variance explanation that confidently misses the actual driver. The difference between "helpful tool" and "operational risk" runs directly through the ontology.

The same is true of security. When AI just reads, access controls on the data source are usually enough. When AI acts through tools and agents across your Finance stack, the governance has to live in a shared layer — the ontology — or it doesn't live anywhere coherent.

Everything that is exciting about where enterprise AI is heading — agentic workflows, multi-step reasoning, production-grade automation — makes the ontology more important, not less.

What to do.

If you are running a Finance AI program today, the uncomfortable truth is that most of the work you're funding sits above the ontology. The pilots, the copilots, the ML forecasting, the summarization engines — they all assume the ontology exists. Most of the time, it doesn't. And that's why they quietly underperform.

The prescription is simple, but not easy.

Fund the ontology as infrastructure, not as a feature of any one pilot. Ontology work does not belong to a vendor, a platform, or a use case. It belongs to Finance, owned by Finance, maintained over time. Treat it the way your IT function treats the data platform underneath it — as the thing everything else compounds off of.

Start with the semantic foundation and build outward. You do not need to boil the ocean. You need an honest map of your business, your calculation rules, your interpretation layer, and your governance model — even a rough one. Ship the rough version. Iterate in production. Ontologies mature the same way processes do: through contact with the real world.

Put it under a Finance leader, not an IT leader. The ontology encodes judgment, and judgment belongs to the people whose names are on the results. IT builds the systems. Finance builds the ontology.

Plug every new AI capability into it. The test for any new AI tool or vendor is no longer "does it work?" — of course it works. The test is: does it use your ontology, or does it reinvent one inside its own walls? If it's the latter, you have a short-term pilot and a long-term fragmentation problem. If it's the former, every new capability you add compounds on everything that came before.


The way I think about this, plainly: the finance ontology is how you turn AI from a promising new hire into a senior practitioner.

Everything above that line — the model, the tool, the vendor, the pilot — is decoration. It's the layer that makes AI sound competent. The finance ontology is what makes AI actually competent, in your specific business, under your specific rules, at the quality you would expect from someone who has been doing this for twenty years.

The organizations that treat the ontology as strategic infrastructure are going to compound. The ones that keep buying AI tools and hoping the model figures the rest out are going to keep buying the same pilot twice.

The logic is trapped in your team's heads. Get it out. That's the work.