Mike Aron
Notes · 8 min read

Two Years In: What I Actually Believe About Building With AI

After two years, six products, and roughly two thousand hours at the keyboard, here's what I actually believe about AI — what's overrated, what's underrated, and what I'd tell someone who wants in.

This is the last post in this series, so I'll skip the scene-setting.

Two years ago I started the bender I've been writing about. Six products built, four of them shipped, somewhere around two thousand hours at the keyboard, and a lot of 2am coffees later — here's what I actually believe about AI right now.

Not what I'm supposed to say. Not the version that sounds good on a podcast. What I actually think, after doing the work.

The one thing I'd want you to take away

AI is much better than you think it is.

I don't mean "the models are smart" in the way people usually mean that. I mean: the ceiling of what you can do with what already exists today is dramatically higher than 95% of people realize. The limiter is almost never the model. It's you.

Here's the way I explain it to people who ask.

An AI out of the box is like a new joiner at your company. Bright. Capable. Willing. But they don't know your business. They don't know your customers. They don't know what good looks like in your domain. And if you hand them a real problem on day one and expect a partner-level answer, you're going to be disappointed.

The real skill, the one I've spent two years getting better at, is turning that new joiner into a seasoned executive with fifteen years in the role. Giving them the right knowledge. The right context. The right tools. The right feedback loops. The right understanding of what's actually happening in the business they're working in.

That's the whole game. You don't prompt AI. You coach it.

What I had wrong

Eighteen months ago I put AI in a box.

I had a mental model of what it could and couldn't do, and I limited my imagination to fit. I assumed certain things were reliably solvable and certain things weren't. I was careful not to overestimate it.

I was wrong. Not just in the specifics, but in the framing itself. Putting AI in a box is the single worst move a builder can make right now, because the box gets invalidated almost every week.

The speed of advancement over the past two years has been astronomical. Things that were impossible six or eight months ago are table stakes today. Things I wouldn't have attempted last summer are trivial now. If you set your expectations based on what's possible right now, you're already behind by the time you ship.

The right instinct is the opposite one: assume it can do more than you think, then push on it until you find where it actually breaks.

What's overrated

Model obsession.

Every week there's a new model. Every week there's a new AI company. Every week there's a new benchmark that someone on X is losing their mind over. All of it is real, and honestly a lot of it is exciting, but it's also a massive distraction from the only question that matters for people who actually want to build something useful:

Can you do something valuable with the models that already exist?

Because the models that already exist — the ones that have been around for months — are ridiculously powerful, and most people haven't gotten to 10% of what's possible with them. You don't need the latest release to build the thing you're trying to build. You need to actually sit down and apply what's already here.

The hype cycle is seductive. The next-model-is-almost-here energy makes you feel like you're staying current. You're not. You're reading. The people actually staying current are building.

What's underrated

Application.

The power has never been in the technology. The power has always been in how you apply the technology. That's been true of every major shift — the internet, mobile, cloud — and it is especially true of AI.

Here's my simplest argument for this. If models alone were the edge, the hyperscalers would already own every vertical. Microsoft, Anthropic, OpenAI, Google — they have the best models, the most compute, and nearly infinite engineering talent. Why don't they own every enterprise AI product on the planet? Why don't they have a dominant consumer AI tool for every category a human cares about?

Because having the best hammer does not make you the best carpenter. The secret sauce is figuring out how AI creates actual value in a specific domain for a specific user. That's a different skill. It requires understanding the domain, the user, and the AI at the same time — and most people who understand the AI don't understand the domain, and most people who understand the domain don't understand the AI.

That intersection is where the real products live. And it's wide open.

What I'm doing next

The short version: staying at the cutting edge.

The longer version: to be relevant in business now, you have to be relevant with AI. That means using the latest tools, building real things with them, and developing real perspectives — not borrowed ones, not ones I'm figuring out on the fly with other people's time on the line.

All those 2am nights, all those late weekends, all those thousand-hour stretches — they're not just for the products they produce. They're for the perspectives they produce. When I walk into a conversation about what AI means for a business, I'm not speculating. I've built the thing. I've broken it. I've watched it work. That's a different kind of credibility than "I've read a lot about AI," and it's the only kind that holds up under real pressure.

Another year like the last two is my baseline plan. Probably more.

If you want in — what to do

Someone's going to read this series and say "OK, I'm convinced. How do I get in?"

Here's my answer.

You have to be willing to be a psychopath about it.

I don't mean that lightly. I mean obsessive, almost compulsive, about learning AI. The way you were obsessed with your favorite video game as a kid, or your first real relationship, or your first startup. That level of focus, with none of it being performative.

There's no training course that substitutes for this. There are good courses on YouTube. There are institutional programs. They'll help with the foundational concepts and they'll grease the skids. But they won't do the part that matters, which is the hours.

You've got to put the effort in. You've got to put the hours in. Otherwise you're going to be stuck in the academic and theoretical lane, and there are already too many people in that lane. The real value is in the practical — what you can actually build and how it actually creates value for a real user.

Pick something small you want to exist. Build it badly. Ship it embarrassingly. Learn what you don't know by watching it fail. Then go again.

That's it. That's the whole method.

The cost

I should be honest about the cost.

I've sacrificed a lot of sleep. For a stretch last year, my youngest was on a feeding schedule where I'd do the midnight feeding — and then at some point I realized the 2am feeding was close enough that I might as well just stay up and work through it. So I did. More nights than I'd like to admit.

I've been chronically sleep-deprived. My wife has been patient in a way I don't fully deserve. There's no version of this journey where you put in a thousand hours a year at the keyboard without something giving somewhere.

But I'll tell you what I've also learned, and I believe this all the way down: when you invest in yourself, when you invest in learning, the cost stops mattering. $50, $100, $1,000, $10,000 — the number is irrelevant. If you walk away genuinely understanding something you didn't understand before, you won.

I learn by doing. YouTube videos help. Books help. Foundational concepts come from reading. But I don't actually internalize anything until I've built it — an API integration, a database, an agent. Until I've watched my own code fail in ways I didn't predict. Until I've had to fix it at 1am because nobody else is going to.

That's the trade. Sleep and comfort in exchange for skill and perspective that don't exist any other way. I'd take it again every single time.

Two years in

Two years ago, ChatGPT was a novelty I was staying up too late to play with. Today I've shipped four products that real people use, built and discarded two more along the way, and accumulated something I think is actually rare — a set of beliefs about AI that come from building with it, not talking about it.

That's the whole point of this series. Not to tell anyone what to think. To show the work, the hours, the mistakes, and the beliefs that fell out of the process.

The laptop still opens around 10. The headphones still go on. There's still a version of me that's going to be up until 2am tonight figuring out whatever the next thing is.

Two years in, I wouldn't have it any other way.