Mike Aron
9 min read · Craft

In Defense of Vibe Coding (If You've Earned the Vocabulary)

Vibe coding gets a bad rap. I've put roughly a thousand hours into it, shipped four products in ten months, and had real developers tell me the quality holds up. Here's what it actually looks like when you do it right.

The last week of paternity leave, summer of 2025. Youngest son down for the night. Laptop open at my desk by 10pm. What I was trying to build was a cocktail app.

That's where the second act of my AI bender started.

I'd spent the previous six months burning through ChatGPT token limits and teaching myself Python and APIs and databases. I already knew enough to build things. What I didn't have yet was a way to move fast. Up until that point, whenever I wanted something real — an actual product, not a script — I'd end up in the same loop: write requirements, hand them off to a developer, wait days, see what came back, iterate by email.

Then I opened Cursor.

Holy cow. That was the moment my brain exploded.

I could describe what I wanted — the feature, the UX, how it should look and feel — and see results almost instantly. I didn't have to send an email. I didn't have to wait a week. The feedback loop collapsed from "days or weeks" to "seconds."

That's when I understood what people meant by vibe coding.

Why a cocktail app

I should explain the cocktail app, because it's going to sound like a random choice otherwise.

Before the AI bender, I ran a creative agency called Mike Aron Visuals. A big chunk of the work was food and beverage — and inside that, a lot of cocktail photography. You don't shoot cocktails for a living without learning the space. I ended up knowing more about mixology than I ever planned to.

The idea I'd been sitting on for years: what if you could take a picture of what's on your bar at home, and an app would tell you exactly what cocktails you could make with it? Or: "If you grabbed one more bottle, here are six new cocktails you'd unlock." Or: "Friends are coming over, here's what we're grilling — what should we pour?"

The database had recipes, builds, inventory, ingredients, tasting profiles, the whole thing. The signature feature was called Make-A-Riff: tell it what you like, and it would generate a custom cocktail recipe using my algorithm, then use generative AI to render a photo of the drink it just invented.

I built it. It ran on my phone. It worked.

Then I decided not to launch it. I didn't really want to be famous for something in the spirits industry — alcohol is a touchy space, and it wasn't the hill I wanted to plant a flag on. The code is still sitting on my machine. With the models we have now, I might bring it back someday.

But the app wasn't really the point. The point was what building it taught me.

The thesis: you need to earn the vocabulary first

Here's what I actually believe about vibe coding, and it's the thing that gets lost in the hot takes:

Vibe coding only works if you've earned the vocabulary.

The reason Cursor exploded my brain isn't because it turned me into a developer. It's because I'd already spent six months learning what APIs are, how databases work, how to debug a pipeline, what an LLM is good at, what it's bad at, where it gaslights you. Without that foundation, I'd have been a passenger. With it, I could actually drive.

I'm kind of like a psychopath when it comes to intellectual curiosity. I don't just vibe code. I write the code, have the model explain what it did and how the components fit together, and then I push back — if I want the architecture different, if I want a different pattern, if the state management looks wrong to me. Understanding the output is what lets you redirect it. Without understanding, you're hitting enter and hoping.

There's a perception out there that vibe coding means you hit enter, walk away, and come back to a finished app. That's not what I do. I literally watch every single line as it's being written. That was one of the things I actually liked about earlier versions of Cursor versus some of the newer tools — the visibility. I'd sit there at 1am reading every diff, every decision, questioning every choice.

That's where the skill gets built. Not in the prompts you write. In the lines you read.

The hours behind the thesis

I'll put a number on it.

From July of 2025 through April of 2026 — call it 10 months — I was in the chair 3 to 5 nights a week from roughly 10pm to 2am. That's the baseline. On top of that, whole weekend days where I'd wake up, grab coffee, and not look up from the laptop until my wife reminded me the sun had set. Some days I'd be at it for 10 or 12 hours straight.

Do the math and it comes out to roughly a thousand hours.

That's on top of the roughly equivalent stretch during the earlier six-month run, when I was learning Python and APIs from scratch.

So when I say "vibe coding works when you've earned the vocabulary," what I mean specifically is: I've spent close to two thousand hours at the keyboard with AI tools. I have scar tissue. I've had Cursor delete an entire project on me — which is how I learned that one backup isn't a backup and two backups barely is. I've spent 5 or 6 hours straight debugging a single issue, which is how I ended up building actual methodologies for how to debug systematically with AI instead of just mashing "try again."

You can't shortcut that part. You can't read your way to it.

Cursor → Claude Code (and why the tool matters less than you think)

Cursor was my primary IDE for a long stretch. Then something shifted.

As I was testing across models, I kept gravitating back to Claude. First Sonnet, then Opus 4.5, now Opus 4.6. At some point I started using Claude Code directly and realized it felt more optimized for the models I was already leaning on — and as a bonus, it uses my existing Claude Max plan, so the economics got better too.

I still pay for Cursor. If I'm doing something where a Gemini or GPT model is the right fit for the job, I want that flexibility. Anthropic's models are really good. There are other models that are really good at other things. Pretending otherwise would be lazy.

Point being: the tool isn't the religion. The model is the engine; the tool is whatever happens to give that engine the cleanest path to what you're trying to build. That calculus changes every few months, and if you're not re-running it, you're behind.

The cascade

Once the cocktail app proved out what was possible, things moved fast.

The Interview Assistant came next. Anyone can use ChatGPT to review their resume — that's the easy gap. The hard gap is what happens when you're actually sitting in the chair across from the interviewer. You know the role. You know your resume. You freeze. You talk about the wrong thing. You miss the chance to pull in the experience you should've pulled in. The Interview Assistant runs mock interviews and gives real feedback — "you probably shouldn't have framed it that way" or "here's something from your background you could've brought in." Not a teleprompter. A coach.

Alfred came next — I've written about that one separately.

Then AutoRev. Four weeks. No product at the start of December. Live on December 31st. We rebranded, sharpened the positioning, and did a second launch at the end of January. Since then we've been hardening it.

Four weeks. That's the number I keep coming back to. A full production app — backend, auth, database, payments, AI pipeline, the works — from zero to live in four weeks.

The old way, with a developer on the other end of an email thread, that's a six-month project. Minimum. And that's if you're lucky. I've watched a full production app come together in 12 to 14 hours of focused work — the kind of thing that, under the previous model, would've been a multi-month slog. It's insane. There's no other word for it.

SirHENRY is the latest in the stack. Still cooking.

The pushback

Someone will read this and go: "Vibe coding just means the AI writes everything and the code sucks."

Three responses.

One. They probably haven't vibe coded. Most of the strongest opinions come from people watching it from outside. Same thing happens every time a new paradigm shows up — people are scared of what they haven't used, and the second they actually sit down with it, the fear dissolves. AI was the same. Vibe coding is the same.

Two. I've had real developers — career engineers — look at code that came out of my vibe-coding sessions and tell me it's clean. Good patterns. Sound architecture. The "code sucks" assumption is based on a 2023 mental model that isn't true anymore.

Three. The actual skill isn't "can you get AI to write code." It's "do you know what AI is good at, where it struggles, and how to work around the struggle parts." That's the thing you only get by using it enough to feel the shape of its failure modes. And that only comes from hours.

Which is the whole point of this post. If you've put in the hours, vibe coding is a cheat code. If you haven't, it's a trap.

Earn the vocabulary first. Then use it.

What it actually unlocks

The reason I could go from "no product on December 1st" to "live product on December 31st" isn't because vibe coding is magic. It's because I'd already put in the six months of 2am nights before December. Earned the vocabulary. Learned what an LLM can be trusted with and what it can't. Built the debugging muscles. Accumulated the scar tissue from the project Cursor once deleted.

Then I used all of it.

That's what people miss. Vibe coding isn't a replacement for knowing how to build software. It's a multiplier on top of it. 1 × 100 is still 100. 0 × 100 is zero.

The laptop still comes open around 10. The headphones still go on. The only thing that's changed is the output: what used to take six months of back-and-forth with someone else's Jira queue, I can now ship by 2am.

Still grinding. Just shipping faster.