What Building a Laravel App With AI Actually Taught Me About AI

There’s a popular idea right now that the best way to build “AI product sense” is to use AI coding agents for real work — not toy projects, real work. Nik Shultz and Lenny Rachitsky wrote about this recently, arguing that hands-on experimentation with tools like Cursor develops genuine intuition about how AI actually behaves.

I agree. But I’d add something: you don’t learn the interesting lessons when things go right. You learn them when the AI confidently does the wrong thing, and you have to figure out why.

I’ve been building a SaaS product (Growth Method) with Laravel, Livewire, and Claude Code. I’m not a developer — I’m a founder. Over the past few months, I’ve shipped features, fixed bugs, refactored performance problems, and handled security reports, all by working alongside an AI coding agent.

Here’s what that experience actually taught me about AI.

1. AI is great at structure, bad at taste

The most revealing thing I’ve done with Claude Code is polish the UX of my app. Not build features — polish. The small stuff: border colours, font weights, heading alignment, dropdown icon choices.

None of these were things the AI flagged on its own. Every single one came from me noticing something felt slightly off and asking about it. The AI never said “your border colours are inconsistent” or “that heading isn’t aligned with the sidebar logo.” I had to notice, then ask.

Once I asked, the AI was excellent. It audited 38 files for mismatched border colours, found old brand colours hiding in hover states, and explained the difference between gray and zinc in Tailwind. It executed the fixes perfectly.
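
To make that concrete, the kind of change this audit produces is tiny. The classes below are illustrative, not my actual markup; the "old brand" hover colour is a made-up stand-in:

```blade
{{-- Before (illustrative): a gray border with an old brand colour hiding in the hover state --}}
<div class="rounded-lg border border-gray-200 hover:border-indigo-300">…</div>

{{-- After: standardised on zinc across components, hover state brought into the same palette --}}
<div class="rounded-lg border border-zinc-200 hover:border-zinc-300">…</div>
```

Multiply that by 38 files and you get why the execution is worth delegating, even though the noticing isn't.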

But it never would have initiated any of them.

This maps directly to what Shultz calls the gap between “technically correct” and “actually good.” AI handles structure — find-and-replace, consistent patterns, applying rules across a codebase. Taste — the judgement that something looks wrong, that a dashed line is too heavy, that meatball menus feel more current than kebab menus — remains human work.

The lesson: AI won’t tell you your app looks bad. It will fix everything you point at. You still need to do the pointing.

2. AI confidently applies the fix in the wrong place

My favourite example of AI failure is the Flux UI date formatting problem. The charts on my dashboard were showing raw dates (2025-01-01) instead of month names. Simple fix, right?

Claude Code’s first attempt was to change the :format prop in the Blade template. Reasonable — that’s what the prop is for. But when scale="categorical" is set, Flux UI silently ignores the :format prop. No error. No warning. The prop just does nothing.

The second attempt removed the categorical scale, which made the labels appear — but duplicated and interpolated, because the chart was now treating 5 monthly data points as a continuous time series.

The third attempt — formatting the data in PHP before it reaches the chart — worked perfectly.
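
Here's a minimal sketch of what "format in PHP first" can look like, assuming a Livewire component assembles the chart data. The class, method, and the hard-coded values are hypothetical, not my actual code:

```php
use Carbon\Carbon;
use Livewire\Component;

class DashboardCharts extends Component
{
    // Sketch: build display-ready labels in PHP so the Blade template
    // never has to format raw dates itself.
    public function chartData(): array
    {
        // In a real app this would come from a query; hard-coded here for clarity.
        $raw = [
            '2025-01-01' => 12,
            '2025-02-01' => 19,
            '2025-03-01' => 8,
        ];

        return collect($raw)
            ->map(fn (int $count, string $date) => [
                'month' => Carbon::parse($date)->format('M'), // "Jan", "Feb", "Mar"
                'count' => $count,
            ])
            ->values()
            ->all();
    }
}
```

With the month names baked into the data, the categorical scale can stay and the template no longer needs a format prop at all.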

This is exactly what Shultz and Rachitsky describe as “semantic fragility” in Part 2: the model technically understands the words (format, date, chart) but misses the structural context (categorical scale ignores format props). It confidently invented a solution at the template layer when the fix belonged in the backend.

The lesson: When an AI fix doesn’t work, the diagnosis is usually wrong, not just the code. The model applied the right concept in the wrong place. Ask it to explain why the first approach failed before accepting the second.

3. AI defaults to the most powerful tool, not the right one

When I investigated why my campaigns page felt sluggish, the root cause was architectural: every table row was a full Livewire component. Ten rows meant 11 Livewire components (the parent page plus one per row), each with its own PHP lifecycle, JSON serialisation, and server communication channel. All that overhead just to display a name, an avatar, and a date.

The fix was to replace Livewire components with plain Blade markup. Same HTML. No interactivity needed. The result: 200 fewer lines of code, 10 fewer component lifecycles, identical output.
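
A hedged before/after sketch of that change; the campaign-row component and the avatar_url attribute are made-up names, not my actual schema:

```blade
{{-- Before (hypothetical): one nested Livewire component per row, each with
     its own lifecycle and payload, just to print static data --}}
@foreach ($campaigns as $campaign)
    <livewire:campaign-row :campaign="$campaign" :key="$campaign->id" />
@endforeach

{{-- After: plain Blade renders the same HTML with no extra component lifecycles --}}
@foreach ($campaigns as $campaign)
    <tr>
        <td><img src="{{ $campaign->owner->avatar_url }}" alt="" class="h-8 w-8 rounded-full"></td>
        <td>{{ $campaign->name }}</td>
        <td>{{ $campaign->created_at->format('j M Y') }}</td>
    </tr>
@endforeach
```

Nothing about the data changes; the rows simply stop being components of their own.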

This pattern — defaulting to the most powerful tool rather than the simplest — is something I’ve seen repeatedly with AI coding agents. Asked to render a table row, the AI reaches for Livewire because it can, not because it should. It doesn’t ask “does this piece of UI need to independently talk to the server?” It just builds.

Lenny’s Part 2 calls this a failure of guardrails. The fix isn’t a smarter model — it’s explicit constraints. Caleb Porzio, who created Livewire, says it directly: “Before you extract a portion of your template into a nested Livewire component, ask yourself: does this content need to be live?”

That question is a guardrail. The AI won’t ask it unprompted.

The lesson: AI builds what you ask for using the most capable tool available. You need to ask “is this the simplest thing that works?” because the AI won’t.

4. Context is the hardest problem (and the most invisible)

When I pulled my Laravel project locally for the first time, I hit four errors in ten minutes. None were bugs. They all came down to local setup: my .env was missing credentials for S3, Algolia, and Bento, and my Livewire package version was wrong.

Claude Code fixed each one quickly — swap FILESYSTEM_DISK=s3 to local, set SCOUT_DRIVER=null, add BENTO_OFFLINE=true. Easy. But what struck me was that the AI needed me to encounter each error individually and paste it in. It didn’t proactively say “you’re running locally for the first time — here are the six .env values you’ll need to change.”
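
For reference, here are the three overrides named above as they'd sit in a local .env; the comments are mine, and the exact set any project needs will differ (the Livewire version mismatch is a Composer fix, not an .env value):

```env
# Local-only overrides
FILESYSTEM_DISK=local   # stop Laravel reaching for S3 credentials
SCOUT_DRIVER=null       # disable Algolia indexing locally
BENTO_OFFLINE=true      # keep the Bento integration quiet in local dev
```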

The information was all there. The AI knew the codebase. It knew which services required API keys. It could have saved me thirty minutes by listing everything upfront. But it didn’t have the context that I was setting up a local environment for the first time, and it didn’t think to ask.

This is what the Lenny articles call “context engineering” — the challenge of giving AI the right information at the right time. It’s also where the biggest product opportunities are. The AI that proactively tells you “you’re going to hit four errors, here’s how to fix all of them” is dramatically more useful than the one that waits for you to hit each wall.

The lesson: AI has access to information but not always context. The gap between “knows the answer” and “knows when to offer the answer” is where most of the friction lives.

5. You don’t need to understand the code — but you need to understand the concepts

My git article documents going from “what does git add . do?” to “commit and push, this closes issue 226” in a single session. I never learned the commands. I learned five concepts — stage, commit, push, pull, branch — and described what I wanted in plain English.

Same with migrations. The word “migration” terrified me. When Claude Code said it had “created a migration file,” I immediately asked “do I need to manually do something?” Once I understood the concept — a migration is a recipe for a database change, with a built-in undo button — I stopped worrying. Now I review migration files the same way I review any code: by understanding what they intend to do, not how they do it.
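
That mental model matches the shape of the file itself. A generic example, not from Growth Method (the campaigns table and status column are illustrative): up() is the recipe, down() is the undo button.

```php
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    // The "recipe": the database change this migration intends to make.
    public function up(): void
    {
        Schema::table('campaigns', function (Blueprint $table) {
            $table->string('status')->default('draft');
        });
    }

    // The "undo button": how to reverse the change if something goes wrong.
    public function down(): void
    {
        Schema::table('campaigns', function (Blueprint $table) {
            $table->dropColumn('status');
        });
    }
};
```

Reviewing a file like this is about the intent in up() and whether down() genuinely reverses it, not about the syntax.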

This is the core thesis of Shultz’s articles: building real things with AI teaches you what AI is actually good at and where it breaks. But I’d frame it differently for founders. You’re not building “AI product sense.” You’re building technical taste — the ability to ask the right questions, spot the wrong assumptions, and know when the AI is solving the wrong problem.

You don’t need to know PHP. You need to know that formatting data in the backend is more robust than formatting it in the template. You don’t need to know Livewire’s serialisation lifecycle. You need to know that a table row displaying static data doesn’t need its own server communication channel.

The lesson: The concepts matter more than the commands. Once you understand what should happen, AI handles how. But if you don’t understand the what, you can’t catch the AI when it’s wrong.

What this means for building with AI

If you’re a founder building with an AI coding agent, here’s the pattern I’ve found:

  1. You direct, AI executes. Taste, priorities, and “does this feel right?” stay with you. Structure, consistency, and “apply this change everywhere” go to the AI.

  2. Ask why before accepting the second attempt. When the first fix doesn’t work, the AI’s diagnosis is often wrong. Understanding why it failed matters more than the next code it writes.

  3. Simpler is almost always right. If the AI builds something complex, ask if a simpler approach exists. It probably does.

  4. Context is your job (for now). Tell the AI what you’re trying to achieve, not just what’s broken. “I’m setting up local dev for the first time” gets better help than pasting an error message.

  5. Learn concepts, not syntax. Five git concepts. One mental model for migrations. One rule for Livewire vs Blade. That’s enough to direct an AI agent effectively — and to catch it when it’s wrong.

The best way to build AI intuition isn’t reading about AI. It’s building something real and paying attention to where things break. If you’re already building with Laravel and an AI coding agent, you’re already doing the work. You just need to notice the lessons.

Key takeaway

Working with AI coding agents teaches you as much about AI as it does about your codebase. The failures are the curriculum — a confidently wrong fix, a silently ignored prop, an over-engineered component, a missing context clue. Pay attention to those moments. They’re where your intuition develops, and they’re what separates founders who use AI from founders who understand it.

