Vibe Coding Works (podcast highlights)

Vibe coding has a branding problem.

First coined by Andrej Karpathy, a serious engineer, it quickly turned into 2025’s favorite buzzword for amateurs who “built an app in 3 hours”.

But you shouldn’t let that noise discourage you from using AI (unless you’re okay with giving up rocket fuel). If you know what you’re doing, AI-assisted development can help you 10x your output.

I recently invited two engineers I highly respect, Michał Warda and Tomasz Gancarczyk, to discuss just that on my new podcast (links at the bottom). Here are my main takeaways.

“I tried Copilot a year ago”

Some resistance to AI tools is understandable, especially from companies that handle sensitive data. But your concerns from last quarter don’t apply anymore. AI coding tools are shipping major improvements at ever-shorter intervals, and they’re not going to stop any time soon.

Whatever assumptions you hold, revisit them regularly. There’s no point in getting attached to one specific tool, either. All I care about is what the best-performing option is right now.

Context management > prompt engineering

The right dose of relevant context improves AI output more than any prompt. Some ideas that came up in our conversation:

  • Keeping files small enough for AI to read completely in one go
  • Defining project-specific rules in different directories
  • Restarting from a checkpoint when the conversation goes off track
  • Using web search to load recent docs and library versions
  • Loading related services together so the AI understands cross-service interactions

Of course, we’re still experimenting to see what strategies work best. Our toolkit keeps growing.
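To make the first idea concrete, here’s a minimal sketch of a “does this file fit in one go?” check. The ~4-characters-per-token heuristic and the per-file budget are assumptions, not exact figures — tune them to your model:

```python
import os

# Rough heuristic: ~4 characters per token (an assumption, not exact).
CHARS_PER_TOKEN = 4
# Hypothetical budget: keep any single file well under the model's window.
MAX_TOKENS_PER_FILE = 2000

def files_too_large(root: str) -> list[str]:
    """Return source files an agent likely can't read in a single pass."""
    flagged = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            estimated_tokens = os.path.getsize(path) / CHARS_PER_TOKEN
            if estimated_tokens > MAX_TOKENS_PER_FILE:
                flagged.append(path)
    return flagged
```

Running something like this before a session tells you which files to split (or summarize) so the agent never works from a truncated view.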

A senior and 10 interns walk into a bar

I like to think about my agent as an “eager intern” (or more like “10 eager interns”). Someone who’s hardworking and tireless but can make simple mistakes if you’re not paying attention. This helps me remember to always monitor their output and be proactive with guidance.

It’s also a good way to decide how much autonomy to allow: “Would I let an intern push something to prod here?” If you want your AI interns to iterate without constant babysitting, build proper feedback loops. This way, you can let them solve complex problems while you’re away making your coffee.

In practice, this means:

  • Tests. If a codebase doesn’t have a test suite, the first thing I do is ask the AI to build one. Then, I can let the agents iterate autonomously, using git as a safety net.
  • Linters and type checkers. If something makes a programmer’s life better, it usually makes an AI’s life better, too.
  • Terminal access. This is where people get nervous, but I believe it’s crucial. If your agent can run commands, check outputs, and iterate, it can solve problems autonomously.
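Put together, the loop can be sketched like this — a hypothetical harness where the check commands (pytest, ruff) are placeholders for whatever your project actually uses. It runs the checks, feeds failures back to the agent, and commits only when everything is green:

```python
import subprocess

# Placeholder check commands; swap in your project's own stack.
CHECKS = [["pytest", "-q"], ["ruff", "check", "."]]

def run_checks(checks=CHECKS):
    """Run each check; on the first failure, return its output as feedback."""
    for cmd in checks:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            return False, result.stdout + result.stderr
    return True, ""

def agent_loop(ask_agent, checks=CHECKS, max_rounds=5):
    """Let the agent iterate until checks pass, with git as a safety net.

    `ask_agent` is a caller-supplied function that sends the failure
    output back to the agent and lets it edit the code.
    """
    for _ in range(max_rounds):
        ok, feedback = run_checks(checks)
        if ok:
            # Checkpoint the known-good state so a bad round is cheap to revert.
            subprocess.run(["git", "add", "-A"])
            subprocess.run(["git", "commit", "-m", "agent: checks green"])
            return True
        ask_agent(feedback)
    return False
```

The commit on green is the “git as a safety net” part: if the next round makes things worse, you reset to the last checkpoint instead of untangling the agent’s edits by hand.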

Here are some more resources I found interesting when preparing for this conversation:

And if you want to hear everything we had to say, listen to the first episode of my new podcast, Hidden Layers:


Interested in bringing these practices to your team? I run hands-on workshops on AI-assisted development. Book a call here.