Why "Vibe Coding" Isn't Enough: The Case for Agentic Engineering

By Alvin Vinalon

I was watching a recent video of Peter Steinberger chatting with Lex Fridman about AI agents, and something Peter said really stuck with me. They were talking about how people build software with AI, and Peter dropped a hot take: he actually thinks "vibe coding" is a bit of a slur.

I completely agree with him.

There’s this trend right now where people think you can just smash a prompt into an AI, let it "vibe," and out comes a perfect product. But Peter nailed it when he said that vibe coding is what happens at 3:00 a.m., and then you have regrets the next day because you have to clean up the mess. It’s like sitting at a piano for the first time, hitting a few keys, and wondering why you aren’t making a symphony.

A Hard Lesson: Just Because It Works, Doesn’t Mean It’s Ready

I actually learned this lesson the hard way recently.

I had an engagement with a client where we were allowed to use GitHub Copilot, agents, and prompt templates. We had instructions in place to ensure "good quality" and "security"—all the guardrails we thought would generate solid code. The features were done, deployed to the test environment, and tested.

We were excited and raised a PR, only to have SonarQube and CheckMarx flag a pile of rule violations in our code. We spent several hours analyzing each violation and fixing them manually, which practically offset the hours we had saved building the feature. The code technically worked, but it failed miserably in the SAST tools.

Why? LLMs don't reliably apply what they know about SonarQube and CheckMarx rules, especially when a client runs their own specific rulesets.

The learning here is that just because it works, doesn't mean it's ready. As the engineer, you have to understand and know your client's coding standards and provide these as specific instructions in your agent's persona, prompt files, or instruction files.
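For GitHub Copilot, one concrete place to put those standards is a repository instructions file (`.github/copilot-instructions.md` is Copilot's convention for this). The rules below are purely illustrative—substitute your client's actual rulesets:

```markdown
# Coding standards for this repository

- All code must pass our SonarQube quality gate and CheckMarx scan
  before a PR is raised.
- Never concatenate user input into SQL; use parameterized queries.
- Do not log secrets, tokens, or personally identifiable information.
- Keep cognitive complexity per function low; extract helpers instead
  of nesting conditionals.
```

Copilot reads this file automatically on every chat request in the repository, so the agent gets your standards without you re-prompting them each session.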

The Real Way: Agentic Engineering

I want to talk about Agentic Engineering. This is how OpenClaw is built, and it’s the reason that project is taking the internet by storm. It’s not just a bunch of AI-generated spaghetti code; it’s engineered.

The problem with "vibe coding" is that it implies you are letting the Agent decide how to build your software. If you are a hobbyist, that’s fine. But for true enterprise production software, that is a catastrophe waiting to happen. You cannot rely on marketing statements claiming AI tools produce "Better Code Quality and Security" if you are just letting the model rely on its training data without strict oversight.

Be the Lead Engineer, Not Just a Prompter

We need to look at these AI agents—whether it's Claude Code, GitHub Copilot, or others—as junior developers. They need a Lead Engineer. That’s you.

In the video, Peter explains that these agents start every session knowing nothing about your project. They don't have the full context. If you just let them guess, they might force a feature into an architecture where it doesn't fit, or spend forever thinking in circles.

As the human in the loop, you have to provide the guardrails. You have to instruct the agents exactly how you want the code produced. You handle the security and the quality checks. Peter mentioned that when he reviews PRs from agents, he approaches it like a discussion with a capable engineer, asking, "Do you understand the intent?" before even looking at the implementation.

Let Agents Handle the Boring Stuff

The original goal of these agents wasn't to replace our thinking. It was to take away the mundane, repetitive tasks.

Peter put it perfectly: most software is just moving data from one shape to another, or figuring out how a button is aligned in Tailwind. That stuff is boring. I don't need to read that code. That is the perfect use case for an agent. Let them change the color themes, write the test scripts, and write the documentation.

This saves us hours of time so we can do the actual heavy lifting: Design Thinking, Architecture, and Security.

Orchestration is the Key

This brings me to "Agent Orchestration." This is the workflow we should be promoting. It’s not about typing a prompt and hoping for the best. It’s about managing a team of autonomous agents.

You act as the overlord. You provide the direction. When the agent produces something, you question it. You iterate with them. Peter describes having multiple agents running at once—maybe one is exploring an idea, while two or three others are fixing bugs or writing documentation.
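To make the idea concrete, here is a minimal conceptual sketch of that delegation pattern in Python. None of this is a real Copilot or agent API—the `Agent` and `Orchestrator` classes and the task strings are all hypothetical, standing in for whatever tooling actually runs your agents:

```python
# Conceptual sketch of agent orchestration. All names are hypothetical;
# a real agent would call an LLM instead of returning a string.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    role: str
    can_write_code: bool

    def handle(self, task: str) -> str:
        # Stand-in for an actual LLM call; we just record the hand-off.
        return f"[{self.name}] handled: {task}"


@dataclass
class Orchestrator:
    team: dict = field(default_factory=dict)

    def register(self, agent: Agent) -> None:
        self.team[agent.role] = agent

    def delegate(self, task: str, role: str) -> str:
        # The orchestrator routes work; it never writes code itself.
        agent = self.team.get(role)
        if agent is None:
            raise KeyError(f"no agent registered for role: {role}")
        return agent.handle(task)


team = Orchestrator()
team.register(Agent("Planner", "planning", can_write_code=False))
team.register(Agent("Designer", "design", can_write_code=False))
team.register(Agent("Coder", "coding", can_write_code=True))

print(team.delegate("Plan the login feature", "planning"))
print(team.delegate("Implement the login form", "coding"))
```

The point of the sketch is the shape of the workflow: one coordinator that only routes and reviews, and specialists that each do one thing—with you, the human, questioning the output at every step.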

That is the future of software engineering. It’s not about being lazy and "vibe coding" your way to a finished product. It’s about Agentic Engineering—using these tools to build high-quality, secure software where the human is still very much the captain of the ship.

Here’s what this looks like in practice.

In my own project, I use Visual Studio Code with GitHub Copilot as my main AI coding assistant. But instead of just chatting with one agent, I set up a small “team” I can hand work to—features, bug fixes, whatever. This setup was inspired by Burke Holland’s YouTube video on "Agentic Orchestration".

(If you want to try this: Subagents in VS Code is currently experimental. Go to Settings, search for "subagent", and enable it.)

Chat › Custom Agent In Subagent: Enabled

Here is my Agentic Engineering Team in GitHub Copilot:

  • Orchestrator (orchestrator.agent.md): Breaks down complex requests into tasks and delegates them to specialist subagents. Crucially, it does not write code.
  • Planner (planner.agent.md): Creates the plan, does the research, double-checks the logic, and calls out edge cases before we build. It also does not write code.
  • Designer (designer.agent.md): Focuses solely on the best possible user experience and interface.
  • Coder (coder.agent.md): The dedicated code writer.

Another nice benefit of this kind of agent orchestration is that each agent gets its own context window—so you’re a lot less likely to hit that dreaded “sorry, you’ve reached the context limit” message in GitHub Copilot Chat.

This blog post component you’re reading right now was 100% developed by my Agentic Engineering team.

Burke generously shared all of these agent files here.

If you have access to GitHub Copilot, I'd recommend setting this up, testing it in your workflow, and then adjusting the instructions and agent files to match your standard development patterns. (Yes, you still need to do this—or delegate it to GitHub Copilot 🙂)