From Design Pack to Build Plan: How GitHub Copilot Custom Agents Helped Our Team Move Faster
By Alvin Vinalon
In my earlier article, Why "Vibe Coding" Isn't Enough: The Case for Agentic Engineering, I argued that AI agents are most valuable when they are treated like junior engineers: capable, fast, and useful, but still in need of direction, standards, and review.
This article is the practical sequel. It picks up where that first post left off and shows how we extended those ideas in a real project scenario. Instead of asking one AI assistant to “figure it out,” we used GitHub Copilot Custom Agents as a small delivery team: planning the work, locating patterns in the codebase, building in smaller steps, reviewing outputs, and curating as-built knowledge that stayed with the repository.
In a large enterprise monorepo, the bottleneck is rarely just writing code. It is understanding the design pack, connecting it to the current implementation, sequencing the work safely, and leaving behind documentation that the next team can actually use.
That is where this workflow paid off.
A delivery team, not a single assistant
Rather than using Copilot as one general-purpose chat partner, we split responsibilities across role-based custom agents. A planner broke requirements into work items. A coder handled targeted implementation. A reviewer checked assumptions and gaps. A tester validated the behavior. A knowledge-focused agent turned delivery context into Markdown that remained useful after release.
That separation mattered because it produced better artifacts with less context mixing. Instead of one giant response trying to do everything, we got a clearer chain: plan, build, review, test, document.
GitHub Copilot features such as Instructions and Skills made that possible. Instructions kept each agent aligned with repo conventions and quality expectations. Skills gave agents bounded autonomy to do real work—parse inputs, search the codebase, and assemble structured outputs—without turning the process into an uncontrolled free-for-all.
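To make that concrete, here is a sketch of what a role-specific agent definition can look like. The exact file location and frontmatter fields vary by Copilot surface and version, and the tool names and instruction text below are illustrative assumptions, not a verbatim copy of our configuration:

```markdown
---
name: planner
description: Breaks a design pack into sequenced, reviewable work items.
# Tool names are hypothetical; grant only what the role needs.
tools: ["codebase-search", "read-files"]
---

You are the planning agent for this monorepo.

- Cross-reference every requirement against existing patterns in the repo.
- Output work items as Markdown with dependencies, risks, and acceptance criteria.
- Never propose a one-off pattern when a shared convention already exists.
```

The point is the bounded scope: each agent gets a narrow role description and only the tools that role needs, which is what keeps the workflow from becoming an uncontrolled free-for-all.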
Figure: Role-based orchestration in practice. A Dev Lead coordinates Planner, Coder, Reviewer, Tester, and Knowledge Lead, with each role producing a distinct delivery artifact.
From a design pack to a build plan
One of the strongest results came before any code was written. We used the custom-agent workflow to analyze a large design pack and its supporting files, compare them with the existing monorepo, find reference implementations, and turn all of that into a structured implementation plan.
That planning step took less than an hour.
That may not sound “instant,” but that is exactly the point. This was not a single prompt and a clever reply. It was a multi-step workflow: reading documents, cross-referencing the repo, identifying dependencies, and producing something reviewable. Manually, that effort could have consumed most of a working day. With custom agents, the output arrived faster, and it was already organized into work items, dependencies, risks, and acceptance criteria.
The biggest win was not just speed. It was clarity. A large design pack often leaves teams with the hardest question: What do we build first? The implementation plan answered that directly.
Figure: From design pack to implementation plan. The workflow ingests design inputs, analyzes them against the codebase, and produces structured work items, acceptance criteria, and build sequencing in under an hour.
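One way to sanity-check the sequencing in a plan like this: once the work items and their dependencies exist as structured data, a few lines of Python can compute a safe build order and catch circular dependencies before anyone starts coding. This is an illustrative sketch, not part of the Copilot workflow itself, and the item names are hypothetical:

```python
from graphlib import TopologicalSorter

# Hypothetical work items from a generated implementation plan,
# each mapped to the items it depends on.
plan = {
    "shared-models": [],
    "ingestion-pipeline": ["shared-models"],
    "scoring-function": ["shared-models"],
    "integration-tests": ["ingestion-pipeline", "scoring-function"],
    "infrastructure": [],
}

# static_order() yields items with all dependencies first; it raises
# CycleError if the plan contains a circular dependency, which is
# exactly the kind of problem worth surfacing before build starts.
order = list(TopologicalSorter(plan).static_order())
print(order)
```

A check like this is cheap to run in review, and it turns "what do we build first?" from a judgment call into something the plan artifact can answer mechanically.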
From plan to delivery in smaller steps
The plan did not sit in a folder gathering digital dust. It became the bridge into delivery.
Because the work was already broken down, implementation could start in smaller, safer steps that matched existing repo patterns. Instead of beginning from a blank page, the team could map each work item to known conventions for shared models, pipelines, functions, tests, and infrastructure. That improved consistency and reduced the risk of inventing a one-off solution that looks clever on day one and awkward on day ten.
This is where agentic engineering becomes practical. The value is not that an agent writes a lot of code quickly. The value is that the engineer can steer the process with more confidence: build a step, review it, validate it, and move on.
That is a very different mindset from “vibe coding.” It is much closer to leading a capable junior team with strong tooling.
The repo becomes the knowledge layer
A second benefit appeared after planning and build work had started: knowledge capture.
We used a Markdown-first approach inspired by the idea of curating knowledge in formats that work well for both people and language models. Instead of pushing project context into a separate RAG stack, we kept implementation notes, as-built documentation, and delivery context inside the repository itself.
That choice had practical advantages. The knowledge was versioned with the code. It could be reviewed in pull requests. It stayed close to the implementation instead of drifting into a disconnected wiki or forgotten document library. And because it lived in Markdown, GitHub Copilot could use the repo itself as grounded context.
For BAU teams, that matters. The result is not just a delivered feature, but a codebase that is easier to query, understand, and hand over.
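For illustration, a repo-native as-built note in this style might look like the following. The path and headings are our own convention, not a Copilot requirement:

```markdown
<!-- docs/as-built/ingestion-pipeline.md -->
# Ingestion pipeline: as-built notes

## What shipped
Batch ingestion for customer events, following the shared pipeline pattern.

## Decisions
- Reused the shared retry policy rather than introducing a new one.

## Known gaps
- Backfill tooling is still manual (tracked as a follow-up work item).
```

Because a note like this travels through the same pull requests as the code it describes, it gets reviewed, versioned, and kept honest in a way a detached wiki page rarely is.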
Figure: Comparing two approaches to AI-ready project knowledge. A traditional RAG stack adds retrieval infrastructure, while a repo-native Markdown wiki keeps the knowledge versioned, reviewable, and directly usable in GitHub Copilot Chat.
Why this matters
The real value of GitHub Copilot Custom Agents in this workflow was not limited to code generation. They helped compress the time between design and delivery, improved build sequencing, reduced repeated analysis, and made as-built knowledge easier to preserve.
That is why I see this as a natural extension of my first article. The first post argued that AI needs engineering discipline. This one shows what that looks like when the idea is applied in a real project: role-based agents, bounded autonomy, structured planning, smaller delivery steps, and repo-native knowledge that survives beyond the initial release.
If AI is going to help teams in enterprise software, this is the shape I believe it needs to take. Not magic. Not vibes. Engineering.
Reader takeaways
- Treat GitHub Copilot Custom Agents as a role-based engineering team, not a single all-purpose assistant.
- Use planning agents to turn large design packs into structured, build-ready implementation plans.
- Use smaller, reviewable work items to reduce delivery risk and stay aligned with existing repo patterns.
- Keep as-built knowledge in Markdown inside the repo so it remains versioned, reviewable, and queryable.
- Measure the value of AI not only by how fast it writes code, but by how much it improves clarity, traceability, and handover.