
Looking into BMAD

16.05.2026 agentic-coding bmad workflows opencode

I've been following the agentic-coding space pretty closely for a while now — built my own multi-agent orchestrator inside pi agent, written about test contracts as executable specifications, and generally tried to figure out what actually works versus what's just vibe-driven development with extra steps. Lately I've been looking into BMAD — the Breakthrough Method of Agile AI-Driven Development — and it's one of the few frameworks that seems to take the same structural approach I've been converging on, but formalised enough that you don't have to invent it yourself.

// what bmad actually is

BMAD is an open-source methodology — not a tool, not a platform — that defines a structured workflow for AI-assisted development. It comes out of the bmad-code-org organisation and is implemented as plugins for editors and AI coding assistants. The core idea is that instead of having a single open-ended chat with an LLM, you get a team of specialised agents with defined roles, a set of slash commands that trigger structured workflows, and a process model that moves work through stages — plan, design, implement, review, integrate.
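Concretely, I think of the stage model as a little state machine. The sketch below is my own reading of the docs: the stage names are BMAD's, the code is not.

```python
from dataclasses import dataclass, field

# Illustrative only: the five stages come from the BMAD docs, but this
# state machine is my own sketch, not the framework's implementation.
STAGES = ["plan", "design", "implement", "review", "integrate"]

@dataclass
class Task:
    description: str
    stage: str = "plan"
    artifacts: dict = field(default_factory=dict)  # output of each stage

def advance(task: Task, output: str) -> Task:
    """Record the current stage's output, then move to the next stage."""
    task.artifacts[task.stage] = output
    i = STAGES.index(task.stage)
    if i + 1 < len(STAGES):
        task.stage = STAGES[i + 1]
    return task

task = Task("add dark mode toggle")
advance(task, "plan: scope the settings screen")
assert task.stage == "design"
```

The useful property is that every stage leaves an artifact behind, so you can always see what the pipeline knew at each step.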

The official documentation on Nexonoma breaks the framework into five agent roles: the Orchestrator, the Analyst, the Architect, the Developer, and the Reviewer. Each one gets a specific context window and a specific set of responsibilities. The Orchestrator routes tasks between the others. The Analyst investigates requirements. The Architect produces designs. The Developer implements. The Reviewer audits. It's basically the Spotify model for AI agents — small, purpose-built squads with clear interfaces between them.
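To make the role split concrete, here is how I'd model it in a few lines. This is my own illustration, not anything from the BMAD codebase; the point is that each agent keeps its own isolated context and the Orchestrator reduces to a routing lookup.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the five-role model. Role names match the
# docs; the Agent class and routing table are mine.
@dataclass
class Agent:
    role: str
    responsibility: str
    context: list = field(default_factory=list)  # each agent's own window

TEAM = {
    "analyst": Agent("analyst", "investigate requirements"),
    "architect": Agent("architect", "produce designs"),
    "developer": Agent("developer", "implement"),
    "reviewer": Agent("reviewer", "audit the result"),
}

def orchestrate(stage: str, task: str) -> Agent:
    """The Orchestrator's job, reduced to a lookup: route each stage
    to the specialist that owns it."""
    routing = {
        "plan": "analyst",
        "design": "architect",
        "implement": "developer",
        "review": "reviewer",
    }
    agent = TEAM[routing[stage]]
    agent.context.append(task)  # only this agent sees the task
    return agent

assert orchestrate("design", "dark mode toggle").role == "architect"
```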

What caught my attention is that the workflow isn't just a suggestion in a README — it's encoded as slash commands. You type /plan, /design, /implement, /review, and the framework routes the task through the right sequence of agents with the right context. There are currently seven slash commands and five agents in the standard plugin, and the whole thing is designed to be model-agnostic — you can route different agents to different models depending on what they're good at.
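The routing could be as simple as a table from commands to agent sequences. To be clear, the command names are real but the sequences below are my guess at the wiring, not the plugin's actual behaviour.

```python
# Hypothetical command table: which agents a slash command runs, in
# order. The command names come from the post; the sequences are my
# own guess at how routing could work.
COMMANDS = {
    "/plan": ["analyst"],
    "/design": ["analyst", "architect"],
    "/implement": ["architect", "developer"],
    "/review": ["developer", "reviewer"],
}

def dispatch(command: str) -> list[str]:
    """Expand a slash command into the agents it runs, in order."""
    try:
        return COMMANDS[command]
    except KeyError:
        raise ValueError(f"unknown command: {command}")

assert dispatch("/review") == ["developer", "reviewer"]
```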

// the opencode connection

The reason BMAD landed on my radar in the first place is that there's been active work to integrate it into OpenCode, the open-source AI coding assistant I've been using more and more. A pull request went through recently that adds BMAD as a built-in workflow plugin — the seven slash commands and five agents, all wired into OpenCode's plugin system. The integration discussion on the BMAD repo covers the setup steps.

This matters because OpenCode already has a solid plugin and agent model — you can define custom modes, tools, and even MCP servers. BMAD layers a methodology on top of that, so instead of configuring your own agent team from scratch (which I did with pi agent), you get a battle-tested team structure with documented workflows. It's the difference between assembling your own IKEA furniture and buying something that already comes with instructions and the right tools in the box.

There's also a feature request open for pi agent integration — the tool I currently use for my multi-agent orchestrator. The interesting detail is that BMAD already generates compatible command files, so the integration path might be simpler than you'd expect. If pi agent picks this up, that's a compelling combination.

// what I've been testing

I've been running BMAD's workflow model side-by-side with my existing setup to see where it fits. Some observations so far:

The /plan/design/implement/review pipeline maps almost exactly onto the orchestrator pipeline I wrote about in my earlier post. The main difference is that BMAD formalises the handoffs — the output of each stage is a structured document that the next stage consumes. My approach was looser: I passed natural-language summaries between agents. BMAD's approach is more repeatable and easier to debug when something goes wrong.
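The difference shows up clearly in code. A natural-language summary is just a string; a structured handoff is a document the next stage can validate before it starts. The schema below is entirely my own, just to illustrate the idea:

```python
from dataclasses import dataclass

# My own minimal schema for a design-stage handoff; BMAD's actual
# document format will differ. The point is that the next stage can
# check required fields instead of parsing free text.
@dataclass
class DesignHandoff:
    task_id: str
    summary: str
    interfaces: list[str]      # contracts the Developer must honour
    open_questions: list[str]  # anything the Architect punted on

def validate(doc: DesignHandoff) -> None:
    """Fail the pipeline early if the handoff is unusable."""
    if not doc.interfaces:
        raise ValueError(f"{doc.task_id}: design handoff has no interfaces")
    if doc.open_questions:
        raise ValueError(f"{doc.task_id}: unresolved questions, send back")

validate(DesignHandoff("T-1", "settings toggle", ["SettingsStore.set"], []))
```

With loose natural-language handoffs, a bad design only surfaces when the Developer produces something wrong; here it fails at the handoff boundary, which is exactly why it's easier to debug.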

The five-agent model is the right level of granularity. You could argue for more agents (a dedicated tester, a security reviewer) but five keeps the cognitive overhead manageable. Each agent has a clear identity and the Orchestrator handles routing. In practice I think three of the five do most of the heavy lifting — Analyst, Architect, Developer — and the Orchestrator and Reviewer act as quality gates. That feels right.

The model-routing feature is underappreciated. BMAD lets you assign different LLMs to different agents. You can route the Architect to a strong reasoning model (Claude Opus or Gemini 2.5 Pro), the Developer to a fast/cheap model for iteration, and the Reviewer to a model with strong critical reasoning. This is exactly the pattern I've been trying to approximate with manual routing, and having it as a first-class feature of the framework is genuinely useful.
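As a sketch, per-agent routing is just a lookup table with a fallback. The agent-to-model pairings mirror the paragraph above, but the model names are placeholders and the config shape is mine, not BMAD's actual syntax.

```python
# Hypothetical per-agent model routing. Model names are placeholders,
# and this config shape is my own, not BMAD's real configuration.
MODEL_ROUTING = {
    "architect": "strong-reasoning-model",  # design needs deep reasoning
    "developer": "fast-cheap-model",        # quick iteration loops
    "reviewer": "critical-review-model",    # strong critical reasoning
}
DEFAULT_MODEL = "general-model"

def model_for(agent: str) -> str:
    """Pick the model for an agent, falling back to a general default."""
    return MODEL_ROUTING.get(agent, DEFAULT_MODEL)

assert model_for("architect") == "strong-reasoning-model"
assert model_for("analyst") == "general-model"
```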

// where I'm still not sold

A few things I'm keeping an eye on:

The installation process has friction. There's a Reddit thread where someone got stuck during the npx setup for the OpenCode integration. Not a dealbreaker — open-source tooling always has rough edges — but it suggests the onboarding isn't as smooth as it could be. I had a similar experience getting the initial plugin configured; the documentation assumes you already know OpenCode's plugin model.

The methodology is prescriptive. That's the point, but it also means you have to buy into the full model to get the benefit. If you already have a workflow that works for you, adapting to BMAD's agent roles and handoff structure might not be worth the overhead. I suspect it's most valuable if you're starting fresh or if your current approach is unstructured prompting.

No built-in test contract mechanism. BMAD's Reviewer checks code quality and design compliance, but it doesn't have the equivalent of the TestStore pattern I built — an executable contract that the agent runs against its own output. The Reviewer is a pass/fail gate based on code review, not test execution. I think there's a natural extension here: a /test slash command that generates and runs test contracts as part of the pipeline. Maybe something to contribute.
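Here's roughly what I have in mind for that /test stage: a contract the pipeline executes against the Developer's output, pass/fail by exit code. Everything below (the contract format, the slugify example, the runner) is hypothetical, not part of BMAD.

```python
import subprocess
import sys
import textwrap

# Hypothetical test contract: executable assertions against a module
# the Developer is expected to have produced. The `feature.slugify`
# function is an invented example, not real code.
CONTRACT = textwrap.dedent("""
    from feature import slugify
    assert slugify("Dark Mode") == "dark-mode"
    assert slugify("  trim  ") == "trim"
""")

def run_contract(contract: str, workdir: str) -> bool:
    """Run the contract in the Developer's workdir; pass/fail by exit code."""
    result = subprocess.run(
        [sys.executable, "-c", contract],
        cwd=workdir,
        capture_output=True,
    )
    return result.returncode == 0
```

A /test stage like this would slot between /implement and /review: the Reviewer keeps judging design compliance, while the contract gives you an objective gate the agent can't talk its way past.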

// what's next

I'm planning to run BMAD on a real feature — something non-trivial in the app I'm building — and compare the outcome against my current pi-agent orchestrator pipeline. The key metrics I care about:

  1. Time from task spec to merge-ready implementation
  2. Number of human interventions required mid-pipeline
  3. Quality of the output (crash-free rate, pattern consistency, test coverage)
  4. How easy it is to debug when the pipeline produces a wrong result
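For my own bookkeeping I'll probably log each run as a simple record covering those four metrics. The field names and structure here are mine, just how I plan to capture the comparison:

```python
from dataclasses import dataclass, asdict

# My own logging record for the BMAD vs pi agent comparison; field
# names mirror the four metrics listed above.
@dataclass
class PipelineRun:
    pipeline: str               # "bmad" or "pi-agent"
    hours_to_merge_ready: float # metric 1: task spec to merge-ready
    human_interventions: int    # metric 2: mid-pipeline interventions
    test_coverage_pct: float    # metric 3 proxy: output quality
    debug_notes: str = ""       # metric 4: what went wrong, if anything

run = PipelineRun("bmad", 3.5, 2, 81.0)
assert asdict(run)["human_interventions"] == 2
```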

If the results are promising, I'll write a follow-up with the full comparison. In the meantime, if you're in the agentic-coding space and haven't looked at BMAD yet, it's worth a weekend afternoon. The GitHub repo has everything you need to get started, and the Medium article by Courtlin Holt is a solid introduction to the philosophy behind it.

— AM, Amsterdam, May 2026