Sub-Agents in AI Coding Assistants
Sub-agents are a core architectural pattern in modern AI coding tools such as Claude Code. Rather than handling every part of a task in a single conversation turn, the system breaks work down and delegates each piece to a specialised agent. Each sub-agent has its own focus, its own set of tools, and, in some cases, its own choice of underlying model.
How Sub-Agents Work
When you give Claude Code a high-level instruction -- say, "add a new dashboard page" -- it does not try to do everything at once. Instead, it orchestrates a series of sub-agents, each responsible for a distinct phase of the work:
- Exploration -- An Explore agent reads your codebase to understand the existing structure, conventions, and relevant files. It answers questions like "what patterns do the current pages follow?" without modifying anything.
- Planning -- A Plan agent takes the information gathered by the Explore agent and designs a concrete implementation approach. It produces a step-by-step plan that you can review before any code is written.
- Implementation -- Once you approve the plan, one or more agents carry out the actual work: creating files, writing code, running commands, and executing tests.
- Validation -- A Test Runner or Build Validator agent checks that the implementation actually works, running your test suite and flagging any failures.
Each of these agents runs autonomously within its own scope. You do not need to manually hand off work between them -- the orchestrating system manages that.
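The hand-off between phases can be sketched as a simple pipeline that threads shared context from one agent to the next. This is an illustrative sketch, not Claude Code's actual API: the `SubAgent` class, `run_pipeline` function, and the toy agents are all hypothetical stand-ins for the orchestration pattern described above.

```python
# A minimal sketch of the explore -> plan -> implement -> validate hand-off.
# All names here (SubAgent, run_pipeline) are illustrative, not Claude Code's
# real interface; the point is the orchestrator threading context between agents.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SubAgent:
    name: str
    run: Callable[[dict], dict]  # reads shared context, returns new findings

def run_pipeline(agents: list[SubAgent], task: str) -> dict:
    """Run each agent in order, merging its findings into a shared context."""
    context = {"task": task}
    for agent in agents:
        context.update(agent.run(context))
    return context

# Toy agents standing in for the real phases.
explore = SubAgent("explore", lambda ctx: {"files": ["pages/Home.tsx"]})
plan = SubAgent("plan", lambda ctx: {"steps": [f"edit {f}" for f in ctx["files"]]})
implement = SubAgent("implement", lambda ctx: {"done": list(ctx["steps"])})
validate = SubAgent("validate", lambda ctx: {"tests_passed": len(ctx["done"]) > 0})

result = run_pipeline([explore, plan, implement, validate], "add a dashboard page")
print(result["tests_passed"])  # True
```

Each toy agent only reads from and writes to the shared context, mirroring how the orchestrating system, rather than the user, manages hand-offs.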
Common Sub-Agent Types
The following agent types appear frequently in Claude Code workflows:
- Bash Agent -- Executes shell commands and performs terminal operations such as running builds, installing dependencies, or querying the file system.
- Explore Agent -- Searches and analyses codebases to understand structure, find relevant files, and gather context. It does not modify anything.
- Research Agent -- Gathers information from external sources, documentation, APIs, or the web to inform decisions.
- Plan Agent -- Designs implementation strategies and breaks complex work into concrete, ordered steps.
- Test Runner -- Executes tests and reports results, flagging failures for the developer or the orchestrating agent to address.
- Build Validator -- Compiles code and validates that the build succeeds before changes are considered complete.
When Are Sub-Agents Useful?
Sub-agents become particularly valuable when the task at hand is too large or too complex for a single pass. A few common scenarios:
- Multi-file features -- When a new feature touches several files across different layers of the application, sub-agents can explore, plan, and implement each piece in the right order.
- Research-heavy work -- When you need to understand how something works before you can build it, an Explore or Research agent can gather that context first, saving you time.
- Code review -- A review workflow can dispatch one agent to fetch the pull request, another to run static analysis, and a third to reason about code quality -- each with an appropriate model for its job.
- Long-running tasks -- Tasks that would take many manual steps (such as setting up a new service or refactoring a module) benefit from being broken into smaller, autonomous chunks.
Model Selection Within Sub-Agents
One of the advantages of the sub-agent model is that each agent can use a different Claude model depending on the complexity of its task. An Explore agent doing a straightforward file search might use Haiku for speed, whilst a Plan agent reasoning about architecture might use Opus for deeper analysis. This keeps workflows fast and cost-effective without sacrificing quality where it matters.
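One way to think about this routing is a lookup table from agent role to model. The model family names below are real, but the routing table and `pick_model` helper are assumptions about how a workflow might be configured, not a documented Claude Code mechanism.

```python
# Hypothetical routing of agent roles to Claude model families by task
# complexity. The table itself is an illustrative assumption.
MODEL_FOR_AGENT = {
    "explore": "haiku",      # fast, cheap file search
    "bash": "haiku",         # simple command execution
    "plan": "opus",          # deep architectural reasoning
    "implement": "sonnet",   # balanced speed and quality
    "test-runner": "haiku",  # run tests, report pass/fail
}

def pick_model(agent_type: str, default: str = "sonnet") -> str:
    """Fall back to a balanced default for agent types not in the table."""
    return MODEL_FOR_AGENT.get(agent_type, default)

print(pick_model("plan"))      # opus
print(pick_model("research"))  # sonnet (fallback)
```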
Sub-Agents vs. Single-Turn Prompts
It is worth understanding the distinction between a single-turn prompt and a sub-agent workflow:
| Approach | Best for | Limitation |
|---|---|---|
| Single-turn prompt | Small, well-defined tasks | Struggles with tasks that require exploration or multi-step reasoning |
| Sub-agent workflow | Complex, multi-phase tasks | Adds orchestration overhead, which is rarely worth it for trivial changes |
For most non-trivial development work in 2026, sub-agent workflows are the default. They give you better visibility into what the AI is doing at each stage, make it easier to course-correct, and produce more consistent results than asking a single model to do everything in one go.
Creating Custom Agents for Your Projects
You can define custom agents in your `.claude/agents/` directory to automate workflows specific to your project. Each agent is a YAML-frontmatter markdown file that describes:
- Purpose: What this agent does and when to use it
- Capabilities: Which tools and skills it has access to
- Workflow: Step-by-step process it follows
- Output: What it delivers (files saved, commands run, etc.)
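Putting those four elements together, a minimal agent file might look like the sketch below. The frontmatter keys and the body headings shown here are illustrative; check the Claude Code documentation for the exact fields your version supports.

```markdown
---
name: code-reviewer
description: Reviews pull requests for style and correctness. Use when a PR is ready for review.
tools: Read, Grep, Bash
model: sonnet
---

# Purpose
Review code changes against the project's style guide.

# Workflow
1. Read the diff for the files under review.
2. Check naming, structure, and test coverage against the style-guide skill.
3. Write findings to review-notes.md.

# Output
A review-notes.md file summarising issues found, ordered by severity.
```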
Agent Design Principles
Keep agents focused — Each agent should handle one primary responsibility. An agent for code review should do code review, not also run tests and push to main.
Reference skills, don't duplicate — If your agent needs writing guidelines, version control practices, or testing conventions, store those in skills and reference them. This keeps agents lean and makes guidelines reusable.
Define clear input and output — Specify exactly what the agent reads (file paths, arguments, environment state) and what it produces (files created, state changes, side effects). This makes agents composable and debuggable.
Use appropriate models — Lightweight agents doing straightforward tasks (file searches, command execution) can use Haiku for speed. Complex reasoning tasks (architecture design, code review) benefit from Opus or Sonnet.
Provide examples in descriptions — Include concrete examples of when and how to invoke the agent. This helps both humans and orchestration systems use it correctly.
Example: Content Creation Agents
A practical example from Paulund: three agents for managing social content.
LinkedIn Content Drafter focuses on one responsibility: turning raw ideas into LinkedIn posts. It:
- Reads ideas from `content/ideas/`
- References two skills: the LinkedIn style guide and professional brand guidelines
- Saves output to `content/drafts/` with a `-linkedin.md` suffix
- Uses a clean workflow: read idea → determine format → write content → save

X Content Drafter does the same for Twitter/X:
- Reads from `content/ideas/`
- References the X style guide
- Saves to `content/drafts/` with an `-x.md` suffix

Tech Article Writer creates blog articles:
- Reads from `content/ideas/`
- References the blog style guide and technical writing principles
- Saves to `content/drafts/` with a `-blog.md` suffix
Each agent is focused, references reusable guidance (skills), and produces well-defined outputs. This makes them easy to invoke, understand, and maintain.