Building Agentic Flows with Claude Code

Part 3 — Orchestration

Chapters 9-11 · Coordination patterns, agent teams, and a full walkthrough.

Chapter 9

Orchestration Patterns

9.1 The Orchestrator's Role

In Claude Code, orchestration isn't a special component — it's a responsibility. The orchestrator is whichever part of your system is making routing decisions: the main conversation directed by CLAUDE.md, a Skill that coordinates a multi-step workflow, or a dedicated orchestrator agent for complex systems.

Three things define orchestration:

  • It receives the goal — not individual tasks, but the high-level objective
  • It decides what context flows where — which subagent gets which information, in what order
  • It assembles the result — subagents report back to the parent; they cannot coordinate with each other directly
🔑
The key architectural constraint: Subagents can only communicate with their parent, not with each other. If Agent A's output needs to reach Agent B, the orchestrator must pass it. This isn't a limitation — it's what keeps systems predictable and debuggable.

What orchestrates in Claude Code

| Orchestrator type | How it works | Best for |
|---|---|---|
| CLAUDE.md rules | Delegation rules in CLAUDE.md tell Claude when to invoke which agent. No extra file needed. | Simple flows, 2-3 agents, sequential steps |
| Workflow (wor-) | A dedicated entity in .claude/commands/wor-*.md that coordinates the entire pipeline — invokes agents, manages handoffs, assembles results | Named, repeatable multi-step pipelines with defined steps |
| Orchestrating Skill | A Skill with context: fork coordinates a workflow, invoking agents and passing context | Reusable orchestration that agents can also trigger |

9.2 Five Orchestration Patterns

Pattern 1 — Linear

Each agent completes its work and passes the full output to the next. The simplest pattern. Use it when stages are sequential and each stage depends entirely on the previous one.

text
Input
  │
  ▼
Agent A (Researcher)
  │ writes output to /work/research.md
  ▼
Agent B (Writer) ← reads /work/research.md
  │ writes output to /work/draft.md
  ▼
Agent C (Editor) ← reads /work/draft.md
  │
  ▼
Final Output

CLAUDE.md orchestration for linear flows:

markdown
## Orchestration Rules
When asked to produce an article on any topic:
1. Invoke @researcher — pass the topic, ask it to write to /work/research.md
2. After researcher completes, invoke @writer — tell it to read /work/research.md
3. After writer completes, invoke @editor — tell it to read /work/draft.md
4. Return the editor's final output to the user
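The control flow those rules describe can be sketched in plain Python. Here `run_agent` is a hypothetical stand-in for whatever actually invokes a subagent — the point is the shape: each stage writes a file at a known path, and the orchestrator tells the next stage exactly where to read.

```python
from pathlib import Path

def run_agent(name: str, prompt: str) -> str:
    """Hypothetical stand-in for invoking a subagent; returns its report."""
    return f"[{name}] {prompt[:60]}"

def linear_pipeline(topic: str, workdir: Path) -> str:
    """Linear pattern: each stage writes a file the next stage reads."""
    workdir.mkdir(parents=True, exist_ok=True)
    research = workdir / "research.md"
    draft = workdir / "draft.md"

    # Stage 1: researcher writes its brief to a known path
    research.write_text(run_agent("researcher", f"Research {topic}"))
    # Stage 2: writer is told explicitly where to read from and write to
    draft.write_text(run_agent("writer", f"Draft from: {research.read_text()}"))
    # Stage 3: editor reads the draft and returns the final output
    return run_agent("editor", f"Edit: {draft.read_text()}")
```

Nothing downstream can start before its input file exists, which is exactly the property that makes linear flows easy to debug.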

Pattern 2 — With Checkpoints (Human-in-the-Loop)

A human review gate sits between stages. The orchestrator pauses and presents the intermediate result before proceeding. Use this for high-stakes decisions or content that needs human judgment before moving forward.

text
Input
  │
  ▼
Agent A (Researcher)
  │
  ▼
┌─────────────────────────────────────┐
│  👤 CHECKPOINT                      │
│  "Here's the research. Proceed with │
│   this framing? [Y / adjust / skip]" │
└─────────────────────────────────────┘
  │ (only if approved)
  ▼
Agent B (Writer)
  │
  ▼
Output

Encode checkpoints in CLAUDE.md by making the gate explicit:

markdown
## Orchestration Rules
For article production:
1. Run @researcher and present the research summary to the user
2. PAUSE — ask: "Does this research direction look right? Continue?"
3. Only proceed to @writer after explicit user confirmation
4. Present the draft before running @editor — same approval gate
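The gate itself is simple to reason about as code. This sketch (hypothetical `run_agent` and `ask` callables, not a real API) shows the defining property: if the human declines, nothing downstream runs.

```python
def checkpoint(summary: str, ask) -> bool:
    """Show an intermediate result and block until the human decides."""
    answer = ask(f"{summary}\nProceed? [y/N] ").strip().lower()
    return answer in ("y", "yes")

def gated_pipeline(topic, run_agent, ask):
    research = run_agent("researcher", topic)
    if not checkpoint(f"Research summary:\n{research}", ask):
        return None  # halted at the gate; the writer never runs
    return run_agent("writer", research)
```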

Pattern 3 — With Decisions (Routing)

A classifier agent reads the input and routes it to the appropriate specialist. Use when the same trigger can lead to fundamentally different workflows depending on what's in the input.

text
Input
  │
  ▼
Classifier Agent
  │
  ├─ "technical question" ──→ Technical Support Agent
  ├─ "billing issue"      ──→ Billing Agent
  ├─ "feature request"    ──→ Product Feedback Agent
  └─ "complaint"          ──→ Escalation Agent ──→ Human

Classifier agent frontmatter:

yaml
---
name: classifier
description: Classifies incoming support requests by type and routes them
  to the appropriate specialist. Use for any new support ticket or
  customer message that needs to be triaged.
model: haiku
tools: Read
permissionMode: default
---

Classify the incoming message into exactly one category:
- technical: software bugs, error messages, how-to questions
- billing: payment issues, subscription changes, refunds
- feature: product feedback, feature suggestions
- complaint: expressions of dissatisfaction requiring escalation

Output only the category label. Nothing else.
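Because the classifier outputs exactly one label, the orchestrator's routing step reduces to a lookup table. A minimal sketch (agent names are illustrative): the important design choice is failing loudly on an unexpected label rather than silently dropping a ticket.

```python
ROUTES = {
    "technical": "technical-support-agent",
    "billing": "billing-agent",
    "feature": "product-feedback-agent",
    "complaint": "escalation-agent",  # escalation agent hands off to a human
}

def route(label: str) -> str:
    """Map the classifier's single-word verdict to a specialist agent."""
    label = label.strip().lower()
    if label not in ROUTES:
        # Fail loudly: a mis-labelled ticket should never vanish silently
        raise ValueError(f"classifier returned unexpected label: {label!r}")
    return ROUTES[label]
```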

Pattern 4 — With Integrations

An agent interacts with an external system (via MCP or Bash tools) and passes the results to the next agent in the flow. Use when your workflow depends on real-time data from outside your project.

text
Input: "Analyze our GitHub PRs from last week"
  │
  ▼
GitHub Fetch Agent (uses MCP github tools)
  │ writes PR data to /work/prs-raw.json
  ▼
Analysis Agent (reads /work/prs-raw.json)
  │ writes summary to /work/analysis.md
  ▼
Report Agent (reads /work/analysis.md)
  │
  ▼
Weekly engineering report

Pattern 5 — Parallel with Consolidation

Multiple agents work on the same input simultaneously. A consolidator agent assembles their outputs into a unified result. Use when independent specialists can work in parallel and the outputs need to be synthesized.

text
Input: article draft to review
  │
  ├──→ Fact Checker    ──→ /work/fact-review.md    ─┐
  ├──→ Style Reviewer  ──→ /work/style-review.md   ─┤→ Editor (consolidator)
  └──→ SEO Analyst     ──→ /work/seo-review.md     ─┘        │
                                                              ▼
                                                    Final structured feedback
💡
Claude Code doesn't run agents truly in parallel within a single session — it runs them sequentially but without blocking context between them. True parallelism requires Agent Teams (Chapter 10). For most workflows, sequential-but-fast is sufficient and easier to debug.
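Whether the reviewers run truly in parallel or merely back-to-back, the fan-out/consolidate shape is the same. This Python sketch models it with threads purely for illustration; in Claude Code the "reviewers" would be subagents and the consolidator another agent.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(draft: str, reviewers: dict) -> dict:
    """Give every reviewer the same input and collect results by name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, draft) for name, fn in reviewers.items()}
        return {name: f.result() for name, f in futures.items()}

def consolidate(reviews: dict) -> str:
    """Assemble the independent reviews into one structured report."""
    return "\n\n".join(f"## {name}\n{body}" for name, body in sorted(reviews.items()))
```

Note that the reviewers never see each other's output — only the consolidator does, which mirrors the parent-only communication constraint.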

9.3 Context Transfer Between Agents

Every subagent starts with an empty context window. It doesn't automatically inherit the conversation history or know what previous agents produced. Passing the right context to each agent is the orchestrator's most important job.

Four mechanisms for context transfer

| Mechanism | How it works | Best for | Tradeoff |
|---|---|---|---|
| Spawn prompt | The orchestrator includes relevant context in the text it uses to invoke the subagent | Short context, key facts, specific instructions | Context size limited by prompt length |
| File-based handoff | Agent A writes output to a file; the orchestrator tells Agent B to read that file | Long outputs, structured data, rich content | Requires write access; adds file I/O |
| Orchestrator summary | The orchestrator distills Agent A's output and passes only the relevant parts to Agent B | When Agent B needs a subset of Agent A's work | Requires orchestrator judgment; can lose detail |
| Context Ledger | A structured session file (context-ledger/) where each step writes its state; later steps read it for full traceability | Multi-session workflows, complex pipelines needing an audit trail | More infrastructure; best for production systems |
🔑
The handoff principle: The only channel from parent to subagent is the spawn prompt string — plus any files that exist on disk. Design your context transfer around this constraint from the start.
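One way to internalize the handoff principle is to treat spawn-prompt construction as a function. This sketch is purely illustrative — `spawn_prompt` is not a Claude Code API — but it makes the constraint concrete: everything the subagent will know must fit in this one string, plus whatever files already exist on disk.

```python
def spawn_prompt(role: str, task: str, read_paths=(), write_path=None) -> str:
    """Build the single string that carries all context to a subagent."""
    lines = [f"You are the {role}.", f"Task: {task}"]
    for p in read_paths:
        lines.append(f"Read {p} for context before starting.")
    if write_path:
        lines.append(f"Write your output to {write_path}.")
    return "\n".join(lines)
```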

Context Ledger for production systems

For complex workflows that span multiple sessions or need full traceability, use a Context Ledger — a structured file at the project root where each step writes its outputs, decisions, and state.

bash
context-ledger/
└── 2026-04-10-14-30-content-pipeline.md   ← one file per session

Each agent appends its results to the ledger. If the session is interrupted, the workflow can resume by reading the ledger file and picking up from the last completed step. This is how production agentic systems maintain cross-session continuity — the ledger is the workflow's memory.

Pair it with a lightweight memory snapshot (memory/) for fast startup: a ~1-2 KB summary that tells the workflow where it left off without reading the full ledger.
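The ledger's append-and-resume mechanics can be sketched in a few lines of Python. The entry format here is an assumption for illustration (the book prescribes the directory convention, not a schema); the point is that any step can be recovered by scanning for the last entry marked done.

```python
from datetime import datetime
from pathlib import Path

def append_step(ledger: Path, step: str, status: str, notes: str = "") -> None:
    """Each completed step appends one auditable entry to the session ledger."""
    stamp = datetime.now().isoformat(timespec="seconds")
    with ledger.open("a") as f:
        f.write(f"## {stamp} · {step} · {status}\n{notes}\n\n")

def last_done_step(ledger: Path):
    """On resume, find the most recent step marked done (or None)."""
    if not ledger.exists():
        return None
    done = [line for line in ledger.read_text().splitlines()
            if line.startswith("## ") and line.endswith("· done")]
    return done[-1].split(" · ")[1] if done else None
```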

File-based handoff in practice

This is the most reliable mechanism for sequential flows. Establish a /work/ directory convention and have each agent write its output there:

markdown
## Orchestration Rules

### File Handoff Convention
- Researcher writes to: /work/research-{topic}.md
- Writer reads from:    /work/research-{topic}.md
- Writer writes to:     /work/draft-{topic}.md
- Editor reads from:    /work/draft-{topic}.md
- Editor writes to:     /work/final-{topic}.md

### Invoking with Context
When spawning the writer, tell it explicitly:
"Read /work/research-{topic}.md for context. The topic is {topic}.
 Write your draft to /work/draft-{topic}.md."
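A common failure mode with this convention is two agents spelling the topic slightly differently and missing each other's files. Deriving every path from one slug function eliminates that class of bug — a small sketch (helper names are illustrative):

```python
import re
from pathlib import Path

def slug(topic: str) -> str:
    """Normalize a topic into a filesystem-safe slug."""
    return re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")

def handoff_paths(topic: str, work: Path = Path("/work")) -> dict:
    """Derive every stage's file path from one slug so agents never disagree."""
    s = slug(topic)
    return {
        "research": work / f"research-{s}.md",
        "draft": work / f"draft-{s}.md",
        "final": work / f"final-{s}.md",
    }
```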

9.4 The Explore → Plan → Execute Pipeline

This is the most reliable orchestration pattern for complex tasks. It forces the system to understand before it acts, and it gives you a review gate before anything consequential happens.

| Phase | Agent type | Tools | Output |
|---|---|---|---|
| Explore | Read-only explorer | Read, Grep, Glob | Map of relevant files, current state, scope |
| Plan | Planner (no writes) | Read | Step-by-step plan presented to user for approval |
| Execute | Implementer | Read, Write, Edit, Bash | Actual changes, only after plan approved |

The human review gate between Plan and Execute is the key safety mechanism. The system understands the full scope before touching anything, and the user approves the plan before any file is modified.

Encoding in CLAUDE.md

markdown
## Agent Delegation Rules

For ANY task that touches more than 3 files or involves refactoring:
1. First invoke @explorer to map the relevant files and summarize current state
2. Use the explorer's output to invoke @planner with specific scope
3. Present the plan to the user: "Here's what I'll do: [plan]. Proceed?"
4. Only after explicit approval, invoke @implementer with the approved plan

For simple, single-file tasks: proceed directly without the pipeline.

The three agents

yaml
<!-- .claude/agents/explorer.md -->
---
name: explorer
description: Maps files, reads code, and produces a concise summary of
  current state and scope. Use as the first phase of any complex task
  before planning or implementing.
model: sonnet
tools: Read, Grep, Glob, Bash
permissionMode: default
---

You are an exploration specialist. Your only job is to understand, never to change.

When invoked with a task description:
1. Identify which files are relevant to this task
2. Read and summarize the current state of those files
3. Identify dependencies, risks, and scope
4. Output a concise map: file list + key observations + recommended approach

Never write files. Never suggest implementation. Only observe and report.
yaml
<!-- .claude/agents/planner.md -->
---
name: planner
description: Designs a step-by-step implementation plan from an exploration
  report. Use after @explorer completes. Produces a human-reviewable plan
  before any code is written.
model: opus
tools: Read
permissionMode: plan
---

You are a planning specialist. Your job is design, never implementation.

Given an exploration report and task description:
1. Design a complete, numbered implementation plan
2. For each step: state what changes, which file, what the change does
3. Flag any risks or irreversible operations
4. Output the plan clearly — it will be shown to a human for approval

Be specific enough that someone else could execute your plan exactly.
Never write code. Never implement. Only plan.

Chapter 10

Agent Teams

⚠️
Experimental feature. Agent Teams are available in Claude Code but are under active development. APIs and behaviors may change. The concepts here are stable; verify specific configuration against code.claude.com/docs before building production systems.

10.1 Subagents vs. Agent Teams

Subagents (Chapters 6–9) and Agent Teams solve different coordination problems. Choosing between them is straightforward:

| | Subagents | Agent Teams |
|---|---|---|
| What they are | Workers that run inside your session and report back | Full separate Claude Code sessions running simultaneously |
| Communication | One-way: parent spawns, subagent reports back | Two-way: teammates can message each other and the lead |
| Context | Receives only what the parent passes in the spawn prompt | Each teammate has a completely independent context window |
| Parallelism | Sequential within a session | True parallelism — multiple sessions run simultaneously |
| Model required | Any model | Opus 4.6 (lead) |
| Cost | One context window | Multiple full context windows running in parallel |
Decision rule: If agents just need to report back to you, use subagents. If agents need to coordinate with each other during execution, use teams.


10.2 How Agent Teams Work

An Agent Team consists of one team lead session and multiple teammate sessions. The lead coordinates; teammates specialize.

text
┌─────────────────────────────────────────────────────────────────┐
│                        Agent Team                               │
│                                                                 │
│   ┌─────────────────────────────────────────────────────────┐   │
│   │  TEAM LEAD (your session)                               │   │
│   │  Opus 4.6 · Receives goal · Delegates · Assembles       │   │
│   └───────┬──────────────┬──────────────┬───────────────────┘   │
│           │              │              │                        │
│    Shared Task List:  pending → assigned → in-progress → done   │
│           │              │              │                        │
│   ┌───────▼────┐  ┌──────▼──────┐  ┌───▼────────────┐         │
│   │ Teammate A │  │ Teammate B  │  │   Teammate C   │         │
│   │ (Frontend) │  │ (Backend)   │  │  (Tests)       │         │
│   │ Own context│  │ Own context │  │  Own context   │         │
│   └────────────┘  └─────────────┘  └────────────────┘         │
│                                                                 │
│   Communication: direct messages · broadcasts · idle alerts     │
└─────────────────────────────────────────────────────────────────┘

Each teammate loads CLAUDE.md independently — their context window starts from scratch. They don't inherit anything from the lead except what's explicitly passed through tasks and messages.

Task lifecycle:

text
Lead creates task (pending)
  │
  ▼
Teammate picks up task (assigned → in-progress)
  │
  ▼
Teammate completes work (completed)
  │
  ▼
Lead receives completion notification
  │
  ▼
Lead assigns next task or assembles results
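The lifecycle above is a small state machine, and modeling it as one makes the legal transitions explicit. This is an illustrative Python sketch of the concept, not Claude Code's internal task representation:

```python
TRANSITIONS = {
    "pending": {"assigned"},
    "assigned": {"in-progress"},
    "in-progress": {"completed"},
    "completed": set(),  # terminal state
}

class Task:
    def __init__(self, description: str):
        self.description = description
        self.state = "pending"

    def advance(self, new_state: str) -> None:
        """Move the task forward; reject any out-of-order transition."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state
```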

10.3 Setting Up Agent Teams

Enable in settings.json

json
{
  "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": true
}

Start a team session

Open the Code tab in Claude Desktop and select your project. Agent Teams mode activates automatically when the setting is enabled. To spawn teammates, tell the lead which roles you need, in natural language:

text
I need to refactor our authentication module. Spawn a backend specialist
for the server-side code and a test specialist to update the test suite.
Work on these in parallel.

The lead will create teammate sessions using your subagent definitions as templates if they exist, or spin up general-purpose sessions if not. Using existing subagent files gives you more control over each teammate's behavior and tool access.

Using existing subagent definitions as templates

If you have .claude/agents/backend-engineer.md and .claude/agents/test-writer.md, mention them explicitly:

text
Spawn a teammate using the backend-engineer agent definition for the
server changes, and a teammate using the test-writer definition for tests.

Monitoring with tmux (CLI)

In the CLI, run each teammate session in a separate tmux pane so you can watch all of them simultaneously:

bash
tmux new-session -d -s lead 'claude'
tmux split-window -h 'claude --teammate'
tmux split-window -v 'claude --teammate'
tmux attach -t lead

10.4 Best Practices

| Practice | Why it matters |
|---|---|
| Plan first, then team | Use Plan Mode to design the complete approach before spawning teammates. A clear plan prevents teammates from working at cross-purposes. |
| Use delegate mode | Tell the lead explicitly: "Do not implement — only coordinate." Leads tend to grab implementation work, which defeats the purpose of the team. |
| Define module boundaries in CLAUDE.md | Each teammate loads CLAUDE.md fresh. Clear module ownership (@backend owns /src/api/, @frontend owns /src/ui/) prevents file conflicts. |
| Start with 2-3 teammates | More teammates = more context windows = more cost and complexity. Start small. Add teammates only when you have a concrete task for each one. |
| One module per teammate | Teammates that touch overlapping files will create merge conflicts. Assign by file ownership, not by function. |

10.5 When NOT to Use Agent Teams

| Situation | Why teams are wrong here | Use instead |
|---|---|---|
| Simple, single-step tasks | Overhead of spawning multiple sessions far exceeds any benefit | Direct execution in main session |
| Agents need to edit the same files | Concurrent writes will create merge conflicts | Sequential subagents with file handoffs |
| Sequential workflow (A must finish before B starts) | No parallelism benefit — you're paying for parallel context windows you don't need | Linear subagent pattern (Chapter 9) |
| Budget-sensitive work | Each teammate is a full Opus context window — costs add up fast | Subagents (cheaper, one context window) |
| Experimental / exploratory tasks | Hard to debug when multiple sessions run simultaneously | Single session with Plan Mode |

Chapter 11

Walkthrough

11.1 The Scenario

We're going to build a content production pipeline from scratch: a system that takes a topic, researches it, writes a draft, reviews it, and produces a polished article ready to publish.

This is a good first system because it has clear stage boundaries, obvious agent responsibilities, and a concrete, evaluable output. The same design principles apply to research workflows, ops pipelines, and any multi-step process with distinct phases.

What we'll build:

text
content-pipeline/
├── CLAUDE.md                           ← orchestration rules + module boundaries
└── .claude/
    ├── commands/
    │   └── wor-content-pipeline.md     ← workflow orchestration (optional)
    ├── agents/
    │   ├── age-spe-researcher.md       ← Phase 1: research and source gathering
    │   ├── age-spe-writer.md           ← Phase 2: draft creation
    │   └── age-sup-reviewer.md         ← Phase 3: quality review (supervisor)
    ├── skills/
    │   └── ski-format-article/
    │       └── SKILL.md                ← Phase 4: final formatting and export
    ├── rules/
    │   └── rul-citation-standards.md   ← detailed citation rules
    └── settings.json                   ← hook: notify on completion

11.2 Step 1 — Decompose into Responsibilities

Before writing a single file, map the workflow to component types using the decision tree from Chapter 4.

| Job to do | Component type | Why |
|---|---|---|
| Search web, gather sources, synthesize research | Specialist (age-spe-) | Single specialized domain, bounded output, narrow tool set |
| Write draft from research brief | Specialist (age-spe-) | Single specialized domain, needs write access, distinct responsibility |
| Review draft for quality and accuracy | Supervisor (age-sup-) | Validates output from specialists — quality gate, not a worker |
| Format and export the final article | Skill (ski-) | Reusable procedure, triggered by name, template-based |
| Word count, tone, citation rules | Rules in CLAUDE.md | Apply to every session, constrain all agents equally |
| Notify when pipeline completes | Hook | Event-driven, fires on system event (Stop), not on command |

11.3 Step 2 — Design the Architecture

The flow is linear with file-based handoffs. Each agent writes to a dedicated directory; the next agent reads from it.

text
User provides topic
        │
        ▼
  @age-spe-researcher ──writes──→ /work/research-{topic}.md
        │
        ▼ (orchestrator passes file path)
  @age-spe-writer ──reads──→ /work/research-{topic}.md
         ──writes──→ /work/draft-{topic}.md
        │
        ▼
  @age-sup-reviewer ──reads──→ /work/draft-{topic}.md
            ──writes──→ /work/review-{topic}.md
        │
        ▼ (if approved)
  /ski-format-article skill ──reads──→ /work/draft-{topic}.md + /work/review-{topic}.md
                        ──writes──→ /output/{topic}.md
        │
        ▼
  Completion hook fires → desktop notification

Model selection decisions:

  • Researcher: Opus — needs deep reasoning to evaluate sources and synthesize accurately
  • Writer: Sonnet — fast enough for drafting, good at following style instructions
  • Reviewer: Sonnet — review requires reading and evaluation, not deep reasoning

11.4 Step 3 — Implement the Files

CLAUDE.md

markdown
---
# Content Production Pipeline

## Purpose
Automated system for producing high-quality articles on any topic.
Research → Draft → Review → Publish.

## Module Boundaries
| Agent | Writes to | Never writes to |
|---|---|---|
| @age-spe-researcher | /work/research-*.md | /work/draft-*, /output/ |
| @age-spe-writer | /work/draft-*.md | /work/research-*, /output/ |
| @age-sup-reviewer | /work/review-*.md | /work/draft-*, /output/ |
| ski-format-article | /output/ | /work/ |

## Orchestration Rules
When asked to produce an article on any topic:
1. Invoke @age-spe-researcher — provide the topic. Ask it to write to /work/research-{topic}.md
2. After completion, invoke @age-spe-writer — tell it to read /work/research-{topic}.md
   and write the draft to /work/draft-{topic}.md
3. After completion, invoke @age-sup-reviewer — tell it to read /work/draft-{topic}.md
   and write its report to /work/review-{topic}.md
4. If reviewer approves: invoke /ski-format-article {topic}
5. If reviewer requests changes: inform the user and ask whether to revise or proceed

## Content Rules
- Articles: 800–1,200 words unless user specifies otherwise
- Tone: professional but conversational — no academic jargon
- Every factual claim must have a cited source (URL)
- Instead of using vague phrases like "many experts say", name the source
- Structure: Introduction → 3-4 main sections → Conclusion
---

.claude/agents/age-spe-researcher.md

yaml
---
name: age-spe-researcher
description: Researches a topic using web search, evaluates sources for
  credibility, and produces a structured research brief. Use as the first
  step in article production or for any information gathering task. Invoke
  with the topic as input.
model: opus
tools: WebSearch, Read, Write
permissionMode: default
memory: project
---

You are a research specialist. Your output fuels the writing stage —
quality here determines quality throughout the pipeline.

## Research Process
1. Search for 5-8 high-quality sources on the topic
2. Prioritize: peer-reviewed content, official sources, established publications
3. Avoid: opinion pieces without data, anonymous sources, content older than 3 years
4. For each source: verify credibility, extract key facts and quotes
5. Synthesize across sources — identify consensus and disagreements

## Output Format
Write your output to the file path provided. Structure:

# Research Brief: [Topic]

## Key Findings
[3-5 bullet points summarizing the most important facts]

## Main Arguments / Perspectives
[2-3 paragraphs covering the main angles on this topic]

## Statistics and Data
| Stat | Value | Source |
|---|---|---|

## Sources
| Title | URL | Credibility | Key Contribution |
|---|---|---|---|

## Recommended Article Angle
[One paragraph suggesting the most compelling angle for the article,
based on what's most interesting/underreported in the research]

## Rules
- Every fact must trace to a specific source — no generalities
- Flag any conflicting information across sources
- If a topic has insufficient quality sources, report this rather than
  using low-quality ones

.claude/agents/age-spe-writer.md

yaml
---
name: age-spe-writer
description: Writes article drafts from research briefs. Invoked after
  the researcher agent completes. Applies the content pipeline style guide
  automatically. Input: path to research brief. Output: draft article.
model: sonnet
tools: Read, Write
permissionMode: acceptEdits
---

You are a professional writer specializing in clear, engaging non-fiction.
You transform research briefs into polished article drafts.

## Writing Process
1. Read the research brief thoroughly before writing anything
2. Identify the most compelling angle from the researcher's recommendation
3. Write a complete draft following the structure below
4. Review your own draft once before saving — fix any rough patches

## Article Structure
**Introduction (150-200 words)**
Open with a specific fact, statistic, or scenario that captures the
reader immediately. State what the article covers. No "In this article..."

**Body sections (3-4 sections, 150-250 words each)**
Each section has a clear, descriptive heading.
Each section makes one main point, supported by evidence.
Transitions between sections should flow naturally.

**Conclusion (100-150 words)**
Synthesize the key insight. End with something actionable or thought-provoking.
Never summarize what was already said — add perspective.

## Style Rules
- Active voice throughout
- Sentences: max 25 words
- Paragraphs: max 4 sentences
- Every factual claim: cite the source inline (Author/Publication, Year)
- Instead of "experts say", name the expert and organization
- Never use: "In conclusion", "It is important to note", "Leverage"

## Output
Save to the file path provided in your instructions.

.claude/agents/age-sup-reviewer.md

yaml
---
name: age-sup-reviewer
description: Reviews article drafts for factual accuracy, style compliance,
  structural quality, and readability. Invoked after writer completes.
  Produces a structured review report with a clear verdict.
model: sonnet
tools: Read, Write
permissionMode: default
---

You are an editorial reviewer. Your job is to catch problems, not rewrite.
The writer will act on your feedback — be specific enough to make that easy.

## Review Checklist

### Factual Accuracy
- [ ] All facts traceable to sources from the research brief
- [ ] No claims made without support
- [ ] Statistics cited with source and year
- [ ] No outdated information

### Structure and Flow
- [ ] Introduction opens with a hook (not a generic statement)
- [ ] Each section makes one clear point
- [ ] Transitions between sections are smooth
- [ ] Conclusion adds perspective (doesn't just summarize)

### Style Compliance
- [ ] Word count: 800-1,200 words
- [ ] Active voice throughout (flag passive instances)
- [ ] Sentences under 25 words (flag violations)
- [ ] Paragraphs under 4 sentences
- [ ] No banned phrases ("In conclusion", "It is important to", "Leverage")

### Readability
- [ ] Technical terms explained on first use
- [ ] Concrete examples for abstract concepts
- [ ] Appropriate for a professional but non-specialist audience

## Output Format
Write your review report to the file path provided:

# Review Report: [Article Title]

## Verdict
**APPROVED** / **REVISE** / **REJECT**
[One sentence justification]

## Issues Found
| Severity | Location | Issue | Suggested Fix |
|---|---|---|---|
| Critical | Para 2 | Claim has no source | Add citation from research brief |
| Minor | Section 3 | Passive voice | Rewrite in active voice |

## Strengths
[2-3 things the article does well — genuine observations, not filler]

## If REVISE: Priority fixes
[Numbered list of exactly what to change, in order of importance]

.claude/skills/ski-format-article/SKILL.md

yaml
---
name: ski-format-article
description: Final formatting and export of a reviewed article. Reads the
  approved draft and review report, applies final polish, and saves to
  /output/. Invoke with /ski-format-article {topic} after reviewer approves.
user-invocable: true
allowed-tools: Read, Write
---

# Format Article Skill

Final step in the content pipeline. Takes approved draft + review notes
and produces the publication-ready version.

## Steps
1. Read the draft: /work/draft-$ARGUMENTS.md
2. Read the review report: /work/review-$ARGUMENTS.md
3. Apply any "Minor" fixes flagged in the review (Critical issues should
   have been addressed before this step)
4. Apply final formatting:
   - Ensure consistent heading hierarchy (H1 → H2 → H3)
   - Add frontmatter with metadata
   - Verify all URLs are complete (https://)
   - Ensure consistent citation format throughout
5. Save to /output/$ARGUMENTS.md

## Output Frontmatter
Every article gets this frontmatter:

---
title: [Article title]
date: [Today's date, ISO 8601]
status: ready-to-publish
word-count: [actual count]
sources: [number of sources cited]
reviewed: true
---

.claude/settings.json

json
{
  "hooks": {
    "Stop": [
      {
        "matcher": ".*",
        "hooks": [
          {
            "type": "command",
            "command": "osascript -e 'display notification \"Content pipeline complete\" with title \"Claude Code\" sound name \"Glass\"'",
            "timeout": 5
          }
        ]
      }
    ]
  }
}

11.5 Step 4 — Run and Invoke

Open Claude Desktop → Code tab. Select the content-pipeline/ folder. Start prompting:

text
Write an article about the impact of remote work on urban real estate markets.

Claude reads CLAUDE.md, sees the orchestration rules, and begins the pipeline. You'll see it delegate to @age-spe-researcher, then @age-spe-writer, then @age-sup-reviewer. Each agent runs in its own context, writes to its designated file, and reports back.

To invoke a specific agent explicitly:

text
@age-spe-researcher research the topic of urban heat islands for our next article

To trigger formatting after a separate review session:

text
/ski-format-article remote-work-urban-real-estate

To check what's been produced:

bash
!ls /work/ && ls /output/

11.6 Step 5 — Test and Iterate

Common issues and fixes

| Issue | Likely cause | Fix |
|---|---|---|
| Agent ignores a rule | Rule is buried in prose; agent misses it | Move it to a clearly labeled ## Rules section with bullet points |
| Writer doesn't use research | File path not passed explicitly in orchestrator prompt | Be explicit: "Read /work/research-{topic}.md before writing" |
| Wrong agent invoked automatically | Description is too vague or overlaps with another agent | Tighten the description — make trigger conditions more specific |
| Output saved to wrong location | Agent writes where it wants, not where instructed | Add the file path to the spawn prompt explicitly, not just in agent instructions |
| Reviewer approves bad drafts | Checklist too vague — "good writing" is subjective | Replace subjective criteria with measurable ones (word count, citation presence) |

Debugging workflow

  1. Check what each agent received. Add a rule to CLAUDE.md: "At the start of each agent task, print: 'Agent [name] starting with context: [first 100 chars of spawn prompt]'"
  2. Use Plan Mode first. Run /plan before triggering the pipeline — Claude will show you exactly how it plans to delegate before doing anything.
  3. Inspect intermediate files. After each stage, check /work/ to verify the output matches expectations before the next agent runs.
  4. Tighten descriptions iteratively. If auto-invocation goes to the wrong agent, refine the description to be more specific about trigger conditions.
💡
The most common problem in first systems is vague descriptions. If Claude hesitates between two agents or picks the wrong one, the fix is almost always the description — not the agent's instructions.