1. Meeting Notes Formatter
After a meeting, you need a system that turns raw, messy notes into a structured summary with decisions, action items, owners, and deadlines. You are going to build it step by step using AiAgentArchitect in Express mode — the simplest path to a working system.
By the end of this exercise you will have three generated entity files ready to use: a Command, a Rule, and a Knowledge Base.
Prerequisites
- AiAgentArchitect installed in your environment. Follow the setup instructions on GitHub.
- Access to Claude Code, Google Antigravity, or OpenAI Codex — any platform that supports AiAgentArchitect.
- The Express template ready to paste. You will find it below, pre-filled for this exercise.
First time? Express mode is designed for single entities with a clear responsibility. It skips the multi-agent orchestration steps and gets you to a result fast. Perfect for learning the pipeline without complexity.
Step-by-step
Invoke AiAgentArchitect
Open a new conversation in your platform and type the command:
/wor-agentic-architect

The system will greet you and ask how you want to proceed.
What you should see
A welcome message with two options:
- A) Complete process — Architect mode (full multi-agent system)
- B) Concrete entity — Express mode (single entity)
Select Express mode
Choose option B to enter Express mode. The system will ask whether you have a pre-filled template or want to describe the entity from scratch.
Tell the system you have a template ready.
Paste the pre-filled template
Copy the template below and paste it into the conversation. It is already filled with the Meeting Notes Formatter data:
# Agentic Architect — Template Express
## What do you want to create?
- [ ] Agent Specialist
- [ ] Agent Supervisor
- [ ] Skill
- [X] Command
- [ ] Rule
## Entity description
**Tentative name:**
format-notes
**What exactly should it do?**
Receive raw meeting notes (pasted by the user) and produce a
structured summary with four sections: Decisions, Action Items
(with owner and deadline), Open Questions, and Next Steps.
**What does it receive as input?**
Unstructured meeting notes pasted by the user after invoking
the command.
**What does it produce as output?**
A formatted summary in Markdown with consistent sections.
The output is displayed directly to the user.
## Known constraints
**Is there anything this entity should never do?**
Never invent action items or attendees that are not in the
original notes. Never change the meaning of decisions.
**Are there any relevant restrictions?**
The output format must always follow the same four-section
structure, regardless of the input quality.
## Additional context
**Related entities:**
A Rule for the output format and a Knowledge Base with the
team roster (names, roles, responsibilities) to assign action
items correctly.

What you should see
The system will acknowledge the template and may ask 2-3 clarifying questions before proceeding. These are normal — the Input Enricher (S0) is structuring your input.
Review the S0 output (Input Structuring)
After answering any clarifying questions, the system presents the structured input — a clean version of what you provided, with normalized names, goals, and a suggested entity list.
Review it and select A) Approve to continue.
What you should see
A structured summary including:
- System name: meeting-notes-formatter
- Primary entity: com-format-notes (Command)
- Supporting entities suggested: a Rule for output format and a Knowledge Base for team data
- Target platforms detected from your environment
Review the S1 output (Process Discovery)
The system runs the Process Discovery stage and presents an AS-IS diagram — a simplified view of how the task currently works (manually) and how the system will improve it.
In Express mode this step is lighter than in Architect mode. Review and select A) Approve.
What you should see
A brief process description showing:
- Current state: user manually formats meeting notes
- Proposed state: Command automates formatting with Rule constraints and KB enrichment
- Gap analysis: manual process is inconsistent, misses owners, takes time
Review the S2 output (Architecture Design)
The Architecture Designer presents the Blueprint — the final list of entities and how they connect. For this system, you should see exactly 3 entities:
- com-format-notes — the Command (user-invoked)
- rul-output-format — the Rule (constrains output structure)
- kno-team-roster — the Knowledge Base (team data)
Review the Blueprint and select A) Approve to proceed to entity generation.
What you should see
A Blueprint showing:
- Entity list with types and names following the naming convention (prefix + kebab-case)
- Relationships: the Rule constrains the Command's output; the Command consults the KB
- No Agents, no Workflow — Express mode confirmed
Review and approve each entity (S3)
The Entity Builder generates each entity one by one. For each one, the system presents the generated .md file and asks for approval. Review and approve each:
- com-format-notes.md — the Command with goals, tasks, input/output, and the Rule/KB references
- rul-output-format.md — the Rule defining the four-section output structure
- kno-team-roster.md — the Knowledge Base with a template for team member data
For each entity, select A) Approve. If something does not match your expectations, use B) Adjust and describe the change.
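As a reference while reviewing kno-team-roster.md, here is a minimal sketch of what the Knowledge Base template might contain. The structure and all names are illustrative assumptions for this guide; the actual file is produced by the Entity Builder.

```markdown
# kno-team-roster

## Team members

| Name | Role | Responsibilities |
|---|---|---|
| Ana García | Backend Lead | APIs, database, deployments |
| Luis Pérez | Frontend Dev | UI components, accessibility |
```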
Verify the generated files
After all entities are approved, the system packages everything into:
exports/meeting-notes-formatter/
├── .agents/
│ ├── workflows/
│ │ └── com-format-notes.md
│ ├── rules/
│ │ └── rul-output-format.md
│ └── knowledge-base/
│ └── kno-team-roster.md
├── .claude/ (auto-synced)
├── process-overview.md
└── CLAUDE.md

Open the files and verify the content matches what you approved in each checkpoint.
Checkpoints
Use these checkpoints to verify you are on track at each stage of the pipeline:
The system displays a clean, normalized version of your input with the system name meeting-notes-formatter, the primary entity com-format-notes, and suggested supporting entities (Rule + KB). If this looks correct, approve it.
A brief AS-IS/TO-BE comparison showing the manual process (user formats notes by hand) versus the automated process (Command + Rule + KB). The gap analysis should highlight inconsistency and missing ownership as the main problems.
The Blueprint lists exactly 3 entities: com-format-notes (Command), rul-output-format (Rule), kno-team-roster (Knowledge Base). No Agents, no Workflow. Relationships: the Rule constrains the Command's output; the Command consults the KB.
Three .md files generated in exports/meeting-notes-formatter/. Each file follows the entity template structure with goals, tasks, input/output, and references. The Command references the Rule and KB correctly.
Reference Solution
Your generated system should produce a structure similar to this. The table below shows how the three entities connect:
| Entity | Type | Purpose |
|---|---|---|
| com-format-notes | Command | User-invoked saved prompt. Receives raw meeting notes, applies the Rule for consistent structure, and consults the KB to match action items to team members. |
| rul-output-format | Rule | Enforces output structure: Decisions, Action Items (with owner and deadline), Open Questions, and Next Steps. Applied automatically. |
| kno-team-roster | Knowledge Base | Team members' names, roles, and responsibilities. The Command consults this to assign action items to the correct people. |
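For intuition, a single run of com-format-notes might produce output like the following. This is a hypothetical example written for this guide, not output generated by the tool:

```markdown
## Decisions
- Ship v2 reporting behind a feature flag

## Action Items
- [ ] Update the rollout doc (Owner: Ana, Deadline: Friday)

## Open Questions
- Does the new data field need a legal review?

## Next Steps
- Demo to stakeholders next Tuesday
```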
Key Takeaways
- Express mode is for systems with a single primary entity and clear responsibility. No Agents, no Workflows — just a Command with supporting entities.
- A Command is a saved prompt: deterministic, user-triggered, same behavior every time. If the task does not require contextual reasoning or adaptation, a Command is the right choice over an Agent.
- Rules and Knowledge Bases enrich a Command without adding complexity. The Rule constrains the output format; the KB provides reference data.
- The checkpoint system (A/B/C/D) lets you validate every stage before moving forward. Nothing is generated without your approval.
- The template pre-fills 60-70% of the interview questions. Without it, the system asks you those same questions one by one — same result, longer path.
Ready for more? In the next exercise you will build a PR Review Assistant — your first system with an Agent that adapts to context instead of a deterministic Command.
2. PR Review Assistant
A developer needs automated code review for every pull request — detecting bugs, style violations, and security issues. Unlike the previous exercise, this system uses an Agent instead of a Command because the reviewer must adapt its behavior to context: different diffs produce different analysis strategies.
By the end of this exercise you will have four entity files: an Agent Specialist, a Skill, a Rule, and a Knowledge Base.
Key concept: A Command always produces the same base behavior. An Agent makes decisions — where to look deeper, how severe an issue is, what to prioritize. If your entity needs contextual reasoning, choose an Agent.
Prerequisites
- Exercise 1 completed — you should be familiar with the pipeline stages (S0→S3) and the checkpoint system.
- AiAgentArchitect installed and working.
- The Express template pre-filled below.
Step-by-step
Invoke AiAgentArchitect
Start a new conversation and type:
/wor-agentic-architect

Select option B) Concrete entity → Express Mode.
Paste the pre-filled template
Copy and paste the template below. Notice this time we select Agent Specialist instead of Command:
# Agentic Architect — Template Express
## What do you want to create?
- [X] Agent Specialist
- [ ] Agent Supervisor
- [ ] Skill
- [ ] Command
- [ ] Rule
## Entity description
**Tentative name:**
code-reviewer
**What exactly should it do?**
Analyze the current git diff for bugs, style violations, and
security issues. Adapt its analysis strategy based on the type
of changes (frontend vs. backend, new code vs. refactor).
Produce a structured report with findings by severity.
**What does it receive as input?**
The current git diff or PR changeset, invoked directly by
the developer.
**What does it produce as output?**
A structured review report with findings categorized by
severity (critical, warning, info). Each finding includes
a specific line reference and an actionable suggestion.
## Known constraints
**Is there anything this entity should never do?**
Never produce cosmetic nitpicks or vague feedback like
"consider refactoring this." Every finding must be actionable
with a specific suggestion.
**Are there any relevant restrictions?**
Findings must follow a consistent structure: severity level,
line reference, description, and suggestion. Must respect
the team's style guide conventions.
## Additional context
**Related entities:**
A Skill for formatting the report output (ski-format-report).
A Rule enforcing review standards (severity levels, no nitpicks).
A Knowledge Base with team coding conventions (kno-style-guide).

What you should see
The system will process the template and may ask 2-3 questions about the Agent's scope — for example, which languages to focus on or how to handle multi-file diffs.
Review S0 — Input Structuring
The Input Enricher presents a structured version of your input. Verify:
- Primary entity: age-spe-code-reviewer (Agent Specialist)
- Supporting entities: ski-format-report, rul-review-standards, kno-style-guide
Select A) Approve.
Review S1 — Process Discovery
The AS-IS analysis shows the current manual code review process and how the Agent improves it. In Express mode this is brief.
Select A) Approve.
What you should see
A comparison showing:
- Manual: developer reads diff line by line, checks mentally against style guide, writes comments
- Automated: Agent analyzes diff contextually, uses the Skill for report formatting, consults KB for team standards
Review S2 — Architecture Design
The Blueprint should show exactly 4 entities:
- age-spe-code-reviewer — the Agent (adapts to context, makes decisions)
- ski-format-report — the Skill (reusable formatting, no decisions)
- rul-review-standards — the Rule (constrains all output)
- kno-style-guide — the Knowledge Base (team conventions)
Notice the Rule acts as an outer constraint — it applies to everything the Agent produces, not just one output. Select A) Approve.
Review and approve each entity (S3)
The Entity Builder generates each entity one by one. Review and approve each:
- age-spe-code-reviewer.md — verify it has goals, tasks, and references to the Skill and KB
- ski-format-report.md — verify it is a reusable procedure with no own identity
- rul-review-standards.md — verify it defines severity levels and prohibits nitpicks
- kno-style-guide.md — verify it has a template for coding conventions
Verify the generated files
Check the output structure:
exports/pr-review-assistant/
├── .agents/
│ ├── workflows/
│ │ └── age-spe-code-reviewer.md
│ ├── skills/
│ │ └── ski-format-report.md
│ ├── rules/
│ │ └── rul-review-standards.md
│ └── knowledge-base/
│ └── kno-style-guide.md
├── .claude/
├── process-overview.md
└── CLAUDE.md

Checkpoints
System name: pr-review-assistant. Primary entity: age-spe-code-reviewer (Agent Specialist). Three supporting entities identified: Skill, Rule, Knowledge Base.
AS-IS shows manual review process. TO-BE shows Agent-driven automated review with contextual adaptation. Gap: inconsistent review quality, missed security issues, time-consuming.
Blueprint with 4 entities. The Rule wraps everything as an outer constraint. The Agent uses the Skill and consults the KB. No Workflow needed — single Agent, no coordination.
Four .md files in exports/pr-review-assistant/. The Agent references the Skill by name. The Rule defines severity levels (critical/warning/info). The KB has a template for team conventions.
Reference Solution
| Entity | Type | Purpose |
|---|---|---|
| age-spe-code-reviewer | Agent | Specialized worker with one domain: code review. Has its own identity and makes decisions during execution (what to focus on, severity assessment). Invoked directly by the user. |
| ski-format-report | Skill | Reusable formatting procedure — no own identity, no decisions. Structures the Agent's raw findings into a clean report with sections by severity. |
| rul-review-standards | Rule | Constrains every review: findings must have a severity (critical / warning / info), a specific line reference, and an actionable suggestion. Prohibits cosmetic nitpicks. |
| kno-style-guide | Knowledge Base | Team coding conventions: naming patterns, error handling, test expectations, architectural boundaries. The Agent consults this to calibrate findings against team standards. |
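The constraints rul-review-standards expresses can also be read as a data contract. The sketch below, in Python, is illustrative only: the names (`Finding`, `validate`) and the validation details are assumptions for this guide, not part of the generated Rule.

```python
from dataclasses import dataclass

SEVERITIES = {"critical", "warning", "info"}

@dataclass
class Finding:
    severity: str      # must be one of SEVERITIES
    line_ref: str      # specific reference, e.g. "src/auth.py:42"
    description: str
    suggestion: str    # actionable fix, never a vague nitpick

def validate(finding: Finding) -> bool:
    """Check a finding against the review-standards Rule (illustrative)."""
    return (
        finding.severity in SEVERITIES
        and ":" in finding.line_ref            # requires a file:line reference
        and bool(finding.suggestion.strip())   # requires an actionable suggestion
    )
```

A finding like `Finding("critical", "src/auth.py:42", "SQL built via string concatenation", "Use parameterized queries")` passes; an unknown severity, a missing line reference, or an empty suggestion fails.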
Key Takeaways
- Agent vs. Command: A Command always produces the same base behavior. An Agent adapts to context — different inputs trigger different analysis strategies. Choose Agent when the task requires reasoning.
- Skills are reusable tools: ski-format-report has no identity — it is a procedure any Agent can use. This same Skill could be reused in a completely different system.
- Rules as outer constraints: The Rule wraps the entire system. It does not execute — it constrains. Every piece of output must comply, regardless of which Agent produces it.
- Still no Workflow: A single Agent does not need orchestration. You only need a Workflow when two or more Agents must coordinate.
Next up: In Exercise 3 you will build your first Architect mode system — a Workflow coordinating two Agents with conditional routing. This is where multi-agent orchestration begins.
3. Customer Support Triage
A support team receives tickets and needs to classify them by urgency, route them to the right department, and draft responses for urgent cases automatically. This is your first Architect mode project — because two Agents with different responsibilities need a Workflow to coordinate them.
By the end you will have six entity files: a Workflow, two Agent Specialists, a Skill, a Rule, and a Knowledge Base.
Mode switch: This exercise uses Architect mode, not Express. The pipeline is the same (S0→S3), but the interview in S1 is deeper — the system will ask about the full process flow, decision points, and dependencies between agents.
Prerequisites
- Exercises 1-2 completed — you should understand Commands, Agents, Skills, Rules, and Knowledge Bases.
- AiAgentArchitect installed and working.
- The Architect template pre-filled below (different from the Express template used in exercises 1-2).
Step-by-step
Invoke AiAgentArchitect in Architect mode
Start a new conversation and type:
/wor-agentic-architect

This time select option A) Complete process → Architect Mode.
What changes in Architect mode?
Architect mode activates the full BPM/BPA interview in S1 — the system will deeply analyze your process flow, identify decision points, and detect gaps before designing the architecture. Express mode skips most of this.
Paste the Architect template
Copy and paste the pre-filled template. Notice this template has 6 sections — more detailed than the Express template:
# Agentic Architect — Template Architect
## 1. What you want to agentize
**Name or title of the process:**
Customer Support Triage
**Describe the process in 2-4 sentences:**
Classify incoming support tickets by urgency and department,
then draft responses for urgent cases. The system should
reduce response time for critical tickets while ensuring
consistent quality across all responses.
**How is this done today, without the system?**
A support agent reads each ticket, manually assesses urgency
based on keywords and tone, checks an internal FAQ for known
issues, and writes a response. Critical tickets sometimes
wait in the general queue.
**What happens if the system doesn't exist or fails?**
Critical tickets get delayed. Response quality varies by
agent. SLA violations for urgent cases.
## 2. Process flow
**How does the process start?**
The user invokes the workflow with incoming ticket data
(subject, body, sender info).
**Main steps you have already identified:**
1. Classify ticket by urgency (critical/high/normal/low)
and department (billing/technical/account/general)
2. For urgent tickets, draft an empathetic response using
known solutions from the FAQ
3. Return classification + draft (if applicable)
**How does the process end?**
Returns a structured classification with urgency, department,
and — for urgent tickets — a draft response ready for review.
**Are there decisions or branches in the flow?**
Yes: after classification, only urgent tickets trigger the
draft-responder. Normal/low tickets only get classified.
**Are there repeating steps?**
No.
## 3. Technical context
**Does the process interact with external systems?**
No external APIs. Only internal Knowledge Base for product FAQ.
**Are there points where a human must review?**
The drafted response is always reviewed by a human before sending.
**Are there irreversible actions?**
No. The system classifies and drafts but never sends.
## 4. Existing skills and entities
**Reusable skills:**
A sentiment-check skill (ski-sentiment-check) that both agents
can use to analyze emotional tone and detect frustration.
## 5. Known constraints
**Is there anything the system should never do?**
Never send a response automatically. Never downgrade a ticket
that shows frustration signals.
**Are there relevant restrictions?**
SLA compliance: critical tickets must be responded to within
1 hour, standard within 24 hours.
## 6. Expected result
**What does success look like?**
All tickets classified correctly by urgency and department.
Urgent tickets have a draft response within minutes. Zero
SLA violations for critical cases.

Complete the S1 interview (Process Discovery)
Unlike Express mode, the system will now run a structured BPM interview. It asks deeper questions about:
- Edge cases — what happens when a ticket is ambiguous?
- Decision criteria — how exactly is urgency determined?
- Data flow — what information passes from the classifier to the responder?
Answer the questions thoughtfully. The system will then present an AS-IS diagram showing the current manual process and a gap analysis.
Review and select A) Approve.
What you should see
An AS-IS diagram showing the manual triage flow (read → assess → check FAQ → respond). The gap analysis highlights: inconsistent urgency assessment, missed frustration signals, slow routing for critical tickets.
Review S2 — Architecture Design (Blueprint)
This is the most important step in Architect mode. The system presents the Blueprint — the complete architecture with all entities and their relationships.
You should see 6 entities:
- wor-support-triage — Workflow that coordinates the two agents
- age-spe-classifier — Agent that classifies urgency and department
- age-spe-draft-responder — Agent that drafts responses (only for urgent tickets)
- ski-sentiment-check — Skill shared by both agents
- rul-sla-compliance — Rule enforcing SLA timings
- kno-product-faq — Knowledge Base with known solutions
Pay attention to the conditional routing: the Workflow runs the classifier first, then only invokes the draft-responder if the ticket is urgent.
Select A) Approve.
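The routing condition above can be pictured as ordinary control flow. A minimal Python sketch, where `classify` and `draft_response` stand in for the two Agents (all names here are illustrative, not the generated Workflow file):

```python
def triage(ticket: dict, classify, draft_response) -> dict:
    """Sketch of wor-support-triage: classify always, draft only when urgent."""
    result = classify(ticket)                      # age-spe-classifier runs first
    if result["urgency"] in ("critical", "high"):  # the routing condition
        # age-spe-draft-responder runs only for urgent tickets
        result["draft"] = draft_response(ticket, result)
    return result
```

Normal and low tickets come back with a classification only; urgent tickets also carry a draft for human review.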
Review and approve each entity (S3)
The Entity Builder generates all 6 entities one by one. Key things to verify:
- wor-support-triage.md — check it defines the agent sequence and the "if urgent" routing condition
- age-spe-classifier.md — check it uses the sentiment-check Skill
- age-spe-draft-responder.md — check it consults the FAQ and runs sentiment validation on its own output
- ski-sentiment-check.md — check it is generic (no reference to a specific agent)
- rul-sla-compliance.md — check it defines concrete time thresholds
- kno-product-faq.md — check it has a template structure for FAQ entries
Verify the generated files
exports/customer-support-triage/
├── .agents/
│ ├── workflows/
│ │ ├── wor-support-triage.md
│ │ ├── age-spe-classifier.md
│ │ └── age-spe-draft-responder.md
│ ├── skills/
│ │ └── ski-sentiment-check.md
│ ├── rules/
│ │ └── rul-sla-compliance.md
│ └── knowledge-base/
│ └── kno-product-faq.md
├── .claude/
├── process-overview.md
└── CLAUDE.md

Checkpoints
System name: customer-support-triage. Process type: multi-agent with conditional routing. Six entities identified with their types and relationships.
Full AS-IS diagram of the manual triage process. TO-BE shows a Workflow coordinating two Agents. Gap analysis identifies: inconsistent urgency assessment, delayed critical ticket routing, and quality variance.
Blueprint with 6 entities. The Workflow has two branches: classifier (always) → draft-responder (conditional on urgency). Both agents share the sentiment-check Skill. The SLA Rule constrains timing.
Six .md files generated. The Workflow defines the execution sequence and the routing condition. The classifier output feeds into the draft-responder input. The Skill is referenced by both agents without duplication.
Reference Solution
| Entity | Type | Purpose |
|---|---|---|
| wor-support-triage | Workflow | Coordinates execution: launches classifier first, reads its output to determine urgency, then conditionally launches draft-responder for urgent tickets. |
| age-spe-classifier | Agent | Determines urgency (critical, high, normal, low) and department. Uses sentiment-check Skill to detect frustrated language. |
| age-spe-draft-responder | Agent | Only invoked for urgent tickets. Consults FAQ to draft accurate responses. Validates tone with sentiment-check Skill. |
| ski-sentiment-check | Skill | Reusable by both Agents. Analyzes text for emotional tone, frustration signals, and escalation risk. |
| rul-sla-compliance | Rule | Enforces response time thresholds: critical within 1 hour, standard within 24 hours. |
| kno-product-faq | Knowledge Base | Common questions, known issues, and resolution steps. Draft-responder consults this for accurate responses. |
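To make the shared Skill concrete, here is a deliberately naive Python sketch of what ski-sentiment-check might do. The keyword list and threshold are assumptions for illustration; the generated Skill defines its own procedure.

```python
# Illustrative only: signals and scoring are invented for this guide.
FRUSTRATION_SIGNALS = ("unacceptable", "still broken", "third time", "refund", "!!!")

def sentiment_check(text: str) -> dict:
    """Flag frustration signals and estimate escalation risk in ticket text."""
    lowered = text.lower()
    hits = [s for s in FRUSTRATION_SIGNALS if s in lowered]
    return {
        "frustrated": bool(hits),
        "signals": hits,
        "escalation_risk": "high" if len(hits) >= 2 else "low",
    }
```

Both Agents could call this: the classifier to avoid downgrading frustrated tickets, the draft-responder to validate the tone of its own output.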
Key Takeaways
- Architect mode enables Workflows: When two or more Agents need coordination, you must use Architect mode to design the Workflow that orchestrates them.
- Workflows coordinate, they don't execute: wor-support-triage decides the sequence and makes routing decisions, but the actual work is done by the Agents.
- An Agent cannot invoke another Agent — that is an anti-pattern. Only a Workflow can coordinate multiple Agents.
- Shared Skills avoid duplication: ski-sentiment-check is used by both Agents without being defined twice. Skills are the reuse mechanism.
- The BPM interview matters: In Architect mode, S1 asks deeper questions that help the system design better conditional routing and data flow between agents.
Final challenge: In Exercise 4 you will build the most complex system — a Sprint Planning Pipeline with 9 entities, 3 Agents, Hooks for automation, and every entity type in the system.
4. Sprint Planning Pipeline
At the end of a sprint, the PM or Engineering Lead needs to run a complete cycle: retrospective analysis, velocity estimation, and next sprint planning. This system uses every entity type in AiAgentArchitect: a Workflow orchestrates three Agents in sequence, two Skills provide reusable capabilities, a Rule constrains capacity, a Knowledge Base informs all agents, a Command handles publishing, and a Hook enables automatic triggering.
This is the most complex system in the learning path. By the end you will have 9 entity files working together as a production-grade pipeline.
Full complexity: This exercise introduces Hooks (event-driven triggers), Commands coexisting with Workflows, and sequential agent dependencies where each Agent's output feeds the next. Take your time reviewing the Blueprint.
Prerequisites
- Exercises 1-3 completed — you should be comfortable with Express and Architect modes, Workflows, and multi-agent coordination.
- AiAgentArchitect installed and working.
- The Architect template pre-filled below.
Step-by-step
Invoke AiAgentArchitect in Architect mode
/wor-agentic-architect

Select option A) Complete process → Architect Mode.
Paste the Architect template
# Agentic Architect — Template Architect
## 1. What you want to agentize
**Name or title of the process:**
Sprint Planning Pipeline
**Describe the process in 2-4 sentences:**
End-of-sprint automation: analyze what happened (retrospective),
calculate velocity and capacity for the next sprint, then plan
the next sprint by prioritizing and assigning backlog items.
The pipeline can be triggered manually or automatically when
the sprint closes.
**How is this done today, without the system?**
The PM manually reviews completed tickets, gathers feedback,
calculates velocity in a spreadsheet, reviews the backlog,
scores items, assesses risk, and drafts the sprint plan.
Takes 4-6 hours spread across multiple meetings.
**What happens if the system doesn't exist or fails?**
Sprint planning takes too long, velocity estimates are
inconsistent, high-risk items slip through undetected, and
capacity is over-committed.
## 2. Process flow
**How does the process start?**
Either: (a) the user invokes the workflow manually, or
(b) a Hook triggers it automatically when the sprint is
marked as closed.
**Main steps you have already identified:**
1. Retro analysis: review completed tickets, identify what
went well, what didn't, and recurring blockers
2. Velocity calculation: compute rolling velocity across
last 3 sprints, adjust for availability
3. Sprint planning: prioritize backlog items using scoring
and risk assessment, draft the plan within capacity
**How does the process end?**
Produces a complete sprint plan document. The user can then
invoke a separate publish command to export it.
**Are there decisions or branches in the flow?**
Sequential: each agent depends on the previous one's output.
No conditional branches, but the planner adjusts based on
velocity and risk data.
**Are there repeating steps?**
No.
## 3. Technical context
**Does the process interact with external systems?**
No external APIs. Internal Knowledge Base with team data,
OKRs, and availability.
**Are there points where a human must review?**
The final sprint plan should be reviewed before publishing.
**Are there irreversible actions?**
No. Publishing is a separate manual command.
## 4. Existing skills and entities
**Reusable skills:**
- A backlog-scorer skill (ski-backlog-scorer) for prioritizing
items by business value, effort, and OKR alignment
- A risk-assessment skill (ski-risk-assessment) for evaluating
technical, dependency, and knowledge-gap risks
## 5. Known constraints
**Is there anything the system should never do?**
Never commit a sprint plan that exceeds team capacity. Never
include more than 2 high-risk items per sprint.
**Are there relevant restrictions?**
Capacity constraints are hard limits. Risk limits are hard
limits. Both must be enforced by a Rule.
## 6. Expected result
**What does success look like?**
A complete sprint plan that respects velocity, capacity, and
risk constraints. Ready for review and publishing. Generated
in minutes instead of hours.

Complete the S1 interview (Process Discovery)
The BPM interview will be the deepest yet. The system will ask about:
- Agent dependencies — how does the retro output feed into velocity calculation?
- Dual invocation — how do manual and Hook-triggered invocations differ?
- The publish command — why is it separate from the Workflow?
- Skill sharing — which agents use which skills?
After the interview, review the AS-IS diagram and select A) Approve.
What you should see
A detailed AS-IS showing the manual 4-6 hour process: gather feedback → calculate velocity in spreadsheet → review backlog → score items → assess risk → draft plan. The gap analysis should highlight: time-consuming, inconsistent velocity tracking, risk items slipping through.
Review S2 — Architecture Design (Blueprint)
The Blueprint should show 9 entities across every type:
| Entity | Type | Role |
|---|---|---|
| wor-sprint-pipeline | Workflow | Coordinates 3 agents in sequence |
| age-spe-retro-analyst | Agent | 1st: retrospective analysis |
| age-spe-velocity-calculator | Agent | 2nd: velocity + capacity |
| age-spe-sprint-planner | Agent | 3rd: plan the next sprint |
| ski-backlog-scorer | Skill | Scores items by value/effort |
| ski-risk-assessment | Skill | Evaluates risk factors |
| rul-capacity-constraints | Rule | Hard limits on capacity + risk |
| kno-team-okrs | KB | Team data, OKRs, availability |
| hok-sprint-close | Hook | Auto-triggers on sprint close |
Pay special attention to:
- Sequential dependencies: retro → velocity → planner (each feeds the next)
- Dual invocation: the User and the Hook both trigger the same Workflow
- The publish Command is separate — it is user-invoked after reviewing the plan
Select A) Approve.
Where is the publish Command?
The system may generate com-publish-plan as an additional entity (making 10 total) or leave it for a future iteration. Either is valid. The key point: the Command is independent of the Workflow — it is a separate user action.
Review and approve each entity (S3)
With 9 entities, this is the longest approval phase. Key things to verify for each:
- wor-sprint-pipeline.md — defines 3-agent sequence with output transfer between them
- age-spe-retro-analyst.md — consults KB for team responsibilities and OKR alignment
- age-spe-velocity-calculator.md — uses retro output, consults KB for headcount/availability
- age-spe-sprint-planner.md — uses velocity output, invokes both Skills, consults KB for OKRs
- ski-backlog-scorer.md — generic scoring procedure, no identity
- ski-risk-assessment.md — generic risk evaluation, no identity
- rul-capacity-constraints.md — two hard limits: total points ≤ capacity, max 2 high-risk items
- kno-team-okrs.md — template for team members, roles, availability, and current OKRs
- hok-sprint-close.md — event trigger definition: fires on sprint close, invokes the Workflow
Use B) Adjust if any entity does not reference the correct dependencies.
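The two hard limits in rul-capacity-constraints reduce to a simple check. A Python sketch, assuming each backlog item carries a point estimate and a risk level (the field and function names are invented for this guide):

```python
def plan_is_valid(items: list[dict], capacity: int, max_high_risk: int = 2) -> bool:
    """Enforce the Rule's two hard limits on a draft sprint plan."""
    total_points = sum(i["points"] for i in items)
    high_risk = sum(1 for i in items if i["risk"] == "high")
    # Hard limit 1: total points must not exceed capacity.
    # Hard limit 2: at most 2 high-risk items per sprint.
    return total_points <= capacity and high_risk <= max_high_risk
```

Because these are hard limits, the planner would reject any draft where this check fails rather than merely warning about it.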
Verify the generated files
exports/sprint-planning-pipeline/
├── .agents/
│ ├── workflows/
│ │ ├── wor-sprint-pipeline.md
│ │ ├── age-spe-retro-analyst.md
│ │ ├── age-spe-velocity-calculator.md
│ │ └── age-spe-sprint-planner.md
│ ├── skills/
│ │ ├── ski-backlog-scorer.md
│ │ └── ski-risk-assessment.md
│ ├── rules/
│ │ └── rul-capacity-constraints.md
│ ├── knowledge-base/
│ │ └── kno-team-okrs.md
│ └── hooks/
│ └── hok-sprint-close.md
├── .claude/
├── process-overview.md
└── CLAUDE.md

Checkpoints
System name: sprint-planning-pipeline. Process type: multi-agent sequential pipeline with event-driven trigger. Nine entities identified across all entity types.
Detailed AS-IS of the 4-6 hour manual process. TO-BE shows three Agents in sequence with dual invocation (manual + Hook). Gap analysis highlights time, inconsistency, and risk management failures.
Blueprint with 9 entities. Sequential dependencies: retro → velocity → planner. The planner uses 2 Skills. All agents consult the KB. The Hook provides automatic triggering. The Rule enforces capacity and risk hard limits.
Nine .md files generated. The Workflow defines the 3-agent sequence with output transfer. The Hook references the Workflow by name. Both Skills are generic and reusable. The Rule defines two concrete hard limits.
Reference Solution
| Entity | Type | Purpose |
|---|---|---|
| wor-sprint-pipeline | Workflow | Coordinates three Agents in sequence: retro → velocity → planner. Transfers outputs between them. |
| age-spe-retro-analyst | Agent | Reviews completed tickets, identifies what went well, what didn't, and recurring blockers. Consults KB for OKR alignment. |
| age-spe-velocity-calculator | Agent | Computes rolling velocity and availability-adjusted capacity. Consults KB for headcount and time-off. |
| age-spe-sprint-planner | Agent | Uses velocity estimate and backlog. Invokes backlog-scorer and risk-assessment Skills. Drafts sprint plan within constraints. |
| ski-backlog-scorer | Skill | Scores items by business value, effort, and OKR alignment. Returns a ranked list. |
| ski-risk-assessment | Skill | Evaluates technical risk, dependency risk, and knowledge-gap risk. Flags items needing mitigation. |
| rul-capacity-constraints | Rule | Two hard limits: total points ≤ velocity-adjusted capacity. Max 2 high-risk items per sprint. |
| kno-team-okrs | Knowledge Base | Team members, roles, availability, and current OKRs. Consulted by all three Agents. |
| hok-sprint-close | Hook | Event-driven trigger that fires on sprint close. Invokes the Workflow without user intervention. |
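The arithmetic behind age-spe-velocity-calculator can be approximated in a few lines. A Python sketch under stated assumptions: a 3-sprint rolling average scaled by availability, with function and parameter names invented for this guide (the generated Agent defines its own method).

```python
def next_capacity(recent_velocities: list[int], availability: float) -> int:
    """Rolling velocity over the last 3 sprints, scaled by availability (0.0-1.0)."""
    window = recent_velocities[-3:]          # last 3 sprints only
    rolling = sum(window) / len(window)      # rolling average velocity
    return int(rolling * availability)       # adjust for time off, holidays, etc.
```

For example, velocities of 30, 34, and 32 points with 75% availability yield a capacity of 24 points, which rul-capacity-constraints then treats as the hard upper bound.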
Key Takeaways
- Sequential dependencies: Each Agent uses the previous one's output. The Workflow manages this transfer — Agents never communicate directly.
- Hooks enable automation: hok-sprint-close triggers the Workflow automatically. The user can still invoke it manually — both paths lead to the same pipeline.
- Commands are separate actions: The publish command is independent of the Workflow. It is a user action performed after review, not part of the automated pipeline.
- Skills are the reuse mechanism: Both Skills are generic — they could be used by any planning Agent in any system.
- Rules as hard limits: The capacity Rule is not a guideline — it is an enforced constraint. The planner cannot produce a plan that violates it.
- Every entity type has a purpose: Workflow coordinates, Agents execute, Skills provide reusable capabilities, Rules constrain, Knowledge Bases inform, Hooks automate, Commands export.
Congratulations! You have built four systems covering every entity type and both modes of AiAgentArchitect. You now have the mental model and hands-on experience to design and build your own agentic systems from scratch.