<core_principle> The orchestrator's job is coordination, not execution. Each subagent loads the full execute-plan context itself. Orchestrator discovers plans, analyzes dependencies, groups into waves, spawns agents, handles checkpoints, collects results. </core_principle>
<required_reading> Read STATE.md before any operation to load project context. </required_reading>
Before any operation, read project state:

```bash
cat .planning/STATE.md 2>/dev/null
```
If the file exists, parse and internalize:
- Current position (phase, plan, status)
- Accumulated decisions (constraints on this execution)
- Blockers/concerns (things to watch for)
If file missing but .planning/ exists:
STATE.md missing but planning artifacts exist.
Options:
1. Reconstruct from existing artifacts
2. Continue without project state (may lose accumulated context)
If .planning/ doesn't exist: Error - project not initialized.
Confirm the phase exists and has plans:

```bash
# Match both zero-padded (05-*) and unpadded (5-*) folders
PADDED_PHASE=$(printf "%02d" ${PHASE_ARG} 2>/dev/null || echo "${PHASE_ARG}")
PHASE_DIR=$(ls -d .planning/phases/${PADDED_PHASE}-* .planning/phases/${PHASE_ARG}-* 2>/dev/null | head -1)
if [ -z "$PHASE_DIR" ]; then
  echo "ERROR: No phase directory matching '${PHASE_ARG}'"
  exit 1
fi

PLAN_COUNT=$(ls -1 "$PHASE_DIR"/*-PLAN.md 2>/dev/null | wc -l | tr -d ' ')
if [ "$PLAN_COUNT" -eq 0 ]; then
  echo "ERROR: No plans found in $PHASE_DIR"
  exit 1
fi
```
Report: "Found {N} plans in {phase_dir}"
List all plans and extract metadata:

```bash
# Get all plans
ls -1 "$PHASE_DIR"/*-PLAN.md 2>/dev/null | sort

# Get completed plans (have SUMMARY.md)
ls -1 "$PHASE_DIR"/*-SUMMARY.md 2>/dev/null | sort
```
For each plan, read frontmatter to extract:
- `wave: N`: execution wave (pre-computed)
- `autonomous: true/false`: whether the plan has checkpoints
- `gap_closure: true/false`: whether the plan closes gaps from verification/UAT
Build plan inventory:
- Plan path
- Plan ID (e.g., "03-01")
- Wave number
- Autonomous flag
- Gap closure flag
- Completion status (SUMMARY exists = complete)
Filtering:
- Skip completed plans (those with a SUMMARY.md)
- If the `--gaps-only` flag is set: also skip plans where `gap_closure` is not `true`

If all plans are filtered out, report "No matching incomplete plans" and exit.
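The filtering rules above can be sketched in shell. This assumes each plan's summary sits beside it with matching prefix (`NN-NN-PLAN.md` / `NN-NN-SUMMARY.md`); the helper name `incomplete_plans` is illustrative, not part of the workflow:

```bash
# Sketch: emit plans that still need execution.
# Assumption: a plan's summary is named <prefix>-SUMMARY.md next to <prefix>-PLAN.md.
incomplete_plans() {
  local phase_dir="$1" gaps_only="${2:-false}"
  local plan summary
  for plan in "$phase_dir"/*-PLAN.md; do
    [ -e "$plan" ] || continue
    summary="${plan%-PLAN.md}-SUMMARY.md"
    # Completed plans have a SUMMARY.md: skip them
    [ -f "$summary" ] && continue
    if [ "$gaps_only" = "true" ]; then
      # --gaps-only: keep only plans whose frontmatter marks gap closure
      grep -q '^gap_closure: true' "$plan" || continue
    fi
    echo "$plan"
  done
}
```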
Read `wave` from each plan's frontmatter and group by wave number:

```bash
# For each plan, extract wave and autonomous flags from frontmatter
for plan in "$PHASE_DIR"/*-PLAN.md; do
  wave=$(grep "^wave:" "$plan" | cut -d: -f2 | tr -d ' ')
  autonomous=$(grep "^autonomous:" "$plan" | cut -d: -f2 | tr -d ' ')
  echo "$plan:$wave:$autonomous"
done
```
Group plans:

```
waves = {
  1: [plan-01, plan-02],
  2: [plan-03, plan-04],
  3: [plan-05]
}
```
No dependency analysis needed. Wave numbers are pre-computed during /gsd:plan-phase.
Report wave structure with context:
## Execution Plan
**Phase {X}: {Name}** — {total_plans} plans across {wave_count} waves
| Wave | Plans | What it builds |
|------|-------|----------------|
| 1 | 01-01, 01-02 | {from plan objectives} |
| 2 | 01-03 | {from plan objectives} |
| 3 | 01-04 [checkpoint] | {from plan objectives} |
The "What it builds" column comes from skimming plan names/objectives. Keep it brief (3-8 words).
Execute each wave in sequence. Autonomous plans within a wave run in parallel.

For each wave:

1. **Describe what's being built (BEFORE spawning):**

   Read each plan's `<objective>` section. Extract what's being built and why it matters.

   Output:

   ```
   ---
   ## Wave {N}

   **{Plan ID}: {Plan Name}**
   {2-3 sentences: what this builds, key technical approach, why it matters in context}

   **{Plan ID}: {Plan Name}** (if parallel)
   {same format}

   Spawning {count} agent(s)...
   ---
   ```

   Examples:
   - Bad: "Executing terrain generation plan"
   - Good: "Procedural terrain generator using Perlin noise — creates height maps, biome zones, and collision meshes. Required before vehicle physics can interact with ground."

2. **Spawn all autonomous agents in the wave simultaneously:**

   Use the Task tool with multiple parallel calls. Each agent gets its prompt from the subagent-task-prompt template:

   ```
   <objective>
   Execute plan {plan_number} of phase {phase_number}-{phase_name}.
   Commit each task atomically. Create SUMMARY.md. Update STATE.md.
   </objective>

   <execution_context>
   @/home/payload/payload-cms/.claude/get-shit-done/workflows/execute-plan.md
   @/home/payload/payload-cms/.claude/get-shit-done/templates/summary.md
   @/home/payload/payload-cms/.claude/get-shit-done/references/checkpoints.md
   @/home/payload/payload-cms/.claude/get-shit-done/references/tdd.md
   </execution_context>

   <context>
   Plan: @{plan_path}
   Project state: @.planning/STATE.md
   Config: @.planning/config.json (if exists)
   </context>

   <success_criteria>
   - [ ] All tasks executed
   - [ ] Each task committed individually
   - [ ] SUMMARY.md created in plan directory
   - [ ] STATE.md updated with position and decisions
   </success_criteria>
   ```

3. **Wait for all agents in the wave to complete:**

   The Task tool blocks until each agent finishes. All parallel agents return together.

4. **Report completion and what was built:**

   For each completed agent:
   - Verify SUMMARY.md exists at the expected path
   - Read SUMMARY.md to extract what was built
   - Note any issues or deviations

   Output:

   ```
   ---
   ## Wave {N} Complete

   **{Plan ID}: {Plan Name}**
   {What was built — from SUMMARY.md deliverables}
   {Notable deviations or discoveries, if any}

   **{Plan ID}: {Plan Name}** (if parallel)
   {same format}

   {If more waves: brief note on what this enables for next wave}
   ---
   ```

   Examples:
   - Bad: "Wave 2 complete. Proceeding to Wave 3."
   - Good: "Terrain system complete — 3 biome types, height-based texturing, physics collision meshes. Vehicle physics (Wave 3) can now reference ground surfaces."

5. **Handle failures:**

   If any agent in the wave fails:
   - Report which plan failed and why
   - Ask the user: "Continue with remaining waves?" or "Stop execution?"
   - If continue: proceed to the next wave (dependent plans may also fail)
   - If stop: exit with a partial completion report

6. **Execute checkpoint plans between waves:**

   See `<checkpoint_handling>` for details.

7. **Proceed to the next wave**
Detection: Check autonomous field in frontmatter.
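That detection can be as simple as a frontmatter grep, matching the field the extraction loop already reads (`has_checkpoints` is an illustrative helper name, not part of the workflow):

```bash
# Sketch: a plan whose frontmatter says "autonomous: false" contains checkpoints
has_checkpoints() {
  grep -q '^autonomous: false' "$1"
}
```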
Execution flow for checkpoint plans:

1. **Spawn agent for checkpoint plan:**

   ```
   Task(prompt="{subagent-task-prompt}", subagent_type="general-purpose")
   ```

2. **Agent runs until checkpoint:**
   - Executes auto tasks normally
   - Reaches checkpoint task (e.g., `type="checkpoint:human-verify"`) or auth gate
   - Agent returns with structured checkpoint (see checkpoint-return.md template)

3. **Agent return includes (structured format):**
   - Completed Tasks table with commit hashes and files
   - Current task name and blocker
   - Checkpoint type and details for user
   - What's awaited from user

4. **Orchestrator presents checkpoint to user:**

   Extract and display the "Checkpoint Details" and "Awaiting" sections from the agent return:

   ```
   ## Checkpoint: [Type]

   **Plan:** 03-03 Dashboard Layout
   **Progress:** 2/3 tasks complete

   [Checkpoint Details section from agent return]
   [Awaiting section from agent return]
   ```

5. **User responds:**
   - "approved" / "done" → spawn continuation agent
   - Description of issues → spawn continuation agent with feedback
   - Decision selection → spawn continuation agent with choice

6. **Spawn continuation agent (NOT resume):**

   Use the continuation-prompt.md template:

   ```
   Task(
     prompt=filled_continuation_template,
     subagent_type="general-purpose"
   )
   ```

   Fill the template with:
   - `{completed_tasks_table}`: From agent's checkpoint return
   - `{resume_task_number}`: Current task from checkpoint
   - `{resume_task_name}`: Current task name from checkpoint
   - `{user_response}`: What user provided
   - `{resume_instructions}`: Based on checkpoint type (see continuation-prompt.md)

7. **Continuation agent executes:**
   - Verifies previous commits exist
   - Continues from resume point
   - May hit another checkpoint (repeat from step 4)
   - Or completes plan

8. **Repeat until plan completes or user stops**
Why fresh agent instead of resume: Resume relies on Claude Code's internal serialization which breaks with parallel tool calls. Fresh agents with explicit state are more reliable and maintain full context.
Checkpoint in parallel context: If a plan in a parallel wave has a checkpoint:
- Spawn as normal
- Agent pauses at checkpoint and returns with structured state
- Other parallel agents may complete while waiting
- Present checkpoint to user
- Spawn continuation agent with user response
- Wait for all agents to finish before next wave
## Phase {X}: {Name} Execution Complete
**Waves executed:** {N}
**Plans completed:** {M} of {total}
### Wave Summary
| Wave | Plans | Status |
|------|-------|--------|
| 1 | plan-01, plan-02 | ✓ Complete |
| CP | plan-03 | ✓ Verified |
| 2 | plan-04 | ✓ Complete |
| 3 | plan-05 | ✓ Complete |
### Plan Details
1. **03-01**: [one-liner from SUMMARY.md]
2. **03-02**: [one-liner from SUMMARY.md]
...
### Issues Encountered
[Aggregate from all SUMMARYs, or "None"]
Verify phase achieved its GOAL, not just completed its TASKS.
Spawn verifier:

```
Task(
  prompt="Verify phase {phase_number} goal achievement.
    Phase directory: {phase_dir}
    Phase goal: {goal from ROADMAP.md}
    Check must_haves against actual codebase. Create VERIFICATION.md.
    Verify what actually exists in the code.",
  subagent_type="gsd-verifier"
)
```
Read verification status:

```bash
grep "^status:" "$PHASE_DIR"/*-VERIFICATION.md | cut -d: -f2 | tr -d ' '
```
Route by status:

| Status | Action |
|---|---|
| `passed` | Continue to update_roadmap |
| `human_needed` | Present items to user, get approval or feedback |
| `gaps_found` | Present gap summary, offer `/gsd:plan-phase {phase} --gaps` |
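The routing reduces to a case dispatch on the extracted status string (the `route_status` helper and its messages are illustrative):

```bash
# Sketch: dispatch on the verification status read from *-VERIFICATION.md
route_status() {
  case "$1" in
    passed)       echo "continue: update_roadmap" ;;
    human_needed) echo "present human verification checklist to user" ;;
    gaps_found)   echo "offer: /gsd:plan-phase {phase} --gaps" ;;
    *)            echo "unrecognized status: $1" >&2; return 1 ;;
  esac
}
```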
If passed:
Phase goal verified. Proceed to update_roadmap.
If human_needed:
## ✓ Phase {X}: {Name} — Human Verification Required
All automated checks passed. {N} items need human testing:
### Human Verification Checklist
{Extract from VERIFICATION.md human_verification section}
---
**After testing:**
- "approved" → continue to update_roadmap
- Report issues → will route to gap closure planning
If user approves → continue to update_roadmap. If user reports issues → treat as gaps_found.
If gaps_found:
Present gaps and offer next command:
## ⚠ Phase {X}: {Name} — Gaps Found
**Score:** {N}/{M} must-haves verified
**Report:** {phase_dir}/{phase}-VERIFICATION.md
### What's Missing
{Extract gap summaries from VERIFICATION.md gaps section}
---
## ▶ Next Up
**Plan gap closure** — create additional plans to complete the phase
`/gsd:plan-phase {X} --gaps`
<sub>`/clear` first → fresh context window</sub>
---
**Also available:**
- `cat {phase_dir}/{phase}-VERIFICATION.md` — see full report
- `/gsd:verify-work {X}` — manual testing before planning
User runs `/gsd:plan-phase {X} --gaps`, which:
- Reads VERIFICATION.md gaps
- Creates additional plans (04, 05, etc.) with `gap_closure: true` to close gaps

The user then runs `/gsd:execute-phase {X} --gaps-only`:
- Execute-phase runs only the gap closure plans (04-05)
- Verifier runs again after the new plans complete
User stays in control at each decision point.
Update ROADMAP.md to reflect phase completion:

```bash
# Mark phase complete
# Update completion date
# Update status
```
Commit phase completion (roadmap, state, verification):

```bash
git add .planning/ROADMAP.md .planning/STATE.md .planning/phases/{phase_dir}/*-VERIFICATION.md
git add .planning/REQUIREMENTS.md  # if updated
git commit -m "docs(phase-{X}): complete phase execution"
```
Present next steps based on milestone status:
If more phases remain:
## Next Up
**Phase {X+1}: {Name}** — {Goal}
`/gsd:plan-phase {X+1}`
<sub>`/clear` first for fresh context</sub>
If milestone complete:
MILESTONE COMPLETE!
All {N} phases executed.
`/gsd:complete-milestone`
<context_efficiency> Why this works:
Orchestrator context usage: ~10-15%
- Read plan frontmatter (small)
- Analyze dependencies (logic, no heavy reads)
- Fill template strings
- Spawn Task calls
- Collect results
Each subagent: Fresh 200k context
- Loads full execute-plan workflow
- Loads templates, references
- Executes plan with full capacity
- Creates SUMMARY, commits
No polling. Task tool blocks until completion. No TaskOutput loops.
No context bleed. Orchestrator never reads workflow internals. Just paths and results. </context_efficiency>
<failure_handling> Subagent fails mid-plan:
- SUMMARY.md won't exist
- Orchestrator detects missing SUMMARY
- Reports failure, asks user how to proceed
Dependency chain breaks:
- Wave 1 plan fails
- Wave 2 plans depending on it will likely fail
- Orchestrator can still attempt them (user choice)
- Or skip dependent plans entirely
All agents in wave fail:
- Something systemic (git issues, permissions, etc.)
- Stop execution
- Report for manual investigation
Checkpoint fails to resolve:
- User can't approve or provides repeated issues
- Ask: "Skip this plan?" or "Abort phase execution?"
- Record partial progress in STATE.md </failure_handling>
If phase execution was interrupted (context limit, user exit, error):
- Run `/gsd:execute-phase {phase}` again
- discover_plans finds completed SUMMARYs
- Skips completed plans
- Resumes from the first incomplete plan
- Continues wave-based execution
STATE.md tracks:
- Last completed plan
- Current wave
- Any pending checkpoints