---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
---
## User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).
## Outline

- **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from the repo root and parse the JSON output for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, and BRANCH. For single quotes in arguments such as "I'm Groot", use the shell escape syntax `'I'\''m Groot'` (or double-quote when possible: `"I'm Groot"`).
- **Load context**: Read `.ai/ROOT.md` and `.ai/PROJECT_MAP.md` to understand the project structure and navigation. Then read the required standards: `.ai/standards/constitution.md` and `.ai/standards/semantics.md`. Load the IMPL_PLAN template.
- **Execute plan workflow**: Follow the structure in the IMPL_PLAN template to:
  - Fill in the Technical Context (mark unknowns as "NEEDS CLARIFICATION")
  - Fill in the Constitution Check section from the constitution
  - Evaluate gates (ERROR if violations are unjustified)
  - Phase 0: Generate `research.md` (resolve all NEEDS CLARIFICATION markers)
  - Phase 1: Generate `data-model.md`, `contracts/`, and `quickstart.md`
  - Phase 1: Generate global ADR artifacts and connect them to the plan
  - Phase 1: Update the agent context by running the agent script
  - Re-evaluate the Constitution Check post-design
- **Stop and report**: The command ends after Phase 2 planning. Report the branch, the IMPL_PLAN path, the generated artifacts, and the ADR decisions created.
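The Setup step's JSON parsing can be sketched in plain shell. This is only a sketch: the real payload comes from `.specify/scripts/bash/setup-plan.sh --json`, so a sample JSON string stands in here, and the `json_field` helper is a hypothetical convenience (the key names are the ones listed above).

```shell
# Sketch only: in the real flow this would be
#   PLAN_JSON="$(.specify/scripts/bash/setup-plan.sh --json)"
# The sample payload below is hypothetical.
PLAN_JSON='{"FEATURE_SPEC":"specs/001-demo/spec.md","IMPL_PLAN":"specs/001-demo/plan.md","SPECS_DIR":"specs/001-demo","BRANCH":"001-demo"}'

json_field() {
  # Extract one top-level string field from a flat JSON object (no jq needed).
  printf '%s' "$PLAN_JSON" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"
}

FEATURE_SPEC=$(json_field FEATURE_SPEC)
IMPL_PLAN=$(json_field IMPL_PLAN)
SPECS_DIR=$(json_field SPECS_DIR)
BRANCH=$(json_field BRANCH)
echo "$BRANCH"
```

For nested or escaped JSON the sed approach breaks down; a real implementation would prefer `jq` when it is available.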
## Phases

### Phase 0: Outline & Research

- **Extract unknowns** from the Technical Context:
  - For each NEEDS CLARIFICATION → research task
  - For each dependency → best-practices task
  - For each integration → patterns task
- **Generate and dispatch research agents**:
  - For each unknown in the Technical Context: Task: "Research {unknown} for {feature context}"
  - For each technology choice: Task: "Find best practices for {tech} in {domain}"
- **Consolidate findings** in `research.md` using the format:
  - Decision: [what was chosen]
  - Rationale: [why chosen]
  - Alternatives considered: [what else was evaluated]

**Output**: `research.md` with all NEEDS CLARIFICATION markers resolved
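The consolidation format above can be scaffolded mechanically before research begins. A minimal sketch, assuming a throwaway directory; the unknown's title is a hypothetical example and the bullet fields are the placeholders from the format above:

```shell
d=$(mktemp -d)
# Scaffold one research.md entry in the Decision/Rationale/Alternatives format.
cat > "$d/research.md" <<'EOF'
# Research

## Unknown: session storage backend (hypothetical example)
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else was evaluated]
EOF
grep -c '^- ' "$d/research.md"
```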
### Phase 1: Design, ADRs & Contracts

**Prerequisites**: `research.md` complete

- **Validate Design against UX Reference**:
  - Check whether the proposed architecture supports the latency, interactivity, and flow defined in `ux_reference.md`.
  - Linkage: Ensure key UI states from `ux_reference.md` map to Component Contracts (`@UX_STATE`).
  - CRITICAL: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you MUST STOP and warn the user.
- **Extract entities from feature spec** → `data-model.md`:
  - Entity name, fields, relationships, validation rules.
- **Generate Global ADRs (Decision Memory Root Layer)**:
  - Read `spec.md`, `research.md`, and the technical context to identify repo-shaping decisions: storage, auth pattern, framework boundaries, integration patterns, deployment assumptions, failure strategy.
  - For each durable architectural choice, emit a standalone semantic ADR block using `[DEF:DecisionId:ADR]`.
  - Every ADR block MUST include:
    - `@COMPLEXITY: 3` or `4`, depending on blast radius
    - `@PURPOSE`
    - `@RATIONALE`
    - `@REJECTED`
    - `@RELATION` back to the originating spec/research/plan boundary or target module family
  - Preferred destinations:
    - `docs/architecture.md` for cross-cutting repository decisions
    - feature-local design docs when the decision is feature-scoped
    - root module headers only when the decision scope is truly local
  - Hard Gate: Do not continue to task decomposition until the blocking global decisions have been materialized as ADR nodes.
  - Anti-Regression Goal: A later orchestrator must be able to read these ADRs and avoid creating tasks for rejected branches.
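Applied to a concrete decision, a single ADR block might look like the following sketch. The decision content, IDs, relation predicate, and the closing-anchor form are invented for illustration; only the tag names come from the required list above.

```shell
d=$(mktemp -d)
# Write one illustrative ADR block to the cross-cutting destination.
# DecisionId, predicate, target, and the "[/DEF:...]" closing anchor
# form are assumptions, not taken verbatim from the semantics standard.
cat > "$d/architecture.md" <<'EOF'
[DEF:StorageBackend:ADR]
@COMPLEXITY: 4
@PURPOSE: Fix the durable storage layer for all feature modules.
@RATIONALE: Managed Postgres fits the team's ops budget (hypothetical).
@REJECTED: Self-hosted MongoDB; ad-hoc per-feature SQLite files.
@RELATION: DERIVED_FROM ->[FeatureSpec]
[/DEF:StorageBackend:ADR]
EOF
grep -c '^@' "$d/architecture.md"
```

A later orchestrator reading this block can refuse to create tasks for the `@REJECTED` alternatives, which is the anti-regression goal stated above.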
- **Design & Verify Contracts (Semantic Protocol)**:
  - Drafting: Define semantic headers, metadata, and closing anchors for all new modules strictly from `.ai/standards/semantics.md`.
  - Complexity Classification: Classify each contract with `@COMPLEXITY: [1|2|3|4|5]` or `@C:`. Treat `@TIER` only as a legacy compatibility hint, never as the primary rule source.
  - Adaptive Contract Requirements:
    - Complexity 1: anchors only; `@PURPOSE` optional.
    - Complexity 2: require `@PURPOSE`.
    - Complexity 3: require `@PURPOSE` and `@RELATION`; UI also requires `@UX_STATE`.
    - Complexity 4: require `@PURPOSE`, `@RELATION`, `@PRE`, `@POST`, `@SIDE_EFFECT`; Python modules must define a meaningful `logger.reason()`/`logger.reflect()` path or an equivalent belief-state mechanism.
    - Complexity 5: require the full level-4 contract plus `@DATA_CONTRACT` and `@INVARIANT`; Python modules must require `belief_scope`; UI modules must define UX contracts including `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY`, and `@UX_REACTIVITY`.
  - Decision-Memory Propagation:
    - If a module/function/component realizes or is constrained by an ADR, add local `@RATIONALE` and `@REJECTED` guardrails before coding begins.
    - Use `@RELATION: IMPLEMENTS ->[AdrId]` when the contract realizes the ADR.
    - Use `@RELATION: DEPENDS_ON ->[AdrId]` when the contract is merely constrained by the ADR.
    - Record known LLM traps directly in the contract header so the implementer inherits the guardrail from the start.
  - Relation Syntax: Write dependency edges in canonical GraphRAG form: `@RELATION: [PREDICATE] ->[TARGET_ID]`.
  - Context Guard: If a target relation, DTO, required dependency, or decision rationale cannot be named confidently, stop generation and emit `[NEED_CONTEXT: target]` instead of inventing placeholders.
  - Testing Contracts: Add `@TEST_CONTRACT`, `@TEST_SCENARIO`, `@TEST_FIXTURE`, `@TEST_EDGE`, and `@TEST_INVARIANT` when the design introduces audit-critical or explicitly test-governed contracts, especially for Complexity 5 boundaries.
  - Self-Review:
    - Complexity Fit: Does each contract include exactly the metadata and contract density required by its complexity level?
    - Completeness: Do `@PRE`/`@POST`, `@SIDE_EFFECT`, `@DATA_CONTRACT`, UX tags, and decision-memory tags cover the edge cases identified in Research and the UX Reference?
    - Connectivity: Do `@RELATION` tags form a coherent graph using the canonical `@RELATION: [PREDICATE] ->[TARGET_ID]` syntax?
    - Compliance: Are all anchors properly opened and closed, and does the chosen comment syntax match the target medium?
    - Belief-State Requirements: Do Complexity 4/5 Python modules explicitly account for `logger.reason()`, `logger.reflect()`, and `belief_scope` requirements?
    - ADR Continuity: Does every blocking architectural decision have a corresponding ADR node and at least one downstream guarded contract?
  - Output: Write verified contracts to `contracts/modules.md`.
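Under the adaptive requirements above, a Complexity 4 contract header might be sketched as follows. The module name, ADR id, field wording, and the closing-anchor form are invented for illustration; the required tag set is the one listed for Complexity 4.

```shell
d=$(mktemp -d)
# One illustrative Complexity 4 contract entry; names and the
# "[/DEF:...]" anchor form are assumptions, not standard-verified.
cat > "$d/modules.md" <<'EOF'
[DEF:SessionStore:MODULE]
@COMPLEXITY: 4
@PURPOSE: Persist and retrieve user sessions.
@RELATION: IMPLEMENTS ->[StorageBackend]
@PRE: session_id is a non-empty string.
@POST: Returns the stored session or raises SessionNotFound.
@SIDE_EFFECT: Writes to the sessions table; emits logger.reason() entries.
[/DEF:SessionStore:MODULE]
EOF
grep -c '^@' "$d/modules.md"
```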
- **Simulate Contract Usage**:
  - Trace one key user scenario through the defined contracts to ensure data-flow continuity.
  - If a contract interface mismatch is found, fix it immediately.
  - Verify that no traced path accidentally realizes an alternative already named in any ADR `@REJECTED` tag.
- **Generate API contracts**:
  - Output an OpenAPI/GraphQL schema to `/contracts/` for backend-frontend sync.
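A minimal OpenAPI stub written to the contracts directory might look like this sketch; the endpoint, title, and response are illustrative only, written to a throwaway directory rather than the real `/contracts/` path:

```shell
d=$(mktemp -d)
mkdir -p "$d/contracts"
# Illustrative OpenAPI 3 skeleton for backend-frontend sync.
cat > "$d/contracts/openapi.yaml" <<'EOF'
openapi: 3.0.3
info:
  title: Feature API (illustrative)
  version: 0.1.0
paths:
  /sessions/{id}:
    get:
      responses:
        '200':
          description: Session found
EOF
grep -c '^openapi:' "$d/contracts/openapi.yaml"
```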
- **Agent context update**:
  - Run `.specify/scripts/bash/update-agent-context.sh kilocode`
  - The script detects which AI agent is in use
  - It updates the appropriate agent-specific context file
  - Add only new technology from the current plan
  - Preserve manual additions between markers

**Output**: `data-model.md`, `/contracts/*`, `quickstart.md`, ADR artifact(s), agent-specific file
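The "preserve manual additions between markers" behavior can be sketched in plain shell. The marker strings and file layout here are hypothetical; the real update script defines its own markers.

```shell
d=$(mktemp -d)
# A context file with a generated section and a manual section.
cat > "$d/agent-context.md" <<'EOF'
# Agent Context
old generated technology list
<!-- MANUAL ADDITIONS START -->
my hand-written note
<!-- MANUAL ADDITIONS END -->
EOF
# Keep the manual block verbatim while regenerating everything above it.
manual=$(sed -n '/<!-- MANUAL ADDITIONS START -->/,/<!-- MANUAL ADDITIONS END -->/p' "$d/agent-context.md")
{
  echo '# Agent Context'
  echo 'new generated technology list'
  printf '%s\n' "$manual"
} > "$d/agent-context.md.new"
grep -q 'my hand-written note' "$d/agent-context.md.new" && echo preserved
```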
## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications
- Do not hand off to `speckit.tasks` until blocking ADRs exist and rejected branches are explicit