---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
---
## User Input

```text
$ARGUMENTS
```

You MUST consider the user input before proceeding (if not empty).
## Outline

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from the repo root and parse its JSON output for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, and BRANCH. For single quotes in arguments like "I'm Groot", use escape syntax, e.g. `'I'''m Groot'` (or double-quote if possible: `"I'm Groot"`).
2. **Load context**: Read `.ai/ROOT.md` and `.ai/PROJECT_MAP.md` to understand the project structure and navigation. Then read the required standards: `.ai/standards/constitution.md` and `.ai/standards/semantics.md`. Load the IMPL_PLAN template.
3. **Execute plan workflow**: Follow the structure in the IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
   - Fill the Constitution Check section from the constitution
   - Evaluate gates (ERROR if violations are unjustified)
   - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
   - Phase 1: Generate data-model.md, contracts/, quickstart.md
   - Phase 1: Update agent context by running the agent script
   - Re-evaluate the Constitution Check post-design
4. **Stop and report**: The command ends after Phase 2 planning. Report the branch, the IMPL_PLAN path, and the generated artifacts.
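A minimal sketch of the setup step's JSON handling, assuming the script prints a single JSON object with exactly these four keys (the sample paths below are hypothetical examples, not real repository paths):

```python
import json

def parse_setup_output(stdout: str) -> tuple[str, str, str, str]:
    """Parse the JSON printed by `setup-plan.sh --json` into the four values."""
    data = json.loads(stdout)
    return (data["FEATURE_SPEC"], data["IMPL_PLAN"],
            data["SPECS_DIR"], data["BRANCH"])

# Hypothetical example of the script's output shape:
sample = ('{"FEATURE_SPEC": "specs/001-demo/spec.md",'
          ' "IMPL_PLAN": "specs/001-demo/plan.md",'
          ' "SPECS_DIR": "specs/001-demo", "BRANCH": "001-demo"}')
feature_spec, impl_plan, specs_dir, branch = parse_setup_output(sample)
```

A `KeyError` here would indicate the script's output contract has drifted, which is worth surfacing early rather than letting a missing path fail mid-workflow.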
## Phases

### Phase 0: Outline & Research

1. **Extract unknowns from the Technical Context**:
   - For each NEEDS CLARIFICATION → a research task
   - For each dependency → a best-practices task
   - For each integration → a patterns task
2. **Generate and dispatch research agents**:
   - For each unknown in the Technical Context: Task: "Research {unknown} for {feature context}"
   - For each technology choice: Task: "Find best practices for {tech} in {domain}"
3. **Consolidate findings** in `research.md` using the format:
   - Decision: [what was chosen]
   - Rationale: [why it was chosen]
   - Alternatives considered: [what else was evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved
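As an illustration, a consolidated `research.md` entry following this format might look like the following (the technology choices named here are hypothetical examples, not recommendations):

```markdown
## Research: Live form validation transport

- Decision: WebSockets via the existing gateway
- Rationale: Meets the sub-second feedback requirement from the UX reference
- Alternatives considered: Short polling (latency too high), SSE (one-way only)
```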
### Phase 1: Design & Contracts

**Prerequisites**: research.md complete

1. **Validate Design against UX Reference**:
   - Check that the proposed architecture supports the latency, interactivity, and flow defined in `ux_reference.md`.
   - Linkage: Ensure key UI states from `ux_reference.md` map to Component Contracts (`@UX_STATE`).
   - CRITICAL: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you MUST STOP and warn the user.
2. **Extract entities from the feature spec** → `data-model.md`:
   - Entity name, fields, relationships, validation rules.
3. **Design & Verify Contracts (Semantic Protocol)**:
   - Drafting: Define semantic headers, metadata, and closing anchors for all new modules strictly from `.ai/standards/semantics.md`.
   - Complexity Classification: Classify each contract with `@COMPLEXITY: [1|2|3|4|5]` or `@C:`. Treat `@TIER` only as a legacy compatibility hint and never as the primary rule source.
   - Adaptive Contract Requirements:
     - Complexity 1: anchors only; `@PURPOSE` optional.
     - Complexity 2: require `@PURPOSE`.
     - Complexity 3: require `@PURPOSE` and `@RELATION`; UI also requires `@UX_STATE`.
     - Complexity 4: require `@PURPOSE`, `@RELATION`, `@PRE`, `@POST`, and `@SIDE_EFFECT`; Python modules must define a meaningful `logger.reason()`/`logger.reflect()` path or an equivalent belief-state mechanism.
     - Complexity 5: require the full level-4 contract plus `@DATA_CONTRACT` and `@INVARIANT`; Python modules must require `belief_scope`; UI modules must define UX contracts including `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY`, and `@UX_REACTIVITY`.
   - Relation Syntax: Write dependency edges in canonical GraphRAG form: `@RELATION: [PREDICATE] ->[TARGET_ID]`.
   - Context Guard: If a target relation, DTO, or required dependency cannot be named confidently, stop generation and emit `[NEED_CONTEXT: target]` instead of inventing placeholders.
   - Testing Contracts: Add `@TEST_CONTRACT`, `@TEST_SCENARIO`, `@TEST_FIXTURE`, `@TEST_EDGE`, and `@TEST_INVARIANT` when the design introduces audit-critical or explicitly test-governed contracts, especially at Complexity 5 boundaries.
   - Self-Review:
     - Complexity Fit: Does each contract include exactly the metadata and contract density required by its complexity level?
     - Completeness: Do `@PRE`/`@POST`, `@SIDE_EFFECT`, `@DATA_CONTRACT`, and the UX tags cover the edge cases identified in Research and the UX Reference?
     - Connectivity: Do `@RELATION` tags form a coherent graph using the canonical `@RELATION: [PREDICATE] ->[TARGET_ID]` syntax?
     - Compliance: Are all anchors properly opened and closed, and does the chosen comment syntax match the target medium?
     - Belief-State Requirements: Do Complexity 4/5 Python modules explicitly account for the `logger.reason()`, `logger.reflect()`, and `belief_scope` requirements?
   - Output: Write the verified contracts to `contracts/modules.md`.
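To make the contract density concrete, here is an illustrative Complexity 4 module header. The module name, tag contents, and anchor spellings are hypothetical (the authoritative syntax lives in `.ai/standards/semantics.md`), and plain stdlib `logging` stands in for the project's `logger.reason()` belief-state mechanism:

```python
# @MODULE: payment_validator          (hypothetical module name)
# @COMPLEXITY: 4
# @PURPOSE: Validate payment payloads before dispatch to the gateway.
# @RELATION: [VALIDATES_INPUT_FOR] ->[payment_gateway]
# @PRE: amount > 0; currency is a 3-letter alphabetic code
# @POST: returns a normalized payload or raises ValueError
# @SIDE_EFFECT: emits one reasoning log entry per validation attempt
import logging

logger = logging.getLogger("payment_validator")

def validate_payment(amount: float, currency: str) -> dict:
    # Stand-in for the project's logger.reason() belief-state call.
    logger.info("reason: checking preconditions for %r %r", amount, currency)
    if amount <= 0:
        raise ValueError("amount must be positive")        # enforces @PRE
    if len(currency) != 3 or not currency.isalpha():
        raise ValueError("currency must be a 3-letter code")
    return {"amount": round(amount, 2), "currency": currency.upper()}
# @END_MODULE: payment_validator
```

Note how each runtime check is traceable to a `@PRE` line; that traceability is what the Self-Review's Completeness question is probing.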
4. **Simulate Contract Usage**:
   - Trace one key user scenario through the defined contracts to ensure data-flow continuity.
   - If a contract interface mismatch is found, fix it immediately.
5. **Generate API contracts**:
   - Output an OpenAPI/GraphQL schema to `/contracts/` for backend-frontend sync.
6. **Agent context update**:
   - Run `.specify/scripts/bash/update-agent-context.sh kilocode`
   - The script detects which AI agent is in use
   - Update the appropriate agent-specific context file
   - Add only new technology from the current plan
   - Preserve manual additions between markers

**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
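The "preserve manual additions between markers" behavior can be sketched as follows; the marker strings here are hypothetical, and the real update script may use different delimiters:

```python
import re

# Hypothetical marker strings; the actual script may define its own.
BEGIN, END = "<!-- BEGIN AUTO -->", "<!-- END AUTO -->"

def update_between_markers(text: str, new_body: str) -> str:
    """Replace only the auto-generated region, leaving manual edits intact."""
    pattern = re.compile(re.escape(BEGIN) + r".*?" + re.escape(END), re.DOTALL)
    return pattern.sub(f"{BEGIN}\n{new_body}\n{END}", text)

context = f"Manual notes\n{BEGIN}\nold tech list\n{END}\nMore manual notes"
updated = update_between_markers(context, "Python 3.12, FastAPI")
```

The non-greedy `.*?` with `re.DOTALL` confines the replacement to the marked region, which is why text outside the markers survives repeated runs.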
## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications