ss-tools/.kilocode/workflows/speckit.plan.md

---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
  - label: Create Tasks
    agent: speckit.tasks
    prompt: Break the plan into tasks
    send: true
  - label: Create Checklist
    agent: speckit.checklist
    prompt: Create a checklist for the following domain...
---

User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).

Outline

  1. Setup: Run .specify/scripts/bash/setup-plan.sh --json from the repo root and parse its JSON output for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, and BRANCH. For single quotes in args like "I'm Groot", use shell escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

  2. Load context: Read .ai/ROOT.md and .ai/PROJECT_MAP.md to understand the project structure and navigation. Then read required standards: .ai/standards/constitution.md and .ai/standards/semantics.md. Load IMPL_PLAN template.

  3. Execute plan workflow: Follow the structure in IMPL_PLAN template to:

    • Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
    • Fill Constitution Check section from constitution
    • Evaluate gates (ERROR if violations unjustified)
    • Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
    • Phase 1: Generate data-model.md, contracts/, quickstart.md
    • Phase 1: Generate global ADR artifacts and connect them to the plan
    • Phase 1: Update agent context by running the agent script
    • Re-evaluate Constitution Check post-design
  4. Stop and report: The command ends once the planning phases below are complete. Report the branch, IMPL_PLAN path, generated artifacts, and ADR decisions created.
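Step 1 of the outline can be sketched in Python. The simulated payload below is a hypothetical stand-in for what .specify/scripts/bash/setup-plan.sh --json actually emits; only the four key names come from the workflow itself:

```python
import json
import subprocess

REQUIRED_KEYS = ("FEATURE_SPEC", "IMPL_PLAN", "SPECS_DIR", "BRANCH")

def load_plan_context(simulate: bool = True) -> dict:
    """Parse the setup-plan JSON payload into the four required keys."""
    if simulate:
        # Hypothetical payload for illustration; real runs use the script below.
        raw = ('{"FEATURE_SPEC": "specs/001-demo/spec.md",'
               ' "IMPL_PLAN": "specs/001-demo/plan.md",'
               ' "SPECS_DIR": "specs/001-demo", "BRANCH": "001-demo"}')
    else:
        raw = subprocess.run(
            [".specify/scripts/bash/setup-plan.sh", "--json"],
            capture_output=True, text=True, check=True,
        ).stdout
    data = json.loads(raw)
    # Fail fast on missing keys rather than guessing defaults downstream.
    missing = [k for k in REQUIRED_KEYS if k not in data]
    if missing:
        raise KeyError(f"setup-plan.sh output missing keys: {missing}")
    return data
```

In a real run, `simulate=False` would shell out to the script; the validation step is one way to satisfy "ERROR on gate failures" early.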

Phases

Phase 0: Outline & Research

  1. Extract unknowns from Technical Context above:

    • For each NEEDS CLARIFICATION → research task
    • For each dependency → best practices task
    • For each integration → patterns task
  2. Generate and dispatch research agents:

    For each unknown in Technical Context:
      Task: "Research {unknown} for {feature context}"
    For each technology choice:
      Task: "Find best practices for {tech} in {domain}"
    
  3. Consolidate findings in research.md using format:

    • Decision: [what was chosen]
    • Rationale: [why chosen]
    • Alternatives considered: [what else evaluated]

Output: research.md with all NEEDS CLARIFICATION resolved
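A consolidated entry in research.md following that format might read like this sketch (topic and wording entirely hypothetical):

```
## Local persistence
- Decision: SQLite via the standard library driver
- Rationale: Zero-ops storage matching the single-user CLI context
- Alternatives considered: PostgreSQL (rejected: operational overhead); flat JSON files (rejected: no transactional writes)
```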

Phase 1: Design, ADRs & Contracts

Prerequisites: research.md complete

  1. Validate Design against UX Reference:

    • Check if the proposed architecture supports the latency, interactivity, and flow defined in ux_reference.md.
    • Linkage: Ensure key UI states from ux_reference.md map to Component Contracts (@UX_STATE).
    • CRITICAL: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you MUST STOP and warn the user.
  2. Extract entities from feature spec → data-model.md:

    • Entity name, fields, relationships, validation rules.
  3. Generate Global ADRs (Decision Memory Root Layer):

    • Read spec.md, research.md, and the technical context to identify repo-shaping decisions: storage, auth pattern, framework boundaries, integration patterns, deployment assumptions, failure strategy.
    • For each durable architectural choice, emit a standalone semantic ADR block using [DEF:DecisionId:ADR].
    • Every ADR block MUST include:
      • @COMPLEXITY: 3 or 4 depending on blast radius
      • @PURPOSE
      • @RATIONALE
      • @REJECTED
      • @RELATION back to the originating spec/research/plan boundary or target module family
    • Preferred destinations:
      • docs/architecture.md for cross-cutting repository decisions
      • feature-local design docs when the decision is feature-scoped
      • root module headers only when the decision scope is truly local
    • Hard Gate: do not continue to task decomposition until the blocking global decisions have been materialized as ADR nodes.
    • Anti-Regression Goal: a later orchestrator must be able to read these ADRs and avoid creating tasks for rejected branches.
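A global ADR node satisfying the tag requirements above might look like the following sketch in docs/architecture.md. The identifier, predicate, and closing-anchor form are assumptions; .ai/standards/semantics.md remains the authoritative syntax source:

```
[DEF:AdrLocalFirstStorage:ADR]
@COMPLEXITY: 4
@PURPOSE: All user data persists locally first; sync is an optional layer.
@RATIONALE: Offline operation is a hard requirement; remote-first designs fail it.
@REJECTED: Remote-primary database with local cache; dual-write to two stores.
@RELATION: CONSTRAINS ->[StorageModuleFamily]
[/DEF:AdrLocalFirstStorage:ADR]
```

A later orchestrator reading this node knows both what was chosen and which branches are closed.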
  4. Design & Verify Contracts (Semantic Protocol):

    • Drafting: Define semantic headers, metadata, and closing anchors for all new modules strictly from .ai/standards/semantics.md.
    • Complexity Classification: Classify each contract with @COMPLEXITY: [1|2|3|4|5] or @C:. Treat @TIER only as a legacy compatibility hint and never as the primary rule source.
    • Adaptive Contract Requirements:
      • Complexity 1: anchors only; @PURPOSE optional.
      • Complexity 2: require @PURPOSE.
      • Complexity 3: require @PURPOSE and @RELATION; UI also requires @UX_STATE.
      • Complexity 4: require @PURPOSE, @RELATION, @PRE, @POST, @SIDE_EFFECT; Python modules must define a meaningful logger.reason() / logger.reflect() path or equivalent belief-state mechanism.
      • Complexity 5: require full level-4 contract plus @DATA_CONTRACT and @INVARIANT; Python modules must require belief_scope; UI modules must define UX contracts including @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY, and @UX_REACTIVITY.
    • Decision-Memory Propagation:
      • If a module/function/component realizes or is constrained by an ADR, add local @RATIONALE and @REJECTED guardrails before coding begins.
      • Use @RELATION: IMPLEMENTS ->[AdrId] when the contract realizes the ADR.
      • Use @RELATION: DEPENDS_ON ->[AdrId] when the contract is merely constrained by the ADR.
      • Record known LLM traps directly in the contract header so the implementer inherits the guardrail from the start.
    • Relation Syntax: Write dependency edges in canonical GraphRAG form: @RELATION: [PREDICATE] ->[TARGET_ID].
    • Context Guard: If a target relation, DTO, required dependency, or decision rationale cannot be named confidently, stop generation and emit [NEED_CONTEXT: target] instead of inventing placeholders.
    • Testing Contracts: Add @TEST_CONTRACT, @TEST_SCENARIO, @TEST_FIXTURE, @TEST_EDGE, and @TEST_INVARIANT when the design introduces audit-critical or explicitly test-governed contracts, especially for Complexity 5 boundaries.
    • Self-Review:
      • Complexity Fit: Does each contract include exactly the metadata and contract density required by its complexity level?
      • Completeness: Do @PRE/@POST, @SIDE_EFFECT, @DATA_CONTRACT, UX tags, and decision-memory tags cover the edge cases identified in Research and UX Reference?
      • Connectivity: Do @RELATION tags form a coherent graph using canonical @RELATION: [PREDICATE] ->[TARGET_ID] syntax?
      • Compliance: Are all anchors properly opened and closed, and does the chosen comment syntax match the target medium?
      • Belief-State Requirements: Do Complexity 4/5 Python modules explicitly account for logger.reason(), logger.reflect(), and belief_scope requirements?
      • ADR Continuity: Does every blocking architectural decision have a corresponding ADR node and at least one downstream guarded contract?
    • Output: Write verified contracts to contracts/modules.md.
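As a non-authoritative sketch, a Complexity 4 contract on a Python function might combine these tags as follows. The logger API shown is a minimal stand-in for the belief-state mechanism the workflow requires, and all names are hypothetical:

```python
# [DEF:transfer_funds:FUNC]
# @COMPLEXITY: 4
# @PURPOSE: Move an amount between two accounts atomically.
# @RELATION: IMPLEMENTS ->[AdrLocalFirstStorage]
# @PRE: amount > 0; both accounts exist in `balances`.
# @POST: Source decreases and target increases by exactly `amount`.
# @SIDE_EFFECT: Emits a reasoning trace via logger.reason()/logger.reflect().
# @RATIONALE: Single-writer update chosen per the governing ADR.
# @REJECTED: Eventual-consistency queue between accounts.

class ReasoningLogger:
    """Minimal stand-in for the belief-state logger the workflow assumes."""
    def __init__(self):
        self.trace = []
    def reason(self, msg: str) -> None:
        self.trace.append(("reason", msg))
    def reflect(self, msg: str) -> None:
        self.trace.append(("reflect", msg))

logger = ReasoningLogger()

def transfer_funds(balances: dict, source: str, target: str, amount: int) -> dict:
    if amount <= 0:
        raise ValueError("amount must be positive")  # enforces @PRE
    logger.reason(f"transfer {amount} {source}->{target}: accounts verified")
    balances = dict(balances)  # copy so the update is all-or-nothing
    balances[source] -= amount
    balances[target] += amount
    logger.reflect("post-condition holds: totals conserved")
    return balances
# [/DEF:transfer_funds:FUNC]
```

The point of the header is that the implementer inherits the @PRE/@POST obligations and the rejected alternative before writing a line of logic.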
  5. Simulate Contract Usage:

    • Trace one key user scenario through the defined contracts to ensure data flow continuity.
    • If a contract interface mismatch is found, fix it immediately.
    • Verify that no traced path accidentally realizes an alternative already named in any ADR @REJECTED tag.
  6. Generate API contracts:

    • Output OpenAPI/GraphQL schema to /contracts/ for backend-frontend sync.
  7. Agent context update:

    • Run .specify/scripts/bash/update-agent-context.sh kilocode
    • The script detects which AI agent is in use
    • Update the appropriate agent-specific context file
    • Add only new technology from current plan
    • Preserve manual additions between markers

Output: data-model.md, /contracts/*, quickstart.md, ADR artifact(s), agent-specific file
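The canonical @RELATION: [PREDICATE] ->[TARGET_ID] form lends itself to a mechanical self-review pass over the generated contracts. A minimal Python sketch, where the allowed identifier character set is an assumption:

```python
import re

# Canonical GraphRAG edge form; predicate brackets are accepted as optional
# since the workflow writes both "[PREDICATE] ->[T]" and "IMPLEMENTS ->[T]".
RELATION_RE = re.compile(r"^@RELATION:\s*\[?([A-Z_]+)\]?\s*->\[([A-Za-z0-9_.:-]+)\]$")

def check_relation(line: str):
    """Return (predicate, target) if the line is canonical, else None."""
    m = RELATION_RE.match(line.strip())
    return m.groups() if m else None
```

Running this over every @RELATION line is one way to back the "Connectivity" self-review question with an automated check.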

Key rules

  • Use absolute paths
  • ERROR on gate failures or unresolved clarifications
  • Do not hand off to speckit.tasks until blocking ADRs exist and rejected branches are explicit