ss-tools/.kilocode/workflows/speckit.plan.md
Execute the implementation planning workflow using the plan template to generate design artifacts.
Handoffs

  label             agent              prompt                                          send
  Create Tasks      speckit.tasks      Break the plan into tasks                       true
  Create Checklist  speckit.checklist  Create a checklist for the following domain...

User Input

$ARGUMENTS

You MUST consider the user input before proceeding (if not empty).

Outline

  1. Setup: Run .specify/scripts/bash/setup-plan.sh --json from the repo root and parse its JSON output for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, and BRANCH. For single quotes in arguments such as "I'm Groot", use shell escape syntax, e.g. 'I'\''m Groot' (or double-quote when possible: "I'm Groot").
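
The parsing step above can be sketched as follows. This is a minimal illustration, not the script's implementation: the sample payload and its values are assumptions, while the four key names come from the workflow text.

```python
import json

# Hypothetical payload; the real values are emitted by setup-plan.sh --json.
sample = ('{"FEATURE_SPEC": "/repo/specs/001/spec.md",'
          ' "IMPL_PLAN": "/repo/specs/001/plan.md",'
          ' "SPECS_DIR": "/repo/specs/001",'
          ' "BRANCH": "001-feature"}')

paths = json.loads(sample)
FEATURE_SPEC = paths["FEATURE_SPEC"]
IMPL_PLAN = paths["IMPL_PLAN"]
SPECS_DIR = paths["SPECS_DIR"]
BRANCH = paths["BRANCH"]
```

All four keys are required; a missing key raises KeyError early rather than failing later in the workflow.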

  2. Load context: Read .ai/ROOT.md and .ai/PROJECT_MAP.md to understand the project structure and navigation. Then read required standards: .ai/standards/constitution.md and .ai/standards/semantics.md. Load IMPL_PLAN template.

  3. Execute plan workflow: Follow the structure in IMPL_PLAN template to:

    • Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
    • Fill Constitution Check section from constitution
    • Evaluate gates (ERROR if violations unjustified)
    • Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
    • Phase 1: Generate data-model.md, contracts/, quickstart.md
    • Phase 1: Update agent context by running the agent script
    • Re-evaluate Constitution Check post-design
  4. Stop and report: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.

Phases

Phase 0: Outline & Research

  1. Extract unknowns from Technical Context above:

    • For each NEEDS CLARIFICATION → research task
    • For each dependency → best practices task
    • For each integration → patterns task
  2. Generate and dispatch research agents:

    For each unknown in Technical Context:
      Task: "Research {unknown} for {feature context}"
    For each technology choice:
      Task: "Find best practices for {tech} in {domain}"
    
  3. Consolidate findings in research.md using format:

    • Decision: [what was chosen]
    • Rationale: [why chosen]
    • Alternatives considered: [what else evaluated]

Output: research.md with all NEEDS CLARIFICATION resolved
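
One way to mechanize the consolidation format above is sketched below. The example finding ("ORM choice") and the rendering helper are illustrative assumptions; only the Decision / Rationale / Alternatives structure comes from the workflow.

```python
# Hypothetical findings list; real entries come from the research tasks.
findings = [
    {
        "topic": "ORM choice",
        "decision": "SQLAlchemy",
        "rationale": "Mature ecosystem with async support",
        "alternatives": ["Django ORM", "raw SQL"],
    },
]

def render_research(findings):
    # Emit one research.md section per finding in the required format.
    lines = ["# Research"]
    for f in findings:
        lines.append(f"\n## {f['topic']}")
        lines.append(f"- Decision: {f['decision']}")
        lines.append(f"- Rationale: {f['rationale']}")
        lines.append(f"- Alternatives considered: {', '.join(f['alternatives'])}")
    return "\n".join(lines)

research_md = render_research(findings)
```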

Phase 1: Design & Contracts

Prerequisites: research.md complete

  1. Validate Design against UX Reference:

    • Check if the proposed architecture supports the latency, interactivity, and flow defined in ux_reference.md.
    • Linkage: Ensure key UI states from ux_reference.md map to Component Contracts (@UX_STATE).
    • CRITICAL: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you MUST STOP and warn the user.
  2. Extract entities from feature spec → data-model.md:

    • Entity name, fields, relationships, validation rules.
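
A minimal sketch of the record each data-model.md entry captures, assuming a simple in-memory representation (the Entity class and the User example are illustrative, not part of the workflow):

```python
from dataclasses import dataclass

@dataclass
class Entity:
    name: str              # entity name
    fields: dict           # field name -> type
    relationships: list    # e.g. "User 1..* Order"
    validation_rules: list # human-readable constraints

user = Entity(
    name="User",
    fields={"id": "UUID", "email": "str"},
    relationships=["User 1..* Order"],
    validation_rules=["email must be unique"],
)
```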
  3. Design & Verify Contracts (Semantic Protocol):

    • Drafting: Define semantic headers, metadata, and closing anchors for all new modules strictly from .ai/standards/semantics.md.
    • Complexity Classification: Classify each contract with @COMPLEXITY: [1|2|3|4|5] or @C:. Treat @TIER only as a legacy compatibility hint and never as the primary rule source.
    • Adaptive Contract Requirements:
      • Complexity 1: anchors only; @PURPOSE optional.
      • Complexity 2: require @PURPOSE.
      • Complexity 3: require @PURPOSE and @RELATION; UI also requires @UX_STATE.
      • Complexity 4: require @PURPOSE, @RELATION, @PRE, @POST, @SIDE_EFFECT; Python modules must define a meaningful logger.reason() / logger.reflect() path or equivalent belief-state mechanism.
      • Complexity 5: require full level-4 contract plus @DATA_CONTRACT and @INVARIANT; Python modules must require belief_scope; UI modules must define UX contracts including @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY, and @UX_REACTIVITY.
    • Relation Syntax: Write dependency edges in canonical GraphRAG form: @RELATION: [PREDICATE] ->[TARGET_ID].
    • Context Guard: If a target relation, DTO, or required dependency cannot be named confidently, stop generation and emit [NEED_CONTEXT: target] instead of inventing placeholders.
    • Testing Contracts: Add @TEST_CONTRACT, @TEST_SCENARIO, @TEST_FIXTURE, @TEST_EDGE, and @TEST_INVARIANT when the design introduces audit-critical or explicitly test-governed contracts, especially for Complexity 5 boundaries.
    • Self-Review:
      • Complexity Fit: Does each contract include exactly the metadata and contract density required by its complexity level?
      • Completeness: Do @PRE/@POST, @SIDE_EFFECT, @DATA_CONTRACT, and UX tags cover the edge cases identified in Research and UX Reference?
      • Connectivity: Do @RELATION tags form a coherent graph using canonical @RELATION: [PREDICATE] ->[TARGET_ID] syntax?
      • Compliance: Are all anchors properly opened and closed, and does the chosen comment syntax match the target medium?
      • Belief-State Requirements: Do Complexity 4/5 Python modules explicitly account for logger.reason(), logger.reflect(), and belief_scope requirements?
    • Output: Write verified contracts to contracts/modules.md.
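
The Adaptive Contract Requirements and Relation Syntax rules above lend themselves to a mechanical self-review pass. The sketch below is an assumption about how such a checker could look; the tag sets per complexity level and the canonical @RELATION form are taken from the workflow, everything else (function names, the sample contract) is illustrative.

```python
import re

REQUIRED_TAGS = {
    1: set(),                     # anchors only; @PURPOSE optional
    2: {"@PURPOSE"},
    3: {"@PURPOSE", "@RELATION"},
    4: {"@PURPOSE", "@RELATION", "@PRE", "@POST", "@SIDE_EFFECT"},
    5: {"@PURPOSE", "@RELATION", "@PRE", "@POST", "@SIDE_EFFECT",
        "@DATA_CONTRACT", "@INVARIANT"},
}

# Canonical GraphRAG form: @RELATION: [PREDICATE] ->[TARGET_ID]
RELATION_RE = re.compile(r"@RELATION:\s*\[\w+\]\s*->\[[\w.]+\]")

def missing_tags(contract_text: str, complexity: int) -> set:
    # Compare the tags present in the contract against its complexity level.
    present = set(re.findall(r"@[A-Z_]+", contract_text))
    return REQUIRED_TAGS[complexity] - present

contract = """
@COMPLEXITY: 3
@PURPOSE: Validate user input
@RELATION: [DEPENDS_ON] ->[core.validators]
"""
```

A contract passing `missing_tags` at its declared level and matching `RELATION_RE` satisfies the Complexity Fit and Connectivity checks; the UX, testing, and belief-state checks remain manual review items.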
  4. Simulate Contract Usage:

    • Trace one key user scenario through the defined contracts to ensure data flow continuity.
    • If a contract interface mismatch is found, fix it immediately.
  5. Generate API contracts:

    • Output OpenAPI/GraphQL schema to /contracts/ for backend-frontend sync.
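
For the OpenAPI branch, the output is roughly a skeleton like the one below, shown as a Python dict for brevity. The /users path and its operation are placeholders; real paths and schemas come from the feature's data model.

```python
import json

openapi = {
    "openapi": "3.0.3",
    "info": {"title": "Feature API", "version": "0.1.0"},
    "paths": {
        "/users": {                      # hypothetical endpoint
            "get": {
                "summary": "List users",
                "responses": {"200": {"description": "OK"}},
            }
        }
    },
}

# Serialized to /contracts/ for backend-frontend sync.
schema_text = json.dumps(openapi, indent=2)
```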
  6. Agent context update:

    • Run .specify/scripts/bash/update-agent-context.sh kilocode
    • The script detects which AI agent is in use
    • Update the appropriate agent-specific context file
    • Add only new technology from current plan
    • Preserve manual additions between markers
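
The "preserve manual additions between markers" rule amounts to regenerating everything outside a marked region while keeping the text inside it verbatim. The sketch below assumes HTML-comment markers; the real script defines its own marker strings.

```python
import re

BEGIN = "<!-- MANUAL ADDITIONS START -->"   # assumed marker text
END = "<!-- MANUAL ADDITIONS END -->"

def update_context(old: str, generated: str) -> str:
    # Carry the manual block forward; replace everything else.
    m = re.search(re.escape(BEGIN) + r"(.*?)" + re.escape(END), old, re.S)
    manual = m.group(1) if m else "\n"
    return f"{generated}\n{BEGIN}{manual}{END}\n"

old = f"stale tech list\n{BEGIN}\nmy notes\n{END}\n"
new = update_context(old, "fresh tech list")
```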

Output: data-model.md, /contracts/*, quickstart.md, agent-specific file

Key rules

  • Use absolute paths
  • ERROR on gate failures or unresolved clarifications