semantic cleanup

.opencode/command/read_semantics.md (new file, 4 lines)

@@ -0,0 +1,4 @@
---
description: read semantic protocol
---
MANDATORY: use `skill({name="semantics-core"})`, `skill({name="semantics-contracts"})`, and `skill({name="semantics-belief"})`

.opencode/command/speckit.analyze.md (new file, 72 lines)

@@ -0,0 +1,72 @@
---
description: Perform a read-only consistency analysis across spec.md, plan.md, tasks.md, and ADR sources for the active Rust MCP feature.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, ambiguities, coverage gaps, and decision-memory drift across the feature artifacts before implementation proceeds.

## Operating Constraints

**STRICTLY READ-ONLY**: Do not modify files.

**Constitution Authority**: `.specify/memory/constitution.md` is the local constitutional baseline for this workflow. Conflicts with its must-level principles are CRITICAL.

## Execution Steps

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` and derive absolute paths for `spec.md`, `plan.md`, `tasks.md`, and relevant ADR sources under `docs/adr/`.
   - Analyze the active feature directory under `specs/<feature>/` only.
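
The JSON handoff in step 1 can be sketched in shell. The payload key `FEATURE_DIR` and the availability of `jq` are assumptions for illustration; the inline sample stands in for the script's real output.

```shell
# Illustrative only: a sample payload stands in for the real call,
#   json="$(.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks)"
json='{"FEATURE_DIR":"/repo/specs/001-mcp-feature"}'

# Extract the feature directory (assumes jq is installed), then derive artifact paths.
feature_dir="$(printf '%s' "$json" | jq -r '.FEATURE_DIR')"
spec="$feature_dir/spec.md"
plan="$feature_dir/plan.md"
tasks="$feature_dir/tasks.md"
```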

2. Load minimal necessary context from:
   - `spec.md`
   - `plan.md`
   - `tasks.md`
   - `contracts/modules.md` when present
   - `README.md`
   - `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
   - `.specify/memory/constitution.md`
   - relevant `docs/adr/*.md`

3. Build internal inventories for:
   - requirements
   - user stories and acceptance criteria
   - task coverage
   - constitution principles
   - ADR / decision-memory guardrails

4. Detect high-signal issues only:
   - duplication
   - ambiguity
   - underspecification
   - constitution conflicts
   - coverage gaps
   - terminology drift
   - repository-structure mismatches
   - decision-memory drift and rejected-path scheduling

5. Produce a compact Markdown report with:
   - a findings table
   - a coverage summary table
   - a decision-memory summary table
   - constitution alignment issues
   - unmapped tasks
   - metrics

6. Provide next actions:
   - CRITICAL/HIGH issues should be resolved before `speckit.implement`
   - lower-severity issues may be deferred with explicit rationale
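
One compact shape for the step-5 report; the headings and columns here are illustrative, not mandated by this command:

```markdown
## Consistency Analysis Report

### Findings

| ID | Severity | Category | Location | Summary |
|----|----------|----------|----------|---------|
| F1 | CRITICAL | Constitution conflict | plan.md | <summary> |

### Coverage Summary

| Requirement | Has Task? | Task IDs |
|-------------|-----------|----------|

### Decision-Memory Summary

| ADR | Propagated To | Drift? |
|-----|---------------|--------|

### Metrics

- Total findings: N (CRITICAL: n1, HIGH: n2, ...)
- Unmapped tasks: <list or none>
```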

## Analysis Rules

- Treat stale Python/Svelte assumptions in plan/tasks as real defects for this repository.
- Treat missing ADR propagation as a real defect, not a documentation nit.
- Prefer repository-real expectations (`src/**/*.rs`, `tests/*.rs`, task-shaped MCP tools/resources, belief runtime, static semantic verification).
- Do not treat `.kilo/plans/*` as feature artifacts for consistency analysis.

.opencode/command/speckit.checklist.md (new file, 317 lines)

@@ -0,0 +1,317 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, completeness, and decision-memory readiness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when the logo image fails to load?" (edge cases)
- ✅ "Do repo-shaping choices have explicit rationale and rejected alternatives before task decomposition?" (decision memory)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse the JSON for FEATURE_DIR and the AVAILABLE_DOCS list.
   - All file paths must be absolute.
   - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
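
The quoting rule above can be checked in any POSIX shell; a minimal illustration:

```shell
# Embedding a single quote inside a single-quoted argument:
# close the quote, emit an escaped quote ('\''), then reopen.
arg='I'\''m Groot'
echo "$arg"          # prints: I'm Groot

# Double-quoting avoids the dance entirely.
arg2="I'm Groot"
```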

2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing plus extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:

   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify the probable audience and timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria, decision-memory needs.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y, or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only, or by peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected - are rollback / partial failure paths in scope?")
      - Decision-memory gap (e.g., "Do we need explicit ADR and rejected-path checks for this feature?")

   Question formatting rules:
   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to five options (A-E) maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:
   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: top 2 relevance clusters

   Output the questions (labeled Q1/Q2/Q3). After the answers: if two or more scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more.
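
An illustrative rendering of the option-table format named above; the question and candidates are hypothetical:

```markdown
Q1: Should this checklist cover integration touchpoints, or stay limited to local module correctness?

| Option | Candidate | Why It Matters |
|--------|-----------|----------------|
| A | Local module correctness only | Smallest, fastest review surface |
| B | Include integration touchpoints with X and Y | Catches contract drift before implementation |
```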

3. **Understand user request**: Combine `$ARGUMENTS` with the clarifying answers:
   - Derive the checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - `spec.md`: feature requirements and scope
   - `plan.md` (if it exists): technical details, dependencies, ADR references
   - `tasks.md` (if it exists): implementation tasks and inherited guardrails
   - ADR artifacts (if present): `[DEF:id:ADR]`, `@RATIONALE`, `@REJECTED`

   **Context Loading Strategy**:

   - Load only the portions relevant to the active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps are detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - create "Unit Tests for Requirements":
   - Create the `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate the checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
     - If the file already exists, append to it
   - Number items sequentially starting from CHK001
   - A `/speckit.checklist` run never overwrites an existing checklist: it creates a new file, or appends when the name is already taken

**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:

Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:

- **Completeness**: Are all necessary requirements present?
- **Clarity**: Are requirements unambiguous and specific?
- **Consistency**: Do requirements align with each other?
- **Measurability**: Can requirements be objectively verified?
- **Coverage**: Are all scenarios/edge cases addressed?
- **Decision Memory**: Are durable choices and rejected alternatives explicit before implementation starts?

**Category Structure** - group items by requirement quality dimensions:

- **Requirement Completeness** (Are all necessary requirements documented?)
- **Requirement Clarity** (Are requirements specific and unambiguous?)
- **Requirement Consistency** (Do requirements align without conflicts?)
- **Acceptance Criteria Quality** (Are success criteria measurable?)
- **Scenario Coverage** (Are all flows/cases addressed?)
- **Edge Case Coverage** (Are boundary conditions defined?)
- **Non-Functional Requirements** (performance, security, accessibility, etc. - are they specified?)
- **Dependencies & Assumptions** (Are they documented and validated?)
- **Decision Memory & ADRs** (Are architectural choices, rationale, and rejected paths explicit?)
- **Ambiguities & Conflicts** (What needs clarification?)

**HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

❌ **WRONG** (testing implementation):

- "Verify landing page displays 3 episode cards"
- "Test hover states work on desktop"
- "Confirm logo click navigates home"

✅ **CORRECT** (testing requirements quality):

- "Are the exact number and layout of featured episodes specified?" [Completeness]
- "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
- "Are hover state requirements consistent across all interactive elements?" [Consistency]
- "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
- "Is the fallback behavior specified when the logo image fails to load?" [Edge Cases]
- "Are blocking architecture decisions recorded with explicit rationale and rejected alternatives before task generation?" [Decision Memory]
- "Does the plan make clear which implementation shortcuts are forbidden for this feature?" [Decision Memory, Gap]

**ITEM STRUCTURE**:

Each item should follow this pattern:

- Question format asking about requirement quality
- Focus on what is WRITTEN (or not written) in the spec/plan
- Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
- Reference the spec section `[Spec §X.Y]` when checking existing requirements
- Use the `[Gap]` marker when checking for missing requirements

**EXAMPLES BY QUALITY DIMENSION**:

Completeness:

- "Are error handling requirements defined for all API failure modes? [Gap]"
- "Are accessibility requirements specified for all interactive elements? [Completeness]"
- "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

Clarity:

- "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
- "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
- "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

Consistency:

- "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
- "Are card component requirements consistent between landing and detail pages? [Consistency]"

Coverage:

- "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
- "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
- "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

Measurability:

- "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
- "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

Decision Memory:

- "Do all repo-shaping technical choices have explicit rationale before tasks are generated? [Decision Memory, Plan]"
- "Are rejected alternatives documented for architectural branches that would materially change implementation scope? [Decision Memory, Gap]"
- "Can a coder determine from the planning artifacts which tempting shortcut is forbidden? [Decision Memory, Clarity]"

**Scenario Classification & Coverage** (requirements quality focus):

- Check whether requirements exist for Primary, Alternate, Exception/Error, Recovery, and Non-Functional scenarios
- For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
- If a scenario class is missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
- Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

**Traceability Requirements**:

- MINIMUM: ≥80% of items MUST include at least one traceability reference
- Each item should reference a spec section `[Spec §X.Y]` or use the markers `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`, `[ADR]`
- If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"

**Surface & Resolve Issues** (requirements quality problems):

Ask questions about the requirements themselves:

- Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
- Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
- Assumptions: "Is the assumption of an 'always available podcast API' validated? [Assumption]"
- Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
- Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
- Decision-memory drift: "Do tasks inherit the same rejected-path guardrails defined in planning? [Decision Memory, Conflict]"

**Content Consolidation**:

- Soft cap: if there are more than 40 raw candidate items, prioritize by risk/impact
- Merge near-duplicates that check the same requirement aspect
- If there are more than 5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

**🚫 ABSOLUTELY PROHIBITED** - these make it an implementation test, not a requirements test:

- ❌ Any item starting with "Verify", "Test", "Confirm", "Check" plus implementation behavior
- ❌ References to code execution, user actions, system behavior
- ❌ "Displays correctly", "works properly", "functions as expected"
- ❌ "Click", "navigate", "render", "load", "execute"
- ❌ Test cases, test plans, QA procedures
- ❌ Implementation details (frameworks, APIs, algorithms), unless the checklist is asking whether those decisions were explicitly documented and bounded by rationale/rejected alternatives

**✅ REQUIRED PATTERNS** - these test requirements quality:

- ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
- ✅ "Is [vague term] quantified/clarified with specific criteria?"
- ✅ "Are requirements consistent between [section A] and [section B]?"
- ✅ "Can [requirement] be objectively measured/verified?"
- ✅ "Are [edge cases/scenarios] addressed in requirements?"
- ✅ "Does the spec define [missing aspect]?"
- ✅ "Does the plan record why [accepted path] was chosen and why [rejected path] is forbidden?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for the title, meta section, category headings, and ID formatting. If the template is unavailable, use: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
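
As an illustration of that fallback structure only (the real template in `.specify/templates/checklist-template.md` takes precedence; items and dates here are hypothetical):

```markdown
# UX Requirements Checklist

Purpose: validate UX requirement quality for the active feature
Created: YYYY-MM-DD

## Requirement Clarity

- [ ] CHK001 - Is 'prominent display' quantified with specific sizing/positioning? [Clarity, Spec §FR-4]

## Requirement Completeness

- [ ] CHK002 - Are accessibility requirements specified for all interactive elements? [Completeness, Gap]
```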

7. **Report**: Output the full path to the created checklist, the item count, and a reminder of the file-naming behavior. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated
   - Whether ADR / decision-memory checks were included

**Important**: Each `/speckit.checklist` invocation creates a checklist file with a short, descriptive name, appending if that file already exists. This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is the versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented, with requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

**Architecture Decision Quality:** `architecture.md`

Sample items:

- "Do all repo-shaping architecture choices have explicit rationale before tasks are generated? [Decision Memory]"
- "Are rejected alternatives documented for each blocking technology branch? [Decision Memory, Gap]"
- "Can an implementer tell which shortcuts are forbidden without re-reading research artifacts? [Clarity, ADR]"
- "Are ADR decisions traceable to requirements or constraints in the spec? [Traceability, ADR]"

## Anti-Examples: What NOT To Do

**❌ WRONG - these test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - these test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
- [ ] CHK007 - Do planning artifacts state why the accepted architecture was chosen and which alternative is rejected? [Decision Memory, ADR]
```

**Key Differences:**

- Wrong: tests whether the system works correctly
- Correct: tests whether the requirements are written correctly
- Wrong: verification of behavior
- Correct: validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"

.opencode/command/speckit.clarify.md (new file, 181 lines)

@@ -0,0 +1,181 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding the answers back into the spec.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a plan for the spec. I am building with...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification, and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but you must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root **once** (combined `--json --paths-only` mode; `-Json -PathsOnly` for the PowerShell variant). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN` and `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
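
The parse-or-abort behavior in step 1 might look like this in shell; the payload shape and `jq` availability are assumptions, and the inline sample stands in for the script's output.

```shell
# Illustrative only: in the real flow the payload comes from
#   json="$(.specify/scripts/bash/check-prerequisites.sh --json --paths-only)"
json='{"FEATURE_DIR":"/repo/specs/002-clarify","FEATURE_SPEC":"/repo/specs/002-clarify/spec.md"}'

# jq -e exits nonzero when the key is missing/null, triggering the abort branch.
feature_spec="$(printf '%s' "$json" | jq -er '.FEATURE_SPEC' 2>/dev/null)" || {
  echo "Could not parse prerequisites JSON; re-run /speckit.specify or check the feature branch." >&2
  exit 1
}
echo "$feature_spec"
```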

2. Load the current spec file. Perform a structured ambiguity & coverage scan using the taxonomy below. For each category, mark its status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output the raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition-of-Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - The information is better deferred to the planning phase (note this internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session (matching the limit enforced in the questioning loop below).
   - Each question must be answerable with EITHER:
     - a short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
     - a one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: cover the highest-impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, and plan-level execution details (unless they block correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by an (Impact × Uncertainty) heuristic.

4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple-choice questions:
     - **Analyze all options** and determine the **most suitable option** based on:
       - Best practices for the project type
       - Common patterns in similar implementations
       - Risk reduction (security, performance, maintainability)
       - Alignment with any explicit project goals or constraints visible in the spec
     - Present your **recommended option prominently** at the top, with clear reasoning (1-2 sentences explaining why it is the best choice).
     - Format it as: `**Recommended:** Option [X] - <reasoning>`
     - Then render all options as a Markdown table:

       | Option | Description |
       |--------|-------------|
       | A | <Option A description> |
       | B | <Option B description> |
       | C | <Option C description> (add D/E as needed, up to 5) |
       | Short | Provide a different short answer (<=5 words); include only if a free-form alternative is appropriate |

     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short-answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format it as: `**Suggested:** <your proposed answer> - <brief reasoning>`
     - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
   - After the user answers:
     - If the user replies "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
     - Otherwise, validate that the answer maps to one option or fits the <=5-word constraint.
     - If ambiguous, ask for a quick disambiguation (this still counts toward the same question; do not advance).
     - Once satisfactory, record the answer in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     - all critical ambiguities are resolved early (remaining queued items become unnecessary), OR
     - the user signals completion ("done", "good", "no more"), OR
     - you reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at the start, immediately report that there are no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):
   - Maintain an in-memory representation of the spec (loaded once at the start) plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists (if missing, create it just after the highest-level contextual/overview section per the spec template).
     - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     - Functional ambiguity → update or add a bullet in Functional Requirements.
     - User interaction / actor distinction → update the User Stories or Actors subsection (if present) with the clarified role, constraint, or scenario.
     - Data shape / entities → update the Data Model (add fields, types, relationships), preserving ordering; note added constraints succinctly.
     - Non-functional constraint → add or modify measurable criteria in the Non-Functional / Quality Attributes section (convert a vague adjective to a metric or explicit target).
     - Edge case / negative flow → add a new bullet under Edge Cases / Error Handling (or create that subsection if the template provides a placeholder for it).
     - Terminology conflict → normalize the term across the spec; retain the original only if necessary, by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating it; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize the risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep the heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).
6. Validation (performed after EACH write plus a final pass):

   - The Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices and remove them).
   - Markdown structure is valid; the only allowed new headings are `## Clarifications` and `### Session YYYY-MM-DD`.
   - Terminology consistency: the same canonical term is used across all updated sections.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after the questioning loop ends or early termination):

   - Number of questions asked & answered.
   - Path to the updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), or Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred items remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
   - Suggested next command.

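The `## Clarifications` shape produced by step 5 and the limits checked in step 6 can be sketched as a small Python validator. This is illustrative only; the sample questions and answers below are hypothetical, not part of any real spec.

```python
import re

# Hypothetical spec fragment produced by the integration step above.
spec = """## Clarifications

### Session 2025-01-15

- Q: What is the retry limit? → A: 3 attempts
- Q: Who can trigger reindexing? → A: Any authenticated operator
"""

def validate_clarifications(text: str, max_questions: int = 5) -> bool:
    # At least one dated session heading is expected.
    sessions = re.findall(r"^### Session \d{4}-\d{2}-\d{2}$", text, re.M)
    # One bullet per accepted answer, in the canonical `- Q: ... → A: ...` form.
    bullets = re.findall(r"^- Q: .+ → A: .+$", text, re.M)
    return len(sessions) >= 1 and 0 < len(bullets) <= max_questions

print(validate_clarifications(spec))  # → True
```

A spec with more than five recorded answers, or with bullets that deviate from the canonical form, would fail this check.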
Behavior rules:

- If no meaningful ambiguities are found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If the spec file is missing, instruct the user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech-stack questions unless their absence blocks functional clarity.
- Respect user early-termination signals ("stop", "done", "proceed").
- If no questions were asked due to full coverage, output a compact coverage summary (all categories Clear), then suggest advancing.
- If the quota is reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with a rationale.

Context for prioritization: $ARGUMENTS
64
.opencode/command/speckit.constitution.md
Normal file
@@ -0,0 +1,64 @@
---
description: Create or update the local workflow constitution and propagate principle changes into dependent speckit artifacts.
handoffs:
  - label: Build Specification
    agent: speckit.specify
    prompt: Create the feature specification under the updated constitution
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

You are updating the local constitution at `.specify/memory/constitution.md`. This file is the workflow-facing constitutional source for the repository and must align with:

- `.kilo/skills/semantics-core/SKILL.md`
- `.kilo/skills/semantics-contracts/SKILL.md`
- `.kilo/skills/semantics-belief/SKILL.md`
- `.kilo/skills/semantics-testing/SKILL.md`
- `README.md`
- `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
- `docs/adr/*`

Execution flow:

1. Load the existing constitution at `.specify/memory/constitution.md`.
2. Identify placeholders, stale assumptions, or principles that conflict with the current Rust MCP repository.
3. Derive concrete constitutional text from the user input and repository reality.
4. Version the constitution using semantic versioning:
   - MAJOR: incompatible governance/principle change
   - MINOR: new principle or materially expanded guidance
   - PATCH: clarifications and wording cleanup
5. Replace placeholders with concrete, testable principles and governance text.
6. Propagate consistency updates into dependent artifacts:
   - `.specify/templates/plan-template.md`
   - `.specify/templates/spec-template.md`
   - `.specify/templates/tasks-template.md`
   - `.specify/templates/test-docs-template.md`
   - `.specify/templates/ux-reference-template.md`
   - `.kilo/workflows/speckit.plan.md`
   - `.kilo/workflows/speckit.tasks.md`
   - `.kilo/workflows/speckit.implement.md`
   - `.kilo/workflows/speckit.test.md`
   - `.kilo/workflows/speckit.analyze.md`
7. Prepend a sync impact report as an HTML comment at the top of the constitution.
8. Validate:
   - no unexplained placeholders remain
   - version and dates are consistent
   - principles are declarative and testable
9. Write the result back to `.specify/memory/constitution.md`.
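The MAJOR/MINOR/PATCH rules in step 4 can be sketched as a tiny helper. The change-type labels here are assumptions for illustration, not part of the workflow:

```python
def bump(version: str, change: str) -> str:
    """Apply the constitution versioning rules: MAJOR for incompatible
    governance changes, MINOR for new principles, PATCH for wording cleanup."""
    major, minor, patch = (int(p) for p in version.split("."))
    if change == "incompatible-governance":
        return f"{major + 1}.0.0"
    if change == "new-principle":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # clarification / wording cleanup

print(bump("2.1.3", "new-principle"))  # → 2.2.0
```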

## Output

Summarize:

- new version and bump rationale
- affected templates/workflows
- any deferred follow-ups
- suggested commit message
74
.opencode/command/speckit.implement.md
Normal file
@@ -0,0 +1,74 @@
---
description: Execute the implementation plan by processing the active tasks.md for the Rust MCP repository.
handoffs:
  - label: Audit & Verify (Tester)
    agent: qa-tester
    prompt: Perform semantic audit, executable verification, and contract checks for the completed task batch.
    send: true
  - label: Orchestration Control
    agent: swarm-master
    prompt: Review tester feedback and coordinate next steps.
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` and locate the active feature artifacts.
2. If `checklists/` exists, evaluate checklist completion status before implementation proceeds.
3. Load implementation context from:
   - `tasks.md`
   - `plan.md`
   - `spec.md`
   - `ux_reference.md`
   - `contracts/modules.md` when present
   - `research.md`, `data-model.md`, `quickstart.md` when present
   - `.specify/memory/constitution.md`
   - `README.md`
   - `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
   - relevant `docs/adr/*.md`
4. Parse tasks by phase, dependencies, story ownership, and guardrails.
5. Execute implementation phase-by-phase with strict semantic and verification discipline.

## Repository Reality Rules

- Default source paths are `src/**/*.rs` and `tests/*.rs`.
- Active feature docs always live under `specs/<feature>/...` and are discovered via the `.specify/scripts/bash/*` helpers.
- The default verification stack is Rust-native and repository-real:
  - `cargo test --all-targets --all-features -- --nocapture`
  - `cargo clippy --all-targets --all-features -- -D warnings` when applicable
  - `python3 scripts/static_verify.py`
- Do not fall back to `backend/`, `frontend/`, `pytest`, `npm`, or `__tests__/` conventions unless the active feature genuinely introduces such a surface.

## Semantic Execution Rules

- Preserve and extend canonical `[DEF]` anchors and metadata.
- Match contract density to effective complexity.
- Keep accepted-path and rejected-path memory intact.
- Do not silently restore an ADR- or contract-rejected branch.
- For C4/C5 Rust orchestration flows, account for the belief runtime where required by repository norms and local contracts.
- Treat pseudo-semantic markup as invalid.

## Progress and Acceptance

- Mark tasks complete only after local verification succeeds.
- Handoff to the tester must include touched files, declared complexity, contract expectations, ADR guardrails, and executed verifiers.
- Final acceptance requires explicit evidence that the `speckit.test` workflow-equivalent verification was executed.
- `.kilo/plans/*` may exist as internal assistant scratch context, but it is not part of the speckit feature output surface and must not replace `specs/<feature>/...` artifacts.

## Completion Gate

No task batch is complete if any of the following remain in the touched scope:

- broken or unclosed anchors
- missing complexity-required metadata
- unresolved critical contract gaps
- rejected-path regression
- required verification not executed
144
.opencode/command/speckit.plan.md
Normal file
@@ -0,0 +1,144 @@
---
description: Execute the Rust MCP implementation planning workflow and generate research, design, contracts, and quickstart artifacts.
handoffs:
  - label: Create Tasks
    agent: speckit.tasks
    prompt: Break the Rust MCP plan into executable tasks
    send: true
  - label: Create Checklist
    agent: speckit.checklist
    prompt: Create a requirements-quality checklist for the active Rust MCP feature
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from the repo root and parse `FEATURE_SPEC`, `IMPL_PLAN`, `SPECS_DIR`, and `BRANCH`.
   - `IMPL_PLAN` is the authoritative path for `plan.md` inside `specs/<feature>/`.
   - Derive `FEATURE_DIR` from `IMPL_PLAN` and write every planning artifact there.
   - Never treat `.kilo/plans/*` as workflow output for `/speckit.plan`.

2. **Load canonical planning context**:
   - `README.md`
   - `Cargo.toml`
   - `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
   - `docs/adr/ADR-0001-semantic-rust-module-layout.md`
   - `docs/adr/ADR-0002-belief-state-runtime.md`
   - `docs/adr/ADR-0003-comment-anchored-semantic-protocol.md`
   - `docs/adr/ADR-0004-task-shaped-server-routing.md`
   - `.specify/memory/constitution.md`
   - `.kilo/skills/semantics-core/SKILL.md`
   - `.kilo/skills/semantics-contracts/SKILL.md`
   - `.kilo/skills/semantics-testing/SKILL.md`
   - `.specify/templates/plan-template.md`

3. **Execute the planning workflow** using the template structure:
   - Fill `Technical Context` for the current repository reality: Rust crate, task-shaped MCP server, semantic contracts, belief runtime, and repository-local verification.
   - Fill `Constitution Check` using the local constitution, the semantic protocol compliance doc, and the ADR set.
   - ERROR if a blocking constitutional or semantic conflict is discovered and cannot be justified.
   - Phase 0: generate `research.md` in `FEATURE_DIR`, resolving all material unknowns.
   - Phase 1: generate `data-model.md`, `contracts/modules.md`, optional machine-readable contract artifacts, and `quickstart.md` in `FEATURE_DIR`.
   - Materialize blocking ADR references and planning decisions inside the plan and downstream contracts.
   - Run `.specify/scripts/bash/update-agent-context.sh kilocode` after planning artifacts are written.

4. **Stop and report** after planning artifacts are complete. Report the branch, the `plan.md` path, generated artifacts, and blocking ADR/decision-memory outcomes.

## Phase 0: Research

Research must resolve only implementation-shaping unknowns that matter for this Rust MCP repository, such as:

- crate/module placement under `src/`
- `tests/*.rs` strategy and required fixture coverage
- MCP tool/resource schema design
- runtime evidence and belief-state coverage
- semantic validation boundaries and the static verification workflow
- task-shaped routing, workspace safety, and error-envelope design

Write `research.md` with concise sections:

- Decision
- Rationale
- Alternatives Considered
- Impact On Contracts / Tasks

Use `[NEED_CONTEXT: target]` instead of inventing relation targets, DTO names, or module boundaries that cannot be grounded in repo context.


## Phase 1: Design, ADR Continuity, and Contracts

### UX / Interaction Validation

Validate the proposed design against `ux_reference.md` as an **interaction reference** for MCP callers, CLI/operator flows, result envelopes, warnings, and recovery guidance.

If the planned architecture degrades the promised interaction model, deterministic recovery path, or context-budget behavior, stop and warn the user.

### Data Model Output

Generate `data-model.md` for Rust/MCP domain entities such as:

- tool request/response structs
- semantic query payloads
- runtime evidence envelopes
- workspace/checkpoint/index/security entities
- contract and relation traceability data

### Global ADR Continuity

Before task decomposition, planning must identify any repo-shaping decisions this feature depends on or extends:

- Rust module layout and decomposition
- task-shaped tool/resource routing
- belief-state runtime behavior
- semantic comment-anchor rules
- payload/schema stability decisions

For each durable choice, ensure the plan references the relevant ADR and explicitly records accepted and rejected paths.

### Contract Design Output

Generate `contracts/modules.md` as the primary design contract for implementation. Contracts must:

- use short semantic IDs
- classify each planned module/component with `@COMPLEXITY` 1-5
- use the canonical relation syntax `@RELATION PREDICATE -> TARGET_ID`
- preserve accepted-path and rejected-path memory via `@RATIONALE` and `@REJECTED` where needed
- describe MCP tools/resources, runtime evidence, validation envelopes, and semantic boundaries instead of inventing backend/frontend layers

Complexity guidance for this repository:

- **Complexity 1**: anchors only
- **Complexity 2**: `@PURPOSE`
- **Complexity 3**: `@PURPOSE`, `@RELATION`
- **Complexity 4**: `@PURPOSE`, `@RELATION`, `@PRE`, `@POST`, `@SIDE_EFFECT`; Rust orchestration paths should account for belief runtime markers before mutation or return
- **Complexity 5**: level 4 plus `@DATA_CONTRACT`, `@INVARIANT`, and explicit decision-memory continuity

If a planned contract depends on an unknown schema, relation target, or ADR identity, emit `[NEED_CONTEXT: target]` instead of fabricating placeholders.
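A hypothetical `contracts/modules.md` entry at complexity 4, showing the ID, complexity, and relation conventions above. The module name, relation targets, and contract text are illustrative assumptions, not real repository contracts:

```markdown
## TOOL_ENVELOPE_MAPPER

@COMPLEXITY 4
@PURPOSE Map task-shaped MCP tool results into the canonical response envelope.
@RELATION CONSUMES -> TASK_ROUTER
@RELATION PRODUCES -> RESULT_ENVELOPE
@PRE Tool output is a validated, task-shaped payload.
@POST Envelope carries status, warnings, and recovery guidance.
@SIDE_EFFECT Emits belief runtime markers before returning.
@RATIONALE Preserve task-shaped MCP parity across all tools.
@REJECTED Ad-hoc per-tool response shapes.
```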

### Optional Machine-Readable Contracts

You MAY generate machine-readable artifacts in `contracts/` only when they mirror the actual MCP tool/resource payloads of this Rust server. Do **not** default to REST/OpenAPI or frontend-sync artifacts unless the feature truly introduces them.

### Quickstart Output

Generate `quickstart.md` using real repository verification paths, typically:

- start or exercise the MCP server entrypoint
- invoke relevant MCP tools/resources
- validate expected envelopes and recovery flows
- run `cargo test --all-targets --all-features -- --nocapture`
- run `cargo clippy --all-targets --all-features -- -D warnings` when applicable
- run `python3 scripts/static_verify.py`

## Key Rules

- Use absolute paths in workflow execution.
- Planning must reflect the current repository structure (`src/**/*.rs`, `tests/*.rs`, `docs/adr/*`) rather than legacy Python/Svelte examples.
- Do not reference `.ai/*` or `.kilocode/*` paths.
- Do not write any feature planning artifact outside `specs/<feature>/...`.
- Do not hand off to `speckit.tasks` until blocking ADR continuity and rejected-path guardrails are explicit.
56
.opencode/command/speckit.semantics.md
Normal file
@@ -0,0 +1,56 @@
---
description: Maintain semantic integrity by reindexing, auditing, and reviewing the Rust MCP repository through AXIOM MCP tools.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Ensure the repository adheres to the active GRACE semantic protocol using AXIOM MCP as the primary execution engine: reindex, measure semantic health, audit contracts, audit decision-memory continuity, and optionally route contract-safe fixes.

## Operating Constraints

1. **ROLE: Orchestrator** — coordinate semantic maintenance at the workflow level.
2. **MCP-FIRST** — use AXIOM task-shaped tools for discovery, context, audit, impact analysis, and safe mutation planning.
3. **STRICT ADHERENCE** — follow the local semantic authorities:
   - `.kilo/skills/semantics-core/SKILL.md`
   - `.kilo/skills/semantics-contracts/SKILL.md`
   - `.kilo/skills/semantics-testing/SKILL.md`
   - `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
   - `docs/adr/*`
4. **NON-DESTRUCTIVE** — do not remove business logic; only add or correct semantic markup unless the user requested implementation changes.
5. **NO PSEUDO-CONTRACTS** — do not mechanically inject fake semantic boilerplate.
6. **ID NAMING** — use short domain-driven IDs, never language import paths or filesystem-shaped IDs as the semantic primary key.
7. **DECISION-MEMORY CONTINUITY** — audit ADRs, preventive task guardrails, and local `@RATIONALE` / `@REJECTED` as a single chain.

## Execution Steps

1. Reindex the semantic workspace.
2. Measure workspace semantic health.
3. Audit top issues:
   - broken anchors or malformed DEF regions
   - missing complexity-required metadata
   - unresolved relations
   - isolated critical contracts
   - missing ADR continuity
   - restored rejected paths
   - retained workaround logic lacking local decision-memory tags
4. Build remediation context for the top failing contracts.
5. If `$ARGUMENTS` contains `fix` or `apply`, route to an implementation/curation agent instead of applying naive text edits.
6. Re-run the audit and report PASS/FAIL.

## Output

Return:

- health metrics
- PASS/FAIL status
- top issues
- decision-memory summary
- action taken or handoff initiated
89
.opencode/command/speckit.specify.md
Normal file
@@ -0,0 +1,89 @@
---
description: Create or update the feature specification from a natural-language feature description for the Rust MCP repository.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a Rust MCP implementation plan for the active feature
  - label: Clarify Spec Requirements
    agent: speckit.clarify
    prompt: Clarify specification requirements
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

The feature description is the text passed to `/speckit.specify`.

1. Generate a concise short name (2-4 words) for the feature branch.
2. Check existing branches/spec directories and run `.specify/scripts/bash/create-new-feature.sh --json ...` exactly once.
   - This step is the source of truth for the feature lifecycle.
   - It MUST create and check out the git branch `NNN-short-name` when git is available.
   - It MUST create `specs/NNN-short-name/` and initialize `spec.md` there.
   - Treat the returned `SPEC_FILE` path as authoritative and derive `FEATURE_DIR` from it.
3. Load these sources before writing the spec:
   - `.specify/templates/spec-template.md`
   - `.specify/templates/ux-reference-template.md`
   - `.specify/memory/constitution.md`
   - `README.md`
   - `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
   - relevant `docs/adr/*` when the feature clearly touches an existing architectural lane
4. Create or update the following artifacts inside `FEATURE_DIR` only:
   - `spec.md`
   - `ux_reference.md`
   - `checklists/requirements.md`
5. Generate `ux_reference.md` as an **interaction reference** for MCP callers, CLI/operator flows, result envelopes, warnings, and recovery behavior.
6. Write `spec.md` focused on **what** the user/operator needs and **why**, not how the Rust crate will implement it.
7. Validate the spec against a requirements-quality checklist and iterate until major issues are resolved.

## Specification Rules

- Use domain language appropriate for this repository: MCP callers, tools, resources, runtime evidence, workspace flows, operator recovery, semantic contracts.
- Avoid leaking implementation details such as module names, crates, file-level refactors, or exact Rust APIs.
- Use `[NEEDS CLARIFICATION: ...]` only for truly blocking product ambiguities. Maximum 3 markers.
- Prefer informed defaults grounded in repository context over unnecessary clarification.
- Do not assume web-app, backend/frontend, or Svelte UI flows unless the feature actually introduces them.
- Do not write feature outputs to `.kilo/plans/`, `.kilo/reports/`, or any path outside `specs/<feature>/...`.

## UX / Interaction Reference Rules

- `ux_reference.md` is mandatory, but for this repository it is usually an interaction-reference artifact rather than a screen-design artifact.
- Capture:
  - caller/operator persona
  - happy-path invocation flow
  - result envelope expectations
  - warning/degraded states
  - failure recovery guidance
  - canonical terminology
- Only include UI-specific `@UX_*` guidance when the feature truly has a user interface component.

## Quality Validation

Generate `FEATURE_DIR/checklists/requirements.md` and ensure it validates:

- no implementation leakage into `spec.md`
- no stale Python/Svelte assumptions unless the feature explicitly needs them
- compatibility with the Rust MCP/task-shaped tool surface
- measurable success criteria
- explicit edge cases and recovery paths
- decision-memory readiness for downstream planning

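A hypothetical shape for `checklists/requirements.md` covering the validation points above (the item wording is illustrative, not a prescribed template):

```markdown
# Requirements Quality Checklist

- [ ] spec.md contains no implementation leakage (module names, crates, Rust APIs)
- [ ] No stale Python/Svelte assumptions remain
- [ ] Requirements fit the Rust MCP/task-shaped tool surface
- [ ] Every success criterion is measurable
- [ ] Edge cases and recovery paths are explicit
- [ ] Decision-memory hooks are ready for downstream planning
```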
If unresolved clarification markers remain, present them in a compact, high-impact format and stop for user input.

## Completion Report

Report:

- branch name
- feature directory under `specs/`
- `spec.md` path
- `ux_reference.md` path
- checklist path and status
- readiness for `/speckit.clarify` or `/speckit.plan`
140
.opencode/command/speckit.tasks.md
Normal file
@@ -0,0 +1,140 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the active Rust MCP feature.
handoffs:
  - label: Analyze For Consistency
    agent: speckit.analyze
    prompt: Run a cross-artifact consistency analysis for the Rust MCP feature
    send: true
  - label: Implement Project
    agent: speckit.implement
    prompt: Start implementation in phases for the Rust MCP feature
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse `FEATURE_DIR` and `AVAILABLE_DOCS`.
   - `FEATURE_DIR` under `specs/<feature>/` is the only valid output location for `tasks.md`.

2. **Load design documents** from `FEATURE_DIR`:
   - **Required**: `plan.md`, `spec.md`, `ux_reference.md`
   - **Optional**: `data-model.md`, `contracts/`, `research.md`, `quickstart.md`
   - **Required when referenced by the plan**: ADR artifacts under `docs/adr/` or feature-local planning docs

3. **Build the task model**:
   - Extract user stories and priorities from `spec.md`
   - Extract repository structure, tool/resource scope, verification stack, and semantic constraints from `plan.md`
   - Extract accepted-path and rejected-path memory from ADRs and `contracts/modules.md`
   - Map entities, tool payloads, runtime evidence, and verification scenarios to stories
   - Generate tasks grouped by story and ordered by dependency
   - Validate that no task schedules an ADR-rejected path

4. **Generate `tasks.md`** using `.specify/templates/tasks-template.md` as the structure:
   - Phase 1: Setup
   - Phase 2: Foundational work
   - Phase 3+: one phase per user story in priority order
   - Final phase: polish and cross-cutting verification
   - Every task must use the strict checklist format and include exact file paths
   - Write the final document to `FEATURE_DIR/tasks.md`, never to `.kilo/plans/` or other side folders

5. **Report** the generated path and summarize:
   - total task count
   - task count per user story
   - parallel opportunities
   - story-level independent verification criteria
   - inherited ADR/guardrail coverage

## Task Generation Rules

### Story Organization

Tasks MUST be grouped by user story so each story can be implemented and verified independently.

### Required Format

Every task MUST follow:

```text
- [ ] T001 [P] [US1] Description with exact file path
```

Rules:

1. The `- [ ]` checkbox is mandatory
2. Sequential task IDs (`T001`, `T002`, ...)
3. `[P]` only for truly parallelizable tasks
4. `[USx]` required only for user-story phases
5. Exact file paths required in the description
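The required task-line format can be sketched as a Python check. The regex and sample lines are illustrative, not part of the workflow tooling:

```python
import re

# Checkbox, three-digit task ID, optional [P], optional [USn], then a description.
TASK_LINE = re.compile(r"^- \[ \] T\d{3}( \[P\])?( \[US\d+\])? \S.*$")

samples = [
    "- [ ] T001 [P] [US1] Add envelope mapping in src/server/tools.rs",
    "- [ ] T002 Update docs/adr/ADR-0004-task-shaped-server-routing.md",
    "- [] T003 missing space inside the checkbox",
]
for line in samples:
    print(bool(TASK_LINE.match(line)))  # → True, True, False
```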
### Rust / MCP Pathing

Prefer real repository paths such as:

- `src/server/*.rs`
- `src/services/**/*.rs`
- `src/models/*.rs`
- `src/semantics/*.rs`
- `tests/*.rs`
- `docs/adr/*.md`
- `specs/<feature>/contracts/*.md`

Do **not** generate default tasks for:

- `backend/` or `frontend/`
- `*.py`
- `*.svelte`
- `__tests__/`

### Verification Discipline

Each story phase must end with:

- a verification task against `ux_reference.md`, interpreted as the caller/operator interaction contract
- a semantic audit / verification task tied to repository validators and touched contracts

Typical verification tasks may include:

- focused `cargo test` commands
- `cargo test --all-targets --all-features -- --nocapture`
- `cargo clippy --all-targets --all-features -- -D warnings`
- `python3 scripts/static_verify.py`

Only include the commands that are truly required by the feature scope.

### Contract and ADR Propagation

If a task implements or depends on a guarded contract, append a concise guardrail summary derived from `@RATIONALE` and `@REJECTED`.

Examples:

- `- [ ] T021 [US1] Implement deterministic tool envelope mapping in src/server/tools.rs (RATIONALE: preserve task-shaped MCP parity; REJECTED: ad-hoc per-tool response shapes)`
- `- [ ] T033 [US2] Add runtime evidence verification in tests/server_protocol.rs (RATIONALE: C4/C5 flows must expose belief markers; REJECTED: relying on manual log inspection only)`

If no safe executable task wording exists because the accepted path is still unclear, stop and emit `[NEED_CONTEXT: target]`.

### Test Tasks

Tests are optional only when the feature truly has no new verification surface. In this repository, test tasks are usually expected for:

- new MCP tools/resources
- new query/mutation flows
- C4/C5 semantic contracts
- runtime evidence / belief-state behavior
- rejected-path regression coverage

### Decision-Memory Validation Gate

Before finalizing `tasks.md`, verify that:

- blocking ADRs are inherited into setup/foundational or downstream story tasks
- no task text schedules a rejected path
- story tasks remain executable within the actual Rust crate structure
- at least one explicit verification task protects against rejected-path regression
30
.opencode/command/speckit.taskstoissues.md
Normal file
@@ -0,0 +1,30 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in arguments such as "I'm Groot", use escape syntax, e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:

   ```bash
   git config --get remote.origin.url
   ```

   > [!CAUTION]
   > ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL

1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that corresponds to the Git remote.

   > [!CAUTION]
   > UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
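The GitHub-remote guard above can be sketched in POSIX shell; the function name and the normalization it performs are illustrative, not part of the workflow's tooling:

```shell
# Hypothetical guard: print "owner/repo" for a GitHub remote URL and fail for
# any other remote, so issue creation can be skipped safely.
normalize_github_repo() {
  case "$1" in
    https://github.com/*) repo="${1#https://github.com/}" ;;
    git@github.com:*)     repo="${1#git@github.com:}" ;;
    *) return 1 ;;  # not GitHub: caller must abort issue creation
  esac
  printf '%s\n' "${repo%.git}"
}

# In the real step this would receive: $(git config --get remote.origin.url)
normalize_github_repo 'git@github.com:acme/rust-mcp.git' || echo 'not GitHub; aborting'
# → acme/rust-mcp
```

Routing every issue-creation call through such a normalizer makes the "repositories that do not match the remote" failure mode structurally impossible rather than a matter of agent discipline.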
118
.opencode/command/speckit.test.md
Normal file
@@ -0,0 +1,118 @@
---
description: Execute semantic audit and Rust-native testing for the active feature batch.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Run the verification loop for the touched Rust MCP scope: semantic audit, decision-memory audit, executable tests, logic review, and documentation of coverage/results.

## Operating Constraints

1. **NEVER delete existing tests** unless the user explicitly requests removal.
2. **NEVER duplicate tests** when existing `tests/*.rs` coverage already validates the same contract.
3. **Decision-memory regression guard**: tests and audits must not silently normalize any path documented as rejected.
4. **Rust-native structure**: prefer existing integration/protocol test organization under `tests/`.

## Execution Steps

### 1. Analyze Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` and determine:

- `FEATURE_DIR`
- touched implementation tasks from `tasks.md`
- affected `.rs` files
- relevant ADRs, `@RATIONALE`, and `@REJECTED` guardrails

All test documentation emitted by this workflow belongs under `FEATURE_DIR/tests/` or other files inside `specs/<feature>/...`, never under `.kilo/plans/`.

### 2. Load Relevant Artifacts

Load only the necessary portions of:

- `tasks.md`
- `plan.md`
- `contracts/modules.md` when present
- `quickstart.md` when present
- `.specify/memory/constitution.md`
- `README.md`
- `docs/SEMANTIC_PROTOCOL_COMPLIANCE.md`
- relevant `docs/adr/*.md`

### 3. Coverage Matrix

Build a compact matrix:

| Module / Flow | File | Existing Tests | Complexity | Guardrails | Needed Verification |
|---------------|------|----------------|------------|------------|---------------------|

### 4. Semantic Audit and Logic Review

Before writing or executing tests, perform a semantic audit of the touched scope:

1. Use the AXIOM semantic validation path where available.
2. Reject malformed or pseudo-semantic markup.
3. Verify contract density matches effective complexity.
4. Verify C4/C5 Rust flows account for belief runtime markers (`belief_scope`, `reason`, `reflect`, `explore`) when required by the contract and repository norms.
5. Verify no touched code silently restores an ADR- or contract-rejected path.
6. Emulate the algorithm mentally to ensure `@PRE`, `@POST`, `@INVARIANT`, and declared side effects remain coherent.

If the audit fails, emit `[AUDIT_FAIL: semantic_noncompliance | contract_mismatch | logic_mismatch | rejected_path_regression]` with concrete file-based reasons.

### 5. Test Writing / Updating

When test additions are needed:

- prefer `tests/*.rs` integration/protocol coverage
- use deterministic fixtures rather than logic mirrors
- trace tests back to semantic contracts and ADR guardrails
- add explicit rejected-path regression coverage when the touched scope has a forbidden alternative
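Tests in this style stay self-contained by asserting over deterministic fixtures instead of mirroring server logic. A minimal sketch, assuming hypothetical marker names, fixture contents, and rejected phrase (this is not the repository's actual protocol shape):

```rust
// Illustrative fixture-style check: the response body, the marker list, and
// the rejected phrase below are hypothetical stand-ins.
fn has_all_belief_markers(body: &str) -> bool {
    ["belief_scope", "reason", "reflect", "explore"]
        .iter()
        .all(|marker| body.contains(marker))
}

fn main() {
    // Deterministic fixture standing in for a C4/C5 tool response.
    let fixture = r#"{"belief_scope":"session","reason":"cache miss","reflect":"retry once","explore":"none"}"#;

    // Contract check: all required belief runtime markers are present.
    assert!(has_all_belief_markers(fixture));

    // Rejected-path regression guard: the forbidden alternative never reappears.
    assert!(!fixture.contains("manual log inspection"));

    println!("fixture checks passed");
}
```

In a real `tests/*.rs` file the fixture would come from the server's actual response type, and each assertion would cite the semantic contract or ADR it protects.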
For non-UI Rust MCP flows, UX verification means validating interaction envelopes, warnings, recovery messaging, and tool/resource discoverability promised by `ux_reference.md`.

### 6. Execute Verifiers

Run the smallest truthful verifier set for the touched scope, typically chosen from:

```bash
cargo test --all-targets --all-features -- --nocapture
cargo clippy --all-targets --all-features -- -D warnings
python3 scripts/static_verify.py
```

Use narrower `cargo test <target>` runs when they are sufficient, then widen verification when finalizing the feature batch.

### 7. Test Documentation

Create or update `specs/<feature>/tests/` documentation using `.specify/templates/test-docs-template.md`.

Document:

- coverage summary
- semantic audit verdict
- commands run
- failing or waived cases
- decision-memory regression coverage

### 8. Update Tasks

Mark test tasks complete only after the semantic audit and executable verification succeed.

## Output

Produce a Markdown test report containing:

- coverage summary
- commands executed
- semantic audit verdict
- ADR / rejected-path coverage status
- issues found and resolutions
- remaining risk or debt