semantic cleanup

2026-03-10 19:38:10 +03:00
parent 31717870e3
commit 542835e0ff
31 changed files with 5392 additions and 6647 deletions

.kilocode/setup-script Executable file

@@ -0,0 +1,39 @@
#!/bin/bash
# Kilo Code Worktree Setup Script
# This script runs before the agent starts in a worktree (new sessions only).
#
# Available environment variables:
# WORKTREE_PATH - Absolute path to the worktree directory
# REPO_PATH - Absolute path to the main repository
#
# Example tasks:
# - Copy .env files from main repo
# - Install dependencies
# - Run database migrations
# - Set up local configuration
set -e # Exit on error
echo "Setting up worktree: $WORKTREE_PATH"
# Uncomment and modify as needed:
# Copy environment files
# if [ -f "$REPO_PATH/.env" ]; then
# cp "$REPO_PATH/.env" "$WORKTREE_PATH/.env"
# echo "Copied .env"
# fi
# Install dependencies (Node.js)
# if [ -f "$WORKTREE_PATH/package.json" ]; then
# cd "$WORKTREE_PATH"
# npm install
# fi
# Install dependencies (Python)
# if [ -f "$WORKTREE_PATH/requirements.txt" ]; then
# cd "$WORKTREE_PATH"
# pip install -r requirements.txt
# fi
echo "Setup complete!"

@@ -1,5 +1,5 @@
---
description: Maintain semantic integrity by generating maps and auditing compliance reports.
description: Maintain semantic integrity via multi-agent delegation. Analyzes codebase, delegates markup tasks to Semantic Engineer, verifies via Reviewer Agent, and reports status.
---
## User Input
@@ -12,61 +12,62 @@ You **MUST** consider the user input before proceeding (if not empty).
## Goal
Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.
Ensure the codebase adheres 100% to the semantic standards defined in `.ai/standards/semantics.md` (GRACE-Poly Protocol). You are the **Manager/Supervisor**. You do not write code. You manage the queue, delegate files to the Semantic Engineer, audit their work via the Reviewer Agent, and commit successful changes.
## Operating Constraints
1. **ROLE: Orchestrator**: You are responsible for the high-level coordination of semantic maintenance.
2. **STRICT ADHERENCE**: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
3. **NON-DESTRUCTIVE**: Do not remove existing code logic; only add or update semantic annotations.
4. **TIER AWARENESS**: Prioritize CRITICAL and STANDARD modules for compliance fixes.
1. **ROLE: Orchestrator**: High-level coordination ONLY. Do not output raw code diffs yourself.
2. **DELEGATION PATTERN**: Strict `Orchestrator -> Engineer -> Reviewer -> Orchestrator` loop.
3. **FAIL-FAST METRICS**: If the Reviewer Agent rejects a file 3 times in a row, drop the file from the current queue and mark it as `[HUMAN_INTERVENTION_REQUIRED]`.
4. **TIER AWARENESS**: CRITICAL files MUST be processed first. A failure in a CRITICAL file blocks the entire pipeline.
## Execution Steps
### 1. Generate Semantic Map
Run the generator script from the repository root:
### 1. Generate Semantic State (Analyze)
Run the generator script to map the current reality:
```bash
python3 generate_semantic_map.py
```
Parse the output (Global Score, Critical Parsing Errors, Files with Score < 0.7).
### 2. Analyze Compliance Status
### 2. Formulate Task Queue
Create an execution queue based on the report. Priority:
- **Priority 1 (Blockers)**: Files with "Critical Parsing Errors" (unclosed `[/DEF]` anchors).
- **Priority 2 (Tier 1)**: `CRITICAL` tier modules missing mandatory tags (`@PRE`, `@POST`, `belief_scope`).
- **Priority 3 (Tier 2)**: `STANDARD` modules with missing graph relations (`@RELATION`).
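The three-level ordering above can be sketched as a simple priority sort. This is only an illustration: the issue-category keys and the `issue` field name are hypothetical placeholders, since the real report schema is whatever `generate_semantic_map.py` emits.

```python
# Hypothetical issue categories mapped to the queue priorities above.
PRIORITY = {
    "parse_error": 1,                # Priority 1: unclosed [/DEF] anchors (blockers)
    "critical_missing_tags": 2,      # Priority 2: CRITICAL tier missing @PRE/@POST/belief_scope
    "standard_missing_relation": 3,  # Priority 3: STANDARD tier missing @RELATION
}

def build_queue(report_entries):
    """Order files so blockers come first, then CRITICAL-tier gaps, then STANDARD."""
    # Unknown categories sink to the end rather than raising.
    return sorted(report_entries, key=lambda e: PRIORITY.get(e["issue"], 99))
```

Because `sorted` is stable, files within the same priority keep their report order.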
**Parse the output to identify**:
- Path to the latest report in `semantics/reports/semantic_report_*.md`.
- Global Compliance Score.
- Total count of Global Errors and Warnings.
### 3. The Delegation Loop (For each file in the queue)
For every target file, execute this exact sequence:
### 3. Audit Critical Issues
* **Step 3A (Delegate to Worker):** Send the file path and the specific violation from the report to the **Semantic Markup Agent (Engineer)**.
*Prompt*: `"Fix semantic violations in [FILE]. Current issues: [ISSUES]. Apply GRACE-Poly standards without changing business logic."`
* **Step 3B (Delegate to Auditor):** Once the Engineer returns the modified file, send it to the **Reviewer Agent (Auditor)**.
*Prompt*: `"Verify GRACE-Poly compliance for [FILE]. Check for paired [DEF] anchors, complete contracts, and belief_scope usage. Return PASS or FAIL with specific line errors."*
* **Step 3C (Evaluate):**
* If Auditor returns `PASS`: Apply the diff to the codebase. Move to the next file.
* If Auditor returns `FAIL`: Send the Auditor's error report back to the Engineer (Step 3A). Repeat max 3 times.
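The 3A-3C sequence, including the three-attempt fail-fast rule, can be sketched as a loop. The `engineer` and `reviewer` callables below are hypothetical stand-ins for the real agent delegations; only the control flow (retry up to 3 times, then escalate) mirrors the steps above.

```python
MAX_ATTEMPTS = 3  # Fail-fast cap from the Operating Constraints

def run_queue(queue, engineer, reviewer):
    """Process (file, issues) pairs; escalate after 3 consecutive reviewer rejections."""
    results, escalations = {}, []
    for path, issues in queue:
        feedback, last_note = issues, ""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            diff = engineer(path, feedback)           # Step 3A: delegate the fix
            verdict, note = reviewer(path, diff)      # Step 3B: audit the result
            if verdict == "PASS":                     # Step 3C: apply, move on
                results[path] = f"PASS ({attempt} attempt{'s' if attempt > 1 else ''})"
                break
            feedback, last_note = note, note          # feed auditor errors back to 3A
        else:
            results[path] = "FAILED"                  # [HUMAN_INTERVENTION_REQUIRED]
            escalations.append((path, last_note))
    return results, escalations
```

The `for`/`else` idiom fires the escalation branch only when all attempts are exhausted without a `PASS`.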
Read the latest report and extract:
- **Critical Parsing Errors**: Unclosed anchors or mismatched tags.
- **Low-Score Files**: Files with score < 0.7 or marked with 🔴.
- **Missing Mandatory Tags**: Specifically for CRITICAL tier modules.
### 4. Verification
Once the queue is empty, re-run `python3 generate_semantic_map.py` to prove the metrics have improved.
### 4. Formulate Remediation Plan
## Output Format
Create a list of files requiring immediate attention:
1. **Priority 1**: Fix all "Critical Parsing Errors" (unclosed anchors).
2. **Priority 2**: Add missing mandatory tags for CRITICAL modules.
3. **Priority 3**: Improve coverage for STANDARD modules.
Return a structured summary of the operation:
### 5. Execute Fixes (Optional/Handoff)
```text
=== GRACE SEMANTIC ORCHESTRATION REPORT ===
Initial Global Score: [X]%
Final Global Score: [Y]%
Status: [PASS / BLOCKED]
If $ARGUMENTS contains "fix" or "apply":
- For each target file, use `read_file` to get context.
- Apply semantic fixes using `apply_diff`, preserving all code logic.
- Re-run `python3 generate_semantic_map.py` to verify the fix.
## Output
Provide a summary of the semantic state:
- **Global Score**: [X]%
- **Status**: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
- **Top Issues**: List top 3-5 files needing attention.
- **Action Taken**: Summary of maps generated or fixes applied.
Files Processed:
1. [file_path] - [PASS (1 attempt) | PASS (2 attempts) | FAILED]
2. ...
Escalations (Human Intervention Required):
- [file_path]: Failed auditor review 3 times. Reason: [Last Auditor Note].
```
## Context
$ARGUMENTS