---
description: Maintain semantic integrity by generating maps and auditing compliance reports.
---
## User Input
$ARGUMENTS
You MUST consider the user input before proceeding (if not empty).
## Goal
Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.
## Operating Constraints
- ROLE (Orchestrator): You are responsible for the high-level coordination of semantic maintenance.
- STRICT ADHERENCE: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
- NON-DESTRUCTIVE: Do not remove existing code logic; only add or update semantic annotations.
- TIER AWARENESS: Prioritize CRITICAL and STANDARD modules for compliance fixes.
- NO PSEUDO-CONTRACTS (CRITICAL): You are STRICTLY FORBIDDEN from using automated scripts (e.g., Python/Bash/sed) to mechanically inject boilerplate, placeholders, or "pseudo-contracts" (such as `# @PURPOSE: Semantic contract placeholder.` or `# @PRE: Inputs satisfy function contract.`) merely to artificially inflate the compliance score. Every semantic tag, anchor, and contract you add MUST reflect a genuine, deep understanding of the specific code's actual logic and business requirements. Automated "stubbing" of semantics is classified as codebase corruption.
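To make the distinction concrete, here is a hypothetical Python illustration. The `@PURPOSE`/`@PRE` tag names appear above, but the real syntax is defined in `.ai/standards/semantics.md`, and the `format_score` helper is invented purely for this example:

```python
# BAD: mechanically injected placeholder that says nothing about the code.
# @PURPOSE: Semantic contract placeholder.
# @PRE: Inputs satisfy function contract.

# GOOD: a contract that states what this specific function guarantees.
# @PURPOSE: Convert a raw compliance score (0.0-1.0) into a percentage string.
# @PRE: 0.0 <= score <= 1.0
def format_score(score: float) -> str:
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be in [0.0, 1.0]")
    return f"{score * 100:.1f}%"
```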
## Execution Steps
### 1. Generate Semantic Map
Run the generator script from the repository root with the agent report option:
`python3 generate_semantic_map.py --agent-report`
### 2. Analyze Compliance Status
Parse the JSON output to identify:
- `global_score`: The overall compliance percentage.
- `critical_parsing_errors_count`: Number of Priority 1 blockers.
- `priority_2_tier1_critical_missing_mandatory_tags_files`: Number of CRITICAL files needing metadata.
- `targets`: Status of key architectural files.
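The parsing step can be sketched as follows. Only the field names come from the report fields above; the overall JSON shape and the sample values are assumptions:

```python
import json

def summarize_report(report_text: str) -> dict:
    """Extract the compliance fields named above from the agent report JSON."""
    report = json.loads(report_text)
    return {
        "global_score": report["global_score"],
        "critical_errors": report["critical_parsing_errors_count"],
        "critical_missing_tags": report[
            "priority_2_tier1_critical_missing_mandatory_tags_files"
        ],
        "targets": report.get("targets", {}),
        # Any critical parsing error blocks a PASS (see the Output section).
        "status": "FAIL" if report["critical_parsing_errors_count"] > 0 else "PASS",
    }

sample = (
    '{"global_score": 82.5, "critical_parsing_errors_count": 1, '
    '"priority_2_tier1_critical_missing_mandatory_tags_files": 3, '
    '"targets": {}}'
)
print(summarize_report(sample)["status"])  # prints FAIL: one blocker remains
```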
### 3. Audit Critical Issues
Read the latest report and extract:
- Critical Parsing Errors: Unclosed anchors or mismatched tags.
- Low-Score Files: Files with score < 0.7 or marked with 🔴.
- Missing Mandatory Tags: Specifically for CRITICAL tier modules.
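As an illustration of the "unclosed anchor" check, here is a sketch that assumes hypothetical `# @ANCHOR:` / `# @END:` comment markers; the actual anchor syntax is whatever `.ai/standards/semantics.md` defines:

```python
import re

def find_unclosed_anchors(source: str) -> list[str]:
    """Return anchor names that are opened but never properly closed.

    Assumes paired "# @ANCHOR: name" / "# @END: name" comment markers,
    invented here for illustration only.
    """
    open_anchors: list[str] = []
    for line in source.splitlines():
        if m := re.match(r"\s*#\s*@ANCHOR:\s*(\S+)", line):
            open_anchors.append(m.group(1))
        elif m := re.match(r"\s*#\s*@END:\s*(\S+)", line):
            if open_anchors and open_anchors[-1] == m.group(1):
                open_anchors.pop()
            else:
                # A stray or mismatched close is also a parsing error.
                open_anchors.append(m.group(1))
    return open_anchors
```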
### 4. Formulate Remediation Plan
Create a list of files requiring immediate attention:
- Priority 1: Fix all "Critical Parsing Errors" (unclosed anchors).
- Priority 2: Add missing mandatory tags for CRITICAL modules.
- Priority 3: Improve coverage for STANDARD modules.
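The three priorities above can be sketched as a simple grouping pass. The per-file record shape (`path`, `tier`, `score`, plus optional flags) is an assumption made for illustration, not the generator's real output format:

```python
def remediation_plan(files: list[dict]) -> dict[int, list[str]]:
    """Group audited files into the three priority buckets described above."""
    plan: dict[int, list[str]] = {1: [], 2: [], 3: []}
    for f in files:
        if f.get("parse_error"):
            plan[1].append(f["path"])  # Priority 1: critical parsing errors
        elif f["tier"] == "CRITICAL" and f.get("missing_mandatory_tags"):
            plan[2].append(f["path"])  # Priority 2: missing mandatory tags
        elif f["tier"] == "STANDARD" and f["score"] < 0.7:
            plan[3].append(f["path"])  # Priority 3: low STANDARD coverage
    return plan

example_files = [
    {"path": "a.py", "tier": "CRITICAL", "score": 0.9, "parse_error": True},
    {"path": "b.py", "tier": "CRITICAL", "score": 0.5, "missing_mandatory_tags": True},
    {"path": "c.py", "tier": "STANDARD", "score": 0.6},
]
```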
### 5. Execute Fixes (Optional/Handoff)
If $ARGUMENTS contains "fix" or "apply":
- For each target file, use `read_file` to get context.
- Apply semantic fixes using `apply_diff`, preserving all code logic.
- Re-run `python3 generate_semantic_map.py --agent-report` to verify the fix.
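The verification step might look like the following sketch. Here `run_report` stands in for re-running `python3 generate_semantic_map.py --agent-report` (e.g. via `subprocess`) and is injected as a callable so the logic can be shown without the real script:

```python
import json
from typing import Callable

def verify_fix(run_report: Callable[[], str], baseline_score: float) -> bool:
    """A fix passes if no critical blockers remain and the score did not drop."""
    report = json.loads(run_report())
    no_blockers = report["critical_parsing_errors_count"] == 0
    improved = report["global_score"] >= baseline_score
    return no_blockers and improved

# Stand-in for the report produced after a successful fix:
fake_report = lambda: '{"global_score": 91.0, "critical_parsing_errors_count": 0}'
```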
## Output
Provide a summary of the semantic state:
- Global Score: [X]%
- Status: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
- Top Issues: List top 3-5 files needing attention.
- Action Taken: Summary of maps generated or fixes applied.
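A minimal formatter for this summary, assuming the values were gathered in steps 2-3; the FAIL-on-any-critical-error rule comes from this document, while the function itself is an illustrative sketch:

```python
def format_summary(score: float, critical_errors: int,
                   top_issues: list[str], action: str) -> str:
    """Render the four summary lines described above."""
    status = "FAIL" if critical_errors > 0 else "PASS"
    return "\n".join([
        f"- Global Score: {score:.1f}%",
        f"- Status: {status}",
        "- Top Issues: " + ", ".join(top_issues[:5]),  # cap at top 5 files
        f"- Action Taken: {action}",
    ])
```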
## Context
$ARGUMENTS