---
description: Maintain semantic integrity by generating maps and auditing compliance reports.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.

## Operating Constraints

1. **ROLE: Orchestrator**: You are responsible for the high-level coordination of semantic maintenance.
2. **STRICT ADHERENCE**: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
3. **NON-DESTRUCTIVE**: Do not remove existing code logic; only add or update semantic annotations.
4. **TIER AWARENESS**: Prioritize CRITICAL and STANDARD modules for compliance fixes.

## Execution Steps

### 1. Generate Semantic Map

Run the generator script from the repository root:

```bash
python3 generate_semantic_map.py
```

### 2. Analyze Compliance Status

**Parse the output to identify**:

- Path to the latest report in `semantics/reports/semantic_report_*.md`.
- Global Compliance Score.
- Total count of Global Errors and Warnings.

### 3. Audit Critical Issues

Read the latest report and extract:

- **Critical Parsing Errors**: Unclosed anchors or mismatched tags.
- **Low-Score Files**: Files with a score below 0.7 or marked with 🔴.
- **Missing Mandatory Tags**: Specifically for CRITICAL-tier modules.

### 4. Formulate Remediation Plan

Create a list of files requiring immediate attention:

1. **Priority 1**: Fix all Critical Parsing Errors (unclosed anchors).
2. **Priority 2**: Add missing mandatory tags for CRITICAL modules.
3. **Priority 3**: Improve coverage for STANDARD modules.

### 5. Execute Fixes (Optional/Handoff)

If `$ARGUMENTS` contains "fix" or "apply":

- For each target file, use `read_file` to get context.
- Apply semantic fixes using `apply_diff`, preserving all code logic.
- Re-run `python3 generate_semantic_map.py` to verify the fix.

## Output

Provide a summary of the semantic state:

- **Global Score**: [X]%
- **Status**: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
- **Top Issues**: List the top 3-5 files needing attention.
- **Action Taken**: Summary of maps generated or fixes applied.

## Context

$ARGUMENTS
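
## Reference: Report Discovery Sketch

The report-analysis logic in Step 2 can be sketched as follows. This is a minimal, non-authoritative sketch: the `semantic_report_*.md` glob pattern comes from this document, but the exact line format of the score inside the report (assumed here to look like `Global Compliance Score: 87%`) is an assumption and should be checked against the generator's actual output.

```python
import glob
import os
import re

def find_latest_report(reports_dir="semantics/reports"):
    """Return the newest semantic_report_*.md in reports_dir, or None if none exist."""
    reports = glob.glob(os.path.join(reports_dir, "semantic_report_*.md"))
    # Newest by modification time; the timestamp in the filename is not parsed here.
    return max(reports, key=os.path.getmtime) if reports else None

def parse_global_score(report_text):
    """Extract the Global Compliance Score as a float percentage.

    The 'Global [Compliance] Score: N%' line format is an assumption,
    not a documented contract of the generator.
    """
    match = re.search(
        r"Global\s+(?:Compliance\s+)?Score[:\s]+(\d+(?:\.\d+)?)\s*%", report_text
    )
    return float(match.group(1)) if match else None
```

A missing reports directory or an unrecognized score line both surface as `None`, so the orchestrator can fall back to flagging the run as FAIL rather than crashing.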