---
description: Maintain semantic integrity by generating maps and auditing compliance reports.
---
## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).
## Goal

Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.
## Operating Constraints

1. **ROLE: Orchestrator**: You are responsible for the high-level coordination of semantic maintenance.
2. **STRICT ADHERENCE**: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
3. **NON-DESTRUCTIVE**: Do not remove existing code logic; only add or update semantic annotations.
4. **TIER AWARENESS**: Prioritize CRITICAL and STANDARD modules for compliance fixes.
5. **NO PSEUDO-CONTRACTS (CRITICAL)**: You are STRICTLY FORBIDDEN from using automated scripts (e.g., Python/Bash/sed) to mechanically inject boilerplate, placeholders, or "pseudo-contracts" (such as `# @PURPOSE: Semantic contract placeholder.` or `# @PRE: Inputs satisfy function contract.`) merely to artificially inflate the compliance score. Every semantic tag, anchor, and contract you add MUST reflect a genuine, deep understanding of the specific code's actual logic and business requirements. Automated "stubbing" of semantics is classified as codebase corruption.
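To make the distinction in constraint 5 concrete, here is a hedged illustration. The tag syntax and the example function are hypothetical; the authoritative syntax lives in `.ai/standards/semantics.md`:

```python
# FORBIDDEN: a mechanically injected pseudo-contract that adds no information.
#
#   # @PURPOSE: Semantic contract placeholder.
#   # @PRE: Inputs satisfy function contract.

# ACCEPTABLE: a contract that states what this specific code actually does.
# (Tag names here are illustrative; follow .ai/standards/semantics.md.)
# @PURPOSE: Return invoices not yet posted to the ledger, for reconciliation.
# @PRE: Every entry in `invoices` carries a unique "id" key.
def unposted_invoices(invoices, ledger_ids):
    """Filter out invoices whose id already appears in the ledger."""
    return [inv for inv in invoices if inv["id"] not in ledger_ids]
```

The difference is that the acceptable contract could only have been written by reading the function: it names the business purpose and a precondition the logic genuinely relies on.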
## Execution Steps

### 1. Generate Semantic Map

Run the generator script from the repository root with the agent-report option:

```bash
python3 generate_semantic_map.py --agent-report
```
### 2. Analyze Compliance Status

**Parse the JSON output to identify**:

- `global_score`: The overall compliance percentage.
- `critical_parsing_errors_count`: Number of Priority 1 blockers.
- `priority_2_tier1_critical_missing_mandatory_tags_files`: Number of CRITICAL files needing metadata.
- `targets`: Status of key architectural files.
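As a sketch, the four fields above can be extracted with a few lines of Python. Only the field names come from this document; any structure beyond them is an assumption about the real report schema:

```python
import json

def summarize_report(raw_json):
    """Extract the headline metrics listed above from the agent report.

    Field names follow this document; the surrounding schema is assumed flat.
    """
    report = json.loads(raw_json)
    errors = report["critical_parsing_errors_count"]
    return {
        "global_score": report["global_score"],
        "parsing_errors": errors,
        "critical_missing_tags": report[
            "priority_2_tier1_critical_missing_mandatory_tags_files"
        ],
        "targets": report["targets"],
        # Any critical parsing error is a Priority 1 blocker, which the
        # Output section reports as FAIL.
        "status": "FAIL" if errors > 0 else "PASS",
    }
```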
### 3. Audit Critical Issues

Read the latest report and extract:

- **Critical Parsing Errors**: Unclosed anchors or mismatched tags.
- **Low-Score Files**: Files with a score < 0.7 or marked with 🔴.
- **Missing Mandatory Tags**: Specifically for CRITICAL-tier modules.
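A minimal sketch of this triage, assuming the report exposes a per-file list with `score`, `tier`, `parsing_errors`, and `missing_mandatory_tags` fields (this per-file schema is an assumption, not something the report is guaranteed to provide):

```python
def audit_critical_issues(files):
    """Partition per-file report entries by the three criteria above."""
    parsing_errors = [f for f in files if f.get("parsing_errors")]
    low_score = [f for f in files if f.get("score", 1.0) < 0.7]
    missing_tags = [
        f for f in files
        if f.get("tier") == "CRITICAL" and f.get("missing_mandatory_tags")
    ]
    return parsing_errors, low_score, missing_tags
```

Note that one file can land in several buckets; the remediation plan in the next step decides which of its issues is addressed first.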
### 4. Formulate Remediation Plan

Create a list of files requiring immediate attention:

1. **Priority 1**: Fix all "Critical Parsing Errors" (unclosed anchors).
2. **Priority 2**: Add missing mandatory tags for CRITICAL modules.
3. **Priority 3**: Improve coverage for STANDARD modules.
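The three tiers above reduce to a simple sort key. A sketch, assuming each issue has been flattened into a hypothetical `(path, kind)` pair:

```python
# Lower number = fixed first, mirroring the priority list above.
PRIORITY = {
    "parsing_error": 1,          # Priority 1: unclosed anchors
    "critical_missing_tags": 2,  # Priority 2: CRITICAL modules
    "standard_coverage": 3,      # Priority 3: STANDARD modules
}

def remediation_order(issues):
    """Return (path, kind) pairs sorted into remediation order."""
    return sorted(issues, key=lambda issue: PRIORITY[issue[1]])
```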
### 5. Execute Fixes (Optional/Handoff)

If $ARGUMENTS contains "fix" or "apply":

- For each target file, use `read_file` to get context.
- Apply semantic fixes using `apply_diff`, preserving all code logic.
- Re-run `python3 generate_semantic_map.py --agent-report` to verify the fixes.
## Output

Provide a summary of the semantic state:

- **Global Score**: [X]%
- **Status**: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
- **Top Issues**: List the top 3-5 files needing attention.
- **Action Taken**: Summary of maps generated or fixes applied.
## Context

$ARGUMENTS