Some cleaning in prompts

2026-03-22 13:00:06 +03:00
parent 2511cfb575
commit 42ae4d547d
46 changed files with 422 additions and 21755 deletions


@@ -1 +1 @@
{"mcpServers":{"axiom-core":{"command":"/home/busya/dev/ast-mcp-core-server/.venv/bin/python","args":["-c","from src.server import main; main()"],"env":{"PYTHONPATH":"/home/busya/dev/ast-mcp-core-server"},"alwaysAllow":["read_grace_outline_tool","ast_search_tool","get_semantic_context_tool","build_task_context_tool","audit_contracts_tool","diff_contract_semantics_tool","simulate_patch_tool","patch_contract_tool","rename_contract_id_tool","move_contract_tool","extract_contract_tool","infer_missing_relations_tool","map_runtime_trace_to_contracts_tool","scaffold_contract_tests_tool","search_contracts_tool","reindex_workspace_tool","prune_contract_metadata_tool","workspace_semantic_health_tool","trace_tests_for_contract_tool","guarded_patch_contract_tool","impact_analysis_tool","update_contract_metadata_tool","wrap_node_in_contract_tool","rename_semantic_tag_tool"]}}}
{"mcpServers":{"axiom-core":{"command":"/home/busya/dev/ast-mcp-core-server/.venv/bin/python","args":["-c","from src.server import main; main()"],"env":{"PYTHONPATH":"/home/busya/dev/ast-mcp-core-server"},"alwaysAllow":["read_grace_outline_tool","ast_search_tool","get_semantic_context_tool","build_task_context_tool","audit_contracts_tool","diff_contract_semantics_tool","simulate_patch_tool","patch_contract_tool","rename_contract_id_tool","move_contract_tool","extract_contract_tool","infer_missing_relations_tool","map_runtime_trace_to_contracts_tool","scaffold_contract_tests_tool","search_contracts_tool","reindex_workspace_tool","prune_contract_metadata_tool","workspace_semantic_health_tool","trace_tests_for_contract_tool","guarded_patch_contract_tool","impact_analysis_tool","update_contract_metadata_tool","wrap_node_in_contract_tool","rename_semantic_tag_tool","scan_vulnerabilities"]},"chrome-devtools":{"command":"npx","args":["chrome-devtools-mcp@latest","--browser-url=http://127.0.0.1:9222"],"disabled":false,"alwaysAllow":["take_snapshot"]}}}


@@ -1,103 +0,0 @@
---
description: Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.
---
**ROLE:** Elite Quality Assurance Architect and Red Teamer.
**OBJECTIVE:** Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.
**INPUT:**
1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`).
2. GENERATED TEST CODE.
### I. CRITICAL ANTI-PATTERNS (REJECT IMMEDIATELY IF FOUND):
1. **The Tautology (Self-Fulfilling Prophecy):**
- *Definition:* The test asserts hardcoded values against hardcoded values without executing the core business logic, or mocks the actual function being tested.
- *Example of Failure:* `assert 2 + 2 == 4`, or mocking the class under test so that it returns exactly what the test asserts (see the sketch after this list).
2. **The Logic Mirror (Echoing):**
- *Definition:* The test re-implements the exact same algorithmic logic found in the source code to calculate the `expected_result`. If the original logic is flawed, the test will falsely pass.
- *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source.
3. **The "Happy Path" Illusion:**
- *Definition:* The test suite only checks successful executions and never exercises violations of the `@PRE` conditions (no negative testing).
- *Rule:* Every `@PRE` tag in the source contract MUST have a corresponding test that deliberately violates it and asserts the correct Exception/Error state.
4. **Missing Post-Condition Verification:**
- *Definition:* The test calls the function but only checks the return value, ignoring `@SIDE_EFFECT` or `@POST` state changes (e.g., failing to verify that a DB call was made or a Store was updated).
5. **Missing Edge Case Coverage:**
- *Definition:* The test suite ignores `@TEST_EDGE` scenarios defined in the contract.
- *Rule:* Every `@TEST_EDGE` in the source contract MUST have a corresponding test case.
6. **Missing Invariant Verification:**
- *Definition:* The test suite does not verify `@TEST_INVARIANT` conditions.
- *Rule:* Every `@TEST_INVARIANT` MUST be verified by at least one test that attempts to break it.
7. **Missing UX State Testing (Svelte Components):**
- *Definition:* For Svelte components with `@UX_STATE`, the test suite does not verify state transitions.
- *Rule:* Every `@UX_STATE` transition MUST have a test verifying the visual/behavioral change.
- *Check:* `@UX_FEEDBACK` mechanisms (toast, shake, color) must be tested.
- *Check:* `@UX_RECOVERY` mechanisms (retry, clear input) must be tested.
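A minimal Python sketch of anti-patterns 1–3 beside tests that would pass this audit; `apply_discount`, its contract, and all values are hypothetical, invented here for illustration:

```python
import pytest

# Hypothetical target with an illustrative GRACE-Poly contract.
# [DEF:apply_discount:Function]
# @PRE: 0 <= rate <= 1
# @POST: result <= price
def apply_discount(price: float, rate: float) -> float:
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return price * (1 - rate)
# [/DEF:apply_discount]

# ANTI-PATTERN 1 (Tautology): never invokes apply_discount at all.
def test_tautology():
    assert 100 * 0.75 == 75.0

# ANTI-PATTERN 2 (Logic Mirror): recomputes the expected value with the
# same formula as the source, so a shared bug would pass silently.
def test_logic_mirror():
    assert apply_discount(100, 0.25) == 100 * (1 - 0.25)

# WOULD PASS AUDIT: static expected value (as from a @TEST_FIXTURE) plus
# a negative test that violates @PRE and asserts the error (pattern 3).
def test_fixture_value():
    assert apply_discount(100, 0.25) == 75.0

def test_rejects_out_of_range_rate():
    with pytest.raises(ValueError):
        apply_discount(100, 1.5)
```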
### II. SEMANTIC PROTOCOL COMPLIANCE
Verify the test file follows GRACE-Poly semantics (a sketch of a compliant header follows this list):
1. **Anchor Integrity:**
- Test file MUST start with a short semantic ID (e.g., `[DEF:AuthTests:Module]`), NOT a file path.
- Test file MUST end with a matching `[/DEF]` anchor.
2. **Required Tags:**
- `@RELATION: VERIFIES -> <path_to_source>` must be present
- `@PURPOSE:` must describe what is being tested
3. **TIER Alignment:**
- If source is `@TIER: CRITICAL`, test MUST cover all `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`
- If source is `@TIER: STANDARD`, test MUST cover `@PRE` and `@POST`
- If source is `@TIER: TRIVIAL`, a basic smoke test is acceptable
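A sketch of a test-file header that satisfies these checks; the module name, source path, and test body are illustrative assumptions:

```python
# [DEF:AuthTests:Module]
# @PURPOSE: Verify login validation in the auth service.
# @RELATION: VERIFIES -> src/services/auth.py
# @TIER: STANDARD
import pytest

from src.services.auth import validate_login  # hypothetical target

def test_rejects_empty_username():
    # Negative test for an assumed @PRE: username must be non-empty.
    with pytest.raises(ValueError):
        validate_login("", "secret")

# [/DEF:AuthTests]
```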
### III. AUDIT CHECKLIST
Evaluate the test code against these criteria:
1. **Target Invocation:** Does the test actually import and call the function/component declared in the `@RELATION: VERIFIES` tag?
2. **Contract Alignment:** Does the test suite cover 100% of the `@PRE` (negative tests) and `@POST` (assertions) conditions from the source contract?
3. **Test Contract Compliance:** Does the test follow the interface defined in `@TEST_CONTRACT`?
4. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_FIXTURE`?
5. **Edge Coverage:** Are all `@TEST_EDGE` scenarios tested?
6. **Invariant Coverage:** Are all `@TEST_INVARIANT` conditions verified?
7. **UX Coverage (if applicable):** Are all `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tested?
8. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself? (A sketch follows this list.)
9. **Semantic Anchor:** Does the test file have proper `[DEF]` and `[/DEF]` anchors?
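To illustrate point 8, a self-contained sketch (all names and values hypothetical): mock the collaborator, never the target.

```python
from unittest.mock import MagicMock

# Hypothetical system under test: computes a total and persists it.
def save_invoice(total):
    raise RuntimeError("no real DB in tests")  # stand-in for a DB call

def calc_total(items, tax_rate, save=save_invoice):
    total = sum(items) * (1 + tax_rate)
    save(total)  # @SIDE_EFFECT: persists the invoice
    return total

# OVER-MOCKED (reject): the system under test itself is replaced, so the
# assertion only checks the mock's canned return value.
def test_over_mocked():
    fake_calc = MagicMock(return_value=42)
    assert fake_calc([10, 20], 0.5) == 42  # proves nothing about calc_total

# SANE: only the external dependency is mocked; the real logic runs and
# the @SIDE_EFFECT is verified (anti-pattern 4 above).
def test_mocking_sane():
    fake_save = MagicMock()
    assert calc_total([10, 20], tax_rate=0.5, save=fake_save) == 45.0
    fake_save.assert_called_once_with(45.0)
```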
### IV. OUTPUT FORMAT
You MUST respond strictly in the following JSON format. Do not add markdown blocks outside the JSON.
```json
{
  "verdict": "APPROVED" | "REJECTED",
  "rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "MISSING_EDGES" | "MISSING_INVARIANTS" | "MISSING_UX_TESTS" | "SEMANTIC_VIOLATION" | "NONE",
  "audit_details": {
    "target_invoked": true | false,
    "pre_conditions_tested": true | false,
    "post_conditions_tested": true | false,
    "test_fixture_used": true | false,
    "edges_covered": true | false,
    "invariants_verified": true | false,
    "ux_states_tested": true | false,
    "semantic_anchors_present": true | false
  },
  "coverage_summary": {
    "total_edges": number,
    "edges_tested": number,
    "total_invariants": number,
    "invariants_tested": number,
    "total_ux_states": number,
    "ux_states_tested": number
  },
  "tier_compliance": {
    "source_tier": "CRITICAL" | "STANDARD" | "TRIVIAL",
    "meets_tier_requirements": true | false
  },
  "feedback": "Strict, actionable feedback for the test generator agent. Explain exactly which anti-pattern was detected and how to fix it."
}
```
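For instance, a tautology rejection might look like this (all values hypothetical):

```json
{
  "verdict": "REJECTED",
  "rejection_reason": "TAUTOLOGY",
  "audit_details": {
    "target_invoked": false,
    "pre_conditions_tested": false,
    "post_conditions_tested": false,
    "test_fixture_used": false,
    "edges_covered": false,
    "invariants_verified": false,
    "ux_states_tested": false,
    "semantic_anchors_present": true
  },
  "coverage_summary": {
    "total_edges": 3,
    "edges_tested": 0,
    "total_invariants": 1,
    "invariants_tested": 0,
    "total_ux_states": 0,
    "ux_states_tested": 0
  },
  "tier_compliance": {
    "source_tier": "CRITICAL",
    "meets_tier_requirements": false
  },
  "feedback": "The suite never imports or calls the target declared in @RELATION: VERIFIES. Replace hardcoded self-asserting values with @TEST_FIXTURE scenarios and add a negative test for every @PRE."
}
```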


@@ -1,199 +0,0 @@
---
description: Fix failing tests and implementation issues based on test reports
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Analyze test failure reports, identify root causes, and fix implementation issues while preserving semantic protocol compliance.
## Operating Constraints
1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
3. **TEST DATA**: If tests use @TEST_ fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes
## Execution Steps
### 1. Load Test Report
**Required**: Test report file path (e.g., `specs/<feature>/tests/reports/2026-02-19-report.md`)
**Parse the report for**:
- Failed test cases
- Error messages
- Stack traces
- Expected vs actual behavior
- Affected modules/files
### 2. Analyze Root Causes
For each failed test:
1. **Read the test file** to understand what it's testing
2. **Read the implementation file** to find the bug
3. **Check semantic protocol compliance**:
- Does the implementation have correct [DEF] anchors?
- Are @TAGS (@PRE, @POST, @UX_STATE, etc.) present?
- Does the code match the TIER requirements?
4. **Identify the fix**:
- Logic error in implementation
- Missing error handling
- Incorrect API usage
- State management issue
### 3. Write Fix Report
Create a structured fix report:
````markdown
# Fix Report: [FEATURE]
**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent
## Summary
- Total Failed Tests: [X]
- Total Fixed: [X]
- Total Skipped: [X]
## Failed Tests Analysis
### Test: [Test Name]
**File**: `path/to/test.py`
**Error**: [Error message]
**Root Cause**: [Explanation of why test failed]
**Fix Required**: [Description of fix]
**Status**: [Pending/In Progress/Completed]
## Fixes Applied
### Fix 1: [Description]
**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**:
```diff
<<<<<<< SEARCH
[Original Code]
=======
[Fixed Code]
>>>>>>> REPLACE
```
**Verification**: [How to verify fix works]
**Semantic Integrity**: [Confirmed annotations preserved]
## Next Steps
- [ ] Run tests to verify fix: `cd backend && .venv/bin/python3 -m pytest`
- [ ] Check for related failing tests
- [ ] Update test documentation if needed
````
### 4. Apply Fixes (in Coder Mode)
Switch to `coder` mode and apply fixes:
1. **Read the implementation file** to get exact content
2. **Apply the fix** using `apply_diff` (see the sketch after this list)
3. **Preserve all semantic annotations**:
- Keep [DEF:...] and [/DEF:...] anchors
- Keep all @TAGS (@PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
4. **Only update code logic** to fix the bug
5. **Run tests** to verify the fix
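As a sketch of the shape such a fix should take (the function and bug are hypothetical): the anchor and @TAGS survive unchanged, only the logic moves.

```diff
<<<<<<< SEARCH
# [DEF:calc_subtotal:Function]
# @PURPOSE: Sum line items with tax applied.
# @PRE: items is non-empty
# @POST: result >= 0
def calc_subtotal(items, tax_rate):
    return sum(items) + tax_rate  # bug: adds the rate instead of applying it
# [/DEF:calc_subtotal]
=======
# [DEF:calc_subtotal:Function]
# @PURPOSE: Sum line items with tax applied.
# @PRE: items is non-empty
# @POST: result >= 0
def calc_subtotal(items, tax_rate):
    return sum(items) * (1 + tax_rate)  # fixed: apply tax multiplicatively
# [/DEF:calc_subtotal]
>>>>>>> REPLACE
```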
### 5. Verification
After applying fixes:
1. **Run tests**:
```bash
cd backend && .venv/bin/python3 -m pytest -v
```
or, for the frontend:
```bash
cd frontend && npm run test
```
2. **Check test results**:
- Failed tests should now pass
- No new tests should fail
- Coverage should not decrease
3. **Update fix report** with results:
- Mark fixes as completed
- Add verification steps
- Note any remaining issues
## Output
Generate final fix report:
````markdown
# Fix Report: [FEATURE] - COMPLETED
**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent
## Summary
- Total Failed Tests: [X]
- Total Fixed: [X] ✅
- Total Skipped: [X]
## Fixes Applied
### Fix 1: [Description] ✅
**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**: [Summary of changes]
**Verification**: All tests pass ✅
**Semantic Integrity**: Preserved ✅
## Test Results
```
[Full test output showing all passing tests]
```
## Recommendations
- [ ] Monitor for similar issues
- [ ] Update documentation if needed
- [ ] Consider adding more tests for edge cases
## Related Files
- Test Report: [path]
- Implementation: [path]
- Test File: [path]
````
## Context for Fixing
$ARGUMENTS