Compare commits
132 Commits
022-sync-i
...
484019e750
@@ -6,7 +6,7 @@ description: Audit AI-generated unit tests. Your goal is to aggressively search

**OBJECTIVE:** Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.

**INPUT:**

- 1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`).
+ 1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_DATA`).
2. GENERATED TEST CODE.

### I. CRITICAL ANTI-PATTERNS (REJECT IMMEDIATELY IF FOUND):
@@ -17,7 +17,7 @@ description: Audit AI-generated unit tests. Your goal is to aggressively search

2. **The Logic Mirror (Echoing):**
- *Definition:* The test re-implements the exact same algorithmic logic found in the source code to calculate the `expected_result`. If the original logic is flawed, the test will falsely pass.
- - *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source.
+ - *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_DATA` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source.
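To make the anti-pattern concrete, here is a minimal hypothetical sketch (`apply_discount` and all values are invented for illustration) contrasting a mirrored assertion with a static, contract-derived one:

```python
# Hypothetical illustration of the "Logic Mirror" anti-pattern.
# apply_discount stands in for the system under test.

def apply_discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

# REJECTED: the expected value is computed with the same formula as the
# source, so a bug in the formula would pass the test unnoticed.
def test_discount_mirrored():
    assert apply_discount(200.0, 0.1) == round(200.0 * (1 - 0.1), 2)

# APPROVED: the expected value is a static constant taken from a fixture
# or written by hand from the contract.
def test_discount_static():
    assert apply_discount(200.0, 0.1) == 180.0

test_discount_mirrored()
test_discount_static()
```

Both tests pass here, but only the second one would catch a regression in the discount formula.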

3. **The "Happy Path" Illusion:**
- *Definition:* The test suite only checks successful executions but ignores the `@PRE` conditions (Negative Testing).
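A hedged sketch of the required negative test, with `transfer` and its `@PRE` condition invented for illustration:

```python
# Hypothetical sketch: a @PRE violation must be exercised by a negative test.

def transfer(amount: float) -> str:
    # @PRE: amount > 0
    if amount <= 0:
        raise ValueError("Transfer amount must be positive.")
    return "COMPLETED"

def test_happy_path():
    assert transfer(100.0) == "COMPLETED"

def test_pre_violation_rejected():
    # Negative test: violating the precondition must raise, not succeed.
    try:
        transfer(-5.0)
    except ValueError as e:
        assert "positive" in str(e)
    else:
        raise AssertionError("@PRE violation was silently accepted")

test_happy_path()
test_pre_violation_rejected()
```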
@@ -26,78 +26,26 @@ description: Audit AI-generated unit tests. Your goal is to aggressively search

4. **Missing Post-Condition Verification:**
- *Definition:* The test calls the function but only checks the return value, ignoring `@SIDE_EFFECT` or `@POST` state changes (e.g., failing to verify that a DB call was made or a Store was updated).
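One way to verify a `@SIDE_EFFECT` is to assert on the dependency itself; a minimal sketch with `unittest.mock` (the `audit_log` dependency and `record_payment` are invented names):

```python
# Hypothetical sketch of @POST/@SIDE_EFFECT verification via a mock dependency.
from unittest.mock import Mock

def record_payment(amount, audit_log):
    # @SIDE_EFFECT: writes exactly one entry to the audit log
    audit_log.write({"amount": amount})
    return True

def test_post_condition_checked():
    audit_log = Mock()
    assert record_payment(42, audit_log) is True             # return value
    audit_log.write.assert_called_once_with({"amount": 42})  # side effect verified

test_post_condition_checked()
```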

5. **Missing Edge Case Coverage:**
- *Definition:* The test suite ignores `@TEST_EDGE` scenarios defined in the contract.
- *Rule:* Every `@TEST_EDGE` in the source contract MUST have a corresponding test case.
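A sketch of what one-test-per-`@TEST_EDGE` can look like; `parse_amount` and the edge list are invented (with pytest this would typically be `@pytest.mark.parametrize`):

```python
# Hypothetical sketch: one check per @TEST_EDGE entry from the contract.

def parse_amount(raw):
    if raw is None or raw == "":
        raise ValueError("empty_input")
    return float(raw)

EDGE_CASES = {            # mirrors the @TEST_EDGE entries of the contract
    "missing_field": None,
    "empty_response": "",
}

def test_all_edges_covered():
    for name, raw in EDGE_CASES.items():
        try:
            parse_amount(raw)
        except ValueError:
            continue
        raise AssertionError(f"@TEST_EDGE {name} not rejected")

test_all_edges_covered()
```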

6. **Missing Invariant Verification:**
- *Definition:* The test suite does not verify `@TEST_INVARIANT` conditions.
- *Rule:* Every `@TEST_INVARIANT` MUST be verified by at least one test that attempts to break it.
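A minimal sketch of a test that actively tries to break an invariant ("total balance remains constant"); the `Ledger` toy class is invented for illustration:

```python
# Hypothetical sketch: attempt to break a @TEST_INVARIANT, then re-check it.

class Ledger:
    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, src, dst, amt):
        if self.balances[src] < amt:
            raise ValueError("INSUFFICIENT_FUNDS")
        self.balances[src] -= amt
        self.balances[dst] += amt

def test_invariant_total_balance_constant():
    ledger = Ledger({"A": 100, "B": 50})
    total_before = sum(ledger.balances.values())
    ledger.transfer("A", "B", 30)
    try:
        ledger.transfer("A", "B", 10_000)   # deliberately try to break it
    except ValueError:
        pass
    assert sum(ledger.balances.values()) == total_before

test_invariant_total_balance_constant()
```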

7. **Missing UX State Testing (Svelte Components):**
- *Definition:* For Svelte components with `@UX_STATE`, the test suite does not verify state transitions.
- *Rule:* Every `@UX_STATE` transition MUST have a test verifying the visual/behavioral change.
- *Check:* `@UX_FEEDBACK` mechanisms (toast, shake, color) must be tested.
- *Check:* `@UX_RECOVERY` mechanisms (retry, clear input) must be tested.

### II. SEMANTIC PROTOCOL COMPLIANCE

Verify the test file follows GRACE-Poly semantics:

1. **Anchor Integrity:**
- Test file MUST start with `[DEF:__tests__/test_name:Module]`
- Test file MUST end with `[/DEF:__tests__/test_name:Module]`

2. **Required Tags:**
- `@RELATION: VERIFIES -> <path_to_source>` must be present
- `@PURPOSE:` must describe what is being tested

3. **TIER Alignment:**
- If source is `@TIER: CRITICAL`, test MUST cover all `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`
- If source is `@TIER: STANDARD`, test MUST cover `@PRE` and `@POST`
- If source is `@TIER: TRIVIAL`, basic smoke test is acceptable
- ### III. AUDIT CHECKLIST
+ ### II. AUDIT CHECKLIST

Evaluate the test code against these criteria:
1. **Target Invocation:** Does the test actually import and call the function/component declared in the `@RELATION: VERIFIES` tag?
2. **Contract Alignment:** Does the test suite cover 100% of the `@PRE` (negative tests) and `@POST` (assertions) conditions from the source contract?
3. **Test Contract Compliance:** Does the test follow the interface defined in `@TEST_CONTRACT`?
- 4. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_FIXTURE`?
5. **Edge Coverage:** Are all `@TEST_EDGE` scenarios tested?
6. **Invariant Coverage:** Are all `@TEST_INVARIANT` conditions verified?
7. **UX Coverage (if applicable):** Are all `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tested?
- 8. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself?
9. **Semantic Anchor:** Does the test file have proper `[DEF]` and `[/DEF]` anchors?
+ 3. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_DATA`?
+ 4. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself?
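The "Mocking Sanity" criterion can be sketched as follows; `enrich_user` and the `api` dependency are invented names, the point being that only the dependency is faked:

```python
# Hypothetical sketch: mock the dependency, never the system under test.
from unittest.mock import Mock

def enrich_user(user_id, api):              # system under test: real code runs
    data = api.get(f"/users/{user_id}")     # external dependency: faked below
    return {"id": user_id, "name": data["name"].title()}

def test_mocks_dependency_not_sut():
    api = Mock()
    api.get.return_value = {"name": "ada lovelace"}
    # enrich_user itself is NOT mocked, so its logic is actually exercised.
    assert enrich_user(7, api) == {"id": 7, "name": "Ada Lovelace"}
    api.get.assert_called_once_with("/users/7")

test_mocks_dependency_not_sut()
```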

- ### IV. OUTPUT FORMAT
+ ### III. OUTPUT FORMAT

You MUST respond strictly in the following JSON format. Do not add markdown blocks outside the JSON.
{
  "verdict": "APPROVED" | "REJECTED",
-  "rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "MISSING_EDGES" | "MISSING_INVARIANTS" | "MISSING_UX_TESTS" | "SEMANTIC_VIOLATION" | "NONE",
+  "rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "NONE",
  "audit_details": {
    "target_invoked": true/false,
    "pre_conditions_tested": true/false,
    "post_conditions_tested": true/false,
    "test_fixture_used": true/false,
    "edges_covered": true/false,
    "invariants_verified": true/false,
    "ux_states_tested": true/false,
    "semantic_anchors_present": true/false
  },
  "coverage_summary": {
    "total_edges": number,
    "edges_tested": number,
    "total_invariants": number,
    "invariants_tested": number,
    "total_ux_states": number,
    "ux_states_tested": number
  },
  "tier_compliance": {
    "source_tier": "CRITICAL" | "STANDARD" | "TRIVIAL",
-    "meets_tier_requirements": true/false
+    "test_data_used": true/false
  },
  "feedback": "Strict, actionable feedback for the test generator agent. Explain exactly which anti-pattern was detected and how to fix it."
}
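For illustration, a hypothetical filled-in response for a suite that mirrors the source logic might look like this (all field values are invented):

```json
{
  "verdict": "REJECTED",
  "rejection_reason": "LOGIC_MIRROR",
  "audit_details": {
    "target_invoked": true,
    "pre_conditions_tested": false,
    "post_conditions_tested": true,
    "test_data_used": false
  },
  "feedback": "test_transfer recomputes the expected balance with the same formula as the source. Assert against the static @TEST_DATA value instead, and add a negative test for the @PRE condition."
}
```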
@@ -1,4 +1,4 @@

---
description: USE SEMANTIC
---
- Read .ai/standards/semantics.md. You MUST use it during development.
+ Read .specify/memory/semantics.md (or .ai/standards/semantics.md if it is not found). You MUST use it during development.
@@ -63,7 +63,6 @@ Load only the minimal necessary context from each artifact:

**From constitution:**

- Load `.ai/standards/constitution.md` for principle validation
- Load `.ai/standards/semantics.md` for technical standard validation

### 3. Build Semantic Models
@@ -20,7 +20,7 @@ Analyze test failure reports, identify root causes, and fix implementation issue

1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
- 3. **TEST DATA**: If tests use @TEST_ fixtures, preserve them when fixing
+ 3. **TEST DATA**: If tests use @TEST_DATA fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes
@@ -53,15 +53,6 @@ You **MUST** consider the user input before proceeding (if not empty).

- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios

3. Load and analyze the implementation context:
- **REQUIRED**: Read `.ai/standards/semantics.md` for strict coding standards and contract requirements
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- **IF EXISTS**: Read data-model.md for entities and relationships
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios

4. **Project Setup Verification**:
- **REQUIRED**: Create/verify ignore files based on actual project setup:
@@ -120,13 +111,7 @@ You **MUST** consider the user input before proceeding (if not empty).

- **Validation checkpoints**: Verify each phase completion before proceeding

7. Implementation execution rules:
- **Strict Adherence**: Apply `.ai/standards/semantics.md` rules:
- Every file MUST start with a `[DEF:id:Type]` header and end with a closing `[/DEF:id:Type]` anchor.
- Include `@TIER` and define contracts (`@PRE`, `@POST`).
- For Svelte components, use `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY`, and explicitly declare reactivity with `@UX_REATIVITY: State: $state, Derived: $derived`.
- **Molecular Topology Logging**: Use prefixes `[EXPLORE]`, `[REASON]`, `[REFLECT]` in logs to trace logic.
- **CRITICAL Contracts**: If a task description contains a contract summary (e.g., `CRITICAL: PRE: ..., POST: ...`), these constraints are **MANDATORY** and must be strictly implemented in the code using guards/assertions (if applicable per protocol).
- **Setup first**: Initialize project structure, dependencies, configuration
- **Tests before code**: If you need to write tests for contracts, entities, and integration scenarios
- **Core development**: Implement models, services, CLI commands, endpoints
- **Integration work**: Database connections, middleware, logging, external services
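The header/anchor and contract rules above can be sketched as a minimal module; the module id, function, and values are invented for this example:

```python
# [DEF:src/discount:Module]
# @TIER: STANDARD
# @PURPOSE: Illustrate the mandatory [DEF] header/anchor and contract shape.
# @PRE: price >= 0 and 0 <= rate <= 1
# @POST: returns a non-negative price

def discounted(price: float, rate: float) -> float:
    assert price >= 0 and 0 <= rate <= 1, "@PRE violated"
    print("[src/discount][REASON] applying rate")  # Molecular Topology log
    result = price * (1 - rate)
    assert result >= 0, "@POST violated"
    return result

# [/DEF:src/discount:Module]
```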
@@ -22,7 +22,7 @@ You **MUST** consider the user input before proceeding (if not empty).

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

- 2. **Load context**: Read `.ai/ROOT.md` and `.ai/PROJECT_MAP.md` to understand the project structure and navigation. Then read required standards: `.ai/standards/constitution.md` and `.ai/standards/semantics.md`. Load IMPL_PLAN template.
+ 2. **Load context**: Read FEATURE_SPEC and `.ai/standards/constitution.md`. Load IMPL_PLAN template (already copied).

3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
@@ -64,30 +64,16 @@ You **MUST** consider the user input before proceeding (if not empty).

**Prerequisites:** `research.md` complete

0. **Validate Design against UX Reference**:
- Check if the proposed architecture supports the latency, interactivity, and flow defined in `ux_reference.md`.
- **Linkage**: Ensure key UI states from `ux_reference.md` map to Component Contracts (`@UX_STATE`).
- **CRITICAL**: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you **MUST STOP** and warn the user.

1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships, validation rules.
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable

2. **Design & Verify Contracts (Semantic Protocol)**:
- **Drafting**: Define `[DEF:id:Type]` Headers, Contracts, and closing `[/DEF:id:Type]` for all new modules based on `.ai/standards/semantics.md`.
- **TIER Classification**: Explicitly assign `@TIER: [CRITICAL|STANDARD|TRIVIAL]` to each module.
- **CRITICAL Requirements**: For all CRITICAL modules, define full `@PRE`, `@POST`, and (if UI) `@UX_STATE` contracts. **MUST** also define testing contracts: `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, and `@TEST_INVARIANT`.
- **Self-Review**:
  - *Completeness*: Do `@PRE`/`@POST` cover edge cases identified in Research? Are test contracts present for CRITICAL?
  - *Connectivity*: Do `@RELATION` tags form a coherent graph?
  - *Compliance*: Does syntax match `[DEF:id:Type]` exactly and is it closed with `[/DEF:id:Type]`?
- **Output**: Write verified contracts to `contracts/modules.md`.

3. **Simulate Contract Usage**:
- Trace one key user scenario through the defined contracts to ensure data flow continuity.
- If a contract interface mismatch is found, fix it immediately.

4. **Generate API contracts**:
- Output OpenAPI/GraphQL schema to `/contracts/` for backend-frontend sync.
2. **Define interface contracts** (if project has external interfaces) → `/contracts/`:
- Identify what interfaces the project exposes to users or other systems
- Document the contract format appropriate for the project type
- Examples: public APIs for libraries, command schemas for CLI tools, endpoints for web services, grammars for parsers, UI contracts for applications
- Skip if project is purely internal (build scripts, one-off tools, etc.)

3. **Agent context update**:
- Run `.specify/scripts/bash/update-agent-context.sh agy`
@@ -24,7 +24,7 @@ You **MUST** consider the user input before proceeding (if not empty).

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load design documents**: Read from FEATURE_DIR:
- - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities), ux_reference.md (experience source of truth)
+ - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
- **Optional**: data-model.md (entities), contracts/ (interface contracts), research.md (decisions), quickstart.md (test scenarios)
- Note: Not all projects have all documents. Generate tasks based on what's available.
@@ -70,12 +70,6 @@ The tasks.md should be immediately executable - each task must be specific enoug

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.

### UX Preservation (CRITICAL)

- **Source of Truth**: `ux_reference.md` is the absolute standard for the "feel" of the feature.
- **Violation Warning**: If any task would inherently violate the UX (e.g. "Remove progress bar to simplify code"), you **MUST** flag this to the user immediately.
- **Verification Task**: You **MUST** add a specific task at the end of each User Story phase: `- [ ] Txxx [USx] Verify implementation matches ux_reference.md (Happy Path & Errors)`

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:
@@ -119,12 +113,9 @@ Every task MUST strictly follow this format:

- If tests requested: Tests specific to that story
- Mark story dependencies (most stories should be independent)

2. **From Contracts (CRITICAL TIER)**:
- Identify components marked as `@TIER: CRITICAL` in `contracts/modules.md`.
- For these components, **MUST** append the summary of `@PRE`, `@POST`, `@UX_STATE`, and test contracts (`@TEST_FIXTURE`, `@TEST_EDGE`) directly to the task description.
- Example: `- [ ] T005 [P] [US1] Implement Auth (CRITICAL: PRE: token exists, POST: returns User, TESTS: 2 edges) in src/auth.py`
- Map each contract/endpoint → to the user story it serves
- If tests requested: Each contract → contract test task [P] before implementation in that story's phase
2. **From Contracts**:
- Map each interface contract → to the user story it serves
- If tests requested: Each interface contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
- Map each entity to the user story(ies) that need it
@@ -249,7 +249,6 @@ component/__tests__/Component.test.js

# [DEF:__tests__/test_module:Module]
# @RELATION: VERIFIES -> ../module.py
# @PURPOSE: Contract testing for module
# [/DEF:__tests__/test_module:Module]
```

---
.ai/MODULE_MAP.md (1654 lines changed): file diff suppressed because it is too large.
@@ -26,7 +26,7 @@

4. **TESTING AND QUALITY:**
- I despise "Test Tautologies" (tests written for coverage that merely mirror the logic).
- Tests must be Contract-Driven. If there is a `@PRE`, I expect a test for its violation.
- - Tests must use the `@TEST_` tags from the contracts.
+ - Tests must use `@TEST_DATA` from the contracts.

5. **GLOBAL NAVIGATION (GraphRAG):**
- Understand that we are working in a Sparse Attention environment.
.ai/PROJECT_MAP.md (4846 lines changed): file diff suppressed because it is too large.
@@ -3,29 +3,14 @@

# @SEMANTICS: Finance, ACID, Transfer, Ledger
# @PURPOSE: Core banking transaction processor with ACID guarantees.
# @LAYER: Domain (Core)
# @RELATION: DEPENDS_ON ->[DEF:Infra:PostgresDB]
#
# @RELATION: DEPENDS_ON -> [DEF:Infra:PostgresDB]
# @RELATION: DEPENDS_ON -> [DEF:Infra:AuditLog]
# @INVARIANT: Total system balance must remain constant (Double-Entry Bookkeeping).
# @INVARIANT: Negative transfers are strictly forbidden.

# --- Test Specifications (The "What" and "Why", not the "Data") ---
# @TEST_CONTRACT: Input -> TransferInputDTO, Output -> TransferResultDTO

# Happy Path
# @TEST_SCENARIO: sufficient_funds -> Returns COMPLETED, balances updated.
# @TEST_FIXTURE: sufficient_funds -> file:./__tests__/fixtures/transfers.json#happy_path

# Edge Cases (CRITICAL)
# @TEST_SCENARIO: insufficient_funds -> Throws BusinessRuleViolation("INSUFFICIENT_FUNDS").
# @TEST_SCENARIO: negative_amount -> Throws BusinessRuleViolation("Transfer amount must be positive.").
# @TEST_SCENARIO: self_transfer -> Throws BusinessRuleViolation("Cannot transfer to self.").
# @TEST_SCENARIO: audit_failure -> Throws RuntimeError("TRANSACTION_ABORTED").
# @TEST_SCENARIO: concurrency_conflict -> Throws DBTransactionError.

# Linking Tests to Invariants
# @TEST_INVARIANT: total_balance_constant -> VERIFIED_BY: [sufficient_funds, concurrency_conflict]
# @TEST_INVARIANT: negative_transfer_forbidden -> VERIFIED_BY: [negative_amount]

# @TEST_DATA: sufficient_funds -> {"from": "acc_A", "to": "acc_B", "amt": 100.00}
# @TEST_DATA: insufficient_funds -> {"from": "acc_empty", "to": "acc_B", "amt": 1000.00}
# @TEST_DATA: concurrency_lock -> {./fixtures/transactions.json#race_condition}

from decimal import Decimal
from typing import NamedTuple
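A hedged sketch of a contract-driven test for the `@TEST_DATA` scenarios above. The processor implementation is not shown in the diff, so a minimal stand-in is used; `BusinessRuleViolation` comes from the contract, while `process_transfer` and the balances are assumed names and values:

```python
# Sketch: tests assert against the static @TEST_DATA scenarios, never against
# values recomputed with the source logic.
from decimal import Decimal

class BusinessRuleViolation(Exception):
    pass

BALANCES = {"acc_A": Decimal("500.00"), "acc_B": Decimal("0.00"),
            "acc_empty": Decimal("0.00")}

def process_transfer(from_, to, amt):
    amt = Decimal(str(amt))
    if amt <= 0:
        raise BusinessRuleViolation("Transfer amount must be positive.")
    if BALANCES[from_] < amt:
        raise BusinessRuleViolation("INSUFFICIENT_FUNDS")
    BALANCES[from_] -= amt
    BALANCES[to] += amt
    return "COMPLETED"

# @TEST_DATA: sufficient_funds -> {"from": "acc_A", "to": "acc_B", "amt": 100.00}
def test_sufficient_funds():
    assert process_transfer("acc_A", "acc_B", 100.00) == "COMPLETED"

# @TEST_DATA: insufficient_funds -> {"from": "acc_empty", "to": "acc_B", "amt": 1000.00}
def test_insufficient_funds():
    try:
        process_transfer("acc_empty", "acc_B", 1000.00)
        raise AssertionError("expected BusinessRuleViolation")
    except BusinessRuleViolation as e:
        assert "INSUFFICIENT_FUNDS" in str(e)

test_sufficient_funds()
test_insufficient_funds()
```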
@@ -1,67 +1,24 @@

<!-- [DEF:FrontendComponentShot:Component] -->
<!--
/**
 * @TIER: CRITICAL
 * @SEMANTICS: Task, Button, Action, UX
 * @PURPOSE: Action button to spawn a new task with full UX feedback cycle.
 * @LAYER: UI (Presentation)
 * @RELATION: CALLS -> postApi
 *
 * @INVARIANT: Must prevent double-submission while loading.
 * @INVARIANT: Loading state must always terminate (no infinite spinner).
 * @INVARIANT: User must receive feedback on both success and failure.
 *
 * @TEST_CONTRACT: ComponentState ->
 * {
 *   required_fields: {
 *     isLoading: bool
 *   },
 *   invariants: [
 *     "isLoading=true implies button.disabled=true",
 *     "isLoading=true implies aria-busy=true",
 *     "isLoading=true implies spinner visible"
 *   ]
 * }
 *
 * @TEST_CONTRACT: ApiResponse ->
 * {
 *   required_fields: {},
 *   optional_fields: {
 *     task_id: str
 *   }
 * }
 *
 * @TEST_FIXTURE: idle_state ->
 * {
 *   isLoading: false
 * }
 *
 * @TEST_FIXTURE: successful_response ->
 * {
 *   task_id: "task_123"
 * }
 *
 * @TEST_EDGE: api_failure -> raises Error("Network")
 * @TEST_EDGE: empty_response -> {}
 * @TEST_EDGE: rapid_double_click -> special: concurrent_click
 * @TEST_EDGE: unresolved_promise -> special: pending_state
 *
 * @TEST_INVARIANT: prevent_double_submission -> verifies: [rapid_double_click]
 * @TEST_INVARIANT: loading_state_consistency -> verifies: [idle_state, pending_state]
 * @TEST_INVARIANT: feedback_always_emitted -> verifies: [successful_response, api_failure]
 *
 * @UX_STATE: Idle -> Button enabled, primary color, no spinner.
 * @UX_STATE: Loading -> Button disabled, spinner visible, aria-busy=true.
 * @UX_STATE: Success -> Toast success displayed.
 * @UX_STATE: Error -> Toast error displayed.
 *
 * @UX_FEEDBACK: toast.success, toast.error
 *
 * @UX_TEST: Idle -> {click: spawnTask, expected: isLoading=true}
 * @UX_TEST: Loading -> {double_click: ignored, expected: single_api_call}
 * @UX_TEST: Success -> {api_resolve: task_id, expected: toast.success called}
 * @UX_TEST: Error -> {api_reject: error, expected: toast.error called}
-->
<!-- /**
 * @TIER: CRITICAL
 * @SEMANTICS: Task, Button, Action, UX
 * @PURPOSE: Action button to spawn a new task with full UX feedback cycle.
 * @LAYER: UI (Presentation)
 * @RELATION: CALLS -> postApi
 * @INVARIANT: Must prevent double-submission while loading.
 *
 * @TEST_DATA: idle_state -> {"isLoading": false}
 * @TEST_DATA: loading_state -> {"isLoading": true}
 *
 * @UX_STATE: Idle -> Button enabled, primary color.
 * @UX_STATE: Loading -> Button disabled, spinner visible.
 * @UX_STATE: Error -> Toast notification triggers.
 *
 * @UX_FEEDBACK: Toast success/error.
 * @UX_TEST: Idle -> {click: spawnTask, expected: isLoading=true}
 * @UX_TEST: Success -> {api_resolve: 200, expected: toast.success called}
 */
-->
<script>
import { postApi } from "$lib/api.js";
import { t } from "$lib/i18n";
@@ -5,7 +5,6 @@

#### I. THE LAW (AXIOMS)
1. Meaning is primary. Code is secondary.
2. Blindness is unacceptable. If a graph node (@RELATION) or a data schema is unknown, do not invent an implementation. Stop and request context.
2. The Contract (@PRE/@POST) is the source of truth.
**3. UX is logic, not decoration. Interface states are part of the contract.**
4. The `[DEF]...[/DEF]` structure is inviolable.
@@ -48,13 +47,11 @@

@PRE: Input conditions.
@POST: Output guarantees.
@SIDE_EFFECT: Mutations, IO.
@DATA_CONTRACT: Reference to a DTO/Pydantic model. Replaces manual @PARAM descriptions. Format: Input -> [Model], Output -> [Model].

**UX Tags (Svelte/Frontend):**
**@UX_STATE:** `[StateName] -> Visual behavior` (Idle, Loading, Error).
**@UX_FEEDBACK:** System reaction (Toast, Shake, Red Border).
**@UX_RECOVERY:** Mechanism for the user to recover from an error (Retry, Clear Input).
**@UX_REATIVITY:** Explicit declaration of rune usage. Format: State: $state, Derived: $derived. No legacy export let.

**UX Testing Tags (for the Tester Agent):**
**@UX_TEST:** Test specification for a UX state.
@@ -66,26 +63,41 @@

- #### V. ADAPTATION (TIERS)
- Determined by the `@TIER` tag in the Header.
+ ### V. STRICTNESS LEVELS (TIERS)
+ The degree of control is set by the `@TIER` tag in the Header.
1. **CRITICAL** (Core/Security/**Complex UI**):
- Requirement: Full contract (including **all @UX tags**), Graph, Invariants, Strict Logs.
```
@TEST_CONTRACT: Mandatory description of the input/output data structure.
Format:
@TEST_CONTRACT: Name -> {
  required_fields: {field: type},
  optional_fields: {field: type},
  invariants: [...]
}

**1. CRITICAL** (Core / Security / Complex UI)
- **Law:** Full GRACE. Graph, Invariants, Strict Log, all `@UX` tags.
- **Testing Dogma:** Tests are born from the contract. Bare code without data is blind.
- `@TEST_CONTRACT: InputType -> OutputType`. (Strict interface.)
- `@TEST_SCENARIO: name -> Expected behavior`. (The essence of the test.)
- `@TEST_FIXTURE: name -> file:PATH | INLINE_JSON`. (Data for the Happy Path.)
- `@TEST_EDGE: name -> Description of the failure`. (At least 3 boundaries.)
- *Baseline set:* `missing_field`, `empty_response`, `invalid_type`, `external_fail`.
- `@TEST_INVARIANT: inv_name -> VERIFIED_BY: [scenario_1, ...]`. (Closing the logic loop.)
- **Execution:** The Tester Agent must build its checks strictly from these tags.
@TEST_FIXTURE: The reference correct example (happy-path).
Format:
@TEST_FIXTURE: fixture_name -> {INLINE_JSON | PATH#fragment}

**2. STANDARD** (Business logic / Forms)
- **Law:** The basics. (`@PURPOSE`, `@UX_STATE`, Log, `@RELATION`).
- **Exception:** For complex forms, introduce `@TEST_SCENARIO` and `@TEST_INVARIANT`.
@TEST_EDGE: Edge cases (at least 3 for CRITICAL).
Format:
@TEST_EDGE: case_name -> {INLINE_JSON | special_case}

**3. TRIVIAL** (DTO / UI atoms / Utilities)
- **Law:** The skeleton. Only the `[DEF]` anchor and `@PURPOSE`. No data or graphs required.
@TEST_INVARIANT: Mandatory. Links tests to invariants.
Format:
@TEST_INVARIANT: invariant_name -> verifies: [test_case_1, test_case_2]

Mandatory edge types for CRITICAL:
- missing_required_field
- empty_response
- invalid_type
- external_failure (exception)
```
- The Tester Agent **MUST** use @TEST_CONTRACT, @TEST_FIXTURE, and @TEST_EDGE when writing tests for CRITICAL modules.
2. **STANDARD** (BizLogic/**Forms**):
- Requirement: Basic contract (@PURPOSE, @UX_STATE), Logs, @RELATION.
- @TEST_DATA: Recommended for Complex Forms.
3. **TRIVIAL** (DTO/**Atoms**):
- Requirement: Only [DEF] anchors and @PURPOSE.
#### VI. LOGGING (THE DAO OF THE MOLECULE / MOLECULAR TOPOLOGY)
Goal: Tracing. Self-correction. Management of the Attention Matrix ("the chemistry of thinking").

@@ -117,16 +129,10 @@

**Unshakable rule:** Every system log carries a `source` brand. For the Outer World (Svelte), inscribe it manually: `console.log("[ID][REFLECT] Msg")`.

- #### VIII. GENERATION ALGORITHM AND ESCAPING A DEAD END
- 1. ANALYSIS. Assess the TIER, the layer, and the UX requirements. What is missing? Request `[NEED_CONTEXT: id]`.
+ #### VII. GENERATION ALGORITHM
+ 1. ANALYSIS. Assess the TIER, the layer, and the UX requirements.
2. SKELETON. Create the `[DEF]`, the Header, and the Contracts.
- 3. IMPLEMENTATION. Write logic that satisfies the Contract (and the UX states). Irrigate the path with `[REASON]` and `[REFLECT]` logs.
+ 3. IMPLEMENTATION. Write logic that satisfies the Contract (and the UX states).
4. CLOSURE. Close all `[/DEF]`.
**DETECTIVE MODE (if a contract is violated):**
IF an error or a contradiction -> STOP.
1. Output `[COHERENCE_CHECK_FAILED]`.
2. Formulate a hypothesis: `[EXPLORE] Is the error in I/O, state, or a dependency?`
3. Request permission to change the contract or to inject debug logs.

IF an error or a contradiction -> STOP. Output `[COHERENCE_CHECK_FAILED]`.
@@ -6,7 +6,7 @@ description: Audit AI-generated unit tests. Your goal is to aggressively search

**OBJECTIVE:** Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.

**INPUT:**
1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`).
1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_DATA`).
2. GENERATED TEST CODE.

### I. CRITICAL ANTI-PATTERNS (REJECT IMMEDIATELY IF FOUND):
@@ -17,7 +17,7 @@ description: Audit AI-generated unit tests. Your goal is to aggressively search

2. **The Logic Mirror (Echoing):**
   - *Definition:* The test re-implements the exact same algorithmic logic found in the source code to calculate the `expected_result`. If the original logic is flawed, the test will falsely pass.
   - *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source.
   - *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_DATA` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source.

3. **The "Happy Path" Illusion:**
   - *Definition:* The test suite only checks successful executions but ignores the `@PRE` conditions (Negative Testing).
@@ -26,78 +26,26 @@ description: Audit AI-generated unit tests. Your goal is to aggressively search

4. **Missing Post-Condition Verification:**
   - *Definition:* The test calls the function but only checks the return value, ignoring `@SIDE_EFFECT` or `@POST` state changes (e.g., failing to verify that a DB call was made or a Store was updated).

5. **Missing Edge Case Coverage:**
   - *Definition:* The test suite ignores `@TEST_EDGE` scenarios defined in the contract.
   - *Rule:* Every `@TEST_EDGE` in the source contract MUST have a corresponding test case.

6. **Missing Invariant Verification:**
   - *Definition:* The test suite does not verify `@TEST_INVARIANT` conditions.
   - *Rule:* Every `@TEST_INVARIANT` MUST be verified by at least one test that attempts to break it.

7. **Missing UX State Testing (Svelte Components):**
   - *Definition:* For Svelte components with `@UX_STATE`, the test suite does not verify state transitions.
   - *Rule:* Every `@UX_STATE` transition MUST have a test verifying the visual/behavioral change.
   - *Check:* `@UX_FEEDBACK` mechanisms (toast, shake, color) must be tested.
   - *Check:* `@UX_RECOVERY` mechanisms (retry, clear input) must be tested.

### II. SEMANTIC PROTOCOL COMPLIANCE

Verify the test file follows GRACE-Poly semantics:

1. **Anchor Integrity:**
   - Test file MUST start with `[DEF:__tests__/test_name:Module]`
   - Test file MUST end with `[/DEF:__tests__/test_name:Module]`

2. **Required Tags:**
   - `@RELATION: VERIFIES -> <path_to_source>` must be present
   - `@PURPOSE:` must describe what is being tested

3. **TIER Alignment:**
   - If source is `@TIER: CRITICAL`, test MUST cover all `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`
   - If source is `@TIER: STANDARD`, test MUST cover `@PRE` and `@POST`
   - If source is `@TIER: TRIVIAL`, basic smoke test is acceptable

### III. AUDIT CHECKLIST
### II. AUDIT CHECKLIST

Evaluate the test code against these criteria:
1. **Target Invocation:** Does the test actually import and call the function/component declared in the `@RELATION: VERIFIES` tag?
2. **Contract Alignment:** Does the test suite cover 100% of the `@PRE` (negative tests) and `@POST` (assertions) conditions from the source contract?
3. **Test Contract Compliance:** Does the test follow the interface defined in `@TEST_CONTRACT`?
4. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_FIXTURE`?
5. **Edge Coverage:** Are all `@TEST_EDGE` scenarios tested?
6. **Invariant Coverage:** Are all `@TEST_INVARIANT` conditions verified?
7. **UX Coverage (if applicable):** Are all `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tested?
8. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself?
9. **Semantic Anchor:** Does the test file have proper `[DEF]` and `[/DEF]` anchors?
3. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_DATA`?
4. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself?

### IV. OUTPUT FORMAT
### III. OUTPUT FORMAT

You MUST respond strictly in the following JSON format. Do not add markdown blocks outside the JSON.

{
  "verdict": "APPROVED" | "REJECTED",
  "rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "MISSING_EDGES" | "MISSING_INVARIANTS" | "MISSING_UX_TESTS" | "SEMANTIC_VIOLATION" | "NONE",
  "rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "NONE",
  "audit_details": {
    "target_invoked": true/false,
    "pre_conditions_tested": true/false,
    "post_conditions_tested": true/false,
    "test_fixture_used": true/false,
    "edges_covered": true/false,
    "invariants_verified": true/false,
    "ux_states_tested": true/false,
    "semantic_anchors_present": true/false
  },
  "coverage_summary": {
    "total_edges": number,
    "edges_tested": number,
    "total_invariants": number,
    "invariants_tested": number,
    "total_ux_states": number,
    "ux_states_tested": number
  },
  "tier_compliance": {
    "source_tier": "CRITICAL" | "STANDARD" | "TRIVIAL",
    "meets_tier_requirements": true/false
    "test_data_used": true/false
  },
  "feedback": "Strict, actionable feedback for the test generator agent. Explain exactly which anti-pattern was detected and how to fix it."
}
@@ -1,4 +1,4 @@
---
description: USE SEMANTIC
---
Read .ai/standards/semantics.md. You MUST use it during development.
Read .specify/memory/semantics.md (or .ai/standards/semantics.md if it is not found). You MUST use it during development.

@@ -63,7 +63,6 @@ Load only the minimal necessary context from each artifact:

**From constitution:**

- Load `.ai/standards/constitution.md` for principle validation
- Load `.ai/standards/semantics.md` for technical standard validation

### 3. Build Semantic Models

@@ -20,7 +20,7 @@ Analyze test failure reports, identify root causes, and fix implementation issue

1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
3. **TEST DATA**: If tests use @TEST_ fixtures, preserve them when fixing
3. **TEST DATA**: If tests use @TEST_DATA fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes
@@ -53,15 +53,6 @@ You **MUST** consider the user input before proceeding (if not empty).
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios

3. Load and analyze the implementation context:
   - **REQUIRED**: Read `.ai/standards/semantics.md` for strict coding standards and contract requirements
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios

4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on actual project setup:

@@ -120,13 +111,7 @@ You **MUST** consider the user input before proceeding (if not empty).
- **Validation checkpoints**: Verify each phase completion before proceeding

7. Implementation execution rules:
   - **Strict Adherence**: Apply `.ai/standards/semantics.md` rules:
     - Every file MUST start with a `[DEF:id:Type]` header and end with a closing `[/DEF:id:Type]` anchor.
     - Include `@TIER` and define contracts (`@PRE`, `@POST`).
     - For Svelte components, use `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY`, and explicitly declare reactivity with `@UX_REACTIVITY: State: $state, Derived: $derived`.
     - **Molecular Topology Logging**: Use prefixes `[EXPLORE]`, `[REASON]`, `[REFLECT]` in logs to trace logic.
   - **CRITICAL Contracts**: If a task description contains a contract summary (e.g., `CRITICAL: PRE: ..., POST: ...`), these constraints are **MANDATORY** and must be strictly implemented in the code using guards/assertions (if applicable per protocol).
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If you need to write tests for contracts, entities, and integration scenarios
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
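Put together, the file rules above (a `[DEF]` header, `@TIER`, `@PRE`/`@POST` contracts, and guard assertions for CRITICAL contract summaries) might look like this in a Python module; the ids, contract text, and function are invented for illustration:

```python
# [DEF:src/discount:Module]
# @TIER: CRITICAL
# @PURPOSE: Apply a percentage discount to a price.
# @PRE: 0 <= pct <= 100 and price >= 0
# @POST: result <= price

def apply_discount(price: float, pct: float) -> float:
    # Guard enforcing the @PRE summary, as mandated for CRITICAL contracts.
    assert 0 <= pct <= 100 and price >= 0, "[discount][REASON] @PRE violated"
    return price * (1 - pct / 100)

# [/DEF:src/discount:Module]
```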
@@ -22,7 +22,7 @@ You **MUST** consider the user input before proceeding (if not empty).

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
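The quoting rule exists because POSIX shells cannot escape a single quote inside a single-quoted string, so the `'I'\''m Groot'` idiom closes the string, emits an escaped quote, and reopens it. As an aside (not part of the command itself), Python's `shlex.quote` performs the equivalent transformation:

```python
import shlex

arg = "I'm Groot"
quoted = shlex.quote(arg)  # closes, escapes, and reopens around the quote
print(quoted)  # → 'I'"'"'m Groot'

# The quoted form round-trips through shell word splitting unchanged.
assert shlex.split(quoted) == [arg]
```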
2. **Load context**: Read `.ai/ROOT.md` and `.ai/PROJECT_MAP.md` to understand the project structure and navigation. Then read required standards: `.ai/standards/constitution.md` and `.ai/standards/semantics.md`. Load IMPL_PLAN template.
2. **Load context**: Read FEATURE_SPEC and `.ai/standards/constitution.md`. Load IMPL_PLAN template (already copied).

3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
@@ -64,30 +64,16 @@ You **MUST** consider the user input before proceeding (if not empty).

**Prerequisites:** `research.md` complete

0. **Validate Design against UX Reference**:
   - Check if the proposed architecture supports the latency, interactivity, and flow defined in `ux_reference.md`.
   - **Linkage**: Ensure key UI states from `ux_reference.md` map to Component Contracts (`@UX_STATE`).
   - **CRITICAL**: If the technical plan compromises the UX (e.g. "We can't do real-time validation"), you **MUST STOP** and warn the user.

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships, validation rules.
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Design & Verify Contracts (Semantic Protocol)**:
   - **Drafting**: Define `[DEF:id:Type]` Headers, Contracts, and closing `[/DEF:id:Type]` for all new modules based on `.ai/standards/semantics.md`.
   - **TIER Classification**: Explicitly assign `@TIER: [CRITICAL|STANDARD|TRIVIAL]` to each module.
   - **CRITICAL Requirements**: For all CRITICAL modules, define full `@PRE`, `@POST`, and (if UI) `@UX_STATE` contracts. **MUST** also define testing contracts: `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, and `@TEST_INVARIANT`.
   - **Self-Review**:
     - *Completeness*: Do `@PRE`/`@POST` cover edge cases identified in Research? Are test contracts present for CRITICAL?
     - *Connectivity*: Do `@RELATION` tags form a coherent graph?
     - *Compliance*: Does syntax match `[DEF:id:Type]` exactly and is it closed with `[/DEF:id:Type]`?
   - **Output**: Write verified contracts to `contracts/modules.md`.

3. **Simulate Contract Usage**:
   - Trace one key user scenario through the defined contracts to ensure data flow continuity.
   - If a contract interface mismatch is found, fix it immediately.

4. **Generate API contracts**:
   - Output OpenAPI/GraphQL schema to `/contracts/` for backend-frontend sync.
2. **Define interface contracts** (if project has external interfaces) → `/contracts/`:
   - Identify what interfaces the project exposes to users or other systems
   - Document the contract format appropriate for the project type
   - Examples: public APIs for libraries, command schemas for CLI tools, endpoints for web services, grammars for parsers, UI contracts for applications
   - Skip if project is purely internal (build scripts, one-off tools, etc.)

3. **Agent context update**:
   - Run `.specify/scripts/bash/update-agent-context.sh agy`

@@ -24,7 +24,7 @@ You **MUST** consider the user input before proceeding (if not empty).
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities), ux_reference.md (experience source of truth)
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (interface contracts), research.md (decisions), quickstart.md (test scenarios)
   - Note: Not all projects have all documents. Generate tasks based on what's available.

@@ -70,12 +70,6 @@ The tasks.md should be immediately executable - each task must be specific enough

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.

### UX Preservation (CRITICAL)

- **Source of Truth**: `ux_reference.md` is the absolute standard for the "feel" of the feature.
- **Violation Warning**: If any task would inherently violate the UX (e.g. "Remove progress bar to simplify code"), you **MUST** flag this to the user immediately.
- **Verification Task**: You **MUST** add a specific task at the end of each User Story phase: `- [ ] Txxx [USx] Verify implementation matches ux_reference.md (Happy Path & Errors)`

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:
@@ -119,12 +113,9 @@ Every task MUST strictly follow this format:
   - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts (CRITICAL TIER)**:
   - Identify components marked as `@TIER: CRITICAL` in `contracts/modules.md`.
   - For these components, **MUST** append the summary of `@PRE`, `@POST`, `@UX_STATE`, and test contracts (`@TEST_FIXTURE`, `@TEST_EDGE`) directly to the task description.
   - Example: `- [ ] T005 [P] [US1] Implement Auth (CRITICAL: PRE: token exists, POST: returns User, TESTS: 2 edges) in src/auth.py`
   - Map each contract/endpoint → to the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase
2. **From Contracts**:
   - Map each interface contract → to the user story it serves
   - If tests requested: Each interface contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity to the user story(ies) that need it

@@ -20,7 +20,7 @@ Execute full testing cycle: analyze code for testable modules, write tests with

1. **NEVER delete existing tests** - Only update if they fail due to bugs in the test or implementation
2. **NEVER duplicate tests** - Check existing tests first before creating new ones
3. **Use TEST_FIXTURE fixtures** - For CRITICAL tier modules, read @TEST_FIXTURE from semantics header
3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from .specify/memory/semantics.md
4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested

## Execution Steps
@@ -40,9 +40,9 @@ Determine:
- Identify completed implementation tasks (not test tasks)
- Extract file paths that need tests

**From .ai/standards/semantics.md:**
**From .specify/memory/semantics.md:**
- Read @TIER annotations for modules
- For CRITICAL modules: Read @TEST_ fixtures
- For CRITICAL modules: Read @TEST_DATA fixtures

**From existing tests:**
- Scan `__tests__` directories for existing tests
@@ -52,8 +52,8 @@ Determine:

Create coverage matrix:

| Module | File | Has Tests | TIER | TEST_FIXTURE Available |
|--------|------|-----------|------|----------------------|
| Module | File | Has Tests | TIER | TEST_DATA Available |
|--------|------|-----------|------|-------------------|
| ... | ... | ... | ... | ... |

### 4. Write Tests (TDD Approach)
@@ -61,7 +61,7 @@ Create coverage matrix:
For each module requiring tests:

1. **Check existing tests**: Scan `__tests__/` for duplicates
2. **Read TEST_FIXTURE**: If CRITICAL tier, read @TEST_FIXTURE from semantic header
2. **Read TEST_DATA**: If CRITICAL tier, read @TEST_DATA from .specify/memory/semantics.md
3. **Write test**: Follow co-location strategy
   - Python: `src/module/__tests__/test_module.py`
   - Svelte: `src/lib/components/__tests__/test_component.test.js`
@@ -102,7 +102,6 @@ describe('Component UX States', () => {
  // @UX_RECOVERY: Retry on error
  it('should allow retry on error', async () => { ... });
});
// [/DEF:__tests__/test_Component:Module]
```

### 5. Test Documentation
@@ -171,7 +170,7 @@ Generate test execution report:

- [ ] Fix failed tests
- [ ] Add more coverage for [module]
- [ ] Review TEST_FIXTURE fixtures
- [ ] Review TEST_DATA fixtures
```

## Context for Testing
@@ -1,103 +0,0 @@
---
description: Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.
---

**ROLE:** Elite Quality Assurance Architect and Red Teamer.
**OBJECTIVE:** Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.

**INPUT:**
1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`).
2. GENERATED TEST CODE.

### I. CRITICAL ANTI-PATTERNS (REJECT IMMEDIATELY IF FOUND):

1. **The Tautology (Self-Fulfilling Prophecy):**
   - *Definition:* The test asserts hardcoded values against hardcoded values without executing the core business logic, or mocks the actual function being tested.
   - *Example of Failure:* `assert 2 + 2 == 4` or mocking the class under test so that it returns exactly what the test asserts.

2. **The Logic Mirror (Echoing):**
   - *Definition:* The test re-implements the exact same algorithmic logic found in the source code to calculate the `expected_result`. If the original logic is flawed, the test will falsely pass.
   - *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source.
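The two anti-patterns above can be made concrete with a minimal sketch; `total_with_tax` and the fixture value `120.0` are invented for illustration:

```python
# Hypothetical function under audit.
def total_with_tax(amount: float, rate: float) -> float:
    return amount + amount * rate

# ANTI-PATTERN (Logic Mirror): the expected value re-runs the source formula,
# so a bug in the formula would pass the test anyway.
def test_total_mirrored():
    assert total_with_tax(100, 0.2) == 100 + 100 * 0.2

# ACCEPTABLE: assert against a static, predefined outcome (a fixture constant).
def test_total_fixture():
    assert total_with_tax(100, 0.2) == 120.0
```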

3. **The "Happy Path" Illusion:**
   - *Definition:* The test suite only checks successful executions but ignores the `@PRE` conditions (Negative Testing).
   - *Rule:* Every `@PRE` tag in the source contract MUST have a corresponding test that deliberately violates it and asserts the correct Exception/Error state.
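A negative test in the sense of this rule deliberately violates the `@PRE` and asserts the error state. The function and its contract below are hypothetical, and plain `try`/`except` is used to keep the sketch dependency-free:

```python
# Hypothetical function whose contract declares @PRE: b != 0.
def safe_divide(a: float, b: float) -> float:
    if b == 0:
        raise ValueError("@PRE violated: divisor must be non-zero")
    return a / b

# Negative test: violate the @PRE on purpose and assert the Error state.
def test_divide_rejects_zero_divisor():
    try:
        safe_divide(1.0, 0.0)
    except ValueError:
        return  # expected rejection path
    raise AssertionError("@PRE violation was not rejected")
```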

4. **Missing Post-Condition Verification:**
   - *Definition:* The test calls the function but only checks the return value, ignoring `@SIDE_EFFECT` or `@POST` state changes (e.g., failing to verify that a DB call was made or a Store was updated).

5. **Missing Edge Case Coverage:**
   - *Definition:* The test suite ignores `@TEST_EDGE` scenarios defined in the contract.
   - *Rule:* Every `@TEST_EDGE` in the source contract MUST have a corresponding test case.

6. **Missing Invariant Verification:**
   - *Definition:* The test suite does not verify `@TEST_INVARIANT` conditions.
   - *Rule:* Every `@TEST_INVARIANT` MUST be verified by at least one test that attempts to break it.

7. **Missing UX State Testing (Svelte Components):**
   - *Definition:* For Svelte components with `@UX_STATE`, the test suite does not verify state transitions.
   - *Rule:* Every `@UX_STATE` transition MUST have a test verifying the visual/behavioral change.
   - *Check:* `@UX_FEEDBACK` mechanisms (toast, shake, color) must be tested.
   - *Check:* `@UX_RECOVERY` mechanisms (retry, clear input) must be tested.

### II. SEMANTIC PROTOCOL COMPLIANCE

Verify the test file follows GRACE-Poly semantics:

1. **Anchor Integrity:**
   - Test file MUST start with `[DEF:__tests__/test_name:Module]`
   - Test file MUST end with `[/DEF:__tests__/test_name:Module]`

2. **Required Tags:**
   - `@RELATION: VERIFIES -> <path_to_source>` must be present
   - `@PURPOSE:` must describe what is being tested
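Combined, the anchor and tag requirements give a test-file skeleton along these lines; the ids, path, and test body are placeholders:

```python
# [DEF:__tests__/test_discount:Module]
# @RELATION: VERIFIES -> src/discount.py
# @PURPOSE: Verify the discount contract (@PRE rejection and @POST bound).

def test_discount_post_condition():
    assert True  # placeholder body; real assertions target the @POST clause

# [/DEF:__tests__/test_discount:Module]
```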

3. **TIER Alignment:**
   - If source is `@TIER: CRITICAL`, test MUST cover all `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, `@TEST_INVARIANT`
   - If source is `@TIER: STANDARD`, test MUST cover `@PRE` and `@POST`
   - If source is `@TIER: TRIVIAL`, basic smoke test is acceptable

### III. AUDIT CHECKLIST

Evaluate the test code against these criteria:
1. **Target Invocation:** Does the test actually import and call the function/component declared in the `@RELATION: VERIFIES` tag?
2. **Contract Alignment:** Does the test suite cover 100% of the `@PRE` (negative tests) and `@POST` (assertions) conditions from the source contract?
3. **Test Contract Compliance:** Does the test follow the interface defined in `@TEST_CONTRACT`?
4. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_FIXTURE`?
5. **Edge Coverage:** Are all `@TEST_EDGE` scenarios tested?
6. **Invariant Coverage:** Are all `@TEST_INVARIANT` conditions verified?
7. **UX Coverage (if applicable):** Are all `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tested?
8. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself?
9. **Semantic Anchor:** Does the test file have proper `[DEF]` and `[/DEF]` anchors?

### IV. OUTPUT FORMAT

You MUST respond strictly in the following JSON format. Do not add markdown blocks outside the JSON.

{
  "verdict": "APPROVED" | "REJECTED",
  "rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "MISSING_EDGES" | "MISSING_INVARIANTS" | "MISSING_UX_TESTS" | "SEMANTIC_VIOLATION" | "NONE",
  "audit_details": {
    "target_invoked": true/false,
    "pre_conditions_tested": true/false,
    "post_conditions_tested": true/false,
    "test_fixture_used": true/false,
    "edges_covered": true/false,
    "invariants_verified": true/false,
    "ux_states_tested": true/false,
    "semantic_anchors_present": true/false
  },
  "coverage_summary": {
    "total_edges": number,
    "edges_tested": number,
    "total_invariants": number,
    "invariants_tested": number,
    "total_ux_states": number,
    "ux_states_tested": number
  },
  "tier_compliance": {
    "source_tier": "CRITICAL" | "STANDARD" | "TRIVIAL",
    "meets_tier_requirements": true/false
  },
  "feedback": "Strict, actionable feedback for the test generator agent. Explain exactly which anti-pattern was detected and how to fix it."
}
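A filled-in rejection verdict might look as follows (abridged: only some `audit_details` keys are shown, and every value is invented). Building it as a Python dict also checks that it serializes to the bare JSON the format demands:

```python
import json

# Illustrative REJECTED verdict; keys follow the schema above (abridged).
verdict = {
    "verdict": "REJECTED",
    "rejection_reason": "LOGIC_MIRROR",
    "audit_details": {
        "target_invoked": True,
        "pre_conditions_tested": False,
        "post_conditions_tested": True,
    },
    "feedback": "test_total re-implements the tax formula; assert the static "
                "fixture value 120.0 instead of recomputing it.",
}

payload = json.dumps(verdict)  # must serialize cleanly, with no markdown wrapper
print(payload)
```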
@@ -20,7 +20,7 @@ Analyze test failure reports, identify root causes, and fix implementation issue

1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
3. **TEST DATA**: If tests use @TEST_ fixtures, preserve them when fixing
3. **TEST DATA**: If tests use @TEST_DATA fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes

@@ -117,11 +117,7 @@ You **MUST** consider the user input before proceeding (if not empty).
- **Validation checkpoints**: Verify each phase completion before proceeding

7. Implementation execution rules:
   - **Strict Adherence**: Apply `.ai/standards/semantics.md` rules:
     - Every file MUST start with a `[DEF:id:Type]` header and end with a closing `[/DEF:id:Type]` anchor.
     - Include `@TIER` and define contracts (`@PRE`, `@POST`).
     - For Svelte components, use `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY`, and explicitly declare reactivity with `@UX_REACTIVITY: State: $state, Derived: $derived`.
     - **Molecular Topology Logging**: Use prefixes `[EXPLORE]`, `[REASON]`, `[REFLECT]` in logs to trace logic.
   - **Strict Adherence**: Apply `.ai/standards/semantics.md` rules - every file must start with [DEF] header, include @TIER, and define contracts.
   - **CRITICAL Contracts**: If a task description contains a contract summary (e.g., `CRITICAL: PRE: ..., POST: ...`), these constraints are **MANDATORY** and must be strictly implemented in the code using guards/assertions (if applicable per protocol).
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If you need to write tests for contracts, entities, and integration scenarios

@@ -73,13 +73,13 @@ You **MUST** consider the user input before proceeding (if not empty).
   - Entity name, fields, relationships, validation rules.

2. **Design & Verify Contracts (Semantic Protocol)**:
   - **Drafting**: Define `[DEF:id:Type]` Headers, Contracts, and closing `[/DEF:id:Type]` for all new modules based on `.ai/standards/semantics.md`.
   - **Drafting**: Define [DEF] Headers and Contracts for all new modules based on `.ai/standards/semantics.md`.
   - **TIER Classification**: Explicitly assign `@TIER: [CRITICAL|STANDARD|TRIVIAL]` to each module.
   - **CRITICAL Requirements**: For all CRITICAL modules, define full `@PRE`, `@POST`, and (if UI) `@UX_STATE` contracts. **MUST** also define testing contracts: `@TEST_CONTRACT`, `@TEST_FIXTURE`, `@TEST_EDGE`, and `@TEST_INVARIANT`.
   - **CRITICAL Requirements**: For all CRITICAL modules, define full `@PRE`, `@POST`, and (if UI) `@UX_STATE` contracts.
   - **Self-Review**:
     - *Completeness*: Do `@PRE`/`@POST` cover edge cases identified in Research? Are test contracts present for CRITICAL?
     - *Completeness*: Do `@PRE`/`@POST` cover edge cases identified in Research?
     - *Connectivity*: Do `@RELATION` tags form a coherent graph?
     - *Compliance*: Does syntax match `[DEF:id:Type]` exactly and is it closed with `[/DEF:id:Type]`?
     - *Compliance*: Does syntax match `[DEF:id:Type]` exactly?
   - **Output**: Write verified contracts to `contracts/modules.md`.

3. **Simulate Contract Usage**:

@@ -121,8 +121,8 @@ Every task MUST strictly follow this format:

2. **From Contracts (CRITICAL TIER)**:
   - Identify components marked as `@TIER: CRITICAL` in `contracts/modules.md`.
   - For these components, **MUST** append the summary of `@PRE`, `@POST`, `@UX_STATE`, and test contracts (`@TEST_FIXTURE`, `@TEST_EDGE`) directly to the task description.
   - Example: `- [ ] T005 [P] [US1] Implement Auth (CRITICAL: PRE: token exists, POST: returns User, TESTS: 2 edges) in src/auth.py`
   - For these components, **MUST** append the summary of `@PRE`, `@POST`, and `@UX_STATE` contracts directly to the task description.
   - Example: `- [ ] T005 [P] [US1] Implement Auth (CRITICAL: PRE: token exists, POST: returns User) in src/auth.py`
   - Map each contract/endpoint → to the user story it serves
   - If tests requested: Each contract → contract test task [P] before implementation in that story's phase

@@ -20,7 +20,7 @@ Execute full testing cycle: analyze code for testable modules, write tests with

1. **NEVER delete existing tests** - Only update if they fail due to bugs in the test or implementation
2. **NEVER duplicate tests** - Check existing tests first before creating new ones
3. **Use TEST_FIXTURE fixtures** - For CRITICAL tier modules, read @TEST_FIXTURE from .ai/standards/semantics.md
3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from .ai/standards/semantics.md
4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested

## Execution Steps
@@ -42,7 +42,7 @@ Determine:

**From .ai/standards/semantics.md:**
- Read @TIER annotations for modules
- For CRITICAL modules: Read @TEST_ fixtures
- For CRITICAL modules: Read @TEST_DATA fixtures

**From existing tests:**
- Scan `__tests__` directories for existing tests
@@ -52,8 +52,8 @@ Determine:

Create coverage matrix:

| Module | File | Has Tests | TIER | TEST_FIXTURE Available |
|--------|------|-----------|------|----------------------|
| Module | File | Has Tests | TIER | TEST_DATA Available |
|--------|------|-----------|------|-------------------|
| ... | ... | ... | ... | ... |
### 4. Write Tests (TDD Approach)
|
||||
@@ -61,7 +61,7 @@ Create coverage matrix:
|
||||
For each module requiring tests:
|
||||
|
||||
1. **Check existing tests**: Scan `__tests__/` for duplicates
|
||||
2. **Read TEST_FIXTURE**: If CRITICAL tier, read @TEST_FIXTURE from semantics header
|
||||
2. **Read TEST_DATA**: If CRITICAL tier, read @TEST_DATA from .ai/standards/semantics.md
|
||||
3. **Write test**: Follow co-location strategy
|
||||
- Python: `src/module/__tests__/test_module.py`
|
||||
- Svelte: `src/lib/components/__tests__/test_component.test.js`
|
||||
@@ -102,7 +102,6 @@ describe('Component UX States', () => {
|
||||
// @UX_RECOVERY: Retry on error
|
||||
it('should allow retry on error', async () => { ... });
|
||||
});
|
||||
// [/DEF:__tests__/test_Component:Module]
|
||||
```
|
||||
|
||||
### 5. Test Documentation
|
||||
@@ -171,7 +170,7 @@ Generate test execution report:
|
||||
|
||||
- [ ] Fix failed tests
|
||||
- [ ] Add more coverage for [module]
|
||||
- [ ] Review TEST_FIXTURE fixtures
|
||||
- [ ] Review TEST_DATA fixtures
|
||||
```
|
||||
|
||||
## Context for Testing
|
||||
|
||||
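The co-location and TEST_DATA rules above can be illustrated with a minimal co-located test module; the module name, fixture key, and data here are hypothetical, and a real test would live at `src/module/__tests__/test_module.py`:

```python
# Hypothetical co-located test: lives in __tests__ next to the code it covers.
# The dict mirrors a @TEST_DATA annotation from the semantics header.

TEST_DATA = {"slugify_happy": {"input": "Main Revenue", "expected": "main-revenue"}}


def slugify(title):
    # Minimal stand-in for the module under test.
    return title.lower().replace(" ", "-")


def test_slugify_happy():
    case = TEST_DATA["slugify_happy"]
    assert slugify(case["input"]) == case["expected"]


test_slugify_happy()
```

Keeping the fixture data in one named table makes it easy to check the coverage matrix against the @TEST_DATA annotations before writing a new test.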
Submodule backend/git_repos/10 deleted from 3c0ade67f9
backend/logs/app.log.1 (148074 lines; diff suppressed because it is too large)
backend/mappings.db (BIN, new file; binary file not shown)
@@ -76,15 +76,11 @@ class _FakeTaskManager:

class _FakeConfigManager:
    def get_environments(self):
        return [
-            SimpleNamespace(id="dev", name="Development", url="http://dev", credentials_id="dev", username="fakeuser", password="fakepassword"),
-            SimpleNamespace(id="prod", name="Production", url="http://prod", credentials_id="prod", username="fakeuser", password="fakepassword"),
+            SimpleNamespace(id="dev", name="Development"),
+            SimpleNamespace(id="prod", name="Production"),
        ]

    def get_config(self):
        return SimpleNamespace(
            settings=SimpleNamespace(migration_sync_cron="0 0 * * *"),
            environments=self.get_environments()
        )

# [/DEF:_FakeConfigManager:Class]
# [DEF:_admin_user:Function]
# @TIER: TRIVIAL
@@ -649,49 +645,5 @@ def test_confirm_nonexistent_id_returns_404():
    assert exc.value.status_code == 404


-# [DEF:test_migration_with_dry_run_includes_summary:Function]
-# @PURPOSE: Migration command with dry run flag must return the dry run summary in confirmation text.
-# @PRE: User specifies a migration with the --dry-run flag.
-# @POST: Response state is needs_confirmation and text contains dry-run summary counts.
-def test_migration_with_dry_run_includes_summary(monkeypatch):
-    import src.core.migration.dry_run_orchestrator as dry_run_module
-    from unittest.mock import MagicMock
-    _clear_assistant_state()
-    task_manager = _FakeTaskManager()
-    db = _FakeDb()
-
-    class _FakeDryRunService:
-        def run(self, selection, source_client, target_client, db_session):
-            return {
-                "summary": {
-                    "dashboards": {"create": 1, "update": 0, "delete": 0},
-                    "charts": {"create": 3, "update": 2, "delete": 1},
-                    "datasets": {"create": 0, "update": 1, "delete": 0}
-                }
-            }
-
-    monkeypatch.setattr(dry_run_module, "MigrationDryRunService", _FakeDryRunService)
-
-    import src.core.superset_client as superset_client_module
-    monkeypatch.setattr(superset_client_module, "SupersetClient", lambda env: MagicMock())
-
-    start = _run_async(
-        assistant_module.send_message(
-            request=assistant_module.AssistantMessageRequest(
-                message="migration from dev to prod for dashboard 10 --dry-run"  # originally Russian
-            ),
-            current_user=_admin_user(),
-            task_manager=task_manager,
-            config_manager=_FakeConfigManager(),
-            db=db,
-        )
-    )
-
-    assert start.state == "needs_confirmation"
-    assert "dry-run report: ON" in start.text      # originally Russian UI strings
-    assert "Dry-run report:" in start.text
-    assert "new objects created: 4" in start.text
-    assert "updated: 3" in start.text
-    assert "deleted: 1" in start.text
-# [/DEF:test_migration_with_dry_run_includes_summary:Function]
# [/DEF:test_guarded_operation_confirm_roundtrip:Function]
# [/DEF:backend.src.api.routes.__tests__.test_assistant_api:Module]
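The assertions in the removed test expect totals aggregated across the per-type summary: created 1+3+0 = 4, updated 0+2+1 = 3, deleted 0+1+0 = 1. A sketch of that aggregation (the production formatter that builds the confirmation text is not shown in this diff):

```python
# Aggregate per-object-type dry-run counts into the totals the test asserts.
summary = {
    "dashboards": {"create": 1, "update": 0, "delete": 0},
    "charts": {"create": 3, "update": 2, "delete": 1},
    "datasets": {"create": 0, "update": 1, "delete": 0},
}


def aggregate(summary):
    totals = {"create": 0, "update": 0, "delete": 0}
    for counts in summary.values():
        for action, n in counts.items():
            totals[action] += n
    return totals


print(aggregate(summary))  # {'create': 4, 'update': 3, 'delete': 1}
```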
@@ -10,41 +10,6 @@ from datetime import datetime, timezone
from fastapi.testclient import TestClient
from src.app import app
from src.api.routes.dashboards import DashboardsResponse
from src.dependencies import get_current_user, has_permission, get_config_manager, get_task_manager, get_resource_service, get_mapping_service

-# Global mock user for get_current_user dependency overrides
-mock_user = MagicMock()
-mock_user.username = "testuser"
-mock_user.roles = []
-admin_role = MagicMock()
-admin_role.name = "Admin"
-mock_user.roles.append(admin_role)
-
-@pytest.fixture(autouse=True)
-def mock_deps():
-    config_manager = MagicMock()
-    task_manager = MagicMock()
-    resource_service = MagicMock()
-    mapping_service = MagicMock()
-
-    app.dependency_overrides[get_config_manager] = lambda: config_manager
-    app.dependency_overrides[get_task_manager] = lambda: task_manager
-    app.dependency_overrides[get_resource_service] = lambda: resource_service
-    app.dependency_overrides[get_mapping_service] = lambda: mapping_service
-    app.dependency_overrides[get_current_user] = lambda: mock_user
-
-    app.dependency_overrides[has_permission("plugin:migration", "READ")] = lambda: mock_user
-    app.dependency_overrides[has_permission("plugin:migration", "EXECUTE")] = lambda: mock_user
-    app.dependency_overrides[has_permission("plugin:backup", "EXECUTE")] = lambda: mock_user
-    app.dependency_overrides[has_permission("tasks", "READ")] = lambda: mock_user
-
-    yield {
-        "config": config_manager,
-        "task": task_manager,
-        "resource": resource_service,
-        "mapping": mapping_service
-    }
-    app.dependency_overrides.clear()

client = TestClient(app)
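The removed `mock_deps` fixture builds on FastAPI's `app.dependency_overrides` dict: a test registers an override keyed by the original dependency callable, the app resolves that dependency through the override, and clearing the dict after `yield` restores the real providers. A minimal plain-Python mimic of that lookup (no FastAPI required; `App` and `real_config_manager` are illustrative stand-ins, not the project's classes):

```python
# Plain-Python sketch of the dependency_overrides resolution order.

def real_config_manager():
    return "real-config"


class App:
    def __init__(self):
        self.dependency_overrides = {}

    def resolve(self, dependency):
        # An override, keyed by the original callable, wins over the real provider.
        provider = self.dependency_overrides.get(dependency, dependency)
        return provider()


app = App()
assert app.resolve(real_config_manager) == "real-config"

app.dependency_overrides[real_config_manager] = lambda: "mock-config"
assert app.resolve(real_config_manager) == "mock-config"

app.dependency_overrides.clear()  # teardown, as after the fixture's yield
assert app.resolve(real_config_manager) == "real-config"
```

The autouse fixture centralised this registration and teardown; the patch-based replacements below instead scope each mock to a single test's `with` block.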
@@ -53,35 +18,45 @@ client = TestClient(app)
# @TEST: GET /api/dashboards returns 200 and valid schema
# @PRE: env_id exists
# @POST: Response matches DashboardsResponse schema
-def test_get_dashboards_success(mock_deps):
-    """Uses @TEST_FIXTURE: dashboard_list_happy data."""
-    mock_env = MagicMock()
-    mock_env.id = "prod"
-    mock_deps["config"].get_environments.return_value = [mock_env]
-    mock_deps["task"].get_all_tasks.return_value = []
+def test_get_dashboards_success():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
+         patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        # Mock environment
+        mock_env = MagicMock()
+        mock_env.id = "prod"
+        mock_config.return_value.get_environments.return_value = [mock_env]
+
+        # Mock task manager
+        mock_task_mgr.return_value.get_all_tasks.return_value = []
+
+        # Mock resource service response
+        async def mock_get_dashboards(env, tasks):
+            return [
+                {
+                    "id": 1,
+                    "title": "Sales Report",
+                    "slug": "sales",
+                    "git_status": {"branch": "main", "sync_status": "OK"},
+                    "last_task": {"task_id": "task-1", "status": "SUCCESS"}
+                }
+            ]
+        mock_service.return_value.get_dashboards_with_status = AsyncMock(
+            side_effect=mock_get_dashboards
+        )
+
+        # Mock permission
+        mock_perm.return_value = lambda: True

-    # @TEST_FIXTURE: dashboard_list_happy -> {"id": 1, "title": "Main Revenue"}
-    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
-        {
-            "id": 1,
-            "title": "Main Revenue",
-            "slug": "main-revenue",
-            "git_status": {"branch": "main", "sync_status": "OK"},
-            "last_task": {"task_id": "task-1", "status": "SUCCESS"}
-        }
-    ])
-
-    response = client.get("/api/dashboards?env_id=prod")
-
-    assert response.status_code == 200
-    data = response.json()
-    # exhaustive @POST assertions
-    assert "dashboards" in data
-    assert len(data["dashboards"]) == 1
-    assert data["dashboards"][0]["title"] == "Main Revenue"
-    assert data["total"] == 1
-    assert "page" in data
-    DashboardsResponse(**data)
+        response = client.get("/api/dashboards?env_id=prod")
+
+        assert response.status_code == 200
+        data = response.json()
+        assert "dashboards" in data
+        assert "total" in data
+        assert "page" in data


# [/DEF:test_get_dashboards_success:Function]
@@ -91,81 +66,55 @@ def test_get_dashboards_success(mock_deps):
# @TEST: GET /api/dashboards filters by search term
# @PRE: search parameter provided
# @POST: Only matching dashboards returned
-def test_get_dashboards_with_search(mock_deps):
-    mock_env = MagicMock()
-    mock_env.id = "prod"
-    mock_deps["config"].get_environments.return_value = [mock_env]
-    mock_deps["task"].get_all_tasks.return_value = []
+def test_get_dashboards_with_search():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
+         patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        # Mock environment
+        mock_env = MagicMock()
+        mock_env.id = "prod"
+        mock_config.return_value.get_environments.return_value = [mock_env]
+
+        mock_task_mgr.return_value.get_all_tasks.return_value = []
+
+        async def mock_get_dashboards(env, tasks):
+            return [
+                {"id": 1, "title": "Sales Report", "slug": "sales"},
+                {"id": 2, "title": "Marketing Dashboard", "slug": "marketing"}
+            ]
+        mock_service.return_value.get_dashboards_with_status = AsyncMock(
+            side_effect=mock_get_dashboards
+        )
+
+        mock_perm.return_value = lambda: True

-    async def mock_get_dashboards(env, tasks):
-        return [
-            {"id": 1, "title": "Sales Report", "slug": "sales"},
-            {"id": 2, "title": "Marketing Dashboard", "slug": "marketing"}
-        ]
-    mock_deps["resource"].get_dashboards_with_status = AsyncMock(
-        side_effect=mock_get_dashboards
-    )
-
-    response = client.get("/api/dashboards?env_id=prod&search=sales")
-
-    assert response.status_code == 200
-    data = response.json()
-    # @POST: Filtered result count must match search
-    assert len(data["dashboards"]) == 1
-    assert data["dashboards"][0]["title"] == "Sales Report"
+        response = client.get("/api/dashboards?env_id=prod&search=sales")
+
+        assert response.status_code == 200
+        data = response.json()
+        # Filtered by search term


# [/DEF:test_get_dashboards_with_search:Function]


-# [DEF:test_get_dashboards_empty:Function]
-# @TEST_EDGE: empty_dashboards -> {env_id: 'empty_env', expected_total: 0}
-def test_get_dashboards_empty(mock_deps):
-    """@TEST_EDGE: empty_dashboards -> {env_id: 'empty_env', expected_total: 0}"""
-    mock_env = MagicMock()
-    mock_env.id = "empty_env"
-    mock_deps["config"].get_environments.return_value = [mock_env]
-    mock_deps["task"].get_all_tasks.return_value = []
-    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[])
-
-    response = client.get("/api/dashboards?env_id=empty_env")
-    assert response.status_code == 200
-    data = response.json()
-    assert data["total"] == 0
-    assert len(data["dashboards"]) == 0
-    assert data["total_pages"] == 1
-    DashboardsResponse(**data)
-# [/DEF:test_get_dashboards_empty:Function]
-
-
-# [DEF:test_get_dashboards_superset_failure:Function]
-# @TEST_EDGE: external_superset_failure -> {env_id: 'bad_conn', status: 503}
-def test_get_dashboards_superset_failure(mock_deps):
-    """@TEST_EDGE: external_superset_failure -> {env_id: 'bad_conn', status: 503}"""
-    mock_env = MagicMock()
-    mock_env.id = "bad_conn"
-    mock_deps["config"].get_environments.return_value = [mock_env]
-    mock_deps["task"].get_all_tasks.return_value = []
-    mock_deps["resource"].get_dashboards_with_status = AsyncMock(
-        side_effect=Exception("Connection refused")
-    )
-
-    response = client.get("/api/dashboards?env_id=bad_conn")
-    assert response.status_code == 503
-    assert "Failed to fetch dashboards" in response.json()["detail"]
-# [/DEF:test_get_dashboards_superset_failure:Function]
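The search test implies a case-insensitive title filter: "sales" should match "Sales Report" but not "Marketing Dashboard". A sketch of such a filter (the route's real filtering code is not shown in this diff, so this is an inferred behaviour, not the actual implementation):

```python
# Case-insensitive substring filter over dashboard titles, as the test data implies.
dashboards = [
    {"id": 1, "title": "Sales Report", "slug": "sales"},
    {"id": 2, "title": "Marketing Dashboard", "slug": "marketing"},
]


def filter_dashboards(items, search):
    term = search.lower()
    return [d for d in items if term in d["title"].lower()]


result = filter_dashboards(dashboards, "sales")
assert len(result) == 1
assert result[0]["title"] == "Sales Report"
```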
# [DEF:test_get_dashboards_env_not_found:Function]
# @TEST: GET /api/dashboards returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
-def test_get_dashboards_env_not_found(mock_deps):
-    mock_deps["config"].get_environments.return_value = []
-    response = client.get("/api/dashboards?env_id=nonexistent")
-
-    assert response.status_code == 404
-    assert "Environment not found" in response.json()["detail"]
+def test_get_dashboards_env_not_found():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        mock_config.return_value.get_environments.return_value = []
+        mock_perm.return_value = lambda: True
+
+        response = client.get("/api/dashboards?env_id=nonexistent")
+
+        assert response.status_code == 404
+        assert "Environment not found" in response.json()["detail"]


# [/DEF:test_get_dashboards_env_not_found:Function]

@@ -175,29 +124,40 @@ def test_get_dashboards_env_not_found(mock_deps):
# @TEST: GET /api/dashboards returns 400 for invalid page/page_size
# @PRE: page < 1 or page_size > 100
# @POST: Returns 400 error
-def test_get_dashboards_invalid_pagination(mock_deps):
-    mock_env = MagicMock()
-    mock_env.id = "prod"
-    mock_deps["config"].get_environments.return_value = [mock_env]
-    # Invalid page
-    response = client.get("/api/dashboards?env_id=prod&page=0")
-    assert response.status_code == 400
-    assert "Page must be >= 1" in response.json()["detail"]
-
-    # Invalid page_size
-    response = client.get("/api/dashboards?env_id=prod&page_size=101")
-    assert response.status_code == 400
-    assert "Page size must be between 1 and 100" in response.json()["detail"]
+def test_get_dashboards_invalid_pagination():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        mock_env = MagicMock()
+        mock_env.id = "prod"
+        mock_config.return_value.get_environments.return_value = [mock_env]
+        mock_perm.return_value = lambda: True
+
+        # Invalid page
+        response = client.get("/api/dashboards?env_id=prod&page=0")
+        assert response.status_code == 400
+        assert "Page must be >= 1" in response.json()["detail"]
+
+        # Invalid page_size
+        response = client.get("/api/dashboards?env_id=prod&page_size=101")
+        assert response.status_code == 400
+        assert "Page size must be between 1 and 100" in response.json()["detail"]


# [/DEF:test_get_dashboards_invalid_pagination:Function]
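Both requests above exercise the same bounds check. A sketch of validation consistent with the asserted error strings (the route's actual implementation is not shown in this hunk):

```python
# Pagination bounds implied by the test: page >= 1, 1 <= page_size <= 100.
def validate_pagination(page, page_size):
    if page < 1:
        return "Page must be >= 1"
    if not 1 <= page_size <= 100:
        return "Page size must be between 1 and 100"
    return None  # valid


assert validate_pagination(0, 10) == "Page must be >= 1"
assert validate_pagination(1, 101) == "Page size must be between 1 and 100"
assert validate_pagination(1, 100) is None
```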
# [DEF:test_get_dashboard_detail_success:Function]
# @TEST: GET /api/dashboards/{id} returns dashboard detail with charts and datasets
-def test_get_dashboard_detail_success(mock_deps):
-    with patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
+def test_get_dashboard_detail_success():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm, \
+         patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:

        mock_env = MagicMock()
        mock_env.id = "prod"
-        mock_deps["config"].get_environments.return_value = [mock_env]
+        mock_config.return_value.get_environments.return_value = [mock_env]
+        mock_perm.return_value = lambda: True

        mock_client = MagicMock()
        mock_client.get_dashboard_detail.return_value = {

@@ -245,46 +205,56 @@ def test_get_dashboard_detail_success(mock_deps):

# [DEF:test_get_dashboard_detail_env_not_found:Function]
# @TEST: GET /api/dashboards/{id} returns 404 for missing environment
-def test_get_dashboard_detail_env_not_found(mock_deps):
-    mock_deps["config"].get_environments.return_value = []
-
-    response = client.get("/api/dashboards/42?env_id=missing")
-
-    assert response.status_code == 404
-    assert "Environment not found" in response.json()["detail"]
+def test_get_dashboard_detail_env_not_found():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+        mock_config.return_value.get_environments.return_value = []
+        mock_perm.return_value = lambda: True
+
+        response = client.get("/api/dashboards/42?env_id=missing")
+
+        assert response.status_code == 404
+        assert "Environment not found" in response.json()["detail"]
# [/DEF:test_get_dashboard_detail_env_not_found:Function]
# [DEF:test_migrate_dashboards_success:Function]
# @TEST: POST /api/dashboards/migrate creates migration task
# @PRE: Valid source_env_id, target_env_id, dashboard_ids
-# @POST: Returns task_id and create_task was called
-def test_migrate_dashboards_success(mock_deps):
-    mock_source = MagicMock()
-    mock_source.id = "source"
-    mock_target = MagicMock()
-    mock_target.id = "target"
-    mock_deps["config"].get_environments.return_value = [mock_source, mock_target]
+# @POST: Returns task_id
+def test_migrate_dashboards_success():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        # Mock environments
+        mock_source = MagicMock()
+        mock_source.id = "source"
+        mock_target = MagicMock()
+        mock_target.id = "target"
+        mock_config.return_value.get_environments.return_value = [mock_source, mock_target]
+
+        # Mock task manager
+        mock_task = MagicMock()
+        mock_task.id = "task-migrate-123"
+        mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)
+
+        # Mock permission
+        mock_perm.return_value = lambda: True

-    mock_task = MagicMock()
-    mock_task.id = "task-migrate-123"
-    mock_deps["task"].create_task = AsyncMock(return_value=mock_task)
-
-    response = client.post(
-        "/api/dashboards/migrate",
-        json={
-            "source_env_id": "source",
-            "target_env_id": "target",
-            "dashboard_ids": [1, 2, 3],
-            "db_mappings": {"old_db": "new_db"}
-        }
-    )
-
-    assert response.status_code == 200
-    data = response.json()
-    assert "task_id" in data
-    # @POST/@SIDE_EFFECT: create_task was called
-    mock_deps["task"].create_task.assert_called_once()
+        response = client.post(
+            "/api/dashboards/migrate",
+            json={
+                "source_env_id": "source",
+                "target_env_id": "target",
+                "dashboard_ids": [1, 2, 3],
+                "db_mappings": {"old_db": "new_db"}
+            }
+        )
+
+        assert response.status_code == 200
+        data = response.json()
+        assert "task_id" in data


# [/DEF:test_migrate_dashboards_success:Function]
@@ -294,184 +264,154 @@ def test_migrate_dashboards_success(mock_deps):
# @TEST: POST /api/dashboards/migrate returns 400 for empty dashboard_ids
# @PRE: dashboard_ids is empty
# @POST: Returns 400 error
-def test_migrate_dashboards_no_ids(mock_deps):
-    response = client.post(
-        "/api/dashboards/migrate",
-        json={
-            "source_env_id": "source",
-            "target_env_id": "target",
-            "dashboard_ids": []
-        }
-    )
-
-    assert response.status_code == 400
-    assert "At least one dashboard ID must be provided" in response.json()["detail"]
+def test_migrate_dashboards_no_ids():
+    with patch("src.api.routes.dashboards.has_permission") as mock_perm:
+        mock_perm.return_value = lambda: True
+
+        response = client.post(
+            "/api/dashboards/migrate",
+            json={
+                "source_env_id": "source",
+                "target_env_id": "target",
+                "dashboard_ids": []
+            }
+        )
+
+        assert response.status_code == 400
+        assert "At least one dashboard ID must be provided" in response.json()["detail"]


# [/DEF:test_migrate_dashboards_no_ids:Function]


-# [DEF:test_migrate_dashboards_env_not_found:Function]
-# @PRE: source_env_id and target_env_id are valid environment IDs
-def test_migrate_dashboards_env_not_found(mock_deps):
-    """@PRE: source_env_id and target_env_id are valid environment IDs."""
-    mock_deps["config"].get_environments.return_value = []
-    response = client.post(
-        "/api/dashboards/migrate",
-        json={
-            "source_env_id": "ghost",
-            "target_env_id": "t",
-            "dashboard_ids": [1]
-        }
-    )
-    assert response.status_code == 404
-    assert "Source environment not found" in response.json()["detail"]
-# [/DEF:test_migrate_dashboards_env_not_found:Function]
# [DEF:test_backup_dashboards_success:Function]
# @TEST: POST /api/dashboards/backup creates backup task
# @PRE: Valid env_id, dashboard_ids
-# @POST: Returns task_id and create_task was called
-def test_backup_dashboards_success(mock_deps):
-    mock_env = MagicMock()
-    mock_env.id = "prod"
-    mock_deps["config"].get_environments.return_value = [mock_env]
+# @POST: Returns task_id
+def test_backup_dashboards_success():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        # Mock environment
+        mock_env = MagicMock()
+        mock_env.id = "prod"
+        mock_config.return_value.get_environments.return_value = [mock_env]
+
+        # Mock task manager
+        mock_task = MagicMock()
+        mock_task.id = "task-backup-456"
+        mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)
+
+        # Mock permission
+        mock_perm.return_value = lambda: True

-    mock_task = MagicMock()
-    mock_task.id = "task-backup-456"
-    mock_deps["task"].create_task = AsyncMock(return_value=mock_task)
-
-    response = client.post(
-        "/api/dashboards/backup",
-        json={
-            "env_id": "prod",
-            "dashboard_ids": [1, 2, 3],
-            "schedule": "0 0 * * *"
-        }
-    )
-
-    assert response.status_code == 200
-    data = response.json()
-    assert "task_id" in data
-    # @POST/@SIDE_EFFECT: create_task was called
-    mock_deps["task"].create_task.assert_called_once()
+        response = client.post(
+            "/api/dashboards/backup",
+            json={
+                "env_id": "prod",
+                "dashboard_ids": [1, 2, 3],
+                "schedule": "0 0 * * *"
+            }
+        )
+
+        assert response.status_code == 200
+        data = response.json()
+        assert "task_id" in data


# [/DEF:test_backup_dashboards_success:Function]


-# [DEF:test_backup_dashboards_env_not_found:Function]
-# @PRE: env_id is a valid environment ID
-def test_backup_dashboards_env_not_found(mock_deps):
-    """@PRE: env_id is a valid environment ID."""
-    mock_deps["config"].get_environments.return_value = []
-    response = client.post(
-        "/api/dashboards/backup",
-        json={
-            "env_id": "ghost",
-            "dashboard_ids": [1]
-        }
-    )
-    assert response.status_code == 404
-    assert "Environment not found" in response.json()["detail"]
-# [/DEF:test_backup_dashboards_env_not_found:Function]
# [DEF:test_get_database_mappings_success:Function]
# @TEST: GET /api/dashboards/db-mappings returns mapping suggestions
# @PRE: Valid source_env_id, target_env_id
# @POST: Returns list of database mappings
-def test_get_database_mappings_success(mock_deps):
-    mock_source = MagicMock()
-    mock_source.id = "prod"
-    mock_target = MagicMock()
-    mock_target.id = "staging"
-    mock_deps["config"].get_environments.return_value = [mock_source, mock_target]
+def test_get_database_mappings_success():
+    with patch("src.api.routes.dashboards.get_mapping_service") as mock_service, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+
+        # Mock mapping service
+        mock_service.return_value.get_suggestions = AsyncMock(return_value=[
+            {
+                "source_db": "old_sales",
+                "target_db": "new_sales",
+                "source_db_uuid": "uuid-1",
+                "target_db_uuid": "uuid-2",
+                "confidence": 0.95
+            }
+        ])
+
+        # Mock permission
+        mock_perm.return_value = lambda: True

-    mock_deps["mapping"].get_suggestions = AsyncMock(return_value=[
-        {
-            "source_db": "old_sales",
-            "target_db": "new_sales",
-            "source_db_uuid": "uuid-1",
-            "target_db_uuid": "uuid-2",
-            "confidence": 0.95
-        }
-    ])
-
-    response = client.get("/api/dashboards/db-mappings?source_env_id=prod&target_env_id=staging")
-
-    assert response.status_code == 200
-    data = response.json()
-    assert "mappings" in data
-    assert len(data["mappings"]) == 1
-    assert data["mappings"][0]["confidence"] == 0.95
+        response = client.get("/api/dashboards/db-mappings?source_env_id=prod&target_env_id=staging")
+
+        assert response.status_code == 200
+        data = response.json()
+        assert "mappings" in data


# [/DEF:test_get_database_mappings_success:Function]


-# [DEF:test_get_database_mappings_env_not_found:Function]
-# @PRE: source_env_id and target_env_id are valid environment IDs
-def test_get_database_mappings_env_not_found(mock_deps):
-    """@PRE: source_env_id must be a valid environment."""
-    mock_deps["config"].get_environments.return_value = []
-    response = client.get("/api/dashboards/db-mappings?source_env_id=ghost&target_env_id=t")
-    assert response.status_code == 404
-# [/DEF:test_get_database_mappings_env_not_found:Function]
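The mapping test only asserts the response shape, so the real scoring algorithm is not visible in this diff. A hypothetical sketch of how a suggestion with a confidence score could be derived from database-name similarity, using `difflib` purely as a stand-in measure:

```python
# Hypothetical name-similarity scorer; NOT the project's actual mapping service.
from difflib import SequenceMatcher


def suggest_mapping(source_db, target_dbs):
    # Pick the target whose name is most similar to the source name.
    best = max(target_dbs, key=lambda t: SequenceMatcher(None, source_db, t).ratio())
    confidence = SequenceMatcher(None, source_db, best).ratio()
    return {"source_db": source_db, "target_db": best, "confidence": round(confidence, 2)}


suggestion = suggest_mapping("old_sales", ["new_sales", "new_hr"])
assert suggestion["target_db"] == "new_sales"
assert 0.0 <= suggestion["confidence"] <= 1.0
```

Whatever the real service does, the contract the test pins down is only that each mapping row carries source/target names, UUIDs, and a confidence in [0, 1].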
# [DEF:test_get_dashboard_tasks_history_filters_success:Function]
# @TEST: GET /api/dashboards/{id}/tasks returns backup and llm tasks for dashboard
-def test_get_dashboard_tasks_history_filters_success(mock_deps):
-    now = datetime.now(timezone.utc)
+def test_get_dashboard_tasks_history_filters_success():
+    with patch("src.api.routes.dashboards.get_task_manager") as mock_task_mgr, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm:
+        now = datetime.now(timezone.utc)

-    llm_task = MagicMock()
-    llm_task.id = "task-llm-1"
-    llm_task.plugin_id = "llm_dashboard_validation"
-    llm_task.status = "SUCCESS"
-    llm_task.started_at = now
-    llm_task.finished_at = now
-    llm_task.params = {"dashboard_id": "42", "environment_id": "prod"}
-    llm_task.result = {"summary": "LLM validation complete"}
+        llm_task = MagicMock()
+        llm_task.id = "task-llm-1"
+        llm_task.plugin_id = "llm_dashboard_validation"
+        llm_task.status = "SUCCESS"
+        llm_task.started_at = now
+        llm_task.finished_at = now
+        llm_task.params = {"dashboard_id": "42", "environment_id": "prod"}
+        llm_task.result = {"summary": "LLM validation complete"}

-    backup_task = MagicMock()
-    backup_task.id = "task-backup-1"
-    backup_task.plugin_id = "superset-backup"
-    backup_task.status = "RUNNING"
-    backup_task.started_at = now
-    backup_task.finished_at = None
-    backup_task.params = {"env": "prod", "dashboards": [42]}
-    backup_task.result = {}
+        backup_task = MagicMock()
+        backup_task.id = "task-backup-1"
+        backup_task.plugin_id = "superset-backup"
+        backup_task.status = "RUNNING"
+        backup_task.started_at = now
+        backup_task.finished_at = None
+        backup_task.params = {"env": "prod", "dashboards": [42]}
+        backup_task.result = {}

-    other_task = MagicMock()
-    other_task.id = "task-other"
-    other_task.plugin_id = "superset-backup"
-    other_task.status = "SUCCESS"
-    other_task.started_at = now
-    other_task.finished_at = now
-    other_task.params = {"env": "prod", "dashboards": [777]}
-    other_task.result = {}
+        other_task = MagicMock()
+        other_task.id = "task-other"
+        other_task.plugin_id = "superset-backup"
+        other_task.status = "SUCCESS"
+        other_task.started_at = now
+        other_task.finished_at = now
+        other_task.params = {"env": "prod", "dashboards": [777]}
+        other_task.result = {}

-    mock_deps["task"].get_all_tasks.return_value = [other_task, llm_task, backup_task]
+        mock_task_mgr.return_value.get_all_tasks.return_value = [other_task, llm_task, backup_task]
+        mock_perm.return_value = lambda: True

-    response = client.get("/api/dashboards/42/tasks?env_id=prod&limit=10")
+        response = client.get("/api/dashboards/42/tasks?env_id=prod&limit=10")

-    assert response.status_code == 200
-    data = response.json()
-    assert data["dashboard_id"] == 42
-    assert len(data["items"]) == 2
-    assert {item["plugin_id"] for item in data["items"]} == {"llm_dashboard_validation", "superset-backup"}
+        assert response.status_code == 200
+        data = response.json()
+        assert data["dashboard_id"] == 42
+        assert len(data["items"]) == 2
+        assert {item["plugin_id"] for item in data["items"]} == {"llm_dashboard_validation", "superset-backup"}
# [/DEF:test_get_dashboard_tasks_history_filters_success:Function]
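The history test expects dashboard 42 to match tasks through two different param shapes: `params["dashboard_id"] == "42"` for LLM tasks and `42 in params["dashboards"]` for backup tasks, while `task-other` (dashboard 777) is filtered out. A sketch of that matching rule (field names mirror the mocks above; the route's implementation is not shown in this diff):

```python
# Match a task to a dashboard across the two param shapes used by the mocks.
def task_matches_dashboard(params, dashboard_id):
    # LLM-style tasks store the id as a string under "dashboard_id".
    if str(params.get("dashboard_id")) == str(dashboard_id):
        return True
    # Backup-style tasks store a list of ints under "dashboards".
    return dashboard_id in params.get("dashboards", [])


tasks = [
    {"id": "task-llm-1", "params": {"dashboard_id": "42", "environment_id": "prod"}},
    {"id": "task-backup-1", "params": {"env": "prod", "dashboards": [42]}},
    {"id": "task-other", "params": {"env": "prod", "dashboards": [777]}},
]

matched = [t["id"] for t in tasks if task_matches_dashboard(t["params"], 42)]
assert matched == ["task-llm-1", "task-backup-1"]
```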
# [DEF:test_get_dashboard_thumbnail_success:Function]
# @TEST: GET /api/dashboards/{id}/thumbnail proxies image bytes from Superset
-def test_get_dashboard_thumbnail_success(mock_deps):
-    with patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
+def test_get_dashboard_thumbnail_success():
+    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
+         patch("src.api.routes.dashboards.has_permission") as mock_perm, \
+         patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
        mock_env = MagicMock()
        mock_env.id = "prod"
-        mock_deps["config"].get_environments.return_value = [mock_env]
+        mock_config.return_value.get_environments.return_value = [mock_env]
+        mock_perm.return_value = lambda: True

        mock_client = MagicMock()
        mock_response = MagicMock()
@@ -11,41 +11,6 @@ from unittest.mock import MagicMock, patch, AsyncMock
|
||||
from fastapi.testclient import TestClient
|
||||
from src.app import app
|
||||
from src.api.routes.datasets import DatasetsResponse, DatasetDetailResponse
|
||||
from src.dependencies import get_current_user, has_permission, get_config_manager, get_task_manager, get_resource_service, get_mapping_service
|
||||
|
||||
# Global mock user for get_current_user dependency overrides
|
||||
mock_user = MagicMock()
|
||||
mock_user.username = "testuser"
|
||||
mock_user.roles = []
|
||||
admin_role = MagicMock()
|
||||
admin_role.name = "Admin"
|
||||
mock_user.roles.append(admin_role)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def mock_deps():
|
||||
config_manager = MagicMock()
|
||||
task_manager = MagicMock()
|
||||
resource_service = MagicMock()
|
||||
mapping_service = MagicMock()
|
||||
|
||||
app.dependency_overrides[get_config_manager] = lambda: config_manager
|
||||
app.dependency_overrides[get_task_manager] = lambda: task_manager
|
||||
app.dependency_overrides[get_resource_service] = lambda: resource_service
|
||||
app.dependency_overrides[get_mapping_service] = lambda: mapping_service
|
||||
app.dependency_overrides[get_current_user] = lambda: mock_user
|
||||
|
||||
app.dependency_overrides[has_permission("plugin:migration", "READ")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("plugin:migration", "EXECUTE")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("plugin:backup", "EXECUTE")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("tasks", "READ")] = lambda: mock_user
|
||||
|
||||
yield {
|
||||
"config": config_manager,
|
||||
"task": task_manager,
|
||||
"resource": resource_service,
|
||||
"mapping": mapping_service
|
||||
}
|
||||
app.dependency_overrides.clear()
|
||||
|
||||
client = TestClient(app)
|
||||
|
||||
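The fixture above keys `app.dependency_overrides` by the object returned from `has_permission(...)`. A minimal sketch of why that pattern only works when the factory returns the same callable for equal arguments — the `lru_cache` here is an illustrative assumption, not necessarily how `src.dependencies` implements it:

```python
from functools import lru_cache

# Illustrative stand-in for src.dependencies.has_permission; the cache is
# the assumed detail that makes override-by-key possible.
@lru_cache(maxsize=None)
def has_permission(resource: str, action: str):
    def checker():
        raise RuntimeError("real permission check unavailable in tests")
    return checker

overrides = {}
overrides[has_permission("tasks", "READ")] = lambda: "mock_user"

# FastAPI looks up overrides by the dependency callable's identity, so a
# second call with the same arguments must resolve to the same object.
assert has_permission("tasks", "READ") is has_permission("tasks", "READ")
assert has_permission("tasks", "READ") in overrides
```

Without such caching, each `has_permission("tasks", "READ")` call would produce a fresh closure, and the override registered in the fixture would never match the dependency object used by the route.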
@@ -55,34 +20,41 @@ client = TestClient(app)
# @TEST: GET /api/datasets returns 200 and valid schema
# @PRE: env_id exists
# @POST: Response matches DatasetsResponse schema
def test_get_datasets_success(mock_deps):
    # Mock environment
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]

    # Mock resource service response
    mock_deps["resource"].get_datasets_with_status = AsyncMock(
        return_value=[
            {
                "id": 1,
                "table_name": "sales_data",
                "schema": "public",
                "database": "sales_db",
                "mapped_fields": {"total": 10, "mapped": 5},
                "last_task": {"task_id": "task-1", "status": "SUCCESS"}
            }
        ]
    )
def test_get_datasets_success():
    with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
         patch("src.api.routes.datasets.get_resource_service") as mock_service, \
         patch("src.api.routes.datasets.has_permission") as mock_perm:

        # Mock environment
        mock_env = MagicMock()
        mock_env.id = "prod"
        mock_config.return_value.get_environments.return_value = [mock_env]

        # Mock resource service response
        mock_service.return_value.get_datasets_with_status = AsyncMock(
            return_value=[
                {
                    "id": 1,
                    "table_name": "sales_data",
                    "schema": "public",
                    "database": "sales_db",
                    "mapped_fields": {"total": 10, "mapped": 5},
                    "last_task": {"task_id": "task-1", "status": "SUCCESS"}
                }
            ]
        )

        # Mock permission
        mock_perm.return_value = lambda: True

    response = client.get("/api/datasets?env_id=prod")

    assert response.status_code == 200
    data = response.json()
    assert "datasets" in data
    assert len(data["datasets"]) >= 0
    # Validate against Pydantic model
    DatasetsResponse(**data)
        response = client.get("/api/datasets?env_id=prod")

        assert response.status_code == 200
        data = response.json()
        assert "datasets" in data
        assert len(data["datasets"]) >= 0
        # Validate against Pydantic model
        DatasetsResponse(**data)


# [/DEF:test_get_datasets_success:Function]

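A minimal, self-contained illustration of the `AsyncMock` pattern these tests rely on: the async service method is replaced wholesale with `AsyncMock(return_value=...)`, so awaiting it yields the canned payload. The service and method names echo the test, but this runs against a plain `MagicMock`, not the real resource service:

```python
import asyncio
from unittest.mock import AsyncMock, MagicMock

service = MagicMock()
# Replace the coroutine method itself; awaiting the mock returns the payload.
service.get_datasets_with_status = AsyncMock(
    return_value=[{"id": 1, "table_name": "sales_data"}]
)

result = asyncio.run(service.get_datasets_with_status(env_id="prod"))
assert result[0]["table_name"] == "sales_data"
# By contrast, AsyncMock()(...) calls a fresh AsyncMock and hands back an
# unawaited coroutine instead of configuring a return value.
```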
@@ -92,13 +64,17 @@ def test_get_datasets_success(mock_deps):
# @TEST: GET /api/datasets returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_get_datasets_env_not_found(mock_deps):
    mock_deps["config"].get_environments.return_value = []
def test_get_datasets_env_not_found():
    with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
         patch("src.api.routes.datasets.has_permission") as mock_perm:

        mock_config.return_value.get_environments.return_value = []
        mock_perm.return_value = lambda: True

    response = client.get("/api/datasets?env_id=nonexistent")

    assert response.status_code == 404
    assert "Environment not found" in response.json()["detail"]
        response = client.get("/api/datasets?env_id=nonexistent")

        assert response.status_code == 404
        assert "Environment not found" in response.json()["detail"]


# [/DEF:test_get_datasets_env_not_found:Function]

@@ -108,25 +84,24 @@ def test_get_datasets_env_not_found(mock_deps):
# @TEST: GET /api/datasets returns 400 for invalid page/page_size
# @PRE: page < 1 or page_size > 100
# @POST: Returns 400 error
def test_get_datasets_invalid_pagination(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
def test_get_datasets_invalid_pagination():
    with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
         patch("src.api.routes.datasets.has_permission") as mock_perm:

        mock_env = MagicMock()
        mock_env.id = "prod"
        mock_config.return_value.get_environments.return_value = [mock_env]
        mock_perm.return_value = lambda: True

    # Invalid page
    response = client.get("/api/datasets?env_id=prod&page=0")
    assert response.status_code == 400
    assert "Page must be >= 1" in response.json()["detail"]

    # Invalid page_size (too small)
    response = client.get("/api/datasets?env_id=prod&page_size=0")
    assert response.status_code == 400
    assert "Page size must be between 1 and 100" in response.json()["detail"]

    # @TEST_EDGE: page_size > 100 exceeds max
    response = client.get("/api/datasets?env_id=prod&page_size=101")
    assert response.status_code == 400
    assert "Page size must be between 1 and 100" in response.json()["detail"]
        # Invalid page
        response = client.get("/api/datasets?env_id=prod&page=0")
        assert response.status_code == 400
        assert "Page must be >= 1" in response.json()["detail"]

        # Invalid page_size
        response = client.get("/api/datasets?env_id=prod&page_size=0")
        assert response.status_code == 400
        assert "Page size must be between 1 and 100" in response.json()["detail"]


# [/DEF:test_get_datasets_invalid_pagination:Function]

@@ -136,31 +111,36 @@ def test_get_datasets_invalid_pagination(mock_deps):
# @TEST: POST /api/datasets/map-columns creates mapping task
# @PRE: Valid env_id, dataset_ids, source_type
# @POST: Returns task_id
def test_map_columns_success(mock_deps):
    # Mock environment
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]

    # Mock task manager
    mock_task = MagicMock()
    mock_task.id = "task-123"
    mock_deps["task"].create_task = AsyncMock(return_value=mock_task)
def test_map_columns_success():
    with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
         patch("src.api.routes.datasets.get_task_manager") as mock_task_mgr, \
         patch("src.api.routes.datasets.has_permission") as mock_perm:

        # Mock environment
        mock_env = MagicMock()
        mock_env.id = "prod"
        mock_config.return_value.get_environments.return_value = [mock_env]

        # Mock task manager
        mock_task = MagicMock()
        mock_task.id = "task-123"
        mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)

        # Mock permission
        mock_perm.return_value = lambda: True

    response = client.post(
        "/api/datasets/map-columns",
        json={
            "env_id": "prod",
            "dataset_ids": [1, 2, 3],
            "source_type": "postgresql"
        }
    )

    assert response.status_code == 200
    data = response.json()
    assert "task_id" in data
    # @POST/@SIDE_EFFECT: create_task was called
    mock_deps["task"].create_task.assert_called_once()
        response = client.post(
            "/api/datasets/map-columns",
            json={
                "env_id": "prod",
                "dataset_ids": [1, 2, 3],
                "source_type": "postgresql"
            }
        )

        assert response.status_code == 200
        data = response.json()
        assert "task_id" in data


# [/DEF:test_map_columns_success:Function]

@@ -170,18 +150,21 @@ def test_map_columns_success(mock_deps):
# @TEST: POST /api/datasets/map-columns returns 400 for invalid source_type
# @PRE: source_type is not 'postgresql' or 'xlsx'
# @POST: Returns 400 error
def test_map_columns_invalid_source_type(mock_deps):
    response = client.post(
        "/api/datasets/map-columns",
        json={
            "env_id": "prod",
            "dataset_ids": [1],
            "source_type": "invalid"
        }
    )

    assert response.status_code == 400
    assert "Source type must be 'postgresql' or 'xlsx'" in response.json()["detail"]
def test_map_columns_invalid_source_type():
    with patch("src.api.routes.datasets.has_permission") as mock_perm:
        mock_perm.return_value = lambda: True

        response = client.post(
            "/api/datasets/map-columns",
            json={
                "env_id": "prod",
                "dataset_ids": [1],
                "source_type": "invalid"
            }
        )

        assert response.status_code == 400
        assert "Source type must be 'postgresql' or 'xlsx'" in response.json()["detail"]


# [/DEF:test_map_columns_invalid_source_type:Function]

@@ -191,110 +174,39 @@ def test_map_columns_invalid_source_type(mock_deps):
# @TEST: POST /api/datasets/generate-docs creates doc generation task
# @PRE: Valid env_id, dataset_ids, llm_provider
# @POST: Returns task_id
def test_generate_docs_success(mock_deps):
    # Mock environment
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]

    # Mock task manager
    mock_task = MagicMock()
    mock_task.id = "task-456"
    mock_deps["task"].create_task = AsyncMock(return_value=mock_task)
def test_generate_docs_success():
    with patch("src.api.routes.datasets.get_config_manager") as mock_config, \
         patch("src.api.routes.datasets.get_task_manager") as mock_task_mgr, \
         patch("src.api.routes.datasets.has_permission") as mock_perm:

        # Mock environment
        mock_env = MagicMock()
        mock_env.id = "prod"
        mock_config.return_value.get_environments.return_value = [mock_env]

        # Mock task manager
        mock_task = MagicMock()
        mock_task.id = "task-456"
        mock_task_mgr.return_value.create_task = AsyncMock(return_value=mock_task)

        # Mock permission
        mock_perm.return_value = lambda: True

    response = client.post(
        "/api/datasets/generate-docs",
        json={
            "env_id": "prod",
            "dataset_ids": [1],
            "llm_provider": "openai"
        }
    )

    assert response.status_code == 200
    data = response.json()
    assert "task_id" in data
    # @POST/@SIDE_EFFECT: create_task was called
    mock_deps["task"].create_task.assert_called_once()
        response = client.post(
            "/api/datasets/generate-docs",
            json={
                "env_id": "prod",
                "dataset_ids": [1],
                "llm_provider": "openai"
            }
        )

        assert response.status_code == 200
        data = response.json()
        assert "task_id" in data


# [/DEF:test_generate_docs_success:Function]

# [DEF:test_map_columns_empty_ids:Function]
# @TEST: POST /api/datasets/map-columns returns 400 for empty dataset_ids
# @PRE: dataset_ids is empty
# @POST: Returns 400 error
def test_map_columns_empty_ids(mock_deps):
    """@PRE: dataset_ids must be non-empty."""
    response = client.post(
        "/api/datasets/map-columns",
        json={
            "env_id": "prod",
            "dataset_ids": [],
            "source_type": "postgresql"
        }
    )
    assert response.status_code == 400
    assert "At least one dataset ID must be provided" in response.json()["detail"]
# [/DEF:test_map_columns_empty_ids:Function]


# [DEF:test_generate_docs_empty_ids:Function]
# @TEST: POST /api/datasets/generate-docs returns 400 for empty dataset_ids
# @PRE: dataset_ids is empty
# @POST: Returns 400 error
def test_generate_docs_empty_ids(mock_deps):
    """@PRE: dataset_ids must be non-empty."""
    response = client.post(
        "/api/datasets/generate-docs",
        json={
            "env_id": "prod",
            "dataset_ids": [],
            "llm_provider": "openai"
        }
    )
    assert response.status_code == 400
    assert "At least one dataset ID must be provided" in response.json()["detail"]
# [/DEF:test_generate_docs_empty_ids:Function]


# [DEF:test_generate_docs_env_not_found:Function]
# @TEST: POST /api/datasets/generate-docs returns 404 for missing env
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_generate_docs_env_not_found(mock_deps):
    """@PRE: env_id must be a valid environment."""
    mock_deps["config"].get_environments.return_value = []
    response = client.post(
        "/api/datasets/generate-docs",
        json={
            "env_id": "ghost",
            "dataset_ids": [1],
            "llm_provider": "openai"
        }
    )
    assert response.status_code == 404
    assert "Environment not found" in response.json()["detail"]
# [/DEF:test_generate_docs_env_not_found:Function]


# [DEF:test_get_datasets_superset_failure:Function]
# @TEST_EDGE: external_superset_failure -> {status: 503}
def test_get_datasets_superset_failure(mock_deps):
    """@TEST_EDGE: external_superset_failure -> {status: 503}"""
    mock_env = MagicMock()
    mock_env.id = "bad_conn"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_datasets_with_status = AsyncMock(
        side_effect=Exception("Connection refused")
    )

    response = client.get("/api/datasets?env_id=bad_conn")
    assert response.status_code == 503
    assert "Failed to fetch datasets" in response.json()["detail"]
# [/DEF:test_get_datasets_superset_failure:Function]


# [/DEF:backend.src.api.routes.__tests__.test_datasets:Module]

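Taken together, the tests above pin down an ordering for the request guards: body-level checks (empty `dataset_ids`, bad `source_type`) fire before the environment lookup, which is why the empty-ids tests need no environment mock. A framework-free sketch of that contract — the real checks live in `src.api.routes.datasets` and raise `fastapi.HTTPException`; everything here is illustrative:

```python
class ApiError(Exception):
    """Stand-in for fastapi.HTTPException in this sketch."""
    def __init__(self, status_code: int, detail: str):
        super().__init__(detail)
        self.status_code = status_code
        self.detail = detail

def validate_map_columns_request(known_env_ids, env_id, dataset_ids, source_type):
    # Body validation first: these 400s fire even when no environment
    # is configured, matching the empty-ids and source-type tests.
    if not dataset_ids:
        raise ApiError(400, "At least one dataset ID must be provided")
    if source_type not in ("postgresql", "xlsx"):
        raise ApiError(400, "Source type must be 'postgresql' or 'xlsx'")
    if env_id not in known_env_ids:
        raise ApiError(404, "Environment not found")
```

With this ordering, a request that is both empty and aimed at an unknown environment reports the empty-ids 400 rather than the missing-environment 404.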
@@ -1,198 +0,0 @@
# [DEF:backend.src.api.routes.__tests__.test_git_status_route:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, git, api, status, no_repo
# @PURPOSE: Validate status endpoint behavior for missing and error repository states.
# @LAYER: Domain (Tests)
# @RELATION: CALLS -> src.api.routes.git.get_repository_status

from fastapi import HTTPException
import pytest
import asyncio

from src.api.routes import git as git_routes


# [DEF:test_get_repository_status_returns_no_repo_payload_for_missing_repo:Function]
# @PURPOSE: Ensure missing local repository is represented as NO_REPO payload instead of an API error.
# @PRE: GitService.get_status raises HTTPException(404).
# @POST: Route returns a deterministic NO_REPO status payload.
def test_get_repository_status_returns_no_repo_payload_for_missing_repo(monkeypatch):
    class MissingRepoGitService:
        def _get_repo_path(self, dashboard_id: int) -> str:
            return f"/tmp/missing-repo-{dashboard_id}"

        def get_status(self, dashboard_id: int) -> dict:
            raise AssertionError("get_status must not be called when repository path is missing")

    monkeypatch.setattr(git_routes, "git_service", MissingRepoGitService())

    response = asyncio.run(git_routes.get_repository_status(34))

    assert response["sync_status"] == "NO_REPO"
    assert response["sync_state"] == "NO_REPO"
    assert response["has_repo"] is False
    assert response["current_branch"] is None
# [/DEF:test_get_repository_status_returns_no_repo_payload_for_missing_repo:Function]


# [DEF:test_get_repository_status_propagates_non_404_http_exception:Function]
# @PURPOSE: Ensure HTTP exceptions other than 404 are not masked.
# @PRE: GitService.get_status raises HTTPException with non-404 status.
# @POST: Raised exception preserves original status and detail.
def test_get_repository_status_propagates_non_404_http_exception(monkeypatch):
    class ConflictGitService:
        def _get_repo_path(self, dashboard_id: int) -> str:
            return f"/tmp/existing-repo-{dashboard_id}"

        def get_status(self, dashboard_id: int) -> dict:
            raise HTTPException(status_code=409, detail="Conflict")

    monkeypatch.setattr(git_routes, "git_service", ConflictGitService())
    monkeypatch.setattr(git_routes.os.path, "exists", lambda _path: True)

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.get_repository_status(34))

    assert exc_info.value.status_code == 409
    assert exc_info.value.detail == "Conflict"
# [/DEF:test_get_repository_status_propagates_non_404_http_exception:Function]


# [DEF:test_get_repository_diff_propagates_http_exception:Function]
# @PURPOSE: Ensure diff endpoint preserves domain HTTP errors from GitService.
# @PRE: GitService.get_diff raises HTTPException.
# @POST: Endpoint raises same HTTPException values.
def test_get_repository_diff_propagates_http_exception(monkeypatch):
    class DiffGitService:
        def get_diff(self, dashboard_id: int, file_path=None, staged: bool = False) -> str:
            raise HTTPException(status_code=404, detail="Repository missing")

    monkeypatch.setattr(git_routes, "git_service", DiffGitService())

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.get_repository_diff(12))

    assert exc_info.value.status_code == 404
    assert exc_info.value.detail == "Repository missing"
# [/DEF:test_get_repository_diff_propagates_http_exception:Function]


# [DEF:test_get_history_wraps_unexpected_error_as_500:Function]
# @PURPOSE: Ensure non-HTTP exceptions in history endpoint become deterministic 500 errors.
# @PRE: GitService.get_commit_history raises ValueError.
# @POST: Endpoint returns HTTPException with status 500 and route context.
def test_get_history_wraps_unexpected_error_as_500(monkeypatch):
    class HistoryGitService:
        def get_commit_history(self, dashboard_id: int, limit: int = 50):
            raise ValueError("broken parser")

    monkeypatch.setattr(git_routes, "git_service", HistoryGitService())

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.get_history(12))

    assert exc_info.value.status_code == 500
    assert exc_info.value.detail == "get_history failed: broken parser"
# [/DEF:test_get_history_wraps_unexpected_error_as_500:Function]


# [DEF:test_commit_changes_wraps_unexpected_error_as_500:Function]
# @PURPOSE: Ensure commit endpoint does not leak unexpected errors as 400.
# @PRE: GitService.commit_changes raises RuntimeError.
# @POST: Endpoint raises HTTPException(500) with route context.
def test_commit_changes_wraps_unexpected_error_as_500(monkeypatch):
    class CommitGitService:
        def commit_changes(self, dashboard_id: int, message: str, files):
            raise RuntimeError("index lock")

    class CommitPayload:
        message = "test"
        files = ["dashboards/a.yaml"]

    monkeypatch.setattr(git_routes, "git_service", CommitGitService())

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.commit_changes(12, CommitPayload()))

    assert exc_info.value.status_code == 500
    assert exc_info.value.detail == "commit_changes failed: index lock"
# [/DEF:test_commit_changes_wraps_unexpected_error_as_500:Function]


# [DEF:test_get_repository_status_batch_returns_mixed_statuses:Function]
# @PURPOSE: Ensure batch endpoint returns per-dashboard statuses in one response.
# @PRE: Some repositories are missing and some are initialized.
# @POST: Returned map includes resolved status for each requested dashboard ID.
def test_get_repository_status_batch_returns_mixed_statuses(monkeypatch):
    class BatchGitService:
        def _get_repo_path(self, dashboard_id: int) -> str:
            return f"/tmp/repo-{dashboard_id}"

        def get_status(self, dashboard_id: int) -> dict:
            if dashboard_id == 2:
                return {"sync_state": "SYNCED", "sync_status": "OK"}
            raise HTTPException(status_code=404, detail="not found")

    monkeypatch.setattr(git_routes, "git_service", BatchGitService())
    monkeypatch.setattr(git_routes.os.path, "exists", lambda path: path.endswith("/repo-2"))

    class BatchRequest:
        dashboard_ids = [1, 2]

    response = asyncio.run(git_routes.get_repository_status_batch(BatchRequest()))

    assert response.statuses["1"]["sync_status"] == "NO_REPO"
    assert response.statuses["2"]["sync_state"] == "SYNCED"
# [/DEF:test_get_repository_status_batch_returns_mixed_statuses:Function]


# [DEF:test_get_repository_status_batch_marks_item_as_error_on_service_failure:Function]
# @PURPOSE: Ensure batch endpoint marks failed items as ERROR without failing entire request.
# @PRE: GitService raises non-HTTP exception for one dashboard.
# @POST: Failed dashboard status is marked as ERROR.
def test_get_repository_status_batch_marks_item_as_error_on_service_failure(monkeypatch):
    class BatchErrorGitService:
        def _get_repo_path(self, dashboard_id: int) -> str:
            return f"/tmp/repo-{dashboard_id}"

        def get_status(self, dashboard_id: int) -> dict:
            raise RuntimeError("boom")

    monkeypatch.setattr(git_routes, "git_service", BatchErrorGitService())
    monkeypatch.setattr(git_routes.os.path, "exists", lambda _path: True)

    class BatchRequest:
        dashboard_ids = [9]

    response = asyncio.run(git_routes.get_repository_status_batch(BatchRequest()))

    assert response.statuses["9"]["sync_status"] == "ERROR"
    assert response.statuses["9"]["sync_state"] == "ERROR"
# [/DEF:test_get_repository_status_batch_marks_item_as_error_on_service_failure:Function]


# [DEF:test_get_repository_status_batch_deduplicates_and_truncates_ids:Function]
# @PURPOSE: Ensure batch endpoint protects server from oversized payloads.
# @PRE: request includes duplicate IDs and more than MAX_REPOSITORY_STATUS_BATCH entries.
# @POST: Result contains unique IDs up to configured cap.
def test_get_repository_status_batch_deduplicates_and_truncates_ids(monkeypatch):
    class SafeBatchGitService:
        def _get_repo_path(self, dashboard_id: int) -> str:
            return f"/tmp/repo-{dashboard_id}"

        def get_status(self, dashboard_id: int) -> dict:
            return {"sync_state": "SYNCED", "sync_status": "OK"}

    monkeypatch.setattr(git_routes, "git_service", SafeBatchGitService())
    monkeypatch.setattr(git_routes.os.path, "exists", lambda _path: True)

    class BatchRequest:
        dashboard_ids = [1, 1] + list(range(2, 90))

    response = asyncio.run(git_routes.get_repository_status_batch(BatchRequest()))

    assert len(response.statuses) == git_routes.MAX_REPOSITORY_STATUS_BATCH
    assert "1" in response.statuses
# [/DEF:test_get_repository_status_batch_deduplicates_and_truncates_ids:Function]

# [/DEF:backend.src.api.routes.__tests__.test_git_status_route:Module]

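The dedupe-and-truncate test above implies a small normalization step in the batch route. A sketch of what it verifies — order-preserving dedupe followed by a hard cap. `MAX_REPOSITORY_STATUS_BATCH = 50` is an assumed value chosen so the test's 89 unique IDs overflow it; the real constant lives in `src.api.routes.git`:

```python
MAX_REPOSITORY_STATUS_BATCH = 50  # assumed cap, for illustration only

def normalize_batch_ids(dashboard_ids):
    # Order-preserving dedupe, then truncate to the configured cap.
    seen = set()
    unique = []
    for dashboard_id in dashboard_ids:
        if dashboard_id not in seen:
            seen.add(dashboard_id)
            unique.append(dashboard_id)
    return unique[:MAX_REPOSITORY_STATUS_BATCH]

ids = normalize_batch_ids([1, 1] + list(range(2, 90)))
assert len(ids) == MAX_REPOSITORY_STATUS_BATCH
assert ids[0] == 1
```

Deduplicating before truncating matters: duplicates must not consume slots in the cap, otherwise a padded request could starve legitimate IDs.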
@@ -407,104 +407,4 @@ async def test_execute_migration_invalid_env_raises_400(_mock_env):
    assert exc.value.status_code == 400


@pytest.mark.asyncio
async def test_dry_run_migration_returns_diff_and_risk(db_session):
    # @TEST_EDGE: missing_target_datasource -> validates high risk item generation
    # @TEST_EDGE: breaking_reference -> validates high risk on missing dataset link
    from src.api.routes.migration import dry_run_migration
    from src.models.dashboard import DashboardSelection

    env_source = MagicMock()
    env_source.id = "src"
    env_source.name = "Source"
    env_source.url = "http://source"
    env_source.username = "admin"
    env_source.password = "admin"
    env_source.verify_ssl = False
    env_source.timeout = 30

    env_target = MagicMock()
    env_target.id = "tgt"
    env_target.name = "Target"
    env_target.url = "http://target"
    env_target.username = "admin"
    env_target.password = "admin"
    env_target.verify_ssl = False
    env_target.timeout = 30

    cm = _make_sync_config_manager([env_source, env_target])
    selection = DashboardSelection(
        selected_ids=[42],
        source_env_id="src",
        target_env_id="tgt",
        replace_db_config=False,
        fix_cross_filters=True,
    )

    with patch("src.api.routes.migration.SupersetClient") as MockClient, \
         patch("src.api.routes.migration.MigrationDryRunService") as MockService:
        source_client = MagicMock()
        target_client = MagicMock()
        MockClient.side_effect = [source_client, target_client]

        service_instance = MagicMock()
        service_payload = {
            "generated_at": "2026-02-27T00:00:00+00:00",
            "selection": selection.model_dump(),
            "selected_dashboard_titles": ["Sales"],
            "diff": {
                "dashboards": {"create": [], "update": [{"uuid": "dash-1"}], "delete": []},
                "charts": {"create": [{"uuid": "chart-1"}], "update": [], "delete": []},
                "datasets": {"create": [{"uuid": "dataset-1"}], "update": [], "delete": []},
            },
            "summary": {
                "dashboards": {"create": 0, "update": 1, "delete": 0},
                "charts": {"create": 1, "update": 0, "delete": 0},
                "datasets": {"create": 1, "update": 0, "delete": 0},
                "selected_dashboards": 1,
            },
            "risk": {
                "score": 75,
                "level": "high",
                "items": [
                    {"code": "missing_datasource"},
                    {"code": "breaking_reference"},
                ],
            },
        }
        service_instance.run.return_value = service_payload
        MockService.return_value = service_instance

        result = await dry_run_migration(selection=selection, config_manager=cm, db=db_session, _=None)

        assert result["summary"]["dashboards"]["update"] == 1
        assert result["summary"]["charts"]["create"] == 1
        assert result["summary"]["datasets"]["create"] == 1
        assert result["risk"]["score"] > 0
        assert any(item["code"] == "missing_datasource" for item in result["risk"]["items"])
        assert any(item["code"] == "breaking_reference" for item in result["risk"]["items"])


@pytest.mark.asyncio
async def test_dry_run_migration_rejects_same_environment(db_session):
    from src.api.routes.migration import dry_run_migration
    from src.models.dashboard import DashboardSelection

    env = MagicMock()
    env.id = "same"
    env.name = "Same"
    env.url = "http://same"
    env.username = "admin"
    env.password = "admin"
    env.verify_ssl = False
    env.timeout = 30

    cm = _make_sync_config_manager([env])
    selection = DashboardSelection(selected_ids=[1], source_env_id="same", target_env_id="same")

    with pytest.raises(HTTPException) as exc:
        await dry_run_migration(selection=selection, config_manager=cm, db=db_session, _=None)
    assert exc.value.status_code == 400


# [/DEF:backend.src.api.routes.__tests__.test_migration_routes:Module]

@@ -1,5 +1,5 @@
# [DEF:backend.tests.test_reports_api:Module]
# @TIER: STANDARD
# @TIER: CRITICAL
# @SEMANTICS: tests, reports, api, contract, pagination, filtering
# @PURPOSE: Contract tests for GET /api/reports defaults, pagination, and filtering behavior.
# @LAYER: Domain (Tests)

@@ -1,5 +1,5 @@
# [DEF:backend.tests.test_reports_detail_api:Module]
# @TIER: STANDARD
# @TIER: CRITICAL
# @SEMANTICS: tests, reports, api, detail, diagnostics
# @PURPOSE: Contract tests for GET /api/reports/{report_id} detail endpoint behavior.
# @LAYER: Domain (Tests)

@@ -1,5 +1,5 @@
# [DEF:backend.tests.test_reports_openapi_conformance:Module]
# @TIER: STANDARD
# @TIER: CRITICAL
# @SEMANTICS: tests, reports, openapi, conformance
# @PURPOSE: Validate implemented reports payload shape against OpenAPI-required top-level contract fields.
# @LAYER: Domain (Tests)

@@ -810,9 +810,6 @@ def _parse_command(message: str, config_manager: ConfigManager) -> Dict[str, Any
|
||||
if any(k in lower for k in ["миграц", "migration", "migrate"]):
|
||||
src = _extract_id(lower, [r"(?:с|from)\s+([a-z0-9_-]+)"])
|
||||
tgt = _extract_id(lower, [r"(?:на|to)\s+([a-z0-9_-]+)"])
|
||||
dry_run = "--dry-run" in lower or "dry run" in lower
|
||||
replace_db_config = "--replace-db-config" in lower
|
||||
fix_cross_filters = "--fix-cross-filters" not in lower # Default true usually, but let's say test uses --dry-run
|
||||
is_dangerous = _is_production_env(tgt, config_manager)
|
||||
return {
|
||||
"domain": "migration",
|
||||
@@ -821,13 +818,10 @@ def _parse_command(message: str, config_manager: ConfigManager) -> Dict[str, Any
|
||||
"dashboard_id": int(dashboard_id) if dashboard_id else None,
|
||||
"source_env": src,
|
||||
"target_env": tgt,
|
||||
"dry_run": dry_run,
|
||||
"replace_db_config": replace_db_config,
|
||||
"fix_cross_filters": True,
|
||||
},
|
||||
"confidence": 0.95 if dashboard_id and src and tgt else 0.72,
|
||||
"risk_level": "dangerous" if is_dangerous else "guarded",
|
||||
"requires_confirmation": is_dangerous or dry_run,
|
||||
"requires_confirmation": is_dangerous,
|
||||
}
|

# Backup
@@ -1063,7 +1057,7 @@ _SAFE_OPS = {"show_capabilities", "get_task_status"}
# @PURPOSE: Build human-readable confirmation prompt for an intent before execution.
# @PRE: intent contains operation and entities fields.
# @POST: Returns descriptive Russian-language text ending with confirmation prompt.
async def _async_confirmation_summary(intent: Dict[str, Any], config_manager: ConfigManager, db: Session) -> str:
def _confirmation_summary(intent: Dict[str, Any]) -> str:
operation = intent.get("operation", "")
entities = intent.get("entities", {})
descriptions: Dict[str, str] = {
@@ -1091,67 +1085,8 @@ async def _async_confirmation_summary(intent: Dict[str, Any], config_manager: Co
tgt=_label(entities.get("target_env")),
dataset=_label(entities.get("dataset_id")),
)

if operation == "execute_migration":
flags = []
flags.append("маппинг БД: " + ("ВКЛ" if _coerce_query_bool(entities.get("replace_db_config", False)) else "ВЫКЛ"))
flags.append("исправление кроссфильтров: " + ("ВКЛ" if _coerce_query_bool(entities.get("fix_cross_filters", True)) else "ВЫКЛ"))
dry_run_enabled = _coerce_query_bool(entities.get("dry_run", False))
flags.append("отчет dry-run: " + ("ВКЛ" if dry_run_enabled else "ВЫКЛ"))
text += f" ({', '.join(flags)})"

if dry_run_enabled:
try:
from ...core.migration.dry_run_orchestrator import MigrationDryRunService
from ...models.dashboard import DashboardSelection
from ...core.superset_client import SupersetClient

src_token = entities.get("source_env")
tgt_token = entities.get("target_env")
dashboard_id = _resolve_dashboard_id_entity(entities, config_manager, env_hint=src_token)

if dashboard_id and src_token and tgt_token:
src_env_id = _resolve_env_id(src_token, config_manager)
tgt_env_id = _resolve_env_id(tgt_token, config_manager)

if src_env_id and tgt_env_id:
env_map = {env.id: env for env in config_manager.get_environments()}
source_env = env_map.get(src_env_id)
target_env = env_map.get(tgt_env_id)

if source_env and target_env and source_env.id != target_env.id:
selection = DashboardSelection(
source_env_id=source_env.id,
target_env_id=target_env.id,
selected_ids=[dashboard_id],
replace_db_config=_coerce_query_bool(entities.get("replace_db_config", False)),
fix_cross_filters=_coerce_query_bool(entities.get("fix_cross_filters", True))
)
service = MigrationDryRunService()
source_client = SupersetClient(source_env)
target_client = SupersetClient(target_env)
report = service.run(selection, source_client, target_client, db)

s = report.get("summary", {})
dash_s = s.get("dashboards", {})
charts_s = s.get("charts", {})
ds_s = s.get("datasets", {})

# Aggregate create/update/delete counts across entity types
creates = dash_s.get("create", 0) + charts_s.get("create", 0) + ds_s.get("create", 0)
updates = dash_s.get("update", 0) + charts_s.get("update", 0) + ds_s.get("update", 0)
deletes = dash_s.get("delete", 0) + charts_s.get("delete", 0) + ds_s.get("delete", 0)

text += f"\n\nОтчет dry-run:\n- Будет создано новых объектов: {creates}\n- Будет обновлено: {updates}\n- Будет удалено: {deletes}"
else:
text += "\n\n(Не удалось загрузить отчет dry-run: неверные окружения)."
except Exception as e:
import traceback
logger.warning("[assistant.dry_run_summary][failed] Exception: %s\n%s", e, traceback.format_exc())
text += f"\n\n(Не удалось загрузить отчет dry-run: {e})."

return f"Выполнить: {text}. Подтвердите или отмените."
# [/DEF:_async_confirmation_summary:Function]
# [/DEF:_confirmation_summary:Function]
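The per-entity aggregation in the dry-run branch above can be factored into one helper. A sketch, assuming the report's `summary` section keys entities by name and actions by `create`/`update`/`delete`, exactly as the code above reads them:

```python
from typing import Dict

def summarize_dry_run(report: Dict) -> Dict[str, int]:
    # Sum create/update/delete counts over dashboards, charts and datasets,
    # mirroring how the confirmation text derives its three totals.
    summary = report.get("summary", {})
    totals = {"create": 0, "update": 0, "delete": 0}
    for entity in ("dashboards", "charts", "datasets"):
        section = summary.get(entity, {})
        for action in totals:
            totals[action] += section.get(action, 0)
    return totals

report = {"summary": {
    "dashboards": {"create": 1},
    "charts": {"create": 2, "update": 3},
    "datasets": {"delete": 1},
}}
totals = summarize_dry_run(report)
```

Missing sections and missing action keys default to zero, so a partial report still summarizes cleanly.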


# [DEF:_clarification_text_for_intent:Function]
@@ -1241,8 +1176,7 @@ async def _plan_intent_with_llm(
]
)
except Exception as exc:
import traceback
logger.warning(f"[assistant.planner][fallback] LLM planner unavailable: {exc}\n{traceback.format_exc()}")
logger.warning(f"[assistant.planner][fallback] LLM planner unavailable: {exc}")
return None
if not isinstance(response, dict):
return None
@@ -1646,7 +1580,7 @@ async def send_message(
)
CONFIRMATIONS[confirmation_id] = confirm
_persist_confirmation(db, confirm)
text = await _async_confirmation_summary(intent, config_manager, db)
text = _confirmation_summary(intent)
_append_history(
user_id,
conversation_id,
@@ -1961,39 +1895,6 @@ async def list_conversations(
# [/DEF:list_conversations:Function]


# [DEF:delete_conversation:Function]
# @PURPOSE: Soft-delete or hard-delete a conversation and clear its in-memory trace.
# @PRE: conversation_id belongs to current_user.
# @POST: Conversation records are removed from DB and CONVERSATIONS cache.
@router.delete("/conversations/{conversation_id}")
async def delete_conversation(
conversation_id: str,
current_user: User = Depends(get_current_user),
db: Session = Depends(get_db),
):
with belief_scope("assistant.conversations.delete"):
user_id = current_user.id

# 1. Remove from in-memory cache
key = (user_id, conversation_id)
if key in CONVERSATIONS:
del CONVERSATIONS[key]

# 2. Delete from database
deleted_count = db.query(AssistantMessageRecord).filter(
AssistantMessageRecord.user_id == user_id,
AssistantMessageRecord.conversation_id == conversation_id
).delete()

db.commit()

if deleted_count == 0:
raise HTTPException(status_code=404, detail="Conversation not found or already deleted")

return {"status": "success", "deleted": deleted_count, "conversation_id": conversation_id}
# [/DEF:delete_conversation:Function]


@router.get("/history")
# [DEF:get_history:Function]
# @PURPOSE: Retrieve paginated assistant conversation history for current user.

@@ -1,6 +1,6 @@
# [DEF:backend.src.api.routes.dashboards:Module]
#
# @TIER: CRITICAL
# @TIER: STANDARD
# @SEMANTICS: api, dashboards, resources, hub
# @PURPOSE: API endpoints for the Dashboard Hub - listing dashboards with Git and task status
# @LAYER: API
@@ -9,27 +9,6 @@
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
#
# @INVARIANT: All dashboard responses include git_status and last_task metadata
#
# @TEST_CONTRACT: DashboardsAPI -> {
#   required_fields: {env_id: string, page: integer, page_size: integer},
#   optional_fields: {search: string},
#   invariants: ["Pagination must be valid", "Environment must exist"]
# }
#
# @TEST_FIXTURE: dashboard_list_happy -> {
#   "env_id": "prod",
#   "expected_count": 1,
#   "dashboards": [{"id": 1, "title": "Main Revenue"}]
# }
#
# @TEST_EDGE: pagination_zero_page -> {"env_id": "prod", "page": 0, "status": 400}
# @TEST_EDGE: pagination_oversize -> {"env_id": "prod", "page_size": 101, "status": 400}
# @TEST_EDGE: missing_env -> {"env_id": "ghost", "status": 404}
# @TEST_EDGE: empty_dashboards -> {"env_id": "empty_env", "expected_total": 0}
# @TEST_EDGE: external_superset_failure -> {"env_id": "bad_conn", "status": 503}
#
# @TEST_INVARIANT: metadata_consistency -> verifies: [dashboard_list_happy, empty_dashboards]
#

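The `@TEST_EDGE` contracts above imply simple bounds on the query parameters. A hypothetical validator capturing those bounds (the 1..100 `page_size` cap is inferred from the `pagination_oversize` edge case, not stated elsewhere):

```python
from typing import Optional

def validate_pagination(page: int, page_size: int) -> Optional[int]:
    # Returns an HTTP status code on violation, None when the request is valid.
    if page < 1:
        return 400  # pagination_zero_page edge
    if page_size < 1 or page_size > 100:
        return 400  # pagination_oversize edge
    return None

bad_page = validate_pagination(0, 10)
oversize = validate_pagination(1, 101)
ok = validate_pagination(1, 100)
```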
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException, Query, Response
@@ -42,7 +21,6 @@ from ...dependencies import get_config_manager, get_task_manager, get_resource_s
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
from ...core.utils.network import DashboardNotFoundError
from ...services.resource_service import ResourceService
# [/SECTION]

router = APIRouter(prefix="/api/dashboards", tags=["Dashboards"])
@@ -58,11 +36,7 @@ class GitStatus(BaseModel):
# [DEF:LastTask:DataClass]
class LastTask(BaseModel):
task_id: Optional[str] = None
status: Optional[str] = Field(
None,
pattern="^PENDING|RUNNING|SUCCESS|FAILED|ERROR|AWAITING_INPUT|WAITING_INPUT|AWAITING_MAPPING$",
)
validation_status: Optional[str] = Field(None, pattern="^PASS|FAIL|WARN|UNKNOWN$")
status: Optional[str] = Field(None, pattern="^RUNNING|SUCCESS|ERROR|WAITING_INPUT$")
# [/DEF:LastTask:DataClass]

# [DEF:DashboardItem:DataClass]
@@ -72,9 +46,6 @@ class DashboardItem(BaseModel):
slug: Optional[str] = None
url: Optional[str] = None
last_modified: Optional[str] = None
created_by: Optional[str] = None
modified_by: Optional[str] = None
owners: Optional[List[str]] = None
git_status: Optional[GitStatus] = None
last_task: Optional[LastTask] = None
# [/DEF:DashboardItem:DataClass]
@@ -155,57 +126,6 @@ class DatabaseMappingsResponse(BaseModel):
mappings: List[DatabaseMapping]
# [/DEF:DatabaseMappingsResponse:DataClass]


# [DEF:_find_dashboard_id_by_slug:Function]
# @PURPOSE: Resolve dashboard numeric ID by slug using Superset list endpoint.
# @PRE: `dashboard_slug` is non-empty.
# @POST: Returns dashboard ID when found, otherwise None.
def _find_dashboard_id_by_slug(
client: SupersetClient,
dashboard_slug: str,
) -> Optional[int]:
query_variants = [
{"filters": [{"col": "slug", "opr": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
{"filters": [{"col": "slug", "op": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
]

for query in query_variants:
try:
_count, dashboards = client.get_dashboards_page(query=query)
if dashboards:
resolved_id = dashboards[0].get("id")
if resolved_id is not None:
return int(resolved_id)
except Exception:
continue

return None
# [/DEF:_find_dashboard_id_by_slug:Function]
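`query_variants` above exists because different Superset versions expect different filter-operator keys (`opr` vs the older `op`). The try-each-variant pattern, sketched with a stand-in fetch function:

```python
from typing import Callable, List, Optional

def first_matching(variants: List[dict], fetch: Callable[[dict], List[dict]]) -> Optional[dict]:
    # Try each query variant in order; a variant that raises or returns
    # nothing is skipped, and the first row of the first success wins.
    for query in variants:
        try:
            rows = fetch(query)
        except Exception:
            continue
        if rows:
            return rows[0]
    return None

def fake_fetch(query: dict) -> List[dict]:
    # Stand-in for client.get_dashboards_page: this server only accepts "opr".
    if query["filters"][0].get("opr") != "eq":
        raise ValueError("unknown filter key")
    return [{"id": 7, "slug": "main-revenue"}]

variants = [
    {"filters": [{"col": "slug", "op": "eq", "value": "main-revenue"}]},
    {"filters": [{"col": "slug", "opr": "eq", "value": "main-revenue"}]},
]
row = first_matching(variants, fake_fetch)
```

Order matters: the variant the current server rejects simply falls through to the next one.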


# [DEF:_resolve_dashboard_id_from_ref:Function]
# @PURPOSE: Resolve dashboard ID from slug-first reference with numeric fallback.
# @PRE: `dashboard_ref` is provided in route path.
# @POST: Returns a valid dashboard ID or raises HTTPException(404).
def _resolve_dashboard_id_from_ref(
dashboard_ref: str,
client: SupersetClient,
) -> int:
normalized_ref = str(dashboard_ref or "").strip()
if not normalized_ref:
raise HTTPException(status_code=404, detail="Dashboard not found")

# Slug-first: even if ref looks numeric, try slug first.
slug_match_id = _find_dashboard_id_by_slug(client, normalized_ref)
if slug_match_id is not None:
return slug_match_id

if normalized_ref.isdigit():
return int(normalized_ref)

raise HTTPException(status_code=404, detail="Dashboard not found")
# [/DEF:_resolve_dashboard_id_from_ref:Function]
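The slug-first policy above (try the slug lookup even for a numeric-looking ref, then fall back to treating the ref as an ID) can be isolated. A sketch with an in-memory lookup standing in for the Superset call:

```python
from typing import Callable, Optional

def resolve_ref(ref: str, find_by_slug: Callable[[str], Optional[int]]) -> Optional[int]:
    # A slug match wins even when the ref is all digits;
    # the numeric interpretation is only a fallback.
    normalized = (ref or "").strip()
    if not normalized:
        return None
    slug_id = find_by_slug(normalized)
    if slug_id is not None:
        return slug_id
    if normalized.isdigit():
        return int(normalized)
    return None

lookup = {"main-revenue": 1, "42": 7}.get
a = resolve_ref("main-revenue", lookup)  # plain slug hit
b = resolve_ref("42", lookup)            # slug "42" shadows the numeric ID 42
c = resolve_ref("99", lookup)            # no slug -> numeric fallback
```

This ordering means a dashboard whose slug happens to be all digits is still addressable by slug.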

# [DEF:get_dashboards:Function]
# @PURPOSE: Fetch list of dashboards from a specific environment with Git status and last task status
# @PRE: env_id must be a valid environment ID
@@ -249,66 +169,27 @@ async def get_dashboards(
try:
# Get all tasks for status lookup
all_tasks = task_manager.get_all_tasks()

# Fast path: real ResourceService -> one Superset page call per API request.
if isinstance(resource_service, ResourceService):
try:
page_payload = await resource_service.get_dashboards_page_with_status(
env,
all_tasks,
page=page,
page_size=page_size,
search=search,
include_git_status=False,
)
paginated_dashboards = page_payload["dashboards"]
total = page_payload["total"]
total_pages = page_payload["total_pages"]
except Exception as page_error:
logger.warning(
"[get_dashboards][Action] Page-based fetch failed; using compatibility fallback: %s",
page_error,
)
dashboards = await resource_service.get_dashboards_with_status(
env,
all_tasks,
include_git_status=False,
)

if search:
search_lower = search.lower()
dashboards = [
d for d in dashboards
if search_lower in d.get('title', '').lower()
or search_lower in d.get('slug', '').lower()
]

total = len(dashboards)
total_pages = (total + page_size - 1) // page_size if total > 0 else 1
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size
paginated_dashboards = dashboards[start_idx:end_idx]
else:
# Compatibility path for mocked services in route tests.
dashboards = await resource_service.get_dashboards_with_status(
env,
all_tasks,
include_git_status=False,
)

if search:
search_lower = search.lower()
dashboards = [
d for d in dashboards
if search_lower in d.get('title', '').lower()
or search_lower in d.get('slug', '').lower()
]

total = len(dashboards)
total_pages = (total + page_size - 1) // page_size if total > 0 else 1
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size
paginated_dashboards = dashboards[start_idx:end_idx]

# Fetch dashboards with status using ResourceService
dashboards = await resource_service.get_dashboards_with_status(env, all_tasks)

# Apply search filter if provided
if search:
search_lower = search.lower()
dashboards = [
d for d in dashboards
if search_lower in d.get('title', '').lower()
or search_lower in d.get('slug', '').lower()
]

# Calculate pagination
total = len(dashboards)
total_pages = (total + page_size - 1) // page_size if total > 0 else 1
start_idx = (page - 1) * page_size
end_idx = start_idx + page_size

# Slice dashboards for current page
paginated_dashboards = dashboards[start_idx:end_idx]
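Both the fallback and the compatibility path above repeat the same ceil-division pagination arithmetic. Extracted as a standalone helper for clarity:

```python
from typing import List, Tuple

def paginate(items: List, page: int, page_size: int) -> Tuple[List, int, int]:
    # total_pages is a ceil-division; an empty list still reports one page,
    # so the UI never renders "page 1 of 0".
    total = len(items)
    total_pages = (total + page_size - 1) // page_size if total > 0 else 1
    start_idx = (page - 1) * page_size
    return items[start_idx:start_idx + page_size], total, total_pages

page_items, total, total_pages = paginate(list(range(25)), page=3, page_size=10)
```

A page past the end yields an empty slice rather than an error, matching plain list-slicing semantics.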

logger.info(f"[get_dashboards][Coherence:OK] Returning {len(paginated_dashboards)} dashboards (page {page}/{total_pages}, total: {total})")

@@ -338,23 +219,10 @@ async def get_dashboards(
async def get_database_mappings(
source_env_id: str,
target_env_id: str,
config_manager=Depends(get_config_manager),
mapping_service=Depends(get_mapping_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_database_mappings", f"source={source_env_id}, target={target_env_id}"):
# Validate environments exist
environments = config_manager.get_environments()
source_env = next((e for e in environments if e.id == source_env_id), None)
target_env = next((e for e in environments if e.id == target_env_id), None)

if not source_env:
logger.error(f"[get_database_mappings][Coherence:Failed] Source environment not found: {source_env_id}")
raise HTTPException(status_code=404, detail="Source environment not found")
if not target_env:
logger.error(f"[get_database_mappings][Coherence:Failed] Target environment not found: {target_env_id}")
raise HTTPException(status_code=404, detail="Target environment not found")

try:
# Get mapping suggestions using MappingService
suggestions = await mapping_service.get_suggestions(source_env_id, target_env_id)
@@ -382,17 +250,17 @@ async def get_database_mappings(

# [DEF:get_dashboard_detail:Function]
# @PURPOSE: Fetch detailed dashboard info with related charts and datasets
# @PRE: env_id must be valid and dashboard ref (slug or id) must exist
# @PRE: env_id must be valid and dashboard_id must exist
# @POST: Returns dashboard detail payload for overview page
# @RELATION: CALLS -> SupersetClient.get_dashboard_detail
@router.get("/{dashboard_ref}", response_model=DashboardDetailResponse)
@router.get("/{dashboard_id:int}", response_model=DashboardDetailResponse)
async def get_dashboard_detail(
dashboard_ref: str,
dashboard_id: int,
env_id: str,
config_manager=Depends(get_config_manager),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dashboard_detail", f"dashboard_ref={dashboard_ref}, env_id={env_id}"):
with belief_scope("get_dashboard_detail", f"dashboard_id={dashboard_id}, env_id={env_id}"):
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
@@ -401,10 +269,9 @@ async def get_dashboard_detail(

try:
client = SupersetClient(env)
dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, client)
detail = client.get_dashboard_detail(dashboard_id)
logger.info(
f"[get_dashboard_detail][Coherence:OK] Dashboard ref={dashboard_ref} resolved_id={dashboard_id}: {detail.get('chart_count', 0)} charts, {detail.get('dataset_count', 0)} datasets"
f"[get_dashboard_detail][Coherence:OK] Dashboard {dashboard_id}: {detail.get('chart_count', 0)} charts, {detail.get('dataset_count', 0)} datasets"
)
return DashboardDetailResponse(**detail)
except HTTPException:
@@ -450,38 +317,17 @@ def _task_matches_dashboard(task: Any, dashboard_id: int, env_id: Optional[str])

# [DEF:get_dashboard_tasks_history:Function]
# @PURPOSE: Returns history of backup and LLM validation tasks for a dashboard.
# @PRE: dashboard ref (slug or id) is valid.
# @PRE: dashboard_id is a valid integer.
# @POST: Response contains sorted task history (newest first).
@router.get("/{dashboard_ref}/tasks", response_model=DashboardTaskHistoryResponse)
@router.get("/{dashboard_id:int}/tasks", response_model=DashboardTaskHistoryResponse)
async def get_dashboard_tasks_history(
dashboard_ref: str,
dashboard_id: int,
env_id: Optional[str] = None,
limit: int = Query(20, ge=1, le=100),
config_manager=Depends(get_config_manager),
task_manager=Depends(get_task_manager),
_ = Depends(has_permission("tasks", "READ"))
):
with belief_scope("get_dashboard_tasks_history", f"dashboard_ref={dashboard_ref}, env_id={env_id}, limit={limit}"):
dashboard_id: Optional[int] = None
if dashboard_ref.isdigit():
dashboard_id = int(dashboard_ref)
elif env_id:
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_dashboard_tasks_history][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
client = SupersetClient(env)
dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, client)
else:
logger.error(
"[get_dashboard_tasks_history][Coherence:Failed] Non-numeric dashboard ref requires env_id"
)
raise HTTPException(
status_code=400,
detail="env_id is required when dashboard reference is a slug",
)

with belief_scope("get_dashboard_tasks_history", f"dashboard_id={dashboard_id}, env_id={env_id}, limit={limit}"):
matching_tasks = []
for task in task_manager.get_all_tasks():
if _task_matches_dashboard(task, dashboard_id, env_id):
@@ -524,7 +370,7 @@ async def get_dashboard_tasks_history(
)
)

logger.info(f"[get_dashboard_tasks_history][Coherence:OK] Found {len(items)} tasks for dashboard_ref={dashboard_ref}, dashboard_id={dashboard_id}")
logger.info(f"[get_dashboard_tasks_history][Coherence:OK] Found {len(items)} tasks for dashboard {dashboard_id}")
return DashboardTaskHistoryResponse(dashboard_id=dashboard_id, items=items)
# [/DEF:get_dashboard_tasks_history:Function]

@@ -533,15 +379,15 @@ async def get_dashboard_tasks_history(
# @PURPOSE: Proxies Superset dashboard thumbnail with cache support.
# @PRE: env_id must exist.
# @POST: Returns image bytes or 202 when thumbnail is being prepared by Superset.
@router.get("/{dashboard_ref}/thumbnail")
@router.get("/{dashboard_id:int}/thumbnail")
async def get_dashboard_thumbnail(
dashboard_ref: str,
dashboard_id: int,
env_id: str,
force: bool = Query(False),
config_manager=Depends(get_config_manager),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dashboard_thumbnail", f"dashboard_ref={dashboard_ref}, env_id={env_id}, force={force}"):
with belief_scope("get_dashboard_thumbnail", f"dashboard_id={dashboard_id}, env_id={env_id}, force={force}"):
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
@@ -550,7 +396,6 @@ async def get_dashboard_thumbnail(

try:
client = SupersetClient(env)
dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, client)
digest = None
thumb_endpoint = None


@@ -31,7 +31,6 @@ class EnvironmentResponse(BaseModel):
id: str
name: str
url: str
stage: str = "DEV"
is_production: bool = False
backup_schedule: Optional[ScheduleSchema] = None
# [/DEF:EnvironmentResponse:DataClass]
@@ -60,26 +59,18 @@ async def get_environments(
# Ensure envs is a list
if not isinstance(envs, list):
envs = []
response_items = []
for e in envs:
resolved_stage = str(
getattr(e, "stage", "")
or ("PROD" if bool(getattr(e, "is_production", False)) else "DEV")
).upper()
response_items.append(
EnvironmentResponse(
id=e.id,
name=e.name,
url=e.url,
stage=resolved_stage,
is_production=(resolved_stage == "PROD"),
backup_schedule=ScheduleSchema(
enabled=e.backup_schedule.enabled,
cron_expression=e.backup_schedule.cron_expression
) if getattr(e, 'backup_schedule', None) else None
)
)
return response_items
return [
EnvironmentResponse(
id=e.id,
name=e.name,
url=e.url,
is_production=getattr(e, "is_production", False),
backup_schedule=ScheduleSchema(
enabled=e.backup_schedule.enabled,
cron_expression=e.backup_schedule.cron_expression
) if getattr(e, 'backup_schedule', None) else None
) for e in envs
]
# [/DEF:get_environments:Function]
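The `resolved_stage` expression above folds the legacy `is_production` boolean into the newer `stage` field. As a standalone function (the name `resolve_stage` is illustrative, not from the module):

```python
def resolve_stage(stage, is_production: bool) -> str:
    # An explicit, non-empty stage wins; otherwise derive PROD/DEV from the
    # legacy is_production flag so old configs keep a sensible stage value.
    return str(stage or ("PROD" if bool(is_production) else "DEV")).upper()

legacy_prod = resolve_stage(None, True)      # legacy config, production flag set
legacy_dev = resolve_stage("", False)        # legacy config, flag unset
explicit = resolve_stage("staging", True)    # explicit stage overrides the flag
```

Note that with this rule `is_production` is then re-derived as `resolved_stage == "PROD"`, keeping the two fields consistent in the response.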

# [DEF:update_environment_schedule:Function]

@@ -14,22 +14,16 @@ from fastapi import APIRouter, Depends, HTTPException
from sqlalchemy.orm import Session
from typing import List, Optional
import typing
import os
from src.dependencies import get_config_manager, has_permission
from src.core.database import get_db
from src.models.git import GitServerConfig, GitRepository, GitProvider
from src.models.git import GitServerConfig, GitRepository
from src.api.routes.git_schemas import (
GitServerConfigSchema, GitServerConfigCreate,
BranchSchema, BranchCreate,
BranchCheckout, CommitSchema, CommitCreate,
DeploymentEnvironmentSchema, DeployRequest, RepoInitRequest,
RepoStatusBatchRequest, RepoStatusBatchResponse,
GiteaRepoCreateRequest, GiteaRepoSchema,
RemoteRepoCreateRequest, RemoteRepoSchema,
PromoteRequest, PromoteResponse,
DeploymentEnvironmentSchema, DeployRequest, RepoInitRequest
)
from src.services.git_service import GitService
from src.core.superset_client import SupersetClient
from src.core.logger import logger, belief_scope
from ...services.llm_prompt_templates import (
DEFAULT_LLM_PROMPTS,
@@ -39,141 +33,6 @@ from ...services.llm_prompt_templates import (

router = APIRouter(tags=["git"])
git_service = GitService()
MAX_REPOSITORY_STATUS_BATCH = 50


# [DEF:_build_no_repo_status_payload:Function]
# @PURPOSE: Build a consistent status payload for dashboards without initialized repositories.
# @PRE: None.
# @POST: Returns a stable payload compatible with frontend repository status parsing.
# @RETURN: dict
def _build_no_repo_status_payload() -> dict:
return {
"is_dirty": False,
"untracked_files": [],
"modified_files": [],
"staged_files": [],
"current_branch": None,
"upstream_branch": None,
"has_upstream": False,
"ahead_count": 0,
"behind_count": 0,
"is_diverged": False,
"sync_state": "NO_REPO",
"sync_status": "NO_REPO",
"has_repo": False,
}
# [/DEF:_build_no_repo_status_payload:Function]


# [DEF:_handle_unexpected_git_route_error:Function]
# @PURPOSE: Convert unexpected route-level exceptions to stable 500 API responses.
# @PRE: `error` is a non-HTTPException instance.
# @POST: Raises HTTPException(500) with route-specific context.
# @PARAM: route_name (str)
# @PARAM: error (Exception)
def _handle_unexpected_git_route_error(route_name: str, error: Exception) -> None:
logger.error(f"[{route_name}][Coherence:Failed] {error}")
raise HTTPException(status_code=500, detail=f"{route_name} failed: {str(error)}")
# [/DEF:_handle_unexpected_git_route_error:Function]


# [DEF:_resolve_repository_status:Function]
# @PURPOSE: Resolve repository status for one dashboard with graceful NO_REPO semantics.
# @PRE: `dashboard_id` is a valid integer.
# @POST: Returns standard status payload or `NO_REPO` payload when repository path is absent.
# @PARAM: dashboard_id (int)
# @RETURN: dict
def _resolve_repository_status(dashboard_id: int) -> dict:
repo_path = git_service._get_repo_path(dashboard_id)
if not os.path.exists(repo_path):
logger.debug(
f"[get_repository_status][Action] Repository is not initialized for dashboard {dashboard_id}"
)
return _build_no_repo_status_payload()

try:
return git_service.get_status(dashboard_id)
except HTTPException as e:
if e.status_code == 404:
logger.debug(
f"[get_repository_status][Action] Repository is not initialized for dashboard {dashboard_id}"
)
return _build_no_repo_status_payload()
raise
# [/DEF:_resolve_repository_status:Function]
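`_resolve_repository_status` above treats a missing repository as data rather than an error: a 404 from the status call degrades to the `NO_REPO` payload instead of propagating. The shape of that pattern, with a stand-in exception type:

```python
class NotFound(Exception):
    """Stand-in for HTTPException(status_code=404)."""
    status_code = 404

def resolve_status(get_status, dashboard_id: int) -> dict:
    # Degrade "repository not initialized" to a sentinel payload so batch
    # status endpoints never fail just because one dashboard has no repo.
    try:
        return get_status(dashboard_id)
    except NotFound:
        return {"sync_state": "NO_REPO", "has_repo": False}

def no_repo(_dashboard_id: int) -> dict:
    raise NotFound()

status = resolve_status(no_repo, 1)
```

Any other exception still propagates, so genuine failures remain visible to the caller.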
||||
|
||||
|
||||
# [DEF:_get_git_config_or_404:Function]
|
||||
# @PURPOSE: Resolve GitServerConfig by id or raise 404.
|
||||
# @PRE: db session is available.
|
||||
# @POST: Returns GitServerConfig model.
|
||||
def _get_git_config_or_404(db: Session, config_id: str) -> GitServerConfig:
|
||||
config = db.query(GitServerConfig).filter(GitServerConfig.id == config_id).first()
|
||||
if not config:
|
||||
raise HTTPException(status_code=404, detail="Git configuration not found")
|
||||
return config
|
||||
# [/DEF:_get_git_config_or_404:Function]
|
||||
|
||||
|
||||
# [DEF:_find_dashboard_id_by_slug:Function]
|
||||
# @PURPOSE: Resolve dashboard numeric ID by slug in a specific environment.
|
||||
# @PRE: dashboard_slug is non-empty.
|
||||
# @POST: Returns dashboard ID or None when not found.
|
||||
def _find_dashboard_id_by_slug(
|
||||
client: SupersetClient,
|
||||
dashboard_slug: str,
|
||||
) -> Optional[int]:
|
||||
query_variants = [
|
||||
{"filters": [{"col": "slug", "opr": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
|
||||
{"filters": [{"col": "slug", "op": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
|
||||
]
|
||||
|
||||
for query in query_variants:
|
||||
try:
|
||||
_count, dashboards = client.get_dashboards_page(query=query)
|
||||
if dashboards:
|
||||
resolved_id = dashboards[0].get("id")
|
||||
if resolved_id is not None:
|
||||
return int(resolved_id)
|
||||
except Exception:
|
||||
continue
|
||||
return None
|
||||
# [/DEF:_find_dashboard_id_by_slug:Function]
|
||||
|
||||
|
||||
# [DEF:_resolve_dashboard_id_from_ref:Function]
|
||||
# @PURPOSE: Resolve dashboard ID from slug-or-id reference for Git routes.
|
||||
# @PRE: dashboard_ref is provided; env_id is required for slug values.
|
||||
# @POST: Returns numeric dashboard ID or raises HTTPException.
|
||||
def _resolve_dashboard_id_from_ref(
|
||||
dashboard_ref: str,
|
||||
config_manager,
|
||||
env_id: Optional[str] = None,
|
||||
) -> int:
|
||||
normalized_ref = str(dashboard_ref or "").strip()
|
||||
if not normalized_ref:
|
||||
raise HTTPException(status_code=400, detail="dashboard_ref is required")
|
||||
|
||||
if normalized_ref.isdigit():
|
||||
return int(normalized_ref)
|
||||
|
||||
if not env_id:
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail="env_id is required for slug-based Git operations",
|
||||
)
|
||||
|
||||
environments = config_manager.get_environments()
|
||||
env = next((e for e in environments if e.id == env_id), None)
|
||||
if not env:
|
||||
raise HTTPException(status_code=404, detail="Environment not found")
|
||||
|
||||
dashboard_id = _find_dashboard_id_by_slug(SupersetClient(env), normalized_ref)
|
||||
if dashboard_id is None:
|
||||
raise HTTPException(status_code=404, detail=f"Dashboard slug '{normalized_ref}' not found")
|
||||
return dashboard_id
|
||||
# [/DEF:_resolve_dashboard_id_from_ref:Function]
|
||||
|
||||
# [DEF:get_git_configs:Function]
|
||||
# @PURPOSE: List all configured Git servers.
|
||||
@@ -248,175 +107,20 @@ async def test_git_config(
|
||||
raise HTTPException(status_code=400, detail="Connection failed")
|
||||
# [/DEF:test_git_config:Function]
|
||||
|
||||
|
||||
# [DEF:list_gitea_repositories:Function]
|
||||
# @PURPOSE: List repositories in Gitea for a saved Gitea config.
|
||||
# @PRE: config_id exists and provider is GITEA.
|
||||
# @POST: Returns repositories visible to PAT user.
|
||||
@router.get("/config/{config_id}/gitea/repos", response_model=List[GiteaRepoSchema])
|
||||
async def list_gitea_repositories(
|
||||
config_id: str,
|
||||
db: Session = Depends(get_db),
|
||||
_ = Depends(has_permission("admin:settings", "READ"))
|
||||
):
|
||||
with belief_scope("list_gitea_repositories"):
|
||||
config = _get_git_config_or_404(db, config_id)
|
||||
if config.provider != GitProvider.GITEA:
|
||||
raise HTTPException(status_code=400, detail="This endpoint supports GITEA provider only")
|
||||
repos = await git_service.list_gitea_repositories(config.url, config.pat)
|
||||
return [
|
||||
GiteaRepoSchema(
|
||||
name=repo.get("name", ""),
|
||||
full_name=repo.get("full_name", ""),
|
||||
private=bool(repo.get("private", False)),
|
||||
clone_url=repo.get("clone_url"),
|
||||
html_url=repo.get("html_url"),
|
||||
ssh_url=repo.get("ssh_url"),
|
||||
default_branch=repo.get("default_branch"),
|
||||
)
|
||||
for repo in repos
|
||||
]
|
||||
# [/DEF:list_gitea_repositories:Function]
|
||||
|
||||
|
||||
# [DEF:create_gitea_repository:Function]
# @PURPOSE: Create a repository in Gitea for a saved Gitea config.
# @PRE: config_id exists and provider is GITEA.
# @POST: Returns created repository payload.
@router.post("/config/{config_id}/gitea/repos", response_model=GiteaRepoSchema)
async def create_gitea_repository(
    config_id: str,
    request: GiteaRepoCreateRequest,
    db: Session = Depends(get_db),
    _ = Depends(has_permission("admin:settings", "WRITE"))
):
    with belief_scope("create_gitea_repository"):
        config = _get_git_config_or_404(db, config_id)
        if config.provider != GitProvider.GITEA:
            raise HTTPException(status_code=400, detail="This endpoint supports GITEA provider only")
        repo = await git_service.create_gitea_repository(
            server_url=config.url,
            pat=config.pat,
            name=request.name,
            private=request.private,
            description=request.description,
            auto_init=request.auto_init,
            default_branch=request.default_branch,
        )
        return GiteaRepoSchema(
            name=repo.get("name", ""),
            full_name=repo.get("full_name", ""),
            private=bool(repo.get("private", False)),
            clone_url=repo.get("clone_url"),
            html_url=repo.get("html_url"),
            ssh_url=repo.get("ssh_url"),
            default_branch=repo.get("default_branch"),
        )
# [/DEF:create_gitea_repository:Function]

# [DEF:create_remote_repository:Function]
# @PURPOSE: Create repository on remote Git server using selected provider config.
# @PRE: config_id exists and PAT has creation permissions.
# @POST: Returns normalized remote repository payload.
@router.post("/config/{config_id}/repositories", response_model=RemoteRepoSchema)
async def create_remote_repository(
    config_id: str,
    request: RemoteRepoCreateRequest,
    db: Session = Depends(get_db),
    _ = Depends(has_permission("admin:settings", "WRITE"))
):
    with belief_scope("create_remote_repository"):
        config = _get_git_config_or_404(db, config_id)

        if config.provider == GitProvider.GITEA:
            repo = await git_service.create_gitea_repository(
                server_url=config.url,
                pat=config.pat,
                name=request.name,
                private=request.private,
                description=request.description,
                auto_init=request.auto_init,
                default_branch=request.default_branch,
            )
        elif config.provider == GitProvider.GITHUB:
            repo = await git_service.create_github_repository(
                server_url=config.url,
                pat=config.pat,
                name=request.name,
                private=request.private,
                description=request.description,
                auto_init=request.auto_init,
                default_branch=request.default_branch,
            )
        elif config.provider == GitProvider.GITLAB:
            repo = await git_service.create_gitlab_repository(
                server_url=config.url,
                pat=config.pat,
                name=request.name,
                private=request.private,
                description=request.description,
                auto_init=request.auto_init,
                default_branch=request.default_branch,
            )
        else:
            raise HTTPException(status_code=501, detail=f"Provider {config.provider} is not supported")

        return RemoteRepoSchema(
            provider=config.provider,
            name=repo.get("name", ""),
            full_name=repo.get("full_name", repo.get("name", "")),
            private=bool(repo.get("private", False)),
            clone_url=repo.get("clone_url"),
            html_url=repo.get("html_url"),
            ssh_url=repo.get("ssh_url"),
            default_branch=repo.get("default_branch"),
        )
# [/DEF:create_remote_repository:Function]

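The route above collapses three provider-specific payloads into one schema by reading optional keys with defaults. A minimal sketch of that normalization as a pure helper, assuming the raw provider response is a plain dict; `normalize_remote_repo` is a hypothetical name, not part of the route module:

```python
def normalize_remote_repo(provider: str, repo: dict) -> dict:
    """Map a raw provider payload onto the common RemoteRepoSchema fields."""
    return {
        "provider": provider,
        "name": repo.get("name", ""),
        # full_name falls back to name, matching the route's behaviour
        "full_name": repo.get("full_name", repo.get("name", "")),
        "private": bool(repo.get("private", False)),
        "clone_url": repo.get("clone_url"),
        "html_url": repo.get("html_url"),
        "ssh_url": repo.get("ssh_url"),
        "default_branch": repo.get("default_branch"),
    }
```

Using `.get` with defaults keeps the endpoint tolerant of providers that omit fields such as `ssh_url`.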
# [DEF:delete_gitea_repository:Function]
# @PURPOSE: Delete repository in Gitea for a saved Gitea config.
# @PRE: config_id exists and provider is GITEA.
# @POST: Target repository is deleted on Gitea.
@router.delete("/config/{config_id}/gitea/repos/{owner}/{repo_name}")
async def delete_gitea_repository(
    config_id: str,
    owner: str,
    repo_name: str,
    db: Session = Depends(get_db),
    _ = Depends(has_permission("admin:settings", "WRITE"))
):
    with belief_scope("delete_gitea_repository"):
        config = _get_git_config_or_404(db, config_id)
        if config.provider != GitProvider.GITEA:
            raise HTTPException(status_code=400, detail="This endpoint supports GITEA provider only")
        await git_service.delete_gitea_repository(
            server_url=config.url,
            pat=config.pat,
            owner=owner,
            repo_name=repo_name,
        )
        return {"status": "success", "message": "Repository deleted"}
# [/DEF:delete_gitea_repository:Function]

# [DEF:init_repository:Function]
# @PURPOSE: Link a dashboard to a Git repository and perform initial clone/init.
# @PRE: `dashboard_ref` exists and `init_data` contains valid config_id and remote_url.
# @POST: Repository is initialized on disk and a GitRepository record is saved in DB.
# @PARAM: dashboard_ref (str)
# @PARAM: init_data (RepoInitRequest)
@router.post("/repositories/{dashboard_ref}/init")
async def init_repository(
    dashboard_ref: str,
    init_data: RepoInitRequest,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    db: Session = Depends(get_db),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("init_repository"):
        dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
        # 1. Get config
        config = db.query(GitServerConfig).filter(GitServerConfig.id == init_data.config_id).first()
        if not config:
@@ -449,172 +153,137 @@ async def init_repository(
        except Exception as e:
            db.rollback()
            logger.error(f"[init_repository][Coherence:Failed] Failed to init repository: {e}")
            if isinstance(e, HTTPException):
                raise
            _handle_unexpected_git_route_error("init_repository", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:init_repository:Function]

# [DEF:get_branches:Function]
# @PURPOSE: List all branches for a dashboard's repository.
# @PRE: Repository for `dashboard_ref` is initialized.
# @POST: Returns a list of branches from the local repository.
# @PARAM: dashboard_ref (str)
# @RETURN: List[BranchSchema]
@router.get("/repositories/{dashboard_ref}/branches", response_model=List[BranchSchema])
async def get_branches(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_branches"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return git_service.list_branches(dashboard_id)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_branches", e)
            raise HTTPException(status_code=404, detail=str(e))
# [/DEF:get_branches:Function]

# [DEF:create_branch:Function]
# @PURPOSE: Create a new branch in the dashboard's repository.
# @PRE: `dashboard_ref` repository exists and `branch_data` has name and from_branch.
# @POST: A new branch is created in the local repository.
# @PARAM: dashboard_ref (str)
# @PARAM: branch_data (BranchCreate)
@router.post("/repositories/{dashboard_ref}/branches")
async def create_branch(
    dashboard_ref: str,
    branch_data: BranchCreate,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("create_branch"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            git_service.create_branch(dashboard_id, branch_data.name, branch_data.from_branch)
            return {"status": "success"}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("create_branch", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:create_branch:Function]

# [DEF:checkout_branch:Function]
# @PURPOSE: Switch the dashboard's repository to a specific branch.
# @PRE: `dashboard_ref` repository exists and branch `checkout_data.name` exists.
# @POST: The local repository HEAD is moved to the specified branch.
# @PARAM: dashboard_ref (str)
# @PARAM: checkout_data (BranchCheckout)
@router.post("/repositories/{dashboard_ref}/checkout")
async def checkout_branch(
    dashboard_ref: str,
    checkout_data: BranchCheckout,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("checkout_branch"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            git_service.checkout_branch(dashboard_id, checkout_data.name)
            return {"status": "success"}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("checkout_branch", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:checkout_branch:Function]

# [DEF:commit_changes:Function]
# @PURPOSE: Stage and commit changes in the dashboard's repository.
# @PRE: `dashboard_ref` repository exists and `commit_data` has message and files.
# @POST: Specified files are staged and a new commit is created.
# @PARAM: dashboard_ref (str)
# @PARAM: commit_data (CommitCreate)
@router.post("/repositories/{dashboard_ref}/commit")
async def commit_changes(
    dashboard_ref: str,
    commit_data: CommitCreate,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("commit_changes"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            git_service.commit_changes(dashboard_id, commit_data.message, commit_data.files)
            return {"status": "success"}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("commit_changes", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:commit_changes:Function]

# [DEF:push_changes:Function]
# @PURPOSE: Push local commits to the remote repository.
# @PRE: `dashboard_ref` repository exists and has a remote configured.
# @POST: Local commits are pushed to the remote repository.
# @PARAM: dashboard_ref (str)
@router.post("/repositories/{dashboard_ref}/push")
async def push_changes(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("push_changes"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            git_service.push_changes(dashboard_id)
            return {"status": "success"}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("push_changes", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:push_changes:Function]

# [DEF:pull_changes:Function]
# @PURPOSE: Pull changes from the remote repository.
# @PRE: `dashboard_ref` repository exists and has a remote configured.
# @POST: Remote changes are fetched and merged into the local branch.
# @PARAM: dashboard_ref (str)
@router.post("/repositories/{dashboard_ref}/pull")
async def pull_changes(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("pull_changes"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            git_service.pull_changes(dashboard_id)
            return {"status": "success"}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("pull_changes", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:pull_changes:Function]

# [DEF:sync_dashboard:Function]
# @PURPOSE: Sync dashboard state from Superset to Git using the GitPlugin.
# @PRE: `dashboard_ref` is valid; GitPlugin is available.
# @POST: Dashboard YAMLs are exported from Superset and committed to Git.
# @PARAM: dashboard_ref (str)
# @PARAM: source_env_id (Optional[str])
@router.post("/repositories/{dashboard_ref}/sync")
async def sync_dashboard(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    source_env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("sync_dashboard"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            from src.plugins.git_plugin import GitPlugin
            plugin = GitPlugin()
            return await plugin.execute({
@@ -622,113 +291,10 @@ async def sync_dashboard(
                "dashboard_id": dashboard_id,
                "source_env_id": source_env_id
            })
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("sync_dashboard", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:sync_dashboard:Function]

# [DEF:promote_dashboard:Function]
# @PURPOSE: Promote changes between branches via MR or direct merge.
# @PRE: dashboard repository is initialized and Git config is valid.
# @POST: Returns promotion result metadata.
@router.post("/repositories/{dashboard_ref}/promote", response_model=PromoteResponse)
async def promote_dashboard(
    dashboard_ref: str,
    payload: PromoteRequest,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    db: Session = Depends(get_db),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("promote_dashboard"):
        dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
        db_repo = db.query(GitRepository).filter(GitRepository.dashboard_id == dashboard_id).first()
        if not db_repo:
            raise HTTPException(status_code=404, detail=f"Repository for dashboard {dashboard_ref} is not initialized")
        config = _get_git_config_or_404(db, db_repo.config_id)

        from_branch = payload.from_branch.strip()
        to_branch = payload.to_branch.strip()
        if not from_branch or not to_branch:
            raise HTTPException(status_code=400, detail="from_branch and to_branch are required")
        if from_branch == to_branch:
            raise HTTPException(status_code=400, detail="from_branch and to_branch must be different")

        mode = (payload.mode or "mr").strip().lower()
        if mode == "direct":
            reason = (payload.reason or "").strip()
            if not reason:
                raise HTTPException(status_code=400, detail="Direct promote requires non-empty reason")
            logger.warning(
                "[promote_dashboard][PolicyViolation] Direct promote without MR by actor=unknown dashboard_ref=%s from=%s to=%s reason=%s",
                dashboard_ref,
                from_branch,
                to_branch,
                reason,
            )
            result = git_service.promote_direct_merge(
                dashboard_id=dashboard_id,
                from_branch=from_branch,
                to_branch=to_branch,
            )
            return PromoteResponse(
                mode="direct",
                from_branch=from_branch,
                to_branch=to_branch,
                status=result.get("status", "merged"),
                policy_violation=True,
            )

        title = (payload.title or "").strip() or f"Promote {from_branch} -> {to_branch}"
        description = payload.description
        if config.provider == GitProvider.GITEA:
            pr = await git_service.create_gitea_pull_request(
                server_url=config.url,
                pat=config.pat,
                remote_url=db_repo.remote_url,
                from_branch=from_branch,
                to_branch=to_branch,
                title=title,
                description=description,
            )
        elif config.provider == GitProvider.GITHUB:
            pr = await git_service.create_github_pull_request(
                server_url=config.url,
                pat=config.pat,
                remote_url=db_repo.remote_url,
                from_branch=from_branch,
                to_branch=to_branch,
                title=title,
                description=description,
                draft=payload.draft,
            )
        elif config.provider == GitProvider.GITLAB:
            pr = await git_service.create_gitlab_merge_request(
                server_url=config.url,
                pat=config.pat,
                remote_url=db_repo.remote_url,
                from_branch=from_branch,
                to_branch=to_branch,
                title=title,
                description=description,
                remove_source_branch=payload.remove_source_branch,
            )
        else:
            raise HTTPException(status_code=501, detail=f"Provider {config.provider} does not support promotion API")

        return PromoteResponse(
            mode="mr",
            from_branch=from_branch,
            to_branch=to_branch,
            status=pr.get("status", "opened"),
            url=pr.get("url"),
            reference_id=str(pr.get("id")) if pr.get("id") is not None else None,
            policy_violation=False,
        )
# [/DEF:promote_dashboard:Function]

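The promotion route front-loads its input checks before touching any provider API. A hedged sketch of those rules as a pure helper, mirroring the route's behaviour (both branches required, must differ, and `direct` mode demands a non-empty reason); `validate_promotion` is a hypothetical name introduced here for illustration:

```python
def validate_promotion(from_branch: str, to_branch: str,
                       mode: str = "mr", reason: str = "") -> str:
    """Return the normalized mode, or raise ValueError on invalid input."""
    from_branch, to_branch = from_branch.strip(), to_branch.strip()
    if not from_branch or not to_branch:
        raise ValueError("from_branch and to_branch are required")
    if from_branch == to_branch:
        raise ValueError("from_branch and to_branch must be different")
    mode = (mode or "mr").strip().lower()
    # direct merges bypass review, so they must be justified
    if mode == "direct" and not reason.strip():
        raise ValueError("Direct promote requires non-empty reason")
    return mode
```

Keeping validation separate from the provider dispatch makes the 400-vs-501 error split in the route easy to reason about.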
# [DEF:get_environments:Function]
# @PURPOSE: List all deployment environments.
# @PRE: Config manager is accessible.
@@ -753,21 +319,18 @@ async def get_environments(

# [DEF:deploy_dashboard:Function]
# @PURPOSE: Deploy dashboard from Git to a target environment.
# @PRE: `dashboard_ref` and `deploy_data.environment_id` are valid.
# @POST: Dashboard YAMLs are read from Git and imported into the target Superset.
# @PARAM: dashboard_ref (str)
# @PARAM: deploy_data (DeployRequest)
@router.post("/repositories/{dashboard_ref}/deploy")
async def deploy_dashboard(
    dashboard_ref: str,
    deploy_data: DeployRequest,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("deploy_dashboard"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            from src.plugins.git_plugin import GitPlugin
            plugin = GitPlugin()
            return await plugin.execute({
@@ -775,147 +338,84 @@ async def deploy_dashboard(
                "dashboard_id": dashboard_id,
                "environment_id": deploy_data.environment_id
            })
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("deploy_dashboard", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:deploy_dashboard:Function]

# [DEF:get_history:Function]
# @PURPOSE: View commit history for a dashboard's repository.
# @PRE: `dashboard_ref` repository exists.
# @POST: Returns a list of recent commits from the repository.
# @PARAM: dashboard_ref (str)
# @PARAM: limit (int)
# @RETURN: List[CommitSchema]
@router.get("/repositories/{dashboard_ref}/history", response_model=List[CommitSchema])
async def get_history(
    dashboard_ref: str,
    limit: int = 50,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_history"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return git_service.get_commit_history(dashboard_id, limit)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_history", e)
            raise HTTPException(status_code=404, detail=str(e))
# [/DEF:get_history:Function]

# [DEF:get_repository_status:Function]
# @PURPOSE: Get current Git status for a dashboard repository.
# @PRE: `dashboard_ref` resolves to a valid dashboard.
# @POST: Returns repository status; if repo is not initialized, returns `NO_REPO` payload.
# @PARAM: dashboard_ref (str)
# @RETURN: dict
@router.get("/repositories/{dashboard_ref}/status")
async def get_repository_status(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_repository_status"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return _resolve_repository_status(dashboard_id)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_repository_status", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:get_repository_status:Function]

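The status routes fall back to a `NO_REPO` payload for uninitialized repositories and stamp `sync_state`/`sync_status` with `ERROR` on failure (as the batch endpoint shows). A hypothetical sketch of that fallback shape — the real `_build_no_repo_status_payload` may carry additional fields; only the two sync keys are attested by the surrounding code:

```python
def build_no_repo_status() -> dict:
    # Illustrative fallback payload; field set beyond sync_state/sync_status is assumed.
    return {"initialized": False, "sync_state": "NO_REPO", "sync_status": "NO_REPO"}

def error_status() -> dict:
    # Error overrides the sync keys while keeping the base payload shape.
    return {**build_no_repo_status(), "sync_state": "ERROR", "sync_status": "ERROR"}
```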
# [DEF:get_repository_status_batch:Function]
# @PURPOSE: Get Git statuses for multiple dashboard repositories in one request.
# @PRE: `request.dashboard_ids` is provided.
# @POST: Returns `statuses` map where each key is a dashboard ID and each value is a repository status payload.
# @PARAM: request (RepoStatusBatchRequest)
# @RETURN: RepoStatusBatchResponse
@router.post("/repositories/status/batch", response_model=RepoStatusBatchResponse)
async def get_repository_status_batch(
    request: RepoStatusBatchRequest,
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_repository_status_batch"):
        dashboard_ids = list(dict.fromkeys(request.dashboard_ids))
        if len(dashboard_ids) > MAX_REPOSITORY_STATUS_BATCH:
            logger.warning(
                "[get_repository_status_batch][Action] Batch size %s exceeds limit %s. Truncating request.",
                len(dashboard_ids),
                MAX_REPOSITORY_STATUS_BATCH,
            )
            dashboard_ids = dashboard_ids[:MAX_REPOSITORY_STATUS_BATCH]

        statuses = {}
        for dashboard_id in dashboard_ids:
            try:
                statuses[str(dashboard_id)] = _resolve_repository_status(dashboard_id)
            except HTTPException:
                statuses[str(dashboard_id)] = {
                    **_build_no_repo_status_payload(),
                    "sync_state": "ERROR",
                    "sync_status": "ERROR",
                }
            except Exception as e:
                logger.error(
                    f"[get_repository_status_batch][Coherence:Failed] Failed for dashboard {dashboard_id}: {e}"
                )
                statuses[str(dashboard_id)] = {
                    **_build_no_repo_status_payload(),
                    "sync_state": "ERROR",
                    "sync_status": "ERROR",
                }
        return RepoStatusBatchResponse(statuses=statuses)
# [/DEF:get_repository_status_batch:Function]

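The batch endpoint shapes its input with an order-preserving dedup (`dict.fromkeys`) followed by truncation to a cap. A minimal sketch of just that step; `MAX_REPOSITORY_STATUS_BATCH` is assumed to be an int constant, and 50 here is illustrative, not the project's actual value:

```python
MAX_REPOSITORY_STATUS_BATCH = 50  # illustrative cap

def shape_batch(ids: list, cap: int = MAX_REPOSITORY_STATUS_BATCH) -> list:
    # dict.fromkeys keeps the first occurrence of each ID in order
    unique = list(dict.fromkeys(ids))
    return unique[:cap]
```

Deduplicating before truncating means a request padded with repeats still gets as many distinct dashboards as the cap allows.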
# [DEF:get_repository_diff:Function]
# @PURPOSE: Get Git diff for a dashboard repository.
# @PRE: `dashboard_ref` repository exists.
# @POST: Returns the diff text for the specified file or all changes.
# @PARAM: dashboard_ref (str)
# @PARAM: file_path (Optional[str])
# @PARAM: staged (bool)
# @RETURN: str
@router.get("/repositories/{dashboard_ref}/diff")
async def get_repository_diff(
    dashboard_ref: str,
    file_path: Optional[str] = None,
    staged: bool = False,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_repository_diff"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            diff_text = git_service.get_diff(dashboard_id, file_path, staged)
            return diff_text
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_repository_diff", e)
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:get_repository_diff:Function]

# [DEF:generate_commit_message:Function]
# @PURPOSE: Generate a suggested commit message using LLM.
# @PRE: Repository for `dashboard_ref` is initialized.
# @POST: Returns a suggested commit message string.
@router.post("/repositories/{dashboard_ref}/generate-message")
async def generate_commit_message(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    db: Session = Depends(get_db),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("generate_commit_message"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            # 1. Get Diff
            diff = git_service.get_diff(dashboard_id, staged=True)
            if not diff:
@@ -966,10 +466,9 @@ async def generate_commit_message(
            )

            return {"message": message}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("generate_commit_message", e)
            logger.error(f"Failed to generate commit message: {e}")
            raise HTTPException(status_code=400, detail=str(e))
# [/DEF:generate_commit_message:Function]

# [/DEF:backend.src.api.routes.git:Module]

@@ -9,7 +9,7 @@
# @INVARIANT: All schemas must be compatible with the FastAPI router.

from pydantic import BaseModel, Field
from typing import Any, Dict, List, Optional
from datetime import datetime
from src.models.git import GitProvider, GitStatus, SyncStatus

@@ -141,93 +141,4 @@ class RepoInitRequest(BaseModel):
    remote_url: str
# [/DEF:RepoInitRequest:Class]

# [DEF:RepoStatusBatchRequest:Class]
# @PURPOSE: Schema for requesting repository statuses for multiple dashboards in a single call.
class RepoStatusBatchRequest(BaseModel):
    dashboard_ids: List[int] = Field(default_factory=list, description="Dashboard IDs to resolve repository statuses for")
# [/DEF:RepoStatusBatchRequest:Class]


# [DEF:RepoStatusBatchResponse:Class]
# @PURPOSE: Schema for returning repository statuses keyed by dashboard ID.
class RepoStatusBatchResponse(BaseModel):
    statuses: Dict[str, Dict[str, Any]]
# [/DEF:RepoStatusBatchResponse:Class]

# [DEF:GiteaRepoSchema:Class]
# @PURPOSE: Schema describing a Gitea repository.
class GiteaRepoSchema(BaseModel):
    name: str
    full_name: str
    private: bool = False
    clone_url: Optional[str] = None
    html_url: Optional[str] = None
    ssh_url: Optional[str] = None
    default_branch: Optional[str] = None
# [/DEF:GiteaRepoSchema:Class]

# [DEF:GiteaRepoCreateRequest:Class]
# @PURPOSE: Request schema for creating a Gitea repository.
class GiteaRepoCreateRequest(BaseModel):
    name: str = Field(..., min_length=1, max_length=255)
    private: bool = True
    description: Optional[str] = None
    auto_init: bool = True
    default_branch: Optional[str] = "main"
# [/DEF:GiteaRepoCreateRequest:Class]

# [DEF:RemoteRepoSchema:Class]
# @PURPOSE: Provider-agnostic remote repository payload.
class RemoteRepoSchema(BaseModel):
    provider: GitProvider
    name: str
    full_name: str
    private: bool = False
    clone_url: Optional[str] = None
    html_url: Optional[str] = None
    ssh_url: Optional[str] = None
    default_branch: Optional[str] = None
# [/DEF:RemoteRepoSchema:Class]

# [DEF:RemoteRepoCreateRequest:Class]
# @PURPOSE: Provider-agnostic repository creation request.
class RemoteRepoCreateRequest(BaseModel):
    name: str = Field(..., min_length=1, max_length=255)
    private: bool = True
    description: Optional[str] = None
    auto_init: bool = True
    default_branch: Optional[str] = "main"
# [/DEF:RemoteRepoCreateRequest:Class]

# [DEF:PromoteRequest:Class]
# @PURPOSE: Request schema for branch promotion workflow.
class PromoteRequest(BaseModel):
    from_branch: str = Field(..., min_length=1, max_length=255)
    to_branch: str = Field(..., min_length=1, max_length=255)
    mode: str = Field(default="mr", pattern="^(mr|direct)$")
    title: Optional[str] = None
    description: Optional[str] = None
    reason: Optional[str] = None
    draft: bool = False
    remove_source_branch: bool = False
# [/DEF:PromoteRequest:Class]

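The `mode` field on PromoteRequest is constrained by the regex `^(mr|direct)$`, so any other value is rejected at validation time. A stdlib-only sketch of the same check, assuming exact, case-sensitive matching as the pattern implies:

```python
import re

# Same pattern the schema declares for its mode field
MODE_PATTERN = re.compile(r"^(mr|direct)$")

def is_valid_mode(mode: str) -> bool:
    # fullmatch mirrors the anchored ^...$ pattern
    return MODE_PATTERN.fullmatch(mode) is not None
```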
# [DEF:PromoteResponse:Class]
|
||||
# @PURPOSE: Response schema for promotion operation result.
|
||||
class PromoteResponse(BaseModel):
|
||||
mode: str
|
||||
from_branch: str
|
||||
to_branch: str
|
||||
status: str
|
||||
url: Optional[str] = None
|
||||
reference_id: Optional[str] = None
|
||||
policy_violation: bool = False
|
||||
# [/DEF:PromoteResponse:Class]
|
||||
|
||||
# [/DEF:backend.src.api.routes.git_schemas:Module]
|
||||
# [/DEF:backend.src.api.routes.git_schemas:Module]
|
||||
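The `mode` field on `PromoteRequest` is constrained by the regex `^(mr|direct)$`; a stdlib-only sketch of that check (`validate_mode` is an illustrative helper, not part of the codebase):

```python
import re

# Same pattern PromoteRequest declares via Field(pattern="^(mr|direct)$")
MODE_PATTERN = re.compile(r"^(mr|direct)$")

def validate_mode(mode: str) -> str:
    if not MODE_PATTERN.fullmatch(mode):
        raise ValueError(f"mode must be 'mr' or 'direct', got {mode!r}")
    return mode

print(validate_mode("mr"), validate_mode("direct"))  # mr direct
```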
@@ -14,7 +14,6 @@ from ...core.database import get_db
from ...models.dashboard import DashboardMetadata, DashboardSelection
from ...core.superset_client import SupersetClient
from ...core.logger import belief_scope
from ...core.migration.dry_run_orchestrator import MigrationDryRunService
from ...core.mapping_service import IdMappingService
from ...models.mapping import ResourceMapping

@@ -84,44 +83,6 @@ async def execute_migration(
        raise HTTPException(status_code=500, detail=f"Failed to create migration task: {str(e)}")
# [/DEF:execute_migration:Function]

# [DEF:dry_run_migration:Function]
# @PURPOSE: Build pre-flight diff and risk summary without applying migration.
# @PRE: Selection and environments are valid.
# @POST: Returns deterministic JSON diff and risk scoring.
@router.post("/migration/dry-run", response_model=Dict[str, Any])
async def dry_run_migration(
    selection: DashboardSelection,
    config_manager=Depends(get_config_manager),
    db: Session = Depends(get_db),
    _ = Depends(has_permission("plugin:migration", "EXECUTE"))
):
    with belief_scope("dry_run_migration"):
        environments = config_manager.get_environments()
        env_map = {env.id: env for env in environments}
        source_env = env_map.get(selection.source_env_id)
        target_env = env_map.get(selection.target_env_id)
        if not source_env or not target_env:
            raise HTTPException(status_code=400, detail="Invalid source or target environment")
        if selection.source_env_id == selection.target_env_id:
            raise HTTPException(status_code=400, detail="Source and target environments must be different")
        if not selection.selected_ids:
            raise HTTPException(status_code=400, detail="No dashboards selected for dry run")

        service = MigrationDryRunService()
        source_client = SupersetClient(source_env)
        target_client = SupersetClient(target_env)
        try:
            return service.run(
                selection=selection,
                source_client=source_client,
                target_client=target_client,
                db=db,
            )
        except ValueError as exc:
            raise HTTPException(status_code=500, detail=str(exc)) from exc
# [/DEF:dry_run_migration:Function]

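The endpoint applies three validation gates before delegating to the service; the same checks can be sketched without FastAPI (function and parameter names here are illustrative):

```python
# Mirrors the three 400-level gates in dry_run_migration, using plain data
# instead of FastAPI dependencies.
def validate_dry_run_selection(env_ids, source_env_id, target_env_id, selected_ids):
    errors = []
    if source_env_id not in env_ids or target_env_id not in env_ids:
        errors.append("Invalid source or target environment")
    if source_env_id == target_env_id:
        errors.append("Source and target environments must be different")
    if not selected_ids:
        errors.append("No dashboards selected for dry run")
    return errors

# Same environment on both sides plus an empty selection -> two errors.
print(validate_dry_run_selection({"dev", "prod"}, "dev", "dev", []))
```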
# [DEF:get_migration_settings:Function]
# @PURPOSE: Get the current migration cron string explicitly.
@router.get("/migration/settings", response_model=Dict[str, str])

@@ -260,4 +221,4 @@ async def trigger_sync_now(
    }
# [/DEF:trigger_sync_now:Function]

# [/DEF:backend.src.api.routes.migration:Module]
@@ -62,20 +62,6 @@ def _parse_csv_enum_list(raw: Optional[str], enum_cls, field_name: str) -> List:
# @PRE: authenticated/authorized request and validated query params.
# @POST: returns {items, total, page, page_size, has_next, applied_filters}.
# @POST: deterministic error payload for invalid filters.
#
# @TEST_CONTRACT: ListReportsApi ->
# {
#   required_fields: {page: int, page_size: int, sort_by: str, sort_order: str},
#   optional_fields: {task_types: str, statuses: str, search: str},
#   invariants: [
#     "Returns ReportCollection on success",
#     "Raises HTTPException 400 for invalid query parameters"
#   ]
# }
# @TEST_FIXTURE: valid_list_request -> {"page": 1, "page_size": 20}
# @TEST_EDGE: invalid_task_type_filter -> raises HTTPException(400)
# @TEST_EDGE: malformed_query -> raises HTTPException(400)
# @TEST_INVARIANT: consistent_list_payload -> verifies: [valid_list_request]
@router.get("", response_model=ReportCollection)
async def list_reports(
    page: int = Query(1, ge=1),

@@ -147,21 +147,6 @@ app.include_router(assistant.router, prefix="/api/assistant", tags=["Assistant"]
# @POST: WebSocket connection is managed and logs are streamed until disconnect.
# @TIER: CRITICAL
# @UX_STATE: Connecting -> Streaming -> (Disconnected)
#
# @TEST_CONTRACT: WebSocketLogStreamApi ->
# {
#   required_fields: {websocket: WebSocket, task_id: str},
#   optional_fields: {source: str, level: str},
#   invariants: [
#     "Accepts the WebSocket connection",
#     "Applies source and level filters correctly to streamed logs",
#     "Cleans up subscriptions on disconnect"
#   ]
# }
# @TEST_FIXTURE: valid_ws_connection -> {"task_id": "test_1", "source": "plugin"}
# @TEST_EDGE: task_not_found_ws -> closes connection or sends error
# @TEST_EDGE: empty_task_logs -> waits for new logs
# @TEST_INVARIANT: consistent_streaming -> verifies: [valid_ws_connection]
@app.websocket("/ws/logs/{task_id}")
async def websocket_endpoint(
    websocket: WebSocket,
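The contract's "applies source and level filters" invariant boils down to a predicate of roughly this shape. This is a speculative sketch under assumed field names and level ordering, not the handler's actual code:

```python
# Assumed level ordering; the real handler's filtering logic is not shown here.
LEVELS = {"DEBUG": 10, "INFO": 20, "WARNING": 30, "ERROR": 40}

def passes_filters(entry, source=None, level=None):
    # A None filter means "no constraint", matching the optional_fields above.
    if source is not None and entry.get("source") != source:
        return False
    if level is not None and LEVELS.get(entry.get("level"), 0) < LEVELS.get(level, 0):
        return False
    return True

print(passes_filters({"source": "plugin", "level": "ERROR"}, source="plugin", level="INFO"))
```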
@@ -14,8 +14,6 @@ import pytest
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.core.database import Base
# Import all models so they are registered with Base before create_all;
# both auth and mapping must be imported so Base knows about all tables.
from src.models import mapping, auth, task, report
from src.models.auth import User, Role, Permission, ADGroupMapping
from src.services.auth_service import AuthService
from src.core.auth.repository import AuthRepository
@@ -178,94 +176,4 @@ def test_ad_group_mapping(auth_repo):
    assert retrieved_mapping.role_id == role.id


def test_authenticate_user_updates_last_login(auth_service, auth_repo):
    """@SIDE_EFFECT: authenticate_user updates last_login timestamp on success."""
    user = User(
        username="loginuser",
        email="login@example.com",
        password_hash=get_password_hash("mypassword"),
        auth_source="LOCAL"
    )
    auth_repo.db.add(user)
    auth_repo.db.commit()

    assert user.last_login is None

    authenticated = auth_service.authenticate_user("loginuser", "mypassword")
    assert authenticated is not None
    assert authenticated.last_login is not None


def test_authenticate_inactive_user(auth_service, auth_repo):
    """@PRE: User with is_active=False should not authenticate."""
    user = User(
        username="inactive_user",
        email="inactive@example.com",
        password_hash=get_password_hash("testpass"),
        auth_source="LOCAL",
        is_active=False
    )
    auth_repo.db.add(user)
    auth_repo.db.commit()

    result = auth_service.authenticate_user("inactive_user", "testpass")
    assert result is None


def test_verify_password_empty_hash():
    """@PRE: verify_password with empty/None hash returns False."""
    assert verify_password("anypassword", "") is False
    assert verify_password("anypassword", None) is False


def test_provision_adfs_user_new(auth_service, auth_repo):
    """@POST: provision_adfs_user creates a new ADFS user with correct roles."""
    # Set up a role and AD group mapping
    role = Role(name="ADFS_Viewer", description="ADFS viewer role")
    auth_repo.db.add(role)
    auth_repo.db.commit()

    mapping = ADGroupMapping(ad_group="DOMAIN\\Viewers", role_id=role.id)
    auth_repo.db.add(mapping)
    auth_repo.db.commit()

    user_info = {
        "upn": "newadfsuser@domain.com",
        "email": "newadfsuser@domain.com",
        "groups": ["DOMAIN\\Viewers"]
    }

    user = auth_service.provision_adfs_user(user_info)
    assert user is not None
    assert user.username == "newadfsuser@domain.com"
    assert user.auth_source == "ADFS"
    assert user.is_active is True
    assert len(user.roles) == 1
    assert user.roles[0].name == "ADFS_Viewer"


def test_provision_adfs_user_existing(auth_service, auth_repo):
    """@POST: provision_adfs_user updates roles for existing user."""
    # Create existing user
    existing = User(
        username="existingadfs@domain.com",
        email="existingadfs@domain.com",
        auth_source="ADFS",
        is_active=True
    )
    auth_repo.db.add(existing)
    auth_repo.db.commit()

    user_info = {
        "upn": "existingadfs@domain.com",
        "email": "existingadfs@domain.com",
        "groups": []
    }

    user = auth_service.provision_adfs_user(user_info)
    assert user is not None
    assert user.username == "existingadfs@domain.com"
    assert len(user.roles) == 0  # No matching group mappings


# [/DEF:test_auth:Module]

@@ -30,7 +30,6 @@ class Environment(BaseModel):
    url: str
    username: str
    password: str  # Will be masked in UI
    stage: str = Field(default="DEV", pattern="^(DEV|PREPROD|PROD)$")
    verify_ssl: bool = True
    timeout: int = 30
    is_default: bool = False

@@ -23,21 +23,6 @@ from src.core.logger import logger, belief_scope
# [DEF:IdMappingService:Class]
# @TIER: CRITICAL
# @PURPOSE: Service handling the cataloging and retrieval of remote Superset integer IDs.
#
# @TEST_CONTRACT: IdMappingServiceModel ->
# {
#   required_fields: {db_session: Session},
#   invariants: [
#     "sync_environment correctly creates or updates ResourceMapping records",
#     "get_remote_id returns an integer or None",
#     "get_remote_ids_batch returns a dictionary of valid UUIDs to integers"
#   ]
# }
# @TEST_FIXTURE: valid_mapping_service -> {"db_session": "MockSession()"}
# @TEST_EDGE: sync_api_failure -> handles exception gracefully
# @TEST_EDGE: get_remote_id_not_found -> returns None
# @TEST_EDGE: get_batch_empty_list -> returns empty dict
# @TEST_INVARIANT: resilient_fetching -> verifies: [sync_api_failure]
class IdMappingService:

    # [DEF:__init__:Function]
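A toy stand-in for the `get_remote_ids_batch` invariant above: known UUIDs map to integers, unknown UUIDs are simply absent from the result. This is an illustrative sketch, not the service's real implementation:

```python
# Hypothetical helper demonstrating the batch-lookup contract only.
def get_remote_ids_batch(known: dict, uuids: list) -> dict:
    # Unknown UUIDs are dropped rather than mapped to None, so the
    # returned dict always satisfies "valid UUIDs to integers".
    return {u: known[u] for u in uuids if u in known}

known = {"aaa": 1, "bbb": 2}
print(get_remote_ids_batch(known, ["aaa", "ccc"]))  # {'aaa': 1}
```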
@@ -1,12 +0,0 @@
# [DEF:backend.src.core.migration.__init__:Module]
# @TIER: TRIVIAL
# @SEMANTICS: migration, package, exports
# @PURPOSE: Namespace package for migration pre-flight orchestration components.
# @LAYER: Core

from .dry_run_orchestrator import MigrationDryRunService
from .archive_parser import MigrationArchiveParser

__all__ = ["MigrationDryRunService", "MigrationArchiveParser"]

# [/DEF:backend.src.core.migration.__init__:Module]
@@ -1,139 +0,0 @@
# [DEF:backend.src.core.migration.archive_parser:Module]
# @TIER: STANDARD
# @SEMANTICS: migration, zip, parser, yaml, metadata
# @PURPOSE: Parse Superset export ZIP archives into normalized object catalogs for diffing.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> backend.src.core.logger
# @INVARIANT: Parsing is read-only and never mutates archive files.

import json
import tempfile
import zipfile
from pathlib import Path
from typing import Any, Dict, List, Optional

import yaml

from ..logger import logger, belief_scope


# [DEF:MigrationArchiveParser:Class]
# @PURPOSE: Extract normalized dashboards/charts/datasets metadata from ZIP archives.
class MigrationArchiveParser:
    # [DEF:extract_objects_from_zip:Function]
    # @PURPOSE: Extract object catalogs from Superset archive.
    # @PRE: zip_path points to a valid readable ZIP.
    # @POST: Returns object lists grouped by resource type.
    # @RETURN: Dict[str, List[Dict[str, Any]]]
    def extract_objects_from_zip(self, zip_path: str) -> Dict[str, List[Dict[str, Any]]]:
        with belief_scope("MigrationArchiveParser.extract_objects_from_zip"):
            result: Dict[str, List[Dict[str, Any]]] = {
                "dashboards": [],
                "charts": [],
                "datasets": [],
            }
            with tempfile.TemporaryDirectory() as temp_dir_str:
                temp_dir = Path(temp_dir_str)
                with zipfile.ZipFile(zip_path, "r") as zip_file:
                    zip_file.extractall(temp_dir)

                result["dashboards"] = self._collect_yaml_objects(temp_dir, "dashboards")
                result["charts"] = self._collect_yaml_objects(temp_dir, "charts")
                result["datasets"] = self._collect_yaml_objects(temp_dir, "datasets")

            return result
    # [/DEF:extract_objects_from_zip:Function]
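The extract step can be exercised end to end with an in-memory archive, assuming only the stdlib (the archive entry below is invented sample data):

```python
import io
import tempfile
import zipfile
from pathlib import Path

# Write a one-entry archive, then unpack it into a temp dir the way
# extract_objects_from_zip does before globbing for YAML manifests.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("dashboards/example.yaml", "uuid: abc-123\ndashboard_title: Sales\n")

with tempfile.TemporaryDirectory() as td:
    with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
        zf.extractall(td)
    found = sorted(str(p.relative_to(td)) for p in Path(td).rglob("*.yaml"))

print(found)
```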
    # [DEF:_collect_yaml_objects:Function]
    # @PURPOSE: Read and normalize YAML manifests for one object type.
    # @PRE: object_type is one of dashboards/charts/datasets.
    # @POST: Returns only valid normalized objects.
    def _collect_yaml_objects(self, root_dir: Path, object_type: str) -> List[Dict[str, Any]]:
        with belief_scope("MigrationArchiveParser._collect_yaml_objects"):
            files = list(root_dir.glob(f"**/{object_type}/**/*.yaml")) + list(root_dir.glob(f"**/{object_type}/*.yaml"))
            objects: List[Dict[str, Any]] = []
            for file_path in set(files):
                try:
                    with open(file_path, "r") as file_obj:
                        payload = yaml.safe_load(file_obj) or {}
                    normalized = self._normalize_object_payload(payload, object_type)
                    if normalized:
                        objects.append(normalized)
                except Exception as exc:
                    logger.reflect(
                        "[MigrationArchiveParser._collect_yaml_objects][REFLECT] skip_invalid_yaml path=%s error=%s",
                        file_path,
                        exc,
                    )
            return objects
    # [/DEF:_collect_yaml_objects:Function]
    # [DEF:_normalize_object_payload:Function]
    # @PURPOSE: Convert raw YAML payload to stable diff signature shape.
    # @PRE: payload is parsed YAML mapping.
    # @POST: Returns normalized descriptor with `uuid`, `title`, and `signature`.
    def _normalize_object_payload(self, payload: Dict[str, Any], object_type: str) -> Optional[Dict[str, Any]]:
        with belief_scope("MigrationArchiveParser._normalize_object_payload"):
            if not isinstance(payload, dict):
                return None
            uuid = payload.get("uuid")
            if not uuid:
                return None

            if object_type == "dashboards":
                title = payload.get("dashboard_title") or payload.get("title")
                signature = {
                    "title": title,
                    "slug": payload.get("slug"),
                    "position_json": payload.get("position_json"),
                    "json_metadata": payload.get("json_metadata"),
                    "description": payload.get("description"),
                    "owners": payload.get("owners"),
                }
                return {
                    "uuid": str(uuid),
                    "title": title or f"Dashboard {uuid}",
                    "signature": json.dumps(signature, sort_keys=True, default=str),
                    "owners": payload.get("owners") or [],
                }

            if object_type == "charts":
                title = payload.get("slice_name") or payload.get("name")
                signature = {
                    "title": title,
                    "viz_type": payload.get("viz_type"),
                    "params": payload.get("params"),
                    "query_context": payload.get("query_context"),
                    "datasource_uuid": payload.get("datasource_uuid"),
                    "dataset_uuid": payload.get("dataset_uuid"),
                }
                return {
                    "uuid": str(uuid),
                    "title": title or f"Chart {uuid}",
                    "signature": json.dumps(signature, sort_keys=True, default=str),
                    "dataset_uuid": payload.get("datasource_uuid") or payload.get("dataset_uuid"),
                }

            if object_type == "datasets":
                title = payload.get("table_name") or payload.get("name")
                signature = {
                    "title": title,
                    "schema": payload.get("schema"),
                    "database_uuid": payload.get("database_uuid"),
                    "sql": payload.get("sql"),
                    "columns": payload.get("columns"),
                    "metrics": payload.get("metrics"),
                }
                return {
                    "uuid": str(uuid),
                    "title": title or f"Dataset {uuid}",
                    "signature": json.dumps(signature, sort_keys=True, default=str),
                    "database_uuid": payload.get("database_uuid"),
                }

            return None
    # [/DEF:_normalize_object_payload:Function]


# [/DEF:MigrationArchiveParser:Class]
# [/DEF:backend.src.core.migration.archive_parser:Module]
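The `signature` fields above are serialized with `sort_keys=True` so that key order in the YAML source never produces spurious diffs; a minimal demonstration of why that matters:

```python
import json

# Two payloads with identical content but different key order must compare
# equal once serialized deterministically.
a = {"title": "Sales", "slug": "sales"}
b = {"slug": "sales", "title": "Sales"}

assert json.dumps(a) != json.dumps(b)              # naive dumps differ
assert json.dumps(a, sort_keys=True) == json.dumps(b, sort_keys=True)

print(json.dumps(a, sort_keys=True))  # {"slug": "sales", "title": "Sales"}
```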
@@ -1,235 +0,0 @@
# [DEF:backend.src.core.migration.dry_run_orchestrator:Module]
# @TIER: STANDARD
# @SEMANTICS: migration, dry_run, diff, risk, superset
# @PURPOSE: Compute pre-flight migration diff and risk scoring without apply.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
# @RELATION: DEPENDS_ON -> backend.src.core.migration_engine
# @RELATION: DEPENDS_ON -> backend.src.core.migration.archive_parser
# @RELATION: DEPENDS_ON -> backend.src.core.migration.risk_assessor
# @INVARIANT: Dry run is informative only and must not mutate target environment.

from datetime import datetime, timezone
import json
from typing import Any, Dict, List

from sqlalchemy.orm import Session

from ...models.dashboard import DashboardSelection
from ...models.mapping import DatabaseMapping
from ..logger import logger, belief_scope
from .archive_parser import MigrationArchiveParser
from .risk_assessor import build_risks, score_risks
from ..migration_engine import MigrationEngine
from ..superset_client import SupersetClient
from ..utils.fileio import create_temp_file


# [DEF:MigrationDryRunService:Class]
# @PURPOSE: Build deterministic diff/risk payload for migration pre-flight.
class MigrationDryRunService:
    # [DEF:__init__:Function]
    # @PURPOSE: Wire parser dependency for archive object extraction.
    # @PRE: parser can be omitted to use default implementation.
    # @POST: Service is ready to calculate dry-run payload.
    def __init__(self, parser: MigrationArchiveParser | None = None):
        self.parser = parser or MigrationArchiveParser()
    # [/DEF:__init__:Function]
    # [DEF:run:Function]
    # @PURPOSE: Execute full dry-run computation for selected dashboards.
    # @PRE: source/target clients are authenticated and selection validated by caller.
    # @POST: Returns JSON-serializable pre-flight payload with summary, diff and risk.
    # @SIDE_EFFECT: Reads source export archives and target metadata via network.
    def run(
        self,
        selection: DashboardSelection,
        source_client: SupersetClient,
        target_client: SupersetClient,
        db: Session,
    ) -> Dict[str, Any]:
        with belief_scope("MigrationDryRunService.run"):
            logger.explore("[MigrationDryRunService.run][EXPLORE] starting dry-run pipeline")
            engine = MigrationEngine()
            db_mapping = self._load_db_mapping(db, selection) if selection.replace_db_config else {}
            transformed = {"dashboards": {}, "charts": {}, "datasets": {}}

            dashboards_preview = source_client.get_dashboards_summary()
            selected_preview = {
                item["id"]: item
                for item in dashboards_preview
                if item.get("id") in selection.selected_ids
            }

            for dashboard_id in selection.selected_ids:
                exported_content, _ = source_client.export_dashboard(int(dashboard_id))
                with create_temp_file(content=exported_content, suffix=".zip") as source_zip:
                    with create_temp_file(suffix=".zip") as transformed_zip:
                        success = engine.transform_zip(
                            str(source_zip),
                            str(transformed_zip),
                            db_mapping,
                            strip_databases=False,
                            target_env_id=selection.target_env_id,
                            fix_cross_filters=selection.fix_cross_filters,
                        )
                        if not success:
                            raise ValueError(f"Failed to transform export archive for dashboard {dashboard_id}")
                        extracted = self.parser.extract_objects_from_zip(str(transformed_zip))
                        self._accumulate_objects(transformed, extracted)

            source_objects = {key: list(value.values()) for key, value in transformed.items()}
            target_objects = self._build_target_signatures(target_client)
            diff = {
                "dashboards": self._build_object_diff(source_objects["dashboards"], target_objects["dashboards"]),
                "charts": self._build_object_diff(source_objects["charts"], target_objects["charts"]),
                "datasets": self._build_object_diff(source_objects["datasets"], target_objects["datasets"]),
            }
            risk = self._build_risks(source_objects, target_objects, diff, target_client)

            summary = {
                "dashboards": {action: len(diff["dashboards"][action]) for action in ("create", "update", "delete")},
                "charts": {action: len(diff["charts"][action]) for action in ("create", "update", "delete")},
                "datasets": {action: len(diff["datasets"][action]) for action in ("create", "update", "delete")},
                "selected_dashboards": len(selection.selected_ids),
            }
            selected_titles = [
                selected_preview[dash_id]["title"]
                for dash_id in selection.selected_ids
                if dash_id in selected_preview
            ]

            logger.reason("[MigrationDryRunService.run][REASON] dry-run payload assembled")
            return {
                "generated_at": datetime.now(timezone.utc).isoformat(),
                "selection": selection.model_dump(),
                "selected_dashboard_titles": selected_titles,
                "diff": diff,
                "summary": summary,
                "risk": score_risks(risk),
            }
    # [/DEF:run:Function]
    # [DEF:_load_db_mapping:Function]
    # @PURPOSE: Resolve UUID mapping for optional DB config replacement.
    def _load_db_mapping(self, db: Session, selection: DashboardSelection) -> Dict[str, str]:
        rows = db.query(DatabaseMapping).filter(
            DatabaseMapping.source_env_id == selection.source_env_id,
            DatabaseMapping.target_env_id == selection.target_env_id,
        ).all()
        return {row.source_db_uuid: row.target_db_uuid for row in rows}
    # [/DEF:_load_db_mapping:Function]

    # [DEF:_accumulate_objects:Function]
    # @PURPOSE: Merge extracted resources by UUID to avoid duplicates.
    def _accumulate_objects(self, target: Dict[str, Dict[str, Dict[str, Any]]], source: Dict[str, List[Dict[str, Any]]]) -> None:
        for object_type in ("dashboards", "charts", "datasets"):
            for item in source.get(object_type, []):
                uuid = item.get("uuid")
                if uuid:
                    target[object_type][str(uuid)] = item
    # [/DEF:_accumulate_objects:Function]

    # [DEF:_index_by_uuid:Function]
    # @PURPOSE: Build UUID-index map for normalized resources.
    def _index_by_uuid(self, objects: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
        indexed: Dict[str, Dict[str, Any]] = {}
        for obj in objects:
            uuid = obj.get("uuid")
            if uuid:
                indexed[str(uuid)] = obj
        return indexed
    # [/DEF:_index_by_uuid:Function]

    # [DEF:_build_object_diff:Function]
    # @PURPOSE: Compute create/update/delete buckets by UUID+signature.
    def _build_object_diff(self, source_objects: List[Dict[str, Any]], target_objects: List[Dict[str, Any]]) -> Dict[str, List[Dict[str, Any]]]:
        target_index = self._index_by_uuid(target_objects)
        created: List[Dict[str, Any]] = []
        updated: List[Dict[str, Any]] = []
        deleted: List[Dict[str, Any]] = []
        for source_obj in source_objects:
            source_uuid = str(source_obj.get("uuid"))
            target_obj = target_index.get(source_uuid)
            if not target_obj:
                created.append({"uuid": source_uuid, "title": source_obj.get("title")})
                continue
            if source_obj.get("signature") != target_obj.get("signature"):
                updated.append({
                    "uuid": source_uuid,
                    "title": source_obj.get("title"),
                    "target_title": target_obj.get("title"),
                })
        return {"create": created, "update": updated, "delete": deleted}
    # [/DEF:_build_object_diff:Function]
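The diff rule in `_build_object_diff` is UUID presence plus signature equality; a standalone re-creation on toy data (the `delete` bucket is intentionally left empty, matching the method above):

```python
# Minimal, dependency-free copy of the create/update classification.
def object_diff(source, target):
    target_index = {o["uuid"]: o for o in target}
    diff = {"create": [], "update": [], "delete": []}
    for obj in source:
        t = target_index.get(obj["uuid"])
        if t is None:
            diff["create"].append(obj["uuid"])       # not on target yet
        elif obj["signature"] != t["signature"]:
            diff["update"].append(obj["uuid"])       # exists but changed
    return diff

src = [{"uuid": "a", "signature": "s1"}, {"uuid": "b", "signature": "s2"}]
tgt = [{"uuid": "b", "signature": "changed"}]
print(object_diff(src, tgt))  # {'create': ['a'], 'update': ['b'], 'delete': []}
```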
    # [DEF:_build_target_signatures:Function]
    # @PURPOSE: Pull target metadata and normalize it into comparable signatures.
    def _build_target_signatures(self, client: SupersetClient) -> Dict[str, List[Dict[str, Any]]]:
        _, dashboards = client.get_dashboards(query={
            "columns": ["uuid", "dashboard_title", "slug", "position_json", "json_metadata", "description", "owners"],
        })
        _, datasets = client.get_datasets(query={
            "columns": ["uuid", "table_name", "schema", "database_uuid", "sql", "columns", "metrics"],
        })
        _, charts = client.get_charts(query={
            "columns": ["uuid", "slice_name", "viz_type", "params", "query_context", "datasource_uuid", "dataset_uuid"],
        })
        return {
            "dashboards": [{
                "uuid": str(item.get("uuid")),
                "title": item.get("dashboard_title"),
                "owners": item.get("owners") or [],
                "signature": json.dumps({
                    "title": item.get("dashboard_title"),
                    "slug": item.get("slug"),
                    "position_json": item.get("position_json"),
                    "json_metadata": item.get("json_metadata"),
                    "description": item.get("description"),
                    "owners": item.get("owners"),
                }, sort_keys=True, default=str),
            } for item in dashboards if item.get("uuid")],
            "datasets": [{
                "uuid": str(item.get("uuid")),
                "title": item.get("table_name"),
                "database_uuid": item.get("database_uuid"),
                "signature": json.dumps({
                    "title": item.get("table_name"),
                    "schema": item.get("schema"),
                    "database_uuid": item.get("database_uuid"),
                    "sql": item.get("sql"),
                    "columns": item.get("columns"),
                    "metrics": item.get("metrics"),
                }, sort_keys=True, default=str),
            } for item in datasets if item.get("uuid")],
            "charts": [{
                "uuid": str(item.get("uuid")),
                "title": item.get("slice_name") or item.get("name"),
                "dataset_uuid": item.get("datasource_uuid") or item.get("dataset_uuid"),
                "signature": json.dumps({
                    "title": item.get("slice_name") or item.get("name"),
                    "viz_type": item.get("viz_type"),
                    "params": item.get("params"),
                    "query_context": item.get("query_context"),
                    "datasource_uuid": item.get("datasource_uuid"),
                    "dataset_uuid": item.get("dataset_uuid"),
                }, sort_keys=True, default=str),
            } for item in charts if item.get("uuid")],
        }
    # [/DEF:_build_target_signatures:Function]
    # [DEF:_build_risks:Function]
    # @PURPOSE: Build risk items for missing datasource, broken refs, overwrite, owner mismatch.
    def _build_risks(
        self,
        source_objects: Dict[str, List[Dict[str, Any]]],
        target_objects: Dict[str, List[Dict[str, Any]]],
        diff: Dict[str, Dict[str, List[Dict[str, Any]]]],
        target_client: SupersetClient,
    ) -> List[Dict[str, Any]]:
        return build_risks(source_objects, target_objects, diff, target_client)
    # [/DEF:_build_risks:Function]


# [/DEF:MigrationDryRunService:Class]
# [/DEF:backend.src.core.migration.dry_run_orchestrator:Module]
@@ -1,119 +0,0 @@
# [DEF:backend.src.core.migration.risk_assessor:Module]
# @TIER: STANDARD
# @SEMANTICS: migration, dry_run, risk, scoring
# @PURPOSE: Risk evaluation helpers for migration pre-flight reporting.
# @LAYER: Core
# @RELATION: USED_BY -> backend.src.core.migration.dry_run_orchestrator

from typing import Any, Dict, List

from ..superset_client import SupersetClient


# [DEF:index_by_uuid:Function]
# @PURPOSE: Build UUID-index from normalized objects.
def index_by_uuid(objects: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
    indexed: Dict[str, Dict[str, Any]] = {}
    for obj in objects:
        uuid = obj.get("uuid")
        if uuid:
            indexed[str(uuid)] = obj
    return indexed
# [/DEF:index_by_uuid:Function]


# [DEF:extract_owner_identifiers:Function]
# @PURPOSE: Normalize owner payloads for stable comparison.
def extract_owner_identifiers(owners: Any) -> List[str]:
    if not isinstance(owners, list):
        return []
    ids: List[str] = []
    for owner in owners:
        if isinstance(owner, dict):
            if owner.get("username"):
                ids.append(str(owner["username"]))
            elif owner.get("id") is not None:
                ids.append(str(owner["id"]))
        elif owner is not None:
            ids.append(str(owner))
    return sorted(set(ids))
# [/DEF:extract_owner_identifiers:Function]
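Run on mixed owner payloads, the normalization above collapses dicts and bare values into one sorted, de-duplicated list of strings; a self-contained copy of that logic on sample data:

```python
# Same rules as extract_owner_identifiers: prefer username, fall back to id,
# stringify bare values, drop None, then sort the de-duplicated set.
def owner_ids(owners):
    if not isinstance(owners, list):
        return []
    ids = []
    for owner in owners:
        if isinstance(owner, dict):
            if owner.get("username"):
                ids.append(str(owner["username"]))
            elif owner.get("id") is not None:
                ids.append(str(owner["id"]))
        elif owner is not None:
            ids.append(str(owner))
    return sorted(set(ids))

print(owner_ids([{"username": "bob"}, {"id": 7}, "bob", None]))  # ['7', 'bob']
```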
# [DEF:build_risks:Function]
# @PURPOSE: Build risk list from computed diffs and target catalog state.
def build_risks(
    source_objects: Dict[str, List[Dict[str, Any]]],
    target_objects: Dict[str, List[Dict[str, Any]]],
    diff: Dict[str, Dict[str, List[Dict[str, Any]]]],
    target_client: SupersetClient,
) -> List[Dict[str, Any]]:
    risks: List[Dict[str, Any]] = []
    for object_type in ("dashboards", "charts", "datasets"):
        for item in diff[object_type]["update"]:
            risks.append({
                "code": "overwrite_existing",
                "severity": "medium",
                "object_type": object_type[:-1],
                "object_uuid": item["uuid"],
                "message": f"Object will be updated in target: {item.get('title') or item['uuid']}",
            })

    target_dataset_uuids = set(index_by_uuid(target_objects["datasets"]).keys())
    _, target_databases = target_client.get_databases(query={"columns": ["uuid"]})
    target_database_uuids = {str(item.get("uuid")) for item in target_databases if item.get("uuid")}

    for dataset in source_objects["datasets"]:
        db_uuid = dataset.get("database_uuid")
        if db_uuid and str(db_uuid) not in target_database_uuids:
            risks.append({
                "code": "missing_datasource",
                "severity": "high",
                "object_type": "dataset",
                "object_uuid": dataset.get("uuid"),
                "message": f"Target datasource is missing for dataset {dataset.get('title') or dataset.get('uuid')}",
            })

    for chart in source_objects["charts"]:
        ds_uuid = chart.get("dataset_uuid")
        if ds_uuid and str(ds_uuid) not in target_dataset_uuids:
            risks.append({
                "code": "breaking_reference",
                "severity": "high",
                "object_type": "chart",
                "object_uuid": chart.get("uuid"),
                "message": f"Chart references dataset not found on target: {ds_uuid}",
            })

    source_dash = index_by_uuid(source_objects["dashboards"])
    target_dash = index_by_uuid(target_objects["dashboards"])
    for item in diff["dashboards"]["update"]:
        source_obj = source_dash.get(item["uuid"])
        target_obj = target_dash.get(item["uuid"])
        if not source_obj or not target_obj:
            continue
        source_owners = extract_owner_identifiers(source_obj.get("owners"))
        target_owners = extract_owner_identifiers(target_obj.get("owners"))
        if source_owners and target_owners and source_owners != target_owners:
            risks.append({
                "code": "owner_mismatch",
                "severity": "low",
                "object_type": "dashboard",
                "object_uuid": item["uuid"],
                "message": f"Owner mismatch for dashboard {item.get('title') or item['uuid']}",
            })
    return risks
# [/DEF:build_risks:Function]
# [DEF:score_risks:Function]
|
||||
# @PURPOSE: Aggregate risk list into score and level.
|
||||
def score_risks(risk_items: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
weights = {"high": 25, "medium": 10, "low": 5}
|
||||
score = min(100, sum(weights.get(item.get("severity", "low"), 5) for item in risk_items))
|
||||
level = "low" if score < 25 else "medium" if score < 60 else "high"
|
||||
return {"score": score, "level": level, "items": risk_items}
|
||||
# [/DEF:score_risks:Function]
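Worked example of the banding above, using the same weights: one high (25) plus one medium (10) risk scores 35, which lands in the medium band (25 ≤ score < 60):

```python
weights = {"high": 25, "medium": 10, "low": 5}
risk_items = [{"severity": "high"}, {"severity": "medium"}]

# Sum per-severity weights, defaulting unknown severities to "low",
# and cap the aggregate at 100 so the level bands stay meaningful.
score = min(100, sum(weights.get(i.get("severity", "low"), 5) for i in risk_items))
level = "low" if score < 25 else "medium" if score < 60 else "high"
print(score, level)  # 35 medium
```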


# [/DEF:backend.src.core.migration.risk_assessor:Module]
@@ -14,7 +14,7 @@ import json
import re
import zipfile
from pathlib import Path
from typing import Any, Dict, List, Optional, Tuple, Union, cast
from typing import Dict, List, Optional, Tuple, Union, cast
from requests import Response
from datetime import datetime
from .logger import logger as app_logger, belief_scope
@@ -87,18 +87,7 @@ class SupersetClient:
        app_logger.info("[get_dashboards][Enter] Fetching dashboards.")
        validated_query = self._validate_query_params(query or {})
        if 'columns' not in validated_query:
            validated_query['columns'] = [
                "slug",
                "id",
                "url",
                "changed_on_utc",
                "dashboard_title",
                "published",
                "created_by",
                "changed_by",
                "changed_by_name",
                "owners",
            ]
        validated_query['columns'] = ["slug", "id", "changed_on_utc", "dashboard_title", "published"]

        paginated_data = self._fetch_all_pages(
            endpoint="/dashboard/",
@@ -109,42 +98,6 @@ class SupersetClient:
        return total_count, paginated_data
# [/DEF:get_dashboards:Function]

# [DEF:get_dashboards_page:Function]
# @PURPOSE: Fetches a single dashboards page from Superset without iterating all pages.
# @PARAM: query (Optional[Dict]) - Query with page/page_size and optional columns.
# @PRE: Client is authenticated.
# @POST: Returns total count and one page of dashboards.
# @RETURN: Tuple[int, List[Dict]]
    def get_dashboards_page(self, query: Optional[Dict] = None) -> Tuple[int, List[Dict]]:
        with belief_scope("get_dashboards_page"):
            validated_query = self._validate_query_params(query or {})
            if "columns" not in validated_query:
                validated_query["columns"] = [
                    "slug",
                    "id",
                    "url",
                    "changed_on_utc",
                    "dashboard_title",
                    "published",
                    "created_by",
                    "changed_by",
                    "changed_by_name",
                    "owners",
                ]

            response_json = cast(
                Dict[str, Any],
                self.network.request(
                    method="GET",
                    endpoint="/dashboard/",
                    params={"q": json.dumps(validated_query)},
                ),
            )
            result = response_json.get("result", [])
            total_count = response_json.get("count", len(result))
            return total_count, result
# [/DEF:get_dashboards_page:Function]

# [DEF:get_dashboards_summary:Function]
# @PURPOSE: Fetches dashboard metadata optimized for the grid.
# @PRE: Client is authenticated.
@@ -152,171 +105,23 @@ class SupersetClient:
# @RETURN: List[Dict]
    def get_dashboards_summary(self) -> List[Dict]:
        with belief_scope("SupersetClient.get_dashboards_summary"):
            # Rely on list endpoint default projection to stay compatible
            # across Superset versions and preserve owners in one request.
            query: Dict[str, Any] = {}
            query = {
                "columns": ["id", "dashboard_title", "changed_on_utc", "published"]
            }
            _, dashboards = self.get_dashboards(query=query)

            # Map fields to DashboardMetadata schema
            result = []
            for dash in dashboards:
                owners = self._extract_owner_labels(dash.get("owners"))
                # No per-dashboard detail requests here: keep list endpoint O(1).
                if not owners:
                    owners = self._extract_owner_labels(
                        [dash.get("created_by"), dash.get("changed_by")],
                    )

                result.append({
                    "id": dash.get("id"),
                    "slug": dash.get("slug"),
                    "title": dash.get("dashboard_title"),
                    "url": dash.get("url"),
                    "last_modified": dash.get("changed_on_utc"),
                    "status": "published" if dash.get("published") else "draft",
                    "created_by": self._extract_user_display(
                        None,
                        dash.get("created_by"),
                    ),
                    "modified_by": self._extract_user_display(
                        dash.get("changed_by_name"),
                        dash.get("changed_by"),
                    ),
                    "owners": owners,
                    "status": "published" if dash.get("published") else "draft"
                })
            return result
# [/DEF:get_dashboards_summary:Function]

# [DEF:get_dashboards_summary_page:Function]
# @PURPOSE: Fetches one page of dashboard metadata optimized for the grid.
# @PARAM: page (int) - 1-based page number from API route contract.
# @PARAM: page_size (int) - Number of items per page.
# @PRE: page >= 1 and page_size > 0.
# @POST: Returns mapped summaries and total dashboard count.
# @RETURN: Tuple[int, List[Dict]]
    def get_dashboards_summary_page(
        self,
        page: int,
        page_size: int,
        search: Optional[str] = None,
    ) -> Tuple[int, List[Dict]]:
        with belief_scope("SupersetClient.get_dashboards_summary_page"):
            query: Dict[str, Any] = {
                "page": max(page - 1, 0),
                "page_size": page_size,
            }
            normalized_search = (search or "").strip()
            if normalized_search:
                # Superset list API supports filter objects with `opr` operator.
                # `ct` -> contains (ILIKE on most Superset backends).
                query["filters"] = [
                    {
                        "col": "dashboard_title",
                        "opr": "ct",
                        "value": normalized_search,
                    }
                ]

            total_count, dashboards = self.get_dashboards_page(query=query)

            result = []
            for dash in dashboards:
                owners = self._extract_owner_labels(dash.get("owners"))
                if not owners:
                    owners = self._extract_owner_labels(
                        [dash.get("created_by"), dash.get("changed_by")],
                    )

                result.append({
                    "id": dash.get("id"),
                    "slug": dash.get("slug"),
                    "title": dash.get("dashboard_title"),
                    "url": dash.get("url"),
                    "last_modified": dash.get("changed_on_utc"),
                    "status": "published" if dash.get("published") else "draft",
                    "created_by": self._extract_user_display(
                        None,
                        dash.get("created_by"),
                    ),
                    "modified_by": self._extract_user_display(
                        dash.get("changed_by_name"),
                        dash.get("changed_by"),
                    ),
                    "owners": owners,
                })

            return total_count, result
# [/DEF:get_dashboards_summary_page:Function]
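The query construction above converts the route's 1-based page to the 0-based page the list endpoint expects, and only attaches a filter object when the search string is non-empty after trimming. A standalone sketch (the helper name `build_dashboard_query` is illustrative, not part of the codebase):

```python
def build_dashboard_query(page, page_size, search=None):
    # API route contract is 1-based; the list endpoint is 0-based.
    query = {"page": max(page - 1, 0), "page_size": page_size}
    normalized = (search or "").strip()
    if normalized:
        # "ct" = contains; case-insensitive on most Superset backends.
        query["filters"] = [
            {"col": "dashboard_title", "opr": "ct", "value": normalized}
        ]
    return query

print(build_dashboard_query(1, 20, "  sales "))
```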

# [DEF:_extract_owner_labels:Function]
# @PURPOSE: Normalize dashboard owners payload to stable display labels.
# @PRE: owners payload can be scalar, object or list.
# @POST: Returns deduplicated non-empty owner labels preserving order.
# @RETURN: List[str]
    def _extract_owner_labels(self, owners_payload: Any) -> List[str]:
        if owners_payload is None:
            return []

        owners_list: List[Any]
        if isinstance(owners_payload, list):
            owners_list = owners_payload
        else:
            owners_list = [owners_payload]

        normalized: List[str] = []
        for owner in owners_list:
            label: Optional[str] = None
            if isinstance(owner, dict):
                label = self._extract_user_display(None, owner)
            else:
                label = self._sanitize_user_text(owner)
            if label and label not in normalized:
                normalized.append(label)
        return normalized
# [/DEF:_extract_owner_labels:Function]
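The normalization above accepts a scalar, a dict, or a list, and deduplicates labels while preserving first-seen order. A simplified standalone sketch (the dict handling here is reduced to `full_name`/`username` for illustration; the real method delegates to `_extract_user_display`):

```python
def extract_owner_labels(owners_payload):
    # Coerce any scalar/dict payload into a list, then dedupe labels
    # while preserving the order in which they were first seen.
    if owners_payload is None:
        return []
    owners = owners_payload if isinstance(owners_payload, list) else [owners_payload]
    labels = []
    for owner in owners:
        if owner is None:
            continue
        if isinstance(owner, dict):
            label = owner.get("full_name") or owner.get("username")
        else:
            label = str(owner).strip() or None
        if label and label not in labels:
            labels.append(label)
    return labels

print(extract_owner_labels([{"username": "alice"}, "alice", " bob ", None]))  # ['alice', 'bob']
```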

# [DEF:_extract_user_display:Function]
# @PURPOSE: Normalize user payload to a stable display name.
# @PRE: user payload can be string, dict or None.
# @POST: Returns compact non-empty display value or None.
# @RETURN: Optional[str]
    def _extract_user_display(self, preferred_value: Optional[str], user_payload: Optional[Dict]) -> Optional[str]:
        preferred = self._sanitize_user_text(preferred_value)
        if preferred:
            return preferred

        if isinstance(user_payload, dict):
            full_name = self._sanitize_user_text(user_payload.get("full_name"))
            if full_name:
                return full_name
            first_name = self._sanitize_user_text(user_payload.get("first_name")) or ""
            last_name = self._sanitize_user_text(user_payload.get("last_name")) or ""
            combined = " ".join(part for part in [first_name, last_name] if part).strip()
            if combined:
                return combined
            username = self._sanitize_user_text(user_payload.get("username"))
            if username:
                return username
            email = self._sanitize_user_text(user_payload.get("email"))
            if email:
                return email
        return None
# [/DEF:_extract_user_display:Function]
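The fallback chain above is: preferred value, then `full_name`, then "first last", then `username`, then `email`, else `None`. A standalone sketch of the same chain (function name is illustrative):

```python
def extract_user_display(preferred, user):
    # Fallback chain: preferred -> full_name -> "first last" -> username -> email -> None.
    def clean(v):
        if v is None:
            return None
        s = str(v).strip()
        return s or None

    if clean(preferred):
        return clean(preferred)
    if isinstance(user, dict):
        combined = " ".join(
            p for p in (clean(user.get("first_name")), clean(user.get("last_name"))) if p
        ) or None
        for candidate in (clean(user.get("full_name")), combined,
                          clean(user.get("username")), clean(user.get("email"))):
            if candidate:
                return candidate
    return None

print(extract_user_display(None, {"first_name": "Ada", "last_name": "Lovelace"}))  # Ada Lovelace
```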

# [DEF:_sanitize_user_text:Function]
# @PURPOSE: Convert scalar value to non-empty user-facing text.
# @PRE: value can be any scalar type.
# @POST: Returns trimmed string or None.
# @RETURN: Optional[str]
    def _sanitize_user_text(self, value: Optional[Union[str, int]]) -> Optional[str]:
        if value is None:
            return None
        normalized = str(value).strip()
        if not normalized:
            return None
        return normalized
# [/DEF:_sanitize_user_text:Function]

# [DEF:get_dashboard:Function]
# @PURPOSE: Fetches a single dashboard by ID.
# @PRE: Client is authenticated and dashboard_id exists.
@@ -531,25 +336,6 @@ class SupersetClient:
        }
# [/DEF:get_dashboard_detail:Function]

# [DEF:get_charts:Function]
# @PURPOSE: Fetches all charts with pagination support.
# @PARAM: query (Optional[Dict]) - Optional query params/columns/filters.
# @PRE: Client is authenticated.
# @POST: Returns total count and charts list.
# @RETURN: Tuple[int, List[Dict]]
    def get_charts(self, query: Optional[Dict] = None) -> Tuple[int, List[Dict]]:
        with belief_scope("get_charts"):
            validated_query = self._validate_query_params(query or {})
            if "columns" not in validated_query:
                validated_query["columns"] = ["id", "uuid", "slice_name", "viz_type"]

            paginated_data = self._fetch_all_pages(
                endpoint="/chart/",
                pagination_options={"base_query": validated_query, "results_field": "result"},
            )
            return len(paginated_data), paginated_data
# [/DEF:get_charts:Function]

# [DEF:_extract_chart_ids_from_layout:Function]
# @PURPOSE: Traverses dashboard layout metadata and extracts chart IDs from common keys.
# @PRE: payload can be dict/list/scalar.

@@ -19,20 +19,6 @@ from ..logger import belief_scope
# @TIER: CRITICAL
# @INVARIANT: logger is always a valid TaskLogger instance.
# @UX_STATE: Idle -> Active -> Complete
#
# @TEST_CONTRACT: TaskContextInit ->
# {
#   required_fields: {task_id: str, add_log_fn: Callable, params: dict},
#   optional_fields: {default_source: str},
#   invariants: [
#     "task_id matches initialized logger's task_id",
#     "logger is a valid TaskLogger instance"
#   ]
# }
# @TEST_FIXTURE: valid_context -> {"task_id": "123", "add_log_fn": lambda *args: None, "params": {"k": "v"}, "default_source": "plugin"}
# @TEST_EDGE: missing_task_id -> raises TypeError
# @TEST_EDGE: missing_add_log_fn -> raises TypeError
# @TEST_INVARIANT: logger_initialized -> verifies: [valid_context]
class TaskContext:
    """
    Execution context provided to plugins during task execution.

@@ -6,17 +6,6 @@
# @RELATION: Depends on PluginLoader to get plugin instances. It is used by the API layer to create and query tasks.
# @INVARIANT: Task IDs are unique.
# @CONSTRAINT: Must use belief_scope for logging.
# @TEST_CONTRACT: TaskManagerModule -> {
#   required_fields: {plugin_loader: PluginLoader},
#   optional_fields: {},
#   invariants: ["Must use belief_scope for logging"]
# }
# @TEST_FIXTURE: valid_module -> {"manager_initialized": true}
# @TEST_EDGE: missing_required_field -> {"plugin_loader": null}
# @TEST_EDGE: empty_response -> {"tasks": []}
# @TEST_EDGE: invalid_type -> {"plugin_loader": "string_instead_of_object"}
# @TEST_EDGE: external_failure -> {"db_unavailable": true}
# @TEST_INVARIANT: logger_compliance -> verifies: [valid_module]

# [SECTION: IMPORTS]
import asyncio
@@ -39,20 +28,6 @@ from ..logger import logger, belief_scope, should_log_task_level
# @INVARIANT: Task IDs are unique within the registry.
# @INVARIANT: Each task has exactly one status at any time.
# @INVARIANT: Log entries are never deleted after being added to a task.
#
# @TEST_CONTRACT: TaskManagerModel ->
# {
#   required_fields: {plugin_loader: PluginLoader},
#   invariants: [
#     "Tasks are persisted immediately upon creation",
#     "Running tasks use a thread pool or asyncio event loop based on executor type",
#     "Log flushing runs on a background thread"
#   ]
# }
# @TEST_FIXTURE: valid_manager -> {"plugin_loader": "MockPluginLoader()"}
# @TEST_EDGE: create_task_invalid_plugin -> raises ValueError
# @TEST_EDGE: create_task_invalid_params -> raises ValueError
# @TEST_INVARIANT: lifecycle_management -> verifies: [valid_manager]
class TaskManager:
    """
    Manages the lifecycle of tasks, including their creation, execution, and state tracking.

@@ -45,14 +45,6 @@ class LogLevel(str, Enum):
# @PURPOSE: A Pydantic model representing a single, structured log entry associated with a task.
# @TIER: CRITICAL
# @INVARIANT: Each log entry has a unique timestamp and source.
#
# @TEST_CONTRACT: LogEntryModel ->
# {
#   required_fields: {message: str},
#   optional_fields: {timestamp: datetime, level: str, source: str, context: dict, metadata: dict}
# }
# @TEST_FIXTURE: valid_log_entry -> {"message": "Plugin initialized"}
# @TEST_EDGE: empty_message -> {"message": ""}
class LogEntry(BaseModel):
    timestamp: datetime = Field(default_factory=datetime.utcnow)
    level: str = Field(default="INFO")

@@ -24,20 +24,6 @@ from ..logger import logger, belief_scope
# @SEMANTICS: persistence, service, database, sqlalchemy
# @PURPOSE: Provides methods to save and load tasks from the tasks.db database using SQLAlchemy.
# @INVARIANT: Persistence must handle potentially missing task fields natively.
#
# @TEST_CONTRACT: TaskPersistenceService ->
# {
#   required_fields: {},
#   invariants: [
#     "persist_task creates or updates a record",
#     "load_tasks retrieves valid Task instances",
#     "delete_tasks correctly removes records from the database"
#   ]
# }
# @TEST_FIXTURE: valid_task_persistence -> {"task_id": "123", "status": "PENDING"}
# @TEST_EDGE: persist_invalid_task_type -> raises Exception
# @TEST_EDGE: load_corrupt_json_params -> handled gracefully
# @TEST_INVARIANT: accurate_round_trip -> verifies: [valid_task_persistence, load_corrupt_json_params]
class TaskPersistenceService:
# [DEF:_json_load_if_needed:Function]
# @PURPOSE: Safely load JSON strings from DB if necessary
@@ -259,19 +245,6 @@ class TaskPersistenceService:
# @TIER: CRITICAL
# @RELATION: DEPENDS_ON -> TaskLogRecord
# @INVARIANT: Log entries are batch-inserted for performance.
#
# @TEST_CONTRACT: TaskLogPersistenceService ->
# {
#   required_fields: {},
#   invariants: [
#     "add_logs efficiently saves logs to the database",
#     "get_logs retrieves properly filtered LogEntry objects"
#   ]
# }
# @TEST_FIXTURE: valid_log_batch -> {"task_id": "123", "logs": [{"level": "INFO", "message": "msg"}]}
# @TEST_EDGE: empty_log_list -> no-op behavior
# @TEST_EDGE: add_logs_db_error -> rollback and log error
# @TEST_INVARIANT: accurate_log_aggregation -> verifies: [valid_log_batch]
class TaskLogPersistenceService:
    """
    Service for persisting and querying task logs.
@@ -15,21 +15,8 @@ from typing import Dict, Any, Optional, Callable
# @PURPOSE: A wrapper around TaskManager._add_log that carries task_id and source context.
# @TIER: CRITICAL
# @INVARIANT: All log calls include the task_id and source.
# @TEST_DATA: task_logger -> {"task_id": "test_123", "source": "test_plugin"}
# @UX_STATE: Idle -> Logging -> (system records log)
#
# @TEST_CONTRACT: TaskLoggerModel ->
# {
#   required_fields: {task_id: str, add_log_fn: Callable},
#   optional_fields: {source: str},
#   invariants: [
#     "All specific log methods (info, error) delegate to _log",
#     "with_source creates a new logger with the same task_id"
#   ]
# }
# @TEST_FIXTURE: valid_task_logger -> {"task_id": "test_123", "add_log_fn": lambda *args: None, "source": "test_plugin"}
# @TEST_EDGE: missing_task_id -> raises TypeError
# @TEST_EDGE: invalid_add_log_fn -> raises TypeError
# @TEST_INVARIANT: consistent_delegation -> verifies: [valid_task_logger]
class TaskLogger:
    """
    A dedicated logger for tasks that automatically tags logs with source attribution.

@@ -1,5 +1,5 @@
# [DEF:test_report_models:Module]
# @TIER: STANDARD
# @TIER: CRITICAL
# @PURPOSE: Unit tests for report Pydantic models and their validators
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.models.report

@@ -47,19 +47,6 @@ class ReportStatus(str, Enum):
# @INVARIANT: The properties accurately describe error state.
# @SEMANTICS: error, context, payload
# @PURPOSE: Error and recovery context for failed/partial reports.
#
# @TEST_CONTRACT: ErrorContextModel ->
# {
#   required_fields: {
#     message: str
#   },
#   optional_fields: {
#     code: str,
#     next_actions: list[str]
#   }
# }
# @TEST_FIXTURE: basic_error -> {"message": "Connection timeout", "code": "ERR_504", "next_actions": ["retry"]}
# @TEST_EDGE: missing_message -> {"code": "ERR_504"}
class ErrorContext(BaseModel):
    code: Optional[str] = None
    message: str
@@ -72,36 +59,6 @@ class ErrorContext(BaseModel):
# @INVARIANT: Must represent canonical task record attributes.
# @SEMANTICS: report, model, summary
# @PURPOSE: Canonical normalized report envelope for one task execution.
#
# @TEST_CONTRACT: TaskReportModel ->
# {
#   required_fields: {
#     report_id: str,
#     task_id: str,
#     task_type: TaskType,
#     status: ReportStatus,
#     updated_at: datetime,
#     summary: str
#   },
#   invariants: [
#     "report_id is a non-empty string",
#     "task_id is a non-empty string",
#     "summary is a non-empty string"
#   ]
# }
# @TEST_FIXTURE: valid_task_report ->
# {
#   report_id: "rep-123",
#   task_id: "task-456",
#   task_type: "migration",
#   status: "success",
#   updated_at: "2026-02-26T12:00:00Z",
#   summary: "Migration completed successfully"
# }
# @TEST_EDGE: empty_report_id -> {"report_id": " ", "task_id": "task-456", "task_type": "migration", "status": "success", "updated_at": "2026-02-26T12:00:00Z", "summary": "Done"}
# @TEST_EDGE: empty_summary -> {"report_id": "rep-123", "task_id": "task-456", "task_type": "migration", "status": "success", "updated_at": "2026-02-26T12:00:00Z", "summary": ""}
# @TEST_EDGE: invalid_task_type -> {"report_id": "rep-123", "task_id": "task-456", "task_type": "invalid_type", "status": "success", "updated_at": "2026-02-26T12:00:00Z", "summary": "Done"}
# @TEST_INVARIANT: non_empty_validators -> verifies: [empty_report_id, empty_summary]
class TaskReport(BaseModel):
    report_id: str
    task_id: str
@@ -128,25 +85,6 @@ class TaskReport(BaseModel):
# @INVARIANT: Time and pagination queries are mutually consistent.
# @SEMANTICS: query, filter, search
# @PURPOSE: Query object for server-side report filtering, sorting, and pagination.
#
# @TEST_CONTRACT: ReportQueryModel ->
# {
#   optional_fields: {
#     page: int, page_size: int, task_types: list[TaskType], statuses: list[ReportStatus],
#     time_from: datetime, time_to: datetime, search: str, sort_by: str, sort_order: str
#   },
#   invariants: [
#     "page >= 1", "1 <= page_size <= 100",
#     "sort_by in {'updated_at', 'status', 'task_type'}",
#     "sort_order in {'asc', 'desc'}",
#     "time_from <= time_to if both exist"
#   ]
# }
# @TEST_FIXTURE: valid_query -> {"page": 1, "page_size": 20, "sort_by": "updated_at", "sort_order": "desc"}
# @TEST_EDGE: invalid_page_size_large -> {"page_size": 150}
# @TEST_EDGE: invalid_sort_by -> {"sort_by": "unknown_field"}
# @TEST_EDGE: invalid_time_range -> {"time_from": "2026-02-26T12:00:00Z", "time_to": "2026-02-25T12:00:00Z"}
# @TEST_INVARIANT: attribute_constraints_enforced -> verifies: [invalid_page_size_large, invalid_sort_by, invalid_time_range]
class ReportQuery(BaseModel):
    page: int = Field(default=1, ge=1)
    page_size: int = Field(default=20, ge=1, le=100)
@@ -186,16 +124,6 @@ class ReportQuery(BaseModel):
# @INVARIANT: Represents paginated data correctly.
# @SEMANTICS: collection, pagination
# @PURPOSE: Paginated collection of normalized task reports.
#
# @TEST_CONTRACT: ReportCollectionModel ->
# {
#   required_fields: {
#     items: list[TaskReport], total: int, page: int, page_size: int, has_next: bool, applied_filters: ReportQuery
#   },
#   invariants: ["total >= 0", "page >= 1", "page_size >= 1"]
# }
# @TEST_FIXTURE: empty_collection -> {"items": [], "total": 0, "page": 1, "page_size": 20, "has_next": False, "applied_filters": {}}
# @TEST_EDGE: negative_total -> {"items": [], "total": -5, "page": 1, "page_size": 20, "has_next": False, "applied_filters": {}}
class ReportCollection(BaseModel):
    items: List[TaskReport]
    total: int = Field(ge=0)
@@ -211,14 +139,6 @@ class ReportCollection(BaseModel):
# @INVARIANT: Incorporates a report and logs correctly.
# @SEMANTICS: view, detail, logs
# @PURPOSE: Detailed report representation including diagnostics and recovery actions.
#
# @TEST_CONTRACT: ReportDetailViewModel ->
# {
#   required_fields: {report: TaskReport},
#   optional_fields: {timeline: list[dict], diagnostics: dict, next_actions: list[str]}
# }
# @TEST_FIXTURE: valid_detail -> {"report": {"report_id": "rep-1", "task_id": "task-1", "task_type": "backup", "status": "success", "updated_at": "2026-02-26T12:00:00Z", "summary": "Done"}}
# @TEST_EDGE: missing_report -> {}
class ReportDetailView(BaseModel):
    report: TaskReport
    timeline: List[Dict[str, Any]] = Field(default_factory=list)

@@ -39,61 +39,6 @@ class TaskRecord(Base):
# @TIER: CRITICAL
# @RELATION: DEPENDS_ON -> TaskRecord
# @INVARIANT: Each log entry belongs to exactly one task.
#
# @TEST_CONTRACT: TaskLogCreate ->
# {
#   required_fields: {
#     task_id: str,
#     timestamp: datetime,
#     level: str,
#     source: str,
#     message: str
#   },
#   optional_fields: {
#     metadata_json: str,
#     id: int
#   },
#   invariants: [
#     "task_id matches an existing TaskRecord.id"
#   ]
# }
#
# @TEST_FIXTURE: basic_info_log ->
# {
#   task_id: "00000000-0000-0000-0000-000000000000",
#   timestamp: "2026-02-26T12:00:00Z",
#   level: "INFO",
#   source: "system",
#   message: "Task initialization complete"
# }
#
# @TEST_EDGE: missing_required_field ->
# {
#   timestamp: "2026-02-26T12:00:00Z",
#   level: "ERROR",
#   source: "system",
#   message: "Missing task_id"
# }
#
# @TEST_EDGE: invalid_type ->
# {
#   task_id: "00000000-0000-0000-0000-000000000000",
#   timestamp: "2026-02-26T12:00:00Z",
#   level: 500,
#   source: "system",
#   message: "Integer level"
# }
#
# @TEST_EDGE: empty_message ->
# {
#   task_id: "00000000-0000-0000-0000-000000000000",
#   timestamp: "2026-02-26T12:00:00Z",
#   level: "DEBUG",
#   source: "system",
#   message: ""
# }
#
# @TEST_INVARIANT: exact_one_task_association -> verifies: [basic_info_log, missing_required_field]
class TaskLogRecord(Base):
    __tablename__ = "task_logs"
|
||||
|
||||
|
||||
@@ -10,7 +10,7 @@
|
||||
},
|
||||
"changed_by_name": "Superset Admin",
|
||||
"changed_on": "2026-02-10T13:39:35.945662",
|
||||
"changed_on_delta_humanized": "16 days ago",
|
||||
"changed_on_delta_humanized": "15 days ago",
|
||||
"charts": [
|
||||
"TA-0001-001 test_chart"
|
||||
],
|
||||
@@ -19,7 +19,7 @@
|
||||
"id": 1,
|
||||
"last_name": "Admin"
|
||||
},
|
||||
"created_on_delta_humanized": "16 days ago",
|
||||
"created_on_delta_humanized": "15 days ago",
|
||||
"css": null,
|
||||
"dashboard_title": "TA-0001 Test dashboard",
|
||||
"id": 13,
|
||||
@@ -54,7 +54,7 @@
|
||||
"last_name": "Admin"
|
||||
},
|
||||
"changed_on": "2026-02-10T13:38:26.175551",
|
||||
"changed_on_humanized": "16 days ago",
|
||||
"changed_on_humanized": "15 days ago",
|
||||
"column_formats": {},
|
||||
"columns": [
|
||||
{
|
||||
@@ -424,7 +424,7 @@
|
||||
"last_name": "Admin"
|
||||
},
|
||||
"created_on": "2026-02-10T13:38:26.050436",
|
||||
"created_on_humanized": "16 days ago",
|
||||
"created_on_humanized": "15 days ago",
|
||||
"database": {
|
||||
"allow_multi_catalog": false,
|
||||
"backend": "postgresql",
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
# [DEF:test_encryption_manager:Module]
|
||||
# @TIER: STANDARD
|
||||
# @TIER: CRITICAL
|
||||
# @SEMANTICS: encryption, security, fernet, api-keys, tests
|
||||
# @PURPOSE: Unit tests for EncryptionManager encrypt/decrypt functionality.
|
||||
# @LAYER: Domain
|
||||
@@ -27,7 +27,7 @@ class TestEncryptionManager:
|
||||
# Re-implement the same logic as EncryptionManager to avoid import issues
|
||||
# with the llm_provider module's relative imports
|
||||
import os
|
||||
key = os.getenv("ENCRYPTION_KEY", "REMOVED_HISTORICAL_SECRET_DO_NOT_USE").encode()
|
||||
key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
|
||||
fernet = Fernet(key)
|
||||
|
||||
class EncryptionManager:
|
||||
|
||||
@@ -33,43 +33,22 @@ async def test_get_dashboards_with_status():
|
||||
]
|
||||
|
||||
# Mock tasks
|
||||
task_prod_old = MagicMock()
|
||||
task_prod_old.id = "task-123"
|
||||
task_prod_old.plugin_id = "llm_dashboard_validation"
|
||||
task_prod_old.status = "SUCCESS"
|
||||
task_prod_old.params = {"dashboard_id": "1", "environment_id": "prod"}
|
||||
task_prod_old.started_at = datetime(2024, 1, 1, 10, 0, 0)
|
||||
|
||||
task_prod_new = MagicMock()
|
||||
task_prod_new.id = "task-124"
|
||||
task_prod_new.plugin_id = "llm_dashboard_validation"
|
||||
task_prod_new.status = "TaskStatus.FAILED"
|
||||
task_prod_new.params = {"dashboard_id": "1", "environment_id": "prod"}
|
||||
task_prod_new.result = {"status": "FAIL"}
|
||||
task_prod_new.started_at = datetime(2024, 1, 1, 12, 0, 0)
|
||||
|
||||
task_other_env = MagicMock()
|
||||
task_other_env.id = "task-200"
|
||||
task_other_env.plugin_id = "llm_dashboard_validation"
|
||||
task_other_env.status = "SUCCESS"
|
||||
task_other_env.params = {"dashboard_id": "1", "environment_id": "stage"}
|
||||
task_other_env.started_at = datetime(2024, 1, 1, 13, 0, 0)
|
||||
|
||||
mock_task = MagicMock()
|
||||
mock_task.id = "task-123"
|
||||
mock_task.status = "SUCCESS"
|
||||
mock_task.params = {"resource_id": "dashboard-1"}
|
||||
mock_task.created_at = datetime.now()
|
||||
|
||||
env = MagicMock()
|
||||
env.id = "prod"
|
||||
|
||||
result = await service.get_dashboards_with_status(
|
||||
env,
|
||||
[task_prod_old, task_prod_new, task_other_env],
|
||||
)
|
||||
|
||||
result = await service.get_dashboards_with_status(env, [mock_task])
|
||||
|
||||
assert len(result) == 2
|
||||
assert result[0]["id"] == 1
|
||||
assert "git_status" in result[0]
|
||||
assert "last_task" in result[0]
|
||||
assert result[0]["last_task"]["task_id"] == "task-124"
|
||||
assert result[0]["last_task"]["status"] == "FAILED"
|
||||
assert result[0]["last_task"]["validation_status"] == "FAIL"
|
||||
assert result[0]["last_task"]["task_id"] == "task-123"
|
||||
|
||||
|
||||
# [/DEF:test_get_dashboards_with_status:Function]
|
||||
@@ -166,9 +145,7 @@ def test_get_git_status_for_dashboard_no_repo():
|
||||
|
||||
result = service._get_git_status_for_dashboard(123)
|
||||
|
||||
assert result is not None
|
||||
assert result['sync_status'] == 'NO_REPO'
|
||||
assert result['has_repo'] is False
|
||||
assert result is None
|
||||
|
||||
|
||||
# [/DEF:test_get_git_status_for_dashboard_no_repo:Function]
|
||||
@@ -235,38 +212,4 @@ def test_extract_resource_name_from_task():
|
||||
# [/DEF:test_extract_resource_name_from_task:Function]
|
||||
|
||||
|
||||
# [DEF:test_get_last_task_for_resource_empty_tasks:Function]
|
||||
# @TEST: _get_last_task_for_resource returns None for empty tasks list
|
||||
# @PRE: tasks is empty list
|
||||
# @POST: Returns None
|
||||
def test_get_last_task_for_resource_empty_tasks():
|
||||
from src.services.resource_service import ResourceService
|
||||
|
||||
service = ResourceService()
|
||||
|
||||
result = service._get_last_task_for_resource("dashboard-1", [])
|
||||
assert result is None
|
||||
# [/DEF:test_get_last_task_for_resource_empty_tasks:Function]
|
||||
|
||||
|
||||
# [DEF:test_get_last_task_for_resource_no_match:Function]
|
||||
# @TEST: _get_last_task_for_resource returns None when no tasks match resource_id
|
||||
# @PRE: tasks list has no matching resource_id
|
||||
# @POST: Returns None
|
||||
def test_get_last_task_for_resource_no_match():
|
||||
from src.services.resource_service import ResourceService
|
||||
|
||||
service = ResourceService()
|
||||
|
||||
task = MagicMock()
|
||||
task.id = "task-999"
|
||||
task.status = "SUCCESS"
|
||||
task.params = {"resource_id": "dashboard-99"}
|
||||
task.created_at = datetime(2024, 1, 1, 10, 0, 0)
|
||||
|
||||
result = service._get_last_task_for_resource("dashboard-1", [task])
|
||||
assert result is None
|
||||
# [/DEF:test_get_last_task_for_resource_no_match:Function]
|
||||
|
||||
|
||||
# [/DEF:backend.src.services.__tests__.test_resource_service:Module]
|
||||
# [/DEF:backend.src.services.__tests__.test_resource_service:Module]
|
||||
@@ -13,9 +13,8 @@ import os
import httpx
from git import Repo
from fastapi import HTTPException
from typing import Any, Dict, List, Optional
from typing import List
from datetime import datetime
from urllib.parse import quote, urlparse
from src.core.logger import logger, belief_scope
from src.models.git import GitProvider

@@ -252,21 +251,12 @@ class GitService:
        try:
            current_branch = repo.active_branch
            logger.info(f"[push_changes][Action] Pushing branch {current_branch.name} to origin")
            tracking_branch = None
            try:
                tracking_branch = current_branch.tracking_branch()
            except Exception:
                tracking_branch = None

            # First push for a new branch must set upstream, otherwise a future pull fails.
            if tracking_branch is None:
                repo.git.push("--set-upstream", "origin", f"{current_branch.name}:{current_branch.name}")
            else:
                push_info = origin.push(refspec=f'{current_branch.name}:{current_branch.name}')
                for info in push_info:
                    if info.flags & info.ERROR:
                        logger.error(f"[push_changes][Coherence:Failed] Error pushing ref {info.remote_ref_string}: {info.summary}")
                        raise Exception(f"Git push error for {info.remote_ref_string}: {info.summary}")
            # Using a timeout for network operations
            push_info = origin.push(refspec=f'{current_branch.name}:{current_branch.name}')
            for info in push_info:
                if info.flags & info.ERROR:
                    logger.error(f"[push_changes][Coherence:Failed] Error pushing ref {info.remote_ref_string}: {info.summary}")
                    raise Exception(f"Git push error for {info.remote_ref_string}: {info.summary}")
        except Exception as e:
            logger.error(f"[push_changes][Coherence:Failed] Failed to push changes: {e}")
            raise HTTPException(status_code=500, detail=f"Git push failed: {str(e)}")
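The upstream handling above reduces to a pure decision: use `--set-upstream` only when the branch has no tracking branch yet, so a later `git pull` has a configured counterpart. A minimal sketch of that decision (the helper name `build_push_args` is hypothetical, not part of the service):

```python
def build_push_args(branch: str, has_upstream: bool) -> list:
    """Return git push arguments for a branch, mirroring the diff's logic."""
    if not has_upstream:
        # New branch: publish it and record origin/<branch> as its upstream.
        return ["push", "--set-upstream", "origin", f"{branch}:{branch}"]
    # Tracking branch already configured: a plain refspec push suffices.
    return ["push", "origin", f"{branch}:{branch}"]
```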
@@ -281,17 +271,8 @@ class GitService:
        repo = self.get_repo(dashboard_id)
        try:
            origin = repo.remote(name='origin')
            current_branch = repo.active_branch.name
            remote_ref = f"origin/{current_branch}"
            has_remote_branch = any(ref.name == remote_ref for ref in repo.refs)
            if not has_remote_branch:
                raise HTTPException(
                    status_code=409,
                    detail=f"Remote branch '{current_branch}' does not exist yet. Push this branch first.",
                )

            logger.info(f"[pull_changes][Action] Pulling changes from origin/{current_branch}")
            fetch_info = origin.pull(current_branch)
            logger.info("[pull_changes][Action] Pulling changes from origin")
            fetch_info = origin.pull()
            for info in fetch_info:
                if info.flags & info.ERROR:
                    logger.error(f"[pull_changes][Coherence:Failed] Error pulling ref {info.ref}: {info.note}")
@@ -299,8 +280,6 @@ class GitService:
        except ValueError:
            logger.error(f"[pull_changes][Coherence:Failed] Remote 'origin' not found for dashboard {dashboard_id}")
            raise HTTPException(status_code=400, detail="Remote 'origin' not configured")
        except HTTPException:
            raise
        except Exception as e:
            logger.error(f"[pull_changes][Coherence:Failed] Failed to pull changes: {e}")
            raise HTTPException(status_code=500, detail=f"Git pull failed: {str(e)}")
@@ -323,62 +302,12 @@ class GitService:
        except (ValueError, Exception):
            has_commits = False

        current_branch = repo.active_branch.name
        tracking_branch = None
        has_upstream = False
        ahead_count = 0
        behind_count = 0

        try:
            tracking_branch = repo.active_branch.tracking_branch()
            has_upstream = tracking_branch is not None
        except Exception:
            tracking_branch = None
            has_upstream = False

        if has_upstream and tracking_branch is not None:
            try:
                # Commits present locally but not in upstream.
                ahead_count = sum(
                    1 for _ in repo.iter_commits(f"{tracking_branch.name}..{current_branch}")
                )
                # Commits present in upstream but not local.
                behind_count = sum(
                    1 for _ in repo.iter_commits(f"{current_branch}..{tracking_branch.name}")
                )
            except Exception:
                ahead_count = 0
                behind_count = 0

        is_dirty = repo.is_dirty(untracked_files=True)
        untracked_files = repo.untracked_files
        modified_files = [item.a_path for item in repo.index.diff(None)]
        staged_files = [item.a_path for item in repo.index.diff("HEAD")] if has_commits else []
        is_diverged = ahead_count > 0 and behind_count > 0

        if is_diverged:
            sync_state = "DIVERGED"
        elif behind_count > 0:
            sync_state = "BEHIND_REMOTE"
        elif ahead_count > 0:
            sync_state = "AHEAD_REMOTE"
        elif is_dirty or modified_files or staged_files or untracked_files:
            sync_state = "CHANGES"
        else:
            sync_state = "SYNCED"

        return {
            "is_dirty": is_dirty,
            "untracked_files": untracked_files,
            "modified_files": modified_files,
            "staged_files": staged_files,
            "current_branch": current_branch,
            "upstream_branch": tracking_branch.name if tracking_branch is not None else None,
            "has_upstream": has_upstream,
            "ahead_count": ahead_count,
            "behind_count": behind_count,
            "is_diverged": is_diverged,
            "sync_state": sync_state,
            "is_dirty": repo.is_dirty(untracked_files=True),
            "untracked_files": repo.untracked_files,
            "modified_files": [item.a_path for item in repo.index.diff(None)],
            "staged_files": [item.a_path for item in repo.index.diff("HEAD")] if has_commits else [],
            "current_branch": repo.active_branch.name
        }
    # [/DEF:get_status:Function]
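The sync-state precedence introduced in get_status (diverged beats behind, behind beats ahead, any local changes beat a clean tree) can be isolated as a pure classifier. A sketch under the assumption that only the ahead/behind counters and a dirtiness flag matter (the function name `classify_sync_state` is hypothetical):

```python
def classify_sync_state(ahead: int, behind: int, dirty: bool) -> str:
    """Mirror the precedence order of the sync_state branch in get_status."""
    if ahead > 0 and behind > 0:
        return "DIVERGED"          # both sides have unique commits
    if behind > 0:
        return "BEHIND_REMOTE"     # upstream has commits we lack
    if ahead > 0:
        return "AHEAD_REMOTE"      # we have commits upstream lacks
    if dirty:
        return "CHANGES"           # uncommitted local modifications
    return "SYNCED"
```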

@@ -481,524 +410,5 @@ class GitService:
            return False
    # [/DEF:test_connection:Function]

    # [DEF:_normalize_git_server_url:Function]
    # @PURPOSE: Normalize Git server URL for provider API calls.
    # @PRE: raw_url is non-empty.
    # @POST: Returns URL without trailing slash.
    # @RETURN: str
    def _normalize_git_server_url(self, raw_url: str) -> str:
        normalized = (raw_url or "").strip()
        if not normalized:
            raise HTTPException(status_code=400, detail="Git server URL is required")
        return normalized.rstrip("/")
    # [/DEF:_normalize_git_server_url:Function]

    # [DEF:_gitea_headers:Function]
    # @PURPOSE: Build Gitea API authorization headers.
    # @PRE: pat is provided.
    # @POST: Returns headers with token auth.
    # @RETURN: Dict[str, str]
    def _gitea_headers(self, pat: str) -> Dict[str, str]:
        token = (pat or "").strip()
        if not token:
            raise HTTPException(status_code=400, detail="Git PAT is required for Gitea operations")
        return {
            "Authorization": f"token {token}",
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
    # [/DEF:_gitea_headers:Function]

    # [DEF:_gitea_request:Function]
    # @PURPOSE: Execute HTTP request against Gitea API with stable error mapping.
    # @PRE: method and endpoint are valid.
    # @POST: Returns decoded JSON payload.
    # @RETURN: Any
    async def _gitea_request(
        self,
        method: str,
        server_url: str,
        pat: str,
        endpoint: str,
        payload: Optional[Dict[str, Any]] = None,
    ) -> Any:
        base_url = self._normalize_git_server_url(server_url)
        url = f"{base_url}/api/v1{endpoint}"
        headers = self._gitea_headers(pat)
        try:
            async with httpx.AsyncClient(timeout=20.0) as client:
                response = await client.request(
                    method=method,
                    url=url,
                    headers=headers,
                    json=payload,
                )
        except Exception as e:
            logger.error(f"[gitea_request][Coherence:Failed] Network error: {e}")
            raise HTTPException(status_code=503, detail=f"Gitea API is unavailable: {str(e)}")

        if response.status_code >= 400:
            detail = response.text
            try:
                parsed = response.json()
                detail = parsed.get("message") or parsed.get("error") or detail
            except Exception:
                pass
            logger.error(
                f"[gitea_request][Coherence:Failed] method={method} endpoint={endpoint} status={response.status_code} detail={detail}"
            )
            raise HTTPException(
                status_code=response.status_code,
                detail=f"Gitea API error: {detail}",
            )

        if response.status_code == 204:
            return None
        return response.json()
    # [/DEF:_gitea_request:Function]
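The error mapping in _gitea_request prefers a structured `message`/`error` field from the JSON body and falls back to the raw response text. That fallback chain is pure and easy to check in isolation; a sketch over a plain string body (the helper name `extract_api_error_detail` is hypothetical):

```python
import json

def extract_api_error_detail(body_text: str) -> str:
    """Prefer a structured 'message'/'error' field; fall back to raw text."""
    try:
        parsed = json.loads(body_text)
        if isinstance(parsed, dict):
            return parsed.get("message") or parsed.get("error") or body_text
    except ValueError:
        # Body was not JSON at all: keep the raw text as the detail.
        pass
    return body_text
```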

    # [DEF:get_gitea_current_user:Function]
    # @PURPOSE: Resolve current Gitea user for PAT.
    # @PRE: server_url and pat are valid.
    # @POST: Returns current username.
    # @RETURN: str
    async def get_gitea_current_user(self, server_url: str, pat: str) -> str:
        payload = await self._gitea_request("GET", server_url, pat, "/user")
        username = payload.get("login") or payload.get("username")
        if not username:
            raise HTTPException(status_code=500, detail="Failed to resolve Gitea username")
        return str(username)
    # [/DEF:get_gitea_current_user:Function]

    # [DEF:list_gitea_repositories:Function]
    # @PURPOSE: List repositories visible to authenticated Gitea user.
    # @PRE: server_url and pat are valid.
    # @POST: Returns repository list from Gitea.
    # @RETURN: List[dict]
    async def list_gitea_repositories(self, server_url: str, pat: str) -> List[dict]:
        payload = await self._gitea_request(
            "GET",
            server_url,
            pat,
            "/user/repos?limit=100&page=1",
        )
        if not isinstance(payload, list):
            return []
        return payload
    # [/DEF:list_gitea_repositories:Function]

    # [DEF:create_gitea_repository:Function]
    # @PURPOSE: Create repository in Gitea for authenticated user.
    # @PRE: name is non-empty and PAT has repo creation permission.
    # @POST: Returns created repository payload.
    # @RETURN: dict
    async def create_gitea_repository(
        self,
        server_url: str,
        pat: str,
        name: str,
        private: bool = True,
        description: Optional[str] = None,
        auto_init: bool = True,
        default_branch: Optional[str] = "main",
    ) -> Dict[str, Any]:
        payload = {
            "name": name,
            "private": bool(private),
            "auto_init": bool(auto_init),
        }
        if description:
            payload["description"] = description
        if default_branch:
            payload["default_branch"] = default_branch
        created = await self._gitea_request(
            "POST",
            server_url,
            pat,
            "/user/repos",
            payload=payload,
        )
        if not isinstance(created, dict):
            raise HTTPException(status_code=500, detail="Unexpected Gitea response while creating repository")
        return created
    # [/DEF:create_gitea_repository:Function]

    # [DEF:delete_gitea_repository:Function]
    # @PURPOSE: Delete repository in Gitea.
    # @PRE: owner and repo_name are non-empty.
    # @POST: Repository deleted on Gitea server.
    async def delete_gitea_repository(
        self,
        server_url: str,
        pat: str,
        owner: str,
        repo_name: str,
    ) -> None:
        if not owner or not repo_name:
            raise HTTPException(status_code=400, detail="owner and repo_name are required")
        await self._gitea_request(
            "DELETE",
            server_url,
            pat,
            f"/repos/{owner}/{repo_name}",
        )
    # [/DEF:delete_gitea_repository:Function]

    # [DEF:create_github_repository:Function]
    # @PURPOSE: Create repository in GitHub or GitHub Enterprise.
    # @PRE: PAT has repository create permission.
    # @POST: Returns created repository payload.
    # @RETURN: dict
    async def create_github_repository(
        self,
        server_url: str,
        pat: str,
        name: str,
        private: bool = True,
        description: Optional[str] = None,
        auto_init: bool = True,
        default_branch: Optional[str] = "main",
    ) -> Dict[str, Any]:
        base_url = self._normalize_git_server_url(server_url)
        if "github.com" in base_url:
            api_url = "https://api.github.com/user/repos"
        else:
            api_url = f"{base_url}/api/v3/user/repos"
        headers = {
            "Authorization": f"token {pat.strip()}",
            "Content-Type": "application/json",
            "Accept": "application/vnd.github+json",
        }
        payload: Dict[str, Any] = {
            "name": name,
            "private": bool(private),
            "auto_init": bool(auto_init),
        }
        if description:
            payload["description"] = description
        # GitHub API does not reliably support setting default branch on create without template/import.
        if default_branch:
            payload["default_branch"] = default_branch
        try:
            async with httpx.AsyncClient(timeout=20.0) as client:
                response = await client.post(api_url, headers=headers, json=payload)
        except Exception as e:
            raise HTTPException(status_code=503, detail=f"GitHub API is unavailable: {str(e)}")

        if response.status_code >= 400:
            detail = response.text
            try:
                parsed = response.json()
                detail = parsed.get("message") or detail
            except Exception:
                pass
            raise HTTPException(status_code=response.status_code, detail=f"GitHub API error: {detail}")
        return response.json()
    # [/DEF:create_github_repository:Function]

    # [DEF:create_gitlab_repository:Function]
    # @PURPOSE: Create repository (project) in GitLab.
    # @PRE: PAT has api scope.
    # @POST: Returns created repository payload.
    # @RETURN: dict
    async def create_gitlab_repository(
        self,
        server_url: str,
        pat: str,
        name: str,
        private: bool = True,
        description: Optional[str] = None,
        auto_init: bool = True,
        default_branch: Optional[str] = "main",
    ) -> Dict[str, Any]:
        base_url = self._normalize_git_server_url(server_url)
        api_url = f"{base_url}/api/v4/projects"
        headers = {
            "PRIVATE-TOKEN": pat.strip(),
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
        payload: Dict[str, Any] = {
            "name": name,
            "visibility": "private" if private else "public",
            "initialize_with_readme": bool(auto_init),
        }
        if description:
            payload["description"] = description
        if default_branch:
            payload["default_branch"] = default_branch
        try:
            async with httpx.AsyncClient(timeout=20.0) as client:
                response = await client.post(api_url, headers=headers, json=payload)
        except Exception as e:
            raise HTTPException(status_code=503, detail=f"GitLab API is unavailable: {str(e)}")

        if response.status_code >= 400:
            detail = response.text
            try:
                parsed = response.json()
                if isinstance(parsed, dict):
                    detail = parsed.get("message") or detail
            except Exception:
                pass
            raise HTTPException(status_code=response.status_code, detail=f"GitLab API error: {detail}")

        data = response.json()
        # Normalize clone URL key to keep route response stable.
        if "clone_url" not in data:
            data["clone_url"] = data.get("http_url_to_repo")
        if "html_url" not in data:
            data["html_url"] = data.get("web_url")
        if "ssh_url" not in data:
            data["ssh_url"] = data.get("ssh_url_to_repo")
        if "full_name" not in data:
            data["full_name"] = data.get("path_with_namespace") or data.get("name")
        return data
    # [/DEF:create_gitlab_repository:Function]
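The key normalization at the end of create_gitlab_repository maps GitLab-specific field names (`http_url_to_repo`, `web_url`, `path_with_namespace`) onto the GitHub-style names the routes expect. A non-mutating sketch of the same mapping (the helper name `normalize_gitlab_repo_payload` is hypothetical):

```python
def normalize_gitlab_repo_payload(data: dict) -> dict:
    """Map GitLab payload keys onto the GitHub-style names used by routes."""
    out = dict(data)  # copy so the original payload stays untouched
    out.setdefault("clone_url", data.get("http_url_to_repo"))
    out.setdefault("html_url", data.get("web_url"))
    out.setdefault("ssh_url", data.get("ssh_url_to_repo"))
    out.setdefault("full_name", data.get("path_with_namespace") or data.get("name"))
    return out
```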

    # [DEF:_parse_remote_repo_identity:Function]
    # @PURPOSE: Parse owner/repo from remote URL for Git server API operations.
    # @PRE: remote_url is a valid git URL.
    # @POST: Returns owner/repo tokens.
    # @RETURN: Dict[str, str]
    def _parse_remote_repo_identity(self, remote_url: str) -> Dict[str, str]:
        normalized = str(remote_url or "").strip()
        if not normalized:
            raise HTTPException(status_code=400, detail="Repository remote_url is empty")

        if normalized.startswith("git@"):
            # git@host:owner/repo.git
            path = normalized.split(":", 1)[1] if ":" in normalized else ""
        else:
            parsed = urlparse(normalized)
            path = parsed.path or ""

        path = path.strip("/")
        if path.endswith(".git"):
            path = path[:-4]
        parts = [segment for segment in path.split("/") if segment]
        if len(parts) < 2:
            raise HTTPException(status_code=400, detail=f"Cannot parse repository owner/name from remote URL: {remote_url}")

        owner = parts[0]
        repo = parts[-1]
        namespace = "/".join(parts[:-1])
        return {
            "owner": owner,
            "repo": repo,
            "namespace": namespace,
            "full_name": f"{namespace}/{repo}",
        }
    # [/DEF:_parse_remote_repo_identity:Function]
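The parsing above handles both SSH (`git@host:owner/repo.git`) and HTTP(S) remotes, and keeps intermediate path segments as a namespace so nested GitLab groups survive. A standalone sketch of the same logic, raising ValueError instead of HTTPException since it has no FastAPI context (the function name `parse_remote_identity` is hypothetical):

```python
from urllib.parse import urlparse

def parse_remote_identity(remote_url: str) -> dict:
    """Extract owner/repo/namespace from an SSH or HTTP(S) git remote URL."""
    if remote_url.startswith("git@"):
        # SSH form: everything after the first ':' is the repo path.
        path = remote_url.split(":", 1)[1] if ":" in remote_url else ""
    else:
        path = urlparse(remote_url).path or ""
    path = path.strip("/")
    if path.endswith(".git"):
        path = path[:-4]
    parts = [p for p in path.split("/") if p]
    if len(parts) < 2:
        raise ValueError(f"Cannot parse owner/name from: {remote_url}")
    namespace = "/".join(parts[:-1])  # nested groups stay intact
    return {
        "owner": parts[0],
        "repo": parts[-1],
        "namespace": namespace,
        "full_name": f"{namespace}/{parts[-1]}",
    }
```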

    # [DEF:promote_direct_merge:Function]
    # @PURPOSE: Perform direct merge between branches in local repo and push target branch.
    # @PRE: Repository exists and both branches are valid.
    # @POST: Target branch contains merged changes from source branch.
    # @RETURN: Dict[str, Any]
    def promote_direct_merge(
        self,
        dashboard_id: int,
        from_branch: str,
        to_branch: str,
    ) -> Dict[str, Any]:
        with belief_scope("GitService.promote_direct_merge"):
            if not from_branch or not to_branch:
                raise HTTPException(status_code=400, detail="from_branch and to_branch are required")
            repo = self.get_repo(dashboard_id)
            source = from_branch.strip()
            target = to_branch.strip()
            if source == target:
                raise HTTPException(status_code=400, detail="from_branch and to_branch must be different")

            try:
                origin = repo.remote(name="origin")
            except ValueError:
                raise HTTPException(status_code=400, detail="Remote 'origin' not configured")

            try:
                origin.fetch()
                # Ensure local source branch exists.
                if source not in [head.name for head in repo.heads]:
                    if f"origin/{source}" in [ref.name for ref in repo.refs]:
                        repo.git.checkout("-b", source, f"origin/{source}")
                    else:
                        raise HTTPException(status_code=404, detail=f"Source branch '{source}' not found")

                # Ensure local target branch exists and is checked out.
                if target in [head.name for head in repo.heads]:
                    repo.git.checkout(target)
                elif f"origin/{target}" in [ref.name for ref in repo.refs]:
                    repo.git.checkout("-b", target, f"origin/{target}")
                else:
                    raise HTTPException(status_code=404, detail=f"Target branch '{target}' not found")

                # Bring target up to date and merge source into target.
                try:
                    origin.pull(target)
                except Exception:
                    pass
                repo.git.merge(source, "--no-ff", "-m", f"chore(flow): promote {source} -> {target}")
                origin.push(refspec=f"{target}:{target}")
            except HTTPException:
                raise
            except Exception as e:
                message = str(e)
                if "CONFLICT" in message.upper():
                    raise HTTPException(status_code=409, detail=f"Merge conflict during direct promote: {message}")
                raise HTTPException(status_code=500, detail=f"Direct promote failed: {message}")

            return {
                "mode": "direct",
                "from_branch": source,
                "to_branch": target,
                "status": "merged",
            }
    # [/DEF:promote_direct_merge:Function]

    # [DEF:create_gitea_pull_request:Function]
    # @PURPOSE: Create pull request in Gitea.
    # @PRE: Config and remote URL are valid.
    # @POST: Returns normalized PR metadata.
    # @RETURN: Dict[str, Any]
    async def create_gitea_pull_request(
        self,
        server_url: str,
        pat: str,
        remote_url: str,
        from_branch: str,
        to_branch: str,
        title: str,
        description: Optional[str] = None,
    ) -> Dict[str, Any]:
        identity = self._parse_remote_repo_identity(remote_url)
        payload = {
            "title": title,
            "head": from_branch,
            "base": to_branch,
            "body": description or "",
        }
        data = await self._gitea_request(
            "POST",
            server_url,
            pat,
            f"/repos/{identity['namespace']}/{identity['repo']}/pulls",
            payload=payload,
        )
        return {
            "id": data.get("number") or data.get("id"),
            "url": data.get("html_url") or data.get("url"),
            "status": data.get("state") or "open",
        }
    # [/DEF:create_gitea_pull_request:Function]

    # [DEF:create_github_pull_request:Function]
    # @PURPOSE: Create pull request in GitHub or GitHub Enterprise.
    # @PRE: Config and remote URL are valid.
    # @POST: Returns normalized PR metadata.
    # @RETURN: Dict[str, Any]
    async def create_github_pull_request(
        self,
        server_url: str,
        pat: str,
        remote_url: str,
        from_branch: str,
        to_branch: str,
        title: str,
        description: Optional[str] = None,
        draft: bool = False,
    ) -> Dict[str, Any]:
        identity = self._parse_remote_repo_identity(remote_url)
        base_url = self._normalize_git_server_url(server_url)
        if "github.com" in base_url:
            api_url = f"https://api.github.com/repos/{identity['namespace']}/{identity['repo']}/pulls"
        else:
            api_url = f"{base_url}/api/v3/repos/{identity['namespace']}/{identity['repo']}/pulls"
        headers = {
            "Authorization": f"token {pat.strip()}",
            "Content-Type": "application/json",
            "Accept": "application/vnd.github+json",
        }
        payload = {
            "title": title,
            "head": from_branch,
            "base": to_branch,
            "body": description or "",
            "draft": bool(draft),
        }
        try:
            async with httpx.AsyncClient(timeout=20.0) as client:
                response = await client.post(api_url, headers=headers, json=payload)
        except Exception as e:
            raise HTTPException(status_code=503, detail=f"GitHub API is unavailable: {str(e)}")
        if response.status_code >= 400:
            detail = response.text
            try:
                detail = response.json().get("message") or detail
            except Exception:
                pass
            raise HTTPException(status_code=response.status_code, detail=f"GitHub API error: {detail}")
        data = response.json()
        return {
            "id": data.get("number") or data.get("id"),
            "url": data.get("html_url") or data.get("url"),
            "status": data.get("state") or "open",
        }
    # [/DEF:create_github_pull_request:Function]
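Both PR creators collapse the provider response to the same three-field shape (`id`, `url`, `status`) with `or`-chained fallbacks. That normalization is pure; a sketch of it extracted as a helper (the name `normalize_pr_metadata` is hypothetical, not in the service):

```python
def normalize_pr_metadata(data: dict, default_status: str = "open") -> dict:
    """Collapse a provider PR payload to the stable id/url/status shape."""
    return {
        "id": data.get("number") or data.get("id"),      # Gitea/GitHub use 'number'
        "url": data.get("html_url") or data.get("url"),
        "status": data.get("state") or default_status,
    }
```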

    # [DEF:create_gitlab_merge_request:Function]
    # @PURPOSE: Create merge request in GitLab.
    # @PRE: Config and remote URL are valid.
    # @POST: Returns normalized MR metadata.
    # @RETURN: Dict[str, Any]
    async def create_gitlab_merge_request(
        self,
        server_url: str,
        pat: str,
        remote_url: str,
        from_branch: str,
        to_branch: str,
        title: str,
        description: Optional[str] = None,
        remove_source_branch: bool = False,
    ) -> Dict[str, Any]:
        identity = self._parse_remote_repo_identity(remote_url)
        base_url = self._normalize_git_server_url(server_url)
        project_id = quote(identity["full_name"], safe="")
        api_url = f"{base_url}/api/v4/projects/{project_id}/merge_requests"
        headers = {
            "PRIVATE-TOKEN": pat.strip(),
            "Content-Type": "application/json",
            "Accept": "application/json",
        }
        payload = {
            "source_branch": from_branch,
            "target_branch": to_branch,
            "title": title,
            "description": description or "",
            "remove_source_branch": bool(remove_source_branch),
        }
        try:
            async with httpx.AsyncClient(timeout=20.0) as client:
                response = await client.post(api_url, headers=headers, json=payload)
        except Exception as e:
            raise HTTPException(status_code=503, detail=f"GitLab API is unavailable: {str(e)}")
        if response.status_code >= 400:
            detail = response.text
            try:
                parsed = response.json()
                if isinstance(parsed, dict):
                    detail = parsed.get("message") or detail
            except Exception:
                pass
            raise HTTPException(status_code=response.status_code, detail=f"GitLab API error: {detail}")
        data = response.json()
        return {
            "id": data.get("iid") or data.get("id"),
            "url": data.get("web_url") or data.get("url"),
            "status": data.get("state") or "opened",
        }
    # [/DEF:create_gitlab_merge_request:Function]
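The `quote(identity["full_name"], safe="")` call above matters because GitLab's Projects API accepts a URL-encoded `namespace/project` path as the project id, so the `/` separators must become `%2F`. A sketch of that encoding in isolation (the wrapper name `gitlab_project_id` is hypothetical):

```python
from urllib.parse import quote

def gitlab_project_id(full_name: str) -> str:
    # safe="" forces '/' to be percent-encoded as %2F, which the
    # /api/v4/projects/:id path form requires for namespaced projects.
    return quote(full_name, safe="")
```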

# [/DEF:GitService:Class]
# [/DEF:backend.src.services.git_service:Module]
@@ -18,25 +18,13 @@ import os
# @TIER: CRITICAL
# @PURPOSE: Handles encryption and decryption of sensitive data like API keys.
# @INVARIANT: Uses a secret key from environment or a default one (fallback only for dev).
#
# @TEST_CONTRACT: EncryptionManagerModel ->
# {
#   required_fields: {},
#   invariants: [
#     "encrypted data can be decrypted back to the original string"
#   ]
# }
# @TEST_FIXTURE: basic_encryption_cycle -> {"data": "my_secret_key"}
# @TEST_EDGE: decrypt_invalid_data -> raises Exception
# @TEST_EDGE: empty_string_encryption -> {"data": ""}
# @TEST_INVARIANT: symmetric_encryption -> verifies: [basic_encryption_cycle, empty_string_encryption]
class EncryptionManager:
    # [DEF:EncryptionManager.__init__:Function]
    # @PURPOSE: Initialize the encryption manager with a Fernet key.
    # @PRE: ENCRYPTION_KEY env var must be set or use default dev key.
    # @POST: Fernet instance ready for encryption/decryption.
    def __init__(self):
        self.key = os.getenv("ENCRYPTION_KEY", "REMOVED_HISTORICAL_SECRET_DO_NOT_USE").encode()
        self.key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
        self.fernet = Fernet(self.key)
    # [/DEF:EncryptionManager.__init__:Function]
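The symmetric_encryption invariant in the contract above (encrypt then decrypt returns the original, including the empty string) can be exercised directly with Fernet. A sketch assuming the `cryptography` package is installed, using a freshly generated key rather than the service's ENCRYPTION_KEY:

```python
# Round-trip sketch of the symmetric contract claimed by @TEST_INVARIANT:
# encrypt(data) followed by decrypt must return the original bytes.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in the service, this would come from ENCRYPTION_KEY
fernet = Fernet(key)

token = fernet.encrypt(b"my_secret_key")
assert fernet.decrypt(token) == b"my_secret_key"
# Edge case from @TEST_EDGE empty_string_encryption: empty input also round-trips.
assert fernet.decrypt(fernet.encrypt(b"")) == b""
```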
|
||||
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
# [DEF:backend.tests.test_report_normalizer:Module]
|
||||
# @TIER: STANDARD
|
||||
# @TIER: CRITICAL
|
||||
# @SEMANTICS: tests, reports, normalizer, fallback
|
||||
# @PURPOSE: Validate unknown task type fallback and partial payload normalization behavior.
|
||||
# @LAYER: Domain (Tests)
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
# [DEF:test_report_service:Module]
|
||||
# @TIER: STANDARD
|
||||
# @TIER: CRITICAL
|
||||
# @PURPOSE: Unit tests for ReportsService list/detail operations
|
||||
# @LAYER: Domain
|
||||
# @RELATION: TESTS -> backend.src.services.reports.report_service.ReportsService
|
||||
|
||||
@@ -113,20 +113,6 @@ def extract_error_context(task: Task, report_status: ReportStatus) -> Optional[E
|
||||
# @POST: Returns TaskReport with required fields and deterministic fallback behavior.
|
||||
# @PARAM: task (Task) - Source task.
|
||||
# @RETURN: TaskReport - Canonical normalized report.
|
||||
#
|
||||
# @TEST_CONTRACT: NormalizeTaskReport ->
|
||||
# {
|
||||
# required_fields: {task: Task},
|
||||
# invariants: [
|
||||
# "Returns a valid TaskReport object",
|
||||
# "Maps TaskStatus to ReportStatus deterministically",
|
||||
# "Extracts ErrorContext for FAILED/PARTIAL tasks"
|
||||
# ]
|
||||
# }
|
||||
# @TEST_FIXTURE: valid_task -> {"task": "MockTask(id='1', plugin_id='superset-migration', status=TaskStatus.SUCCESS)"}
|
||||
# @TEST_EDGE: task_with_error -> {"task": "MockTask(status=TaskStatus.FAILED, logs=[LogEntry(level='ERROR', message='Failed')])"}
|
||||
# @TEST_EDGE: unknown_plugin_type -> {"task": "MockTask(plugin_id='unknown-plugin', status=TaskStatus.PENDING)"}
|
||||
# @TEST_INVARIANT: deterministic_normalization -> verifies: [valid_task, task_with_error, unknown_plugin_type]
|
||||
def normalize_task_report(task: Task) -> TaskReport:
|
||||
with belief_scope("normalize_task_report"):
|
||||
task_type = resolve_task_type(task.plugin_id)
|
||||
|
||||
@@ -26,19 +26,6 @@ from .normalizer import normalize_task_report
|
||||
# @PRE: TaskManager dependency is initialized.
|
||||
# @POST: Provides deterministic list/detail report responses.
|
||||
# @INVARIANT: Service methods are read-only over task history source.
|
||||
#
|
||||
# @TEST_CONTRACT: ReportsServiceModel ->
|
||||
# {
|
||||
# required_fields: {task_manager: TaskManager},
|
||||
# invariants: [
|
||||
# "list_reports returns a matching ReportCollection",
|
||||
# "get_report_detail returns a valid ReportDetailView or None"
|
||||
# ]
|
||||
# }
|
||||
# @TEST_FIXTURE: valid_service -> {"task_manager": "MockTaskManager()"}
|
||||
# @TEST_EDGE: empty_task_list -> returns empty ReportCollection
|
||||
# @TEST_EDGE: report_not_found -> get_report_detail returns None
|
||||
# @TEST_INVARIANT: consistent_pagination -> verifies: [valid_service]
|
||||
class ReportsService:
|
||||
# [DEF:__init__:Function]
|
||||
# @TIER: CRITICAL
|
||||
|
||||
@@ -71,17 +71,6 @@ TASK_TYPE_PROFILES: Dict[TaskType, Dict[str, Any]] = {
|
||||
# @POST: Always returns one of TaskType enum values.
|
||||
# @PARAM: plugin_id (Optional[str]) - Source plugin/task identifier from task record.
|
||||
# @RETURN: TaskType - Resolved canonical type or UNKNOWN fallback.
|
||||
#
|
||||
# @TEST_CONTRACT: ResolveTaskType ->
|
||||
# {
|
||||
# required_fields: {plugin_id: str},
|
||||
# invariants: ["returns TaskType.UNKNOWN for missing/unmapped plugin_id"]
|
||||
# }
|
||||
# @TEST_FIXTURE: valid_plugin -> {"plugin_id": "superset-migration"}
|
||||
# @TEST_EDGE: empty_plugin -> {"plugin_id": ""}
|
||||
# @TEST_EDGE: none_plugin -> {"plugin_id": None}
|
||||
# @TEST_EDGE: unknown_plugin -> {"plugin_id": "invalid-plugin"}
|
||||
# @TEST_INVARIANT: fallback_to_unknown -> verifies: [empty_plugin, none_plugin, unknown_plugin]
|
||||
def resolve_task_type(plugin_id: Optional[str]) -> TaskType:
|
||||
with belief_scope("resolve_task_type"):
|
||||
normalized = (plugin_id or "").strip()
|
||||
@@ -97,15 +86,6 @@ def resolve_task_type(plugin_id: Optional[str]) -> TaskType:
|
||||
# @POST: Returns a profile dict and never raises for unknown types.
|
||||
# @PARAM: task_type (TaskType) - Canonical task type.
|
||||
# @RETURN: Dict[str, Any] - Profile metadata used by normalization and UI contracts.
|
||||
#
|
||||
# @TEST_CONTRACT: GetTypeProfile ->
|
||||
# {
|
||||
# required_fields: {task_type: TaskType},
|
||||
# invariants: ["returns a valid metadata dictionary even for UNKNOWN"]
|
||||
# }
|
||||
# @TEST_FIXTURE: valid_profile -> {"task_type": "migration"}
|
||||
# @TEST_EDGE: missing_profile -> {"task_type": "some_new_type"}
|
||||
# @TEST_INVARIANT: always_returns_dict -> verifies: [valid_profile, missing_profile]
|
||||
def get_type_profile(task_type: TaskType) -> Dict[str, Any]:
|
||||
with belief_scope("get_type_profile"):
|
||||
return TASK_TYPE_PROFILES.get(task_type, TASK_TYPE_PROFILES[TaskType.UNKNOWN])
|
||||

@@ -10,7 +10,6 @@

# [SECTION: IMPORTS]
from typing import List, Dict, Optional, Any
from datetime import datetime
from ..core.superset_client import SupersetClient
from ..core.task_manager.models import Task
from ..services.git_service import GitService
@@ -40,12 +39,11 @@ class ResourceService:
    # @RETURN: List[Dict] - Dashboards with git_status and last_task fields
    # @RELATION: CALLS -> SupersetClient.get_dashboards_summary
    # @RELATION: CALLS -> self._get_git_status_for_dashboard
    # @RELATION: CALLS -> self._get_last_llm_task_for_dashboard
    # @RELATION: CALLS -> self._get_last_task_for_resource
    async def get_dashboards_with_status(
        self,
        env: Any,
        tasks: Optional[List[Task]] = None,
        include_git_status: bool = True,
        tasks: Optional[List[Task]] = None
    ) -> List[Dict[str, Any]]:
        with belief_scope("get_dashboards_with_status", f"env={env.id}"):
            client = SupersetClient(env)
@@ -58,18 +56,14 @@ class ResourceService:
            dashboard_dict = dashboard
            dashboard_id = dashboard_dict.get('id')

            # Git status can be skipped for list endpoints and loaded lazily on UI side.
            if include_git_status:
                git_status = self._get_git_status_for_dashboard(dashboard_id)
                dashboard_dict['git_status'] = git_status
            else:
                dashboard_dict['git_status'] = None
            # Get Git status if repo exists
            git_status = self._get_git_status_for_dashboard(dashboard_id)
            dashboard_dict['git_status'] = git_status

            # Show status of the latest LLM validation for this dashboard.
            last_task = self._get_last_llm_task_for_dashboard(
                dashboard_id,
                env.id,
                tasks,
            # Get last task status
            last_task = self._get_last_task_for_resource(
                f"dashboard-{dashboard_id}",
                tasks
            )
            dashboard_dict['last_task'] = last_task

@@ -78,157 +72,6 @@ class ResourceService:
            logger.info(f"[ResourceService][Coherence:OK] Fetched {len(result)} dashboards with status")
            return result
    # [/DEF:get_dashboards_with_status:Function]

    # [DEF:get_dashboards_page_with_status:Function]
    # @PURPOSE: Fetch one dashboard page from environment and enrich only that page with status metadata.
    # @PRE: env is valid; page >= 1; page_size > 0.
    # @POST: Returns page items plus total counters without scanning all pages locally.
    # @PARAM: env (Environment) - Source environment.
    # @PARAM: tasks (Optional[List[Task]]) - Tasks for latest LLM status.
    # @PARAM: page (int) - 1-based page number.
    # @PARAM: page_size (int) - Page size.
    # @RETURN: Dict[str, Any] - {"dashboards": List[Dict], "total": int, "total_pages": int}
    async def get_dashboards_page_with_status(
        self,
        env: Any,
        tasks: Optional[List[Task]] = None,
        page: int = 1,
        page_size: int = 10,
        search: Optional[str] = None,
        include_git_status: bool = True,
    ) -> Dict[str, Any]:
        with belief_scope(
            "get_dashboards_page_with_status",
            f"env={env.id}, page={page}, page_size={page_size}, search={search}",
        ):
            client = SupersetClient(env)
            total, dashboards_page = client.get_dashboards_summary_page(
                page=page,
                page_size=page_size,
                search=search,
            )

            result = []
            for dashboard in dashboards_page:
                dashboard_dict = dashboard
                dashboard_id = dashboard_dict.get("id")

                if include_git_status:
                    dashboard_dict["git_status"] = self._get_git_status_for_dashboard(dashboard_id)
                else:
                    dashboard_dict["git_status"] = None

                dashboard_dict["last_task"] = self._get_last_llm_task_for_dashboard(
                    dashboard_id,
                    env.id,
                    tasks,
                )
                result.append(dashboard_dict)

            total_pages = (total + page_size - 1) // page_size if total > 0 else 1
            logger.info(
                "[ResourceService][Coherence:OK] Fetched dashboards page %s/%s (%s items, total=%s)",
                page,
                total_pages,
                len(result),
                total,
            )
            return {
                "dashboards": result,
                "total": total,
                "total_pages": total_pages,
            }
    # [/DEF:get_dashboards_page_with_status:Function]
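The `total_pages` expression above is plain integer ceiling division with a floor of one page; isolated for clarity:

```python
def total_pages(total: int, page_size: int) -> int:
    # (total + page_size - 1) // page_size rounds up without math.ceil;
    # an empty result set still reports a single (empty) page.
    return (total + page_size - 1) // page_size if total > 0 else 1
```

Reporting `1` rather than `0` for an empty set keeps paginated UI controls from rendering "page 1 of 0".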

    # [DEF:_get_last_llm_task_for_dashboard:Function]
    # @PURPOSE: Get most recent LLM validation task for a dashboard in an environment
    # @PRE: dashboard_id is a valid integer identifier
    # @POST: Returns the newest llm_dashboard_validation task summary or None
    # @PARAM: dashboard_id (int) - The dashboard ID
    # @PARAM: env_id (Optional[str]) - Environment ID to match task params
    # @PARAM: tasks (Optional[List[Task]]) - List of tasks to search
    # @RETURN: Optional[Dict] - Task summary with task_id and status
    def _get_last_llm_task_for_dashboard(
        self,
        dashboard_id: int,
        env_id: Optional[str],
        tasks: Optional[List[Task]] = None,
    ) -> Optional[Dict[str, Any]]:
        if not tasks:
            return None

        dashboard_id_str = str(dashboard_id)
        matched_tasks = []

        for task in tasks:
            if getattr(task, "plugin_id", None) != "llm_dashboard_validation":
                continue

            params = getattr(task, "params", {}) or {}
            if str(params.get("dashboard_id")) != dashboard_id_str:
                continue

            if env_id is not None:
                task_env = params.get("environment_id") or params.get("env")
                if str(task_env) != str(env_id):
                    continue

            matched_tasks.append(task)

        if not matched_tasks:
            return None

        def _task_time(task_obj: Any) -> datetime:
            return (
                getattr(task_obj, "started_at", None)
                or getattr(task_obj, "finished_at", None)
                or getattr(task_obj, "created_at", None)
                or datetime.min
            )

        last_task = max(matched_tasks, key=_task_time)
        raw_result = getattr(last_task, "result", None)
        validation_status = None
        if isinstance(raw_result, dict):
            validation_status = self._normalize_validation_status(raw_result.get("status"))

        return {
            "task_id": str(getattr(last_task, "id", "")),
            "status": self._normalize_task_status(getattr(last_task, "status", "")),
            "validation_status": validation_status,
        }
    # [/DEF:_get_last_llm_task_for_dashboard:Function]

    # [DEF:_normalize_task_status:Function]
    # @PURPOSE: Normalize task status to stable uppercase values for UI/API projections
    # @PRE: raw_status can be enum or string
    # @POST: Returns uppercase status without enum class prefix
    # @PARAM: raw_status (Any) - Raw task status object/value
    # @RETURN: str - Normalized status token
    def _normalize_task_status(self, raw_status: Any) -> str:
        if raw_status is None:
            return ""
        value = getattr(raw_status, "value", raw_status)
        status_text = str(value).strip()
        if "." in status_text:
            status_text = status_text.split(".")[-1]
        return status_text.upper()
    # [/DEF:_normalize_task_status:Function]
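The enum/string duality handled above, as a runnable sketch (`Status` is an illustrative enum, not the project's real status type):

```python
from enum import Enum
from typing import Any

class Status(Enum):
    SUCCESS = "success"

def normalize_task_status(raw_status: Any) -> str:
    if raw_status is None:
        return ""
    # Enums expose .value; plain strings pass through unchanged.
    value = getattr(raw_status, "value", raw_status)
    status_text = str(value).strip()
    # Guards against str(enum) leaking through as "Status.SUCCESS"
    # when a value slips in without a .value attribute path.
    if "." in status_text:
        status_text = status_text.split(".")[-1]
    return status_text.upper()
```

So `Status.SUCCESS`, `"Status.SUCCESS"`, and `" success "` all project to the same `"SUCCESS"` token for the UI.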

    # [DEF:_normalize_validation_status:Function]
    # @PURPOSE: Normalize LLM validation status to PASS/FAIL/WARN/UNKNOWN
    # @PRE: raw_status can be any scalar type
    # @POST: Returns normalized validation status token or None
    # @PARAM: raw_status (Any) - Raw validation status from task result
    # @RETURN: Optional[str] - PASS|FAIL|WARN|UNKNOWN
    def _normalize_validation_status(self, raw_status: Any) -> Optional[str]:
        if raw_status is None:
            return None
        status_text = str(raw_status).strip().upper()
        if status_text in {"PASS", "FAIL", "WARN"}:
            return status_text
        return "UNKNOWN"
    # [/DEF:_normalize_validation_status:Function]
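The whitelist-with-UNKNOWN-fallback above distinguishes "no verdict" (`None`) from "unrecognized verdict" (`"UNKNOWN"`); a compact sketch:

```python
from typing import Any, Optional

def normalize_validation_status(raw_status: Any) -> Optional[str]:
    # None means the task produced no verdict at all; anything else
    # collapses to PASS/FAIL/WARN or the catch-all UNKNOWN.
    if raw_status is None:
        return None
    status_text = str(raw_status).strip().upper()
    return status_text if status_text in {"PASS", "FAIL", "WARN"} else "UNKNOWN"
```

Routing through `str()` first means even non-string scalars (ints, enums) degrade safely to `"UNKNOWN"` instead of raising.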

# [DEF:get_datasets_with_status:Function]
# @PURPOSE: Fetch datasets from environment with mapping progress and last task status

BIN  backend/tasks.db  (new file; binary file not shown)
76  backend/test_auth_debug.py  (new file)
@@ -0,0 +1,76 @@
#!/usr/bin/env python3
"""Debug script to test Superset API authentication"""

from pprint import pprint
from src.core.superset_client import SupersetClient
from src.core.config_manager import ConfigManager


def main():
    print("Debugging Superset API authentication...")

    config = ConfigManager()

    # Select first available environment
    environments = config.get_environments()

    if not environments:
        print("No environments configured")
        return

    env = environments[0]
    print(f"\nTesting environment: {env.name}")
    print(f"URL: {env.url}")

    try:
        # Test API client authentication
        print("\n--- Testing API Authentication ---")
        client = SupersetClient(env)
        tokens = client.authenticate()

        print("\nAPI Auth Success!")
        print(f"Access Token: {tokens.get('access_token', 'N/A')}")
        print(f"CSRF Token: {tokens.get('csrf_token', 'N/A')}")

        # Debug cookies from session
        print("\n--- Session Cookies ---")
        for cookie in client.network.session.cookies:
            print(f"{cookie.name}={cookie.value}")

        # Test accessing UI via requests
        print("\n--- Testing UI Access ---")
        ui_url = env.url.rstrip('/').replace('/api/v1', '')
        print(f"UI URL: {ui_url}")

        # Try to access UI home page
        ui_response = client.network.session.get(ui_url, timeout=30, allow_redirects=True)
        print(f"Status Code: {ui_response.status_code}")
        print(f"URL: {ui_response.url}")

        # Check response headers
        print("\n--- Response Headers ---")
        pprint(dict(ui_response.headers))

        print("\n--- Response Content Preview (200 chars) ---")
        print(repr(ui_response.text[:200]))

        if ui_response.status_code == 200:
            print("\nUI Access: Success")

        # Try to access a dashboard
        # For testing, just use the home page
        print("\n--- Checking if login is required ---")
        if "login" in ui_response.url.lower() or "login" in ui_response.text.lower():
            print("❌ Not logged in to UI")
        else:
            print("✅ Logged in to UI")

    except Exception as e:
        print(f"\n❌ Error: {type(e).__name__}: {e}")
        import traceback
        print("\nStack Trace:")
        print(traceback.format_exc())


if __name__ == "__main__":
    main()
44  backend/test_decryption.py  (new file)
@@ -0,0 +1,44 @@
#!/usr/bin/env python3
"""Test script to debug API key decryption issue."""

from src.core.database import SessionLocal
from src.models.llm import LLMProvider
from cryptography.fernet import Fernet
import os

# Get the encryption key
key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
print(f"Encryption key (first 20 chars): {key[:20]}")
print(f"Encryption key length: {len(key)}")

# Create Fernet instance
fernet = Fernet(key)

# Get provider from database
db = SessionLocal()
provider = db.query(LLMProvider).filter(LLMProvider.id == '6c899741-4108-4196-aea4-f38ad2f0150e').first()

if provider:
    print("\nProvider found:")
    print(f"  ID: {provider.id}")
    print(f"  Name: {provider.name}")
    print(f"  Encrypted API Key (first 50 chars): {provider.api_key[:50]}")
    print(f"  Encrypted API Key Length: {len(provider.api_key)}")

    # Test decryption
    print("\nAttempting decryption...")
    try:
        decrypted = fernet.decrypt(provider.api_key.encode()).decode()
        print("Decryption successful!")
        print(f"  Decrypted key length: {len(decrypted)}")
        print(f"  Decrypted key (first 8 chars): {decrypted[:8]}")
        print(f"  Decrypted key is empty: {len(decrypted) == 0}")
    except Exception as e:
        print(f"Decryption failed with error: {e}")
        print(f"Error type: {type(e).__name__}")
        import traceback
        traceback.print_exc()
else:
    print("Provider not found")

db.close()
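A Fernet key, like the `ENCRYPTION_KEY` default above, is 32 random bytes in url-safe base64 (44 ASCII characters once padded); that is exactly the shape `Fernet.generate_key()` produces. A stdlib-only sketch of that key shape:

```python
import base64
import os

# Same shape Fernet.generate_key() yields: 32 random bytes,
# url-safe base64 encoded, 44 characters including '=' padding.
key = base64.urlsafe_b64encode(os.urandom(32))

decoded = base64.urlsafe_b64decode(key)
```

This is why a truncated or re-encoded `ENCRYPTION_KEY` makes `Fernet(key)` fail at construction time before any decryption is attempted.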
1  backend/test_encryption.py  (new file)
@@ -0,0 +1 @@
from cryptography.fernet import Fernet
import os

# Get the encryption key
key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
print(f"Encryption key (first 20 chars): {key[:20]}")

# Create Fernet instance
fernet = Fernet(key)

# Test encrypting an empty string
empty_encrypted = fernet.encrypt(b"").decode()
print(f"\nEncrypted empty string: {empty_encrypted}")

test_key = "test-api-key-12345"
test_encrypted = fernet.encrypt(test_key.encode()).decode()
print(f"\nEncrypted test key: {test_encrypted}")

stored_key = "gAAAAABphhwSZie0OwXjJ78Fk-c4Uo6doNJXipX49AX7Bypzp4ohiRX3hXPXKb45R1vhNUOqbm6Ke3-eRwu_KdWMZ9chFBKmqw=="
print(f"\nStored encrypted key: {stored_key} ({len(stored_key)})")

# Check if stored key matches empty string encryption
if stored_key == empty_encrypted:
    print("Stored key matches the encrypted empty string!")
else:
    print("Stored key does not match the encrypted empty string")
    print(f"Empty string encryption: {empty_encrypted}")
    print(stored_key)

# Try to decrypt the stored key
try:
    decrypted = fernet.decrypt(stored_key.encode()).decode()
    print(f"Decrypted key length: {len(decrypted)}")
except Exception as e:
    print(f"\nDecryption failed with error: {e}")
@@ -1,62 +0,0 @@
# [DEF:backend.tests.core.migration.test_archive_parser:Module]
#
# @TIER: STANDARD
# @PURPOSE: Unit tests for MigrationArchiveParser ZIP extraction contract.
# @LAYER: Domain
# @RELATION: VERIFIES -> backend.src.core.migration.archive_parser
#
import os
import sys
import tempfile
import zipfile
from pathlib import Path

import yaml

backend_dir = str(Path(__file__).parent.parent.parent.parent.resolve())
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

from src.core.migration.archive_parser import MigrationArchiveParser


def test_extract_objects_from_zip_collects_all_types():
    parser = MigrationArchiveParser()
    with tempfile.TemporaryDirectory() as td:
        td_path = Path(td)
        zip_path = td_path / "objects.zip"
        src_dir = td_path / "src"
        (src_dir / "dashboards").mkdir(parents=True)
        (src_dir / "charts").mkdir(parents=True)
        (src_dir / "datasets").mkdir(parents=True)

        with open(src_dir / "dashboards" / "dash.yaml", "w") as file_obj:
            yaml.dump({"uuid": "dash-u1", "dashboard_title": "D1", "json_metadata": "{}"}, file_obj)
        with open(src_dir / "charts" / "chart.yaml", "w") as file_obj:
            yaml.dump({"uuid": "chart-u1", "slice_name": "C1", "viz_type": "bar"}, file_obj)
        with open(src_dir / "datasets" / "dataset.yaml", "w") as file_obj:
            yaml.dump({"uuid": "ds-u1", "table_name": "orders", "database_uuid": "db-u1"}, file_obj)

        with zipfile.ZipFile(zip_path, "w") as zip_obj:
            for root, _, files in os.walk(src_dir):
                for file_name in files:
                    file_path = Path(root) / file_name
                    zip_obj.write(file_path, file_path.relative_to(src_dir))

        extracted = parser.extract_objects_from_zip(str(zip_path))

        if len(extracted["dashboards"]) != 1:
            raise AssertionError("dashboards extraction size mismatch")
        if extracted["dashboards"][0]["uuid"] != "dash-u1":
            raise AssertionError("dashboard uuid mismatch")
        if len(extracted["charts"]) != 1:
            raise AssertionError("charts extraction size mismatch")
        if extracted["charts"][0]["uuid"] != "chart-u1":
            raise AssertionError("chart uuid mismatch")
        if len(extracted["datasets"]) != 1:
            raise AssertionError("datasets extraction size mismatch")
        if extracted["datasets"][0]["uuid"] != "ds-u1":
            raise AssertionError("dataset uuid mismatch")


# [/DEF:backend.tests.core.migration.test_archive_parser:Module]
@@ -1,110 +0,0 @@
# [DEF:backend.tests.core.migration.test_dry_run_orchestrator:Module]
#
# @TIER: STANDARD
# @PURPOSE: Unit tests for MigrationDryRunService diff and risk computation contracts.
# @LAYER: Domain
# @RELATION: VERIFIES -> backend.src.core.migration.dry_run_orchestrator
#
import json
import sys
from pathlib import Path
from unittest.mock import MagicMock, patch

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy.pool import StaticPool

backend_dir = str(Path(__file__).parent.parent.parent.parent.resolve())
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

from src.core.migration.dry_run_orchestrator import MigrationDryRunService
from src.models.dashboard import DashboardSelection
from src.models.mapping import Base


def _load_fixture() -> dict:
    fixture_path = Path(__file__).parents[2] / "fixtures" / "migration_dry_run_fixture.json"
    return json.loads(fixture_path.read_text())


def _make_session():
    engine = create_engine(
        "sqlite:///:memory:",
        connect_args={"check_same_thread": False},
        poolclass=StaticPool,
    )
    Base.metadata.create_all(engine)
    Session = sessionmaker(bind=engine)
    return Session()


def test_migration_dry_run_service_builds_diff_and_risk():
    # @TEST_CONTRACT: dry_run_result_contract -> {
    #   required_fields: {diff: object, summary: object, risk: object},
    #   invariants: ["risk.score >= 0", "summary.selected_dashboards == len(selection.selected_ids)"]
    # }
    # @TEST_FIXTURE: migration_dry_run_fixture -> backend/tests/fixtures/migration_dry_run_fixture.json
    # @TEST_EDGE: missing_target_datasource -> fixture.transformed_zip_objects.datasets[0].database_uuid
    # @TEST_EDGE: breaking_reference -> fixture.transformed_zip_objects.charts[0].dataset_uuid
    fixture = _load_fixture()
    db = _make_session()
    selection = DashboardSelection(
        selected_ids=[42],
        source_env_id="src",
        target_env_id="tgt",
        replace_db_config=False,
        fix_cross_filters=True,
    )

    source_client = MagicMock()
    source_client.get_dashboards_summary.return_value = fixture["source_dashboard_summary"]
    source_client.export_dashboard.return_value = (b"PK\x03\x04", "source.zip")

    target_client = MagicMock()
    target_client.get_dashboards.return_value = (
        len(fixture["target"]["dashboards"]),
        fixture["target"]["dashboards"],
    )
    target_client.get_datasets.return_value = (
        len(fixture["target"]["datasets"]),
        fixture["target"]["datasets"],
    )
    target_client.get_charts.return_value = (
        len(fixture["target"]["charts"]),
        fixture["target"]["charts"],
    )
    target_client.get_databases.return_value = (
        len(fixture["target"]["databases"]),
        fixture["target"]["databases"],
    )

    parser = MagicMock()
    parser.extract_objects_from_zip.return_value = fixture["transformed_zip_objects"]
    service = MigrationDryRunService(parser=parser)

    with patch("src.core.migration.dry_run_orchestrator.MigrationEngine") as EngineMock:
        engine = MagicMock()
        engine.transform_zip.return_value = True
        EngineMock.return_value = engine
        result = service.run(selection, source_client, target_client, db)

    if "summary" not in result:
        raise AssertionError("summary is missing in dry-run payload")
    if result["summary"]["selected_dashboards"] != 1:
        raise AssertionError("selected_dashboards summary mismatch")
    if result["summary"]["dashboards"]["update"] != 1:
        raise AssertionError("dashboard update count mismatch")
    if result["summary"]["charts"]["create"] != 1:
        raise AssertionError("chart create count mismatch")
    if result["summary"]["datasets"]["create"] != 1:
        raise AssertionError("dataset create count mismatch")

    risk_codes = {item["code"] for item in result["risk"]["items"]}
    if "missing_datasource" not in risk_codes:
        raise AssertionError("missing_datasource risk is not detected")
    if "breaking_reference" not in risk_codes:
        raise AssertionError("breaking_reference risk is not detected")


# [/DEF:backend.tests.core.migration.test_dry_run_orchestrator:Module]
@@ -1,58 +0,0 @@
{
  "source_dashboard_summary": [
    {
      "id": 42,
      "title": "Sales"
    }
  ],
  "target": {
    "dashboards": [
      {
        "uuid": "dash-1",
        "dashboard_title": "Sales Old",
        "slug": "sales-old",
        "position_json": "{}",
        "json_metadata": "{}",
        "description": "",
        "owners": [
          {
            "username": "owner-a"
          }
        ]
      }
    ],
    "datasets": [],
    "charts": [],
    "databases": []
  },
  "transformed_zip_objects": {
    "dashboards": [
      {
        "uuid": "dash-1",
        "title": "Sales New",
        "signature": "{\"title\":\"Sales New\"}",
        "owners": [
          {
            "username": "owner-b"
          }
        ]
      }
    ],
    "charts": [
      {
        "uuid": "chart-1",
        "title": "Chart A",
        "signature": "{\"title\":\"Chart A\"}",
        "dataset_uuid": "dataset-404"
      }
    ],
    "datasets": [
      {
        "uuid": "dataset-1",
        "title": "orders",
        "signature": "{\"title\":\"orders\"}",
        "database_uuid": "db-missing"
      }
    ]
  }
}
@@ -1,366 +1,73 @@
|
||||
# [DEF:backend.tests.test_dashboards_api:Module]
|
||||
# @TIER: STANDARD
|
||||
# @PURPOSE: Comprehensive contract-driven tests for Dashboard Hub API
|
||||
# @PURPOSE: Contract-driven tests for Dashboard Hub API
|
||||
# @LAYER: Domain (Tests)
|
||||
# @SEMANTICS: tests, dashboards, api, contract, remediation
|
||||
import pytest
|
||||
# @SEMANTICS: tests, dashboards, api, contract
|
||||
# @RELATION: TESTS -> backend.src.api.routes.dashboards
|
||||
|
||||
from fastapi.testclient import TestClient
|
||||
from unittest.mock import MagicMock, patch, AsyncMock
|
||||
from datetime import datetime, timezone
|
||||
from unittest.mock import MagicMock, patch
|
||||
from src.app import app
|
||||
from src.api.routes.dashboards import DashboardsResponse, DashboardDetailResponse, DashboardTaskHistoryResponse, DatabaseMappingsResponse
|
||||
from src.dependencies import get_current_user, has_permission, get_config_manager, get_task_manager, get_resource_service, get_mapping_service
|
||||
from src.api.routes.dashboards import DashboardsResponse
|
||||
|
||||
# Global mock user
|
||||
mock_user = MagicMock()
|
||||
mock_user.username = "testuser"
|
||||
mock_user.roles = []
|
||||
admin_role = MagicMock()
|
||||
admin_role.name = "Admin"
|
||||
mock_user.roles.append(admin_role)
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def mock_deps():
|
||||
config_manager = MagicMock()
|
||||
task_manager = MagicMock()
|
||||
resource_service = MagicMock()
|
||||
mapping_service = MagicMock()
|
||||
|
||||
app.dependency_overrides[get_config_manager] = lambda: config_manager
|
||||
app.dependency_overrides[get_task_manager] = lambda: task_manager
|
||||
app.dependency_overrides[get_resource_service] = lambda: resource_service
|
||||
app.dependency_overrides[get_mapping_service] = lambda: mapping_service
|
||||
app.dependency_overrides[get_current_user] = lambda: mock_user
|
||||
|
||||
# Overrides for specific permission checks
|
||||
app.dependency_overrides[has_permission("plugin:migration", "READ")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("plugin:migration", "EXECUTE")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("plugin:backup", "EXECUTE")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("tasks", "READ")] = lambda: mock_user
|
||||
app.dependency_overrides[has_permission("dashboards", "READ")] = lambda: mock_user
|
||||
|
||||
yield {
|
||||
"config": config_manager,
|
||||
"task": task_manager,
|
||||
"resource": resource_service,
|
||||
"mapping": mapping_service
|
||||
}
|
||||
app.dependency_overrides.clear()
|
||||
|
||||
client = TestClient(app)
|
||||
|
||||
# --- 1. get_dashboards tests ---
|
||||
|
||||
def test_get_dashboards_success(mock_deps):
|
||||
"""Uses @TEST_FIXTURE: dashboard_list_happy data."""
|
||||
mock_env = MagicMock()
|
||||
mock_env.id = "prod"
|
||||
mock_deps["config"].get_environments.return_value = [mock_env]
|
||||
mock_deps["task"].get_all_tasks.return_value = []
|
||||
|
||||
# @TEST_FIXTURE: dashboard_list_happy -> {"id": 1, "title": "Main Revenue"}
|
||||
mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
|
||||
{"id": 1, "title": "Main Revenue", "slug": "main-revenue", "git_status": {"branch": "main", "sync_status": "OK"}}
|
||||
])
|
||||
|
||||
response = client.get("/api/dashboards?env_id=prod&page=1&page_size=10")
|
||||
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
|
||||
# exhaustive @POST assertions
|
||||
assert "dashboards" in data
|
||||
assert len(data["dashboards"]) == 1 # @TEST_FIXTURE: expected_count: 1
|
||||
assert data["dashboards"][0]["title"] == "Main Revenue"
|
||||
assert data["total"] == 1
|
||||
assert data["page"] == 1
|
||||
assert data["page_size"] == 10
|
||||
assert data["total_pages"] == 1
|
||||
|
||||
# schema validation
|
||||
DashboardsResponse(**data)
|
||||
|
||||
def test_get_dashboards_with_search(mock_deps):
|
||||
mock_env = MagicMock()
|
||||
mock_env.id = "prod"
|
||||
mock_deps["config"].get_environments.return_value = [mock_env]
|
||||
mock_deps["task"].get_all_tasks.return_value = []
|
||||
mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
|
||||
{"id": 1, "title": "Sales Report", "slug": "sales"},
|
||||
{"id": 2, "title": "Marketing", "slug": "marketing"}
|
||||
])
|
||||
# [DEF:test_get_dashboards_success:Function]
|
||||
# @TEST: GET /api/dashboards returns 200 and valid schema
|
||||
# @PRE: env_id exists
|
||||
# @POST: Response matches DashboardsResponse schema
|
||||
def test_get_dashboards_success():
|
||||
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
|
||||
patch("src.api.routes.dashboards.get_resource_service") as mock_service, \
|
||||
patch("src.api.routes.dashboards.has_permission") as mock_perm:
|
||||
|
||||
response = client.get("/api/dashboards?env_id=prod&search=sales")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert len(data["dashboards"]) == 1
|
||||
assert data["dashboards"][0]["title"] == "Sales Report"
|
||||
|
||||
def test_get_dashboards_empty(mock_deps):
|
||||
"""@TEST_EDGE: empty_dashboards -> {env_id: 'empty_env', expected_total: 0}"""
|
||||
mock_env = MagicMock()
|
||||
mock_env.id = "empty_env"
|
||||
mock_deps["config"].get_environments.return_value = [mock_env]
|
||||
mock_deps["task"].get_all_tasks.return_value = []
|
||||
mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[])
|
||||
|
||||
response = client.get("/api/dashboards?env_id=empty_env")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert data["total"] == 0
|
||||
assert len(data["dashboards"]) == 0
|
||||
assert data["total_pages"] == 1
|
||||
DashboardsResponse(**data)
|
||||
|
||||
def test_get_dashboards_superset_failure(mock_deps):
|
||||
"""@TEST_EDGE: external_superset_failure -> {env_id: 'bad_conn', status: 503}"""
|
||||
mock_env = MagicMock()
|
||||
mock_env.id = "bad_conn"
|
||||
mock_deps["config"].get_environments.return_value = [mock_env]
|
||||
mock_deps["task"].get_all_tasks.return_value = []
|
||||
mock_deps["resource"].get_dashboards_with_status = AsyncMock(
|
||||
side_effect=Exception("Connection refused")
|
||||
)
|
||||
|
||||
response = client.get("/api/dashboards?env_id=bad_conn")
|
||||
assert response.status_code == 503
|
||||
assert "Failed to fetch dashboards" in response.json()["detail"]
|
||||
|
||||
def test_get_dashboards_env_not_found(mock_deps):
|
||||
mock_deps["config"].get_environments.return_value = []
|
||||
response = client.get("/api/dashboards?env_id=nonexistent")
|
||||
assert response.status_code == 404
|
||||
assert "Environment not found" in response.json()["detail"]
|
||||
|
||||
def test_get_dashboards_invalid_pagination(mock_deps):
|
||||
mock_env = MagicMock()
|
||||
mock_env.id = "prod"
|
||||
mock_deps["config"].get_environments.return_value = [mock_env]
|
||||
|
||||
# page < 1
|
||||
assert client.get("/api/dashboards?env_id=prod&page=0").status_code == 400
|
||||
assert client.get("/api/dashboards?env_id=prod&page=-1").status_code == 400
|
||||
|
||||
# page_size < 1
|
||||
assert client.get("/api/dashboards?env_id=prod&page_size=0").status_code == 400
|
||||
|
||||
# page_size > 100
|
||||
assert client.get("/api/dashboards?env_id=prod&page_size=101").status_code == 400
|
||||
|
||||
# --- 2. get_database_mappings tests ---
|
||||
|
||||
def test_get_database_mappings_success(mock_deps):
|
||||
mock_s = MagicMock(); mock_s.id = "s"
|
||||
mock_t = MagicMock(); mock_t.id = "t"
|
||||
mock_deps["config"].get_environments.return_value = [mock_s, mock_t]
|
||||
|
||||
mock_deps["mapping"].get_suggestions = AsyncMock(return_value=[
|
||||
{"source_db": "src", "target_db": "dst", "confidence": 0.9}
|
||||
])
|
||||
response = client.get("/api/dashboards/db-mappings?source_env_id=s&target_env_id=t")
|
||||
assert response.status_code == 200
|
||||
data = response.json()
|
||||
assert len(data["mappings"]) == 1
|
||||
assert data["mappings"][0]["confidence"] == 0.9
|
||||
DatabaseMappingsResponse(**data)
|
||||
|
||||
def test_get_database_mappings_env_not_found(mock_deps):
|
||||
    mock_deps["config"].get_environments.return_value = []

    response = client.get("/api/dashboards/db-mappings?source_env_id=ghost&target_env_id=t")

    assert response.status_code == 404


# --- 3. get_dashboard_detail tests ---

def test_get_dashboard_detail_success(mock_deps):
    with patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
        # Mock environment
        mock_env = MagicMock()
        mock_env.id = "prod"
        mock_deps["config"].get_environments.return_value = [mock_env]

        mock_client = MagicMock()
        detail_payload = {
            "id": 42, "title": "Detail", "charts": [], "datasets": [],
            "chart_count": 0, "dataset_count": 0
        }
        mock_client.get_dashboard_detail.return_value = detail_payload
        mock_client_cls.return_value = mock_client

        response = client.get("/api/dashboards/42?env_id=prod")

        assert response.status_code == 200
        data = response.json()
        assert data["id"] == 42
        # Validate against Pydantic model
        DashboardDetailResponse(**data)


def test_get_dashboard_detail_env_not_found(mock_deps):
    mock_deps["config"].get_environments.return_value = []
    response = client.get("/api/dashboards/42?env_id=missing")
    assert response.status_code == 404
# --- 4. get_dashboard_tasks_history tests ---

def test_get_dashboard_tasks_history_success(mock_deps):
    now = datetime.now(timezone.utc)
    task1 = MagicMock(id="t1", plugin_id="superset-backup", status="SUCCESS", started_at=now, finished_at=None, params={"env": "prod", "dashboards": [42]}, result={})
    mock_deps["task"].get_all_tasks.return_value = [task1]

    response = client.get("/api/dashboards/42/tasks?env_id=prod")
    assert response.status_code == 200
    data = response.json()
    assert data["dashboard_id"] == 42
    assert len(data["items"]) == 1
    DashboardTaskHistoryResponse(**data)


def test_get_dashboard_tasks_history_sorting(mock_deps):
    """@POST: Response contains sorted task history (newest first)."""
    from datetime import timedelta
    now = datetime.now(timezone.utc)
    older = now - timedelta(hours=2)
    newest = now

    task_old = MagicMock(id="t-old", plugin_id="superset-backup", status="SUCCESS",
                         started_at=older, finished_at=None,
                         params={"env": "prod", "dashboards": [42]}, result={})
    task_new = MagicMock(id="t-new", plugin_id="superset-backup", status="RUNNING",
                         started_at=newest, finished_at=None,
                         params={"env": "prod", "dashboards": [42]}, result={})

    # Provide in wrong order to verify the endpoint sorts
    mock_deps["task"].get_all_tasks.return_value = [task_old, task_new]

    response = client.get("/api/dashboards/42/tasks?env_id=prod")
    assert response.status_code == 200
    data = response.json()
    assert len(data["items"]) == 2
    # Newest first
    assert data["items"][0]["id"] == "t-new"
    assert data["items"][1]["id"] == "t-old"

# --- 5. get_dashboard_thumbnail tests ---

def test_get_dashboard_thumbnail_success(mock_deps):
    with patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
        mock_env = MagicMock(); mock_env.id = "prod"
        mock_deps["config"].get_environments.return_value = [mock_env]
        mock_client = MagicMock()
        mock_response = MagicMock(status_code=200, content=b"img", headers={"Content-Type": "image/png"})
        mock_client.network.request.side_effect = lambda method, endpoint, **kw: {"image_url": "url"} if method == "POST" else mock_response
        mock_client_cls.return_value = mock_client

        response = client.get("/api/dashboards/42/thumbnail?env_id=prod")
        assert response.status_code == 200
        assert response.content == b"img"


def test_get_dashboard_thumbnail_env_not_found(mock_deps):
    mock_deps["config"].get_environments.return_value = []
    response = client.get("/api/dashboards/42/thumbnail?env_id=missing")
    assert response.status_code == 404

def test_get_dashboard_thumbnail_202(mock_deps):
    """@POST: Returns 202 when thumbnail is being prepared by Superset."""
    with patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
        mock_env = MagicMock(); mock_env.id = "prod"
        mock_deps["config"].get_environments.return_value = [mock_env]
        mock_client = MagicMock()
        # POST cache_dashboard_screenshot returns image_url
        mock_client.network.request.side_effect = [
            {"image_url": "/api/v1/dashboard/42/thumbnail/abc123/"},  # POST
            MagicMock(status_code=202, json=lambda: {"message": "Thumbnail is being generated"},
                      headers={"Content-Type": "application/json"})  # GET thumbnail -> 202
        ]
        mock_client_cls.return_value = mock_client

        response = client.get("/api/dashboards/42/thumbnail?env_id=prod")
        assert response.status_code == 202
        assert "Thumbnail is being generated" in response.json()["message"]


# [DEF:test_get_dashboards_env_not_found:Function]
# @TEST: GET /api/dashboards returns 404 if env_id missing
# @PRE: env_id does not exist
# @POST: Returns 404 error
def test_get_dashboards_env_not_found():
    with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
         patch("src.api.routes.dashboards.has_permission") as mock_perm:
        mock_config.return_value.get_environments.return_value = []
        mock_perm.return_value = lambda: True

        response = client.get("/api/dashboards?env_id=nonexistent")

        assert response.status_code == 404
        assert "Environment not found" in response.json()["detail"]

# --- 6. migrate_dashboards tests ---

def test_migrate_dashboards_success(mock_deps):
    mock_s = MagicMock(); mock_s.id = "s"
    mock_t = MagicMock(); mock_t.id = "t"
    mock_deps["config"].get_environments.return_value = [mock_s, mock_t]
    mock_deps["task"].create_task = AsyncMock(return_value=MagicMock(id="task-123"))

    response = client.post("/api/dashboards/migrate", json={
        "source_env_id": "s", "target_env_id": "t", "dashboard_ids": [1]
    })
    assert response.status_code == 200
    assert response.json()["task_id"] == "task-123"


def test_migrate_dashboards_pre_checks(mock_deps):
    # Missing IDs
    response = client.post("/api/dashboards/migrate", json={
        "source_env_id": "s", "target_env_id": "t", "dashboard_ids": []
    })
    assert response.status_code == 400
    assert "At least one dashboard ID must be provided" in response.json()["detail"]


def test_migrate_dashboards_env_not_found(mock_deps):
    """@PRE: source_env_id and target_env_id are valid environment IDs."""
    mock_deps["config"].get_environments.return_value = []
    response = client.post("/api/dashboards/migrate", json={
        "source_env_id": "ghost", "target_env_id": "t", "dashboard_ids": [1]
    })
    assert response.status_code == 404
    assert "Source environment not found" in response.json()["detail"]

# --- 7. backup_dashboards tests ---

def test_backup_dashboards_success(mock_deps):
    mock_env = MagicMock(); mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].create_task = AsyncMock(return_value=MagicMock(id="backup-123"))

    response = client.post("/api/dashboards/backup", json={
        "env_id": "prod", "dashboard_ids": [1]
    })
    assert response.status_code == 200
    assert response.json()["task_id"] == "backup-123"


def test_backup_dashboards_pre_checks(mock_deps):
    response = client.post("/api/dashboards/backup", json={
        "env_id": "prod", "dashboard_ids": []
    })
    assert response.status_code == 400


def test_backup_dashboards_env_not_found(mock_deps):
    """@PRE: env_id is a valid environment ID."""
    mock_deps["config"].get_environments.return_value = []
    response = client.post("/api/dashboards/backup", json={
        "env_id": "ghost", "dashboard_ids": [1]
    })
    assert response.status_code == 404
    assert "Environment not found" in response.json()["detail"]


def test_backup_dashboards_with_schedule(mock_deps):
    """@POST: If schedule is provided, a scheduled task is created."""
    mock_env = MagicMock(); mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].create_task = AsyncMock(return_value=MagicMock(id="sched-456"))

    response = client.post("/api/dashboards/backup", json={
        "env_id": "prod", "dashboard_ids": [1], "schedule": "0 0 * * *"
    })
    assert response.status_code == 200
    assert response.json()["task_id"] == "sched-456"

    # Verify schedule was propagated to create_task
    call_kwargs = mock_deps["task"].create_task.call_args
    task_params = call_kwargs.kwargs.get("params") or call_kwargs[1].get("params", {})
    assert task_params["schedule"] == "0 0 * * *"

# --- 8. Internal logic: _task_matches_dashboard ---
from src.api.routes.dashboards import _task_matches_dashboard

def test_task_matches_dashboard_logic():
    task = MagicMock(plugin_id="superset-backup", params={"dashboards": [42], "env": "prod"})
    assert _task_matches_dashboard(task, 42, "prod") is True
    assert _task_matches_dashboard(task, 43, "prod") is False
    assert _task_matches_dashboard(task, 42, "dev") is False

    llm_task = MagicMock(plugin_id="llm_dashboard_validation", params={"dashboard_id": 42, "environment_id": "prod"})
    assert _task_matches_dashboard(llm_task, 42, "prod") is True
    assert _task_matches_dashboard(llm_task, 42, None) is True
# [/DEF:test_get_dashboards_env_not_found:Function]

# [/DEF:backend.tests.test_dashboards_api:Module]

@@ -1,6 +1,6 @@
 import pytest
 from fastapi.testclient import TestClient
-from unittest.mock import MagicMock, AsyncMock
+from unittest.mock import MagicMock
 from src.app import app
 from src.dependencies import get_config_manager, get_task_manager, get_resource_service, has_permission

@@ -27,10 +27,10 @@ def mock_deps():
     task_manager.get_all_tasks.return_value = []

     # Mock dashboards
-    resource_service.get_dashboards_with_status = AsyncMock(return_value=[
+    resource_service.get_dashboards_with_status.return_value = [
         {"id": 1, "title": "Sales", "slug": "sales", "git_status": {"branch": "main", "sync_status": "OK"}, "last_task": None},
         {"id": 2, "title": "Marketing", "slug": "mkt", "git_status": None, "last_task": {"task_id": "t1", "status": "SUCCESS"}}
-    ])
+    ]

     app.dependency_overrides[get_config_manager] = lambda: config_manager
     app.dependency_overrides[get_task_manager] = lambda: task_manager

@@ -39,10 +39,6 @@ def mock_deps():
     # Bypass permission check
     mock_user = MagicMock()
     mock_user.username = "testadmin"
-    mock_user.roles = []
-    admin_role = MagicMock()
-    admin_role.name = "Admin"
-    mock_user.roles.append(admin_role)

     # Override both get_current_user and has_permission
     from src.dependencies import get_current_user

@@ -89,9 +85,9 @@ def test_get_dashboards_search(mock_deps):
 # @TEST: Negative - Service failure returns 503

 def test_get_datasets_success(mock_deps):
-    mock_deps["resource"].get_datasets_with_status = AsyncMock(return_value=[
+    mock_deps["resource"].get_datasets_with_status.return_value = [
         {"id": 1, "table_name": "orders", "schema": "public", "database": "db1", "mapped_fields": {"total": 10, "mapped": 5}, "last_task": None}
-    ])
+    ]

     response = client.get("/api/datasets?env_id=env1")
     assert response.status_code == 200

@@ -106,10 +102,10 @@ def test_get_datasets_not_found(mock_deps):
     assert response.status_code == 404

 def test_get_datasets_search(mock_deps):
-    mock_deps["resource"].get_datasets_with_status = AsyncMock(return_value=[
+    mock_deps["resource"].get_datasets_with_status.return_value = [
         {"id": 1, "table_name": "orders", "schema": "public", "database": "db1", "mapped_fields": {"total": 10, "mapped": 5}, "last_task": None},
         {"id": 2, "table_name": "users", "schema": "public", "database": "db1", "mapped_fields": {"total": 5, "mapped": 5}, "last_task": None}
-    ])
+    ]

     response = client.get("/api/datasets?env_id=env1&search=orders")
     assert response.status_code == 200

@@ -118,39 +114,10 @@ def test_get_datasets_search(mock_deps):
     assert data["datasets"][0]["table_name"] == "orders"

 def test_get_datasets_service_failure(mock_deps):
-    mock_deps["resource"].get_datasets_with_status = AsyncMock(side_effect=Exception("Superset down"))
+    mock_deps["resource"].get_datasets_with_status.side_effect = Exception("Superset down")

     response = client.get("/api/datasets?env_id=env1")
     assert response.status_code == 503
     assert "Failed to fetch datasets" in response.json()["detail"]

-# [/DEF:test_datasets_api:Test]
-
-
-# [DEF:test_pagination_boundaries:Test]
-# @PURPOSE: Verify pagination validation for GET endpoints
-# @TEST: page<1 and page_size>100 return 400
-
-def test_get_dashboards_pagination_zero_page(mock_deps):
-    """@TEST_EDGE: pagination_zero_page -> {page:0, status:400}"""
-    response = client.get("/api/dashboards?env_id=env1&page=0")
-    assert response.status_code == 400
-    assert "Page must be >= 1" in response.json()["detail"]
-
-def test_get_dashboards_pagination_oversize(mock_deps):
-    """@TEST_EDGE: pagination_oversize -> {page_size:101, status:400}"""
-    response = client.get("/api/dashboards?env_id=env1&page_size=101")
-    assert response.status_code == 400
-    assert "Page size must be between 1 and 100" in response.json()["detail"]
-
-def test_get_datasets_pagination_zero_page(mock_deps):
-    """@TEST_EDGE: pagination_zero_page on datasets"""
-    response = client.get("/api/datasets?env_id=env1&page=0")
-    assert response.status_code == 400
-
-def test_get_datasets_pagination_oversize(mock_deps):
-    """@TEST_EDGE: pagination_oversize on datasets"""
-    response = client.get("/api/datasets?env_id=env1&page_size=101")
-    assert response.status_code == 400
-
-# [/DEF:test_pagination_boundaries:Test]
27
check_test_data.py
Normal file
@@ -0,0 +1,27 @@
import os

def check_file(filepath):
    try:
        with open(filepath, 'r', encoding='utf-8') as f:
            content = f.read()
        if '@TIER: CRITICAL' in content:
            if '@TEST_DATA' not in content:
                return filepath
    except Exception as e:
        print(f"Error reading {filepath}: {e}")
    return None

missing_files = []
for root_dir in ['backend/src', 'frontend/src']:
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            ext = os.path.splitext(name)[1]
            if ext in ['.py', '.js', '.ts', '.svelte']:
                full_path = os.path.join(dirpath, name)
                res = check_file(full_path)
                if res:
                    missing_files.append(res)

print("Files missing @TEST_DATA:")
for f in missing_files:
    print(f)

@@ -35,7 +35,7 @@

     // [SECTION: UI STATE]
     let showGitManager = $state(false);
-    let gitDashboardId: string | null = $state(null);
+    let gitDashboardId: number | null = $state(null);
     let gitDashboardTitle = $state("");
     let validatingIds: Set<number> = $state(new Set());
     // [/SECTION]

@@ -178,7 +178,7 @@
      * @purpose Opens the Git management modal for a dashboard.
      */
     function openGit(dashboard: DashboardMetadata) {
-        gitDashboardId = String(dashboard.slug || dashboard.id);
+        gitDashboardId = dashboard.id;
         gitDashboardTitle = dashboard.title;
         showGitManager = true;
     }

@@ -1,613 +0,0 @@
<!-- [DEF:DashboardGrid:Component] -->
<!--
@TIER: STANDARD
@SEMANTICS: dashboard, grid, selection, pagination
@PURPOSE: Displays a grid of dashboards with selection and pagination.
@LAYER: Component
@RELATION: USED_BY -> frontend/src/routes/migration/+page.svelte

@INVARIANT: Selected IDs must be a subset of available dashboards.
-->

<script lang="ts">
    // [SECTION: IMPORTS]
    import { createEventDispatcher, untrack } from "svelte";
    import type { DashboardMetadata } from "../types/dashboard";
    import { t } from "../lib/i18n";
    import { Button, Input } from "../lib/ui";
    import GitManager from "./git/GitManager.svelte";
    import { gitService } from "../services/gitService";
    import { addToast } from "../lib/toasts.js";
    // [/SECTION]

    // [SECTION: PROPS]
    let { dashboards = [], selectedIds = [], statusMode = "dashboard" } = $props();

    // [/SECTION]

    // [SECTION: STATE]
    let filterText = $state("");
    let currentPage = $state(0);
    let pageSize = $state(20);
    let sortColumn: keyof DashboardMetadata = $state("title");
    let sortDirection: "asc" | "desc" = $state("asc");
    // [/SECTION]

    // [SECTION: UI STATE]
    let showGitManager = $state(false);
    let gitDashboardId: string | null = $state(null);
    let gitDashboardTitle = $state("");
    let repositoryStatusByDashboardId = $state<Record<number, string>>({});
    let repositoryStatusRequestId = $state(0);
    let bulkActionRunning = $state(false);
    // [/SECTION]

    // [SECTION: DERIVED]
    let filteredDashboards = $derived(
        dashboards.filter((d) =>
            d.title.toLowerCase().includes(filterText.toLowerCase()),
        ),
    );

    let sortedDashboards = $derived(
        [...filteredDashboards].sort((a, b) => {
            let aVal =
                sortColumn === "status"
                    ? getSortStatusValue(a)
                    : a[sortColumn];
            let bVal =
                sortColumn === "status"
                    ? getSortStatusValue(b)
                    : b[sortColumn];
            if (sortColumn === "id") {
                aVal = Number(aVal);
                bVal = Number(bVal);
            }
            if (aVal < bVal) return sortDirection === "asc" ? -1 : 1;
            if (aVal > bVal) return sortDirection === "asc" ? 1 : -1;
            return 0;
        }),
    );

    let paginatedDashboards = $derived(
        sortedDashboards.slice(
            currentPage * pageSize,
            (currentPage + 1) * pageSize,
        ),
    );

    let totalPages = $derived(Math.ceil(sortedDashboards.length / pageSize));

    let allSelected = $derived(
        paginatedDashboards.length > 0 &&
            paginatedDashboards.every((d) => selectedIds.includes(d.id)),
    );
    let someSelected = $derived(
        paginatedDashboards.some((d) => selectedIds.includes(d.id)),
    );
    // [/SECTION]

    // [SECTION: EVENTS]
    const dispatch = createEventDispatcher<{ selectionChanged: number[] }>();
    // [/SECTION]

    // [DEF:handleSort:Function]
    // @PURPOSE: Toggles sort direction or changes sort column.
    // @PRE: column name is provided.
    // @POST: sortColumn and sortDirection state updated.
    function handleSort(column: keyof DashboardMetadata) {
        if (sortColumn === column) {
            sortDirection = sortDirection === "asc" ? "desc" : "asc";
        } else {
            sortColumn = column;
            sortDirection = "asc";
        }
    }
    // [/DEF:handleSort:Function]

    // [DEF:handleSelectionChange:Function]
    // @PURPOSE: Handles individual checkbox changes.
    // @PRE: dashboard ID and checked status provided.
    // @POST: selectedIds array updated and selectionChanged event dispatched.
    function handleSelectionChange(id: number, checked: boolean) {
        let newSelected = [...selectedIds];
        if (checked) {
            if (!newSelected.includes(id)) newSelected.push(id);
        } else {
            newSelected = newSelected.filter((sid) => sid !== id);
        }
        selectedIds = newSelected;
        dispatch("selectionChanged", newSelected);
    }
    // [/DEF:handleSelectionChange:Function]

    // [DEF:handleSelectAll:Function]
    // @PURPOSE: Handles select all checkbox.
    // @PRE: checked status provided.
    // @POST: selectedIds array updated for all paginated items and event dispatched.
    function handleSelectAll(checked: boolean) {
        let newSelected = [...selectedIds];
        if (checked) {
            paginatedDashboards.forEach((d) => {
                if (!newSelected.includes(d.id)) newSelected.push(d.id);
            });
        } else {
            paginatedDashboards.forEach((d) => {
                newSelected = newSelected.filter((sid) => sid !== d.id);
            });
        }
        selectedIds = newSelected;
        dispatch("selectionChanged", newSelected);
    }
    // [/DEF:handleSelectAll:Function]

    // [DEF:goToPage:Function]
    // @PURPOSE: Changes current page.
    // @PRE: page index is provided.
    // @POST: currentPage state updated if within valid range.
    function goToPage(page: number) {
        if (page >= 0 && page < totalPages) {
            currentPage = page;
        }
    }
    // [/DEF:goToPage:Function]

    // [DEF:getRepositoryStatusToken:Function]
    /**
     * @purpose Returns normalized repository status token for a dashboard.
     * @pre Dashboard exists.
     * @post Returns one of loading|no_repo|synced|changes|behind_remote|ahead_remote|diverged|error.
     */
    function getRepositoryStatusToken(dashboardId: number): string {
        return repositoryStatusByDashboardId[dashboardId] || "loading";
    }
    // [/DEF:getRepositoryStatusToken:Function]

    // [DEF:isRepositoryReady:Function]
    // @PURPOSE: Determines whether git actions can run for a dashboard.
    function isRepositoryReady(dashboardId: number): boolean {
        const token = getRepositoryStatusToken(dashboardId);
        return token !== "loading" && token !== "no_repo" && token !== "error";
    }
    // [/DEF:isRepositoryReady:Function]

    // [DEF:invalidateRepositoryStatuses:Function]
    // @PURPOSE: Marks dashboard statuses as loading so they are refetched.
    function invalidateRepositoryStatuses(dashboardIds: number[]): void {
        if (dashboardIds.length === 0) return;
        const nextStatuses = { ...repositoryStatusByDashboardId };
        dashboardIds.forEach((dashboardId) => {
            nextStatuses[dashboardId] = "loading";
        });
        repositoryStatusByDashboardId = nextStatuses;
    }
    // [/DEF:invalidateRepositoryStatuses:Function]

    // [DEF:resolveRepositoryStatusToken:Function]
    /**
     * @purpose Converts git status payload into a stable UI status token.
     */
    function resolveRepositoryStatusToken(status: any): string {
        const syncState = String(status?.sync_state || "").toUpperCase();
        if (syncState === "DIVERGED") return "diverged";
        if (syncState === "BEHIND_REMOTE") return "behind_remote";
        if (syncState === "AHEAD_REMOTE") return "ahead_remote";
        if (syncState === "CHANGES") return "changes";
        if (syncState === "SYNCED") return "synced";

        const syncStatus = String(status?.sync_status || "").toUpperCase();
        if (syncStatus === "NO_REPO") return "no_repo";
        if (syncStatus === "ERROR") return "error";
        if (syncStatus === "DIFF") return "changes";
        if (syncStatus === "OK") return "synced";

        const aheadCount = Number(status?.ahead_count || 0);
        const behindCount = Number(status?.behind_count || 0);
        if (aheadCount > 0 && behindCount > 0) return "diverged";
        if (behindCount > 0) return "behind_remote";
        if (aheadCount > 0) return "ahead_remote";

        const hasChanges =
            Boolean(status?.is_dirty) ||
            (status?.untracked_files?.length || 0) > 0 ||
            (status?.modified_files?.length || 0) > 0 ||
            (status?.staged_files?.length || 0) > 0;

        return hasChanges ? "changes" : "synced";
    }
    // [/DEF:resolveRepositoryStatusToken:Function]

    // [DEF:loadRepositoryStatuses:Function]
    /**
     * @purpose Hydrates repository status map for dashboards in repository mode.
     */
    async function loadRepositoryStatuses() {
        if (statusMode !== "repository") {
            repositoryStatusByDashboardId = {};
            return;
        }

        const requestId = repositoryStatusRequestId + 1;
        repositoryStatusRequestId = requestId;

        const visibleDashboards = paginatedDashboards;
        const visibleIds = visibleDashboards.map((dashboard) => dashboard.id);

        if (visibleDashboards.length === 0) {
            repositoryStatusByDashboardId = {};
            return;
        }

        const missingIds = visibleIds.filter((dashboardId) => {
            const token = repositoryStatusByDashboardId[dashboardId];
            return token === undefined || token === "loading";
        });

        if (missingIds.length === 0) {
            return;
        }

        const loadingStatuses = { ...repositoryStatusByDashboardId };
        missingIds.forEach((dashboardId) => {
            loadingStatuses[dashboardId] = "loading";
        });
        repositoryStatusByDashboardId = loadingStatuses;

        let entries: Array<readonly [number, string]> = [];
        try {
            const batchResult = await gitService.getStatusesBatch(
                missingIds,
            );
            const statusesByDashboardId = batchResult?.statuses || {};
            entries = missingIds.map((dashboardId) => {
                const status =
                    statusesByDashboardId[dashboardId] ??
                    statusesByDashboardId[String(dashboardId)];
                if (!status) return [dashboardId, "error"] as const;
                return [dashboardId, resolveRepositoryStatusToken(status)] as const;
            });
        } catch (error) {
            entries = missingIds.map((dashboardId) => [dashboardId, "error"] as const);
        }

        if (requestId !== repositoryStatusRequestId) return;
        repositoryStatusByDashboardId = {
            ...repositoryStatusByDashboardId,
            ...Object.fromEntries(entries),
        };
    }
    // [/DEF:loadRepositoryStatuses:Function]

    // [DEF:runBulkGitAction:Function]
    // @PURPOSE: Executes git action for selected dashboards with limited parallelism.
    async function runBulkGitAction(
        actionToken: string,
        action: (dashboardId: number) => Promise<void>,
    ): Promise<void> {
        if (bulkActionRunning) return;
        const selectedDashboardIds = selectedIds.filter((dashboardId) =>
            isRepositoryReady(dashboardId),
        );

        if (selectedDashboardIds.length === 0) {
            addToast($t.git?.no_repositories_selected, "error");
            return;
        }

        bulkActionRunning = true;
        const concurrency = 3;
        const idsQueue = [...selectedDashboardIds];
        let successCount = 0;
        let failedCount = 0;

        const worker = async () => {
            while (idsQueue.length > 0) {
                const dashboardId = idsQueue.shift();
                if (dashboardId === undefined) break;
                try {
                    await action(dashboardId);
                    successCount += 1;
                } catch (_error) {
                    failedCount += 1;
                }
            }
        };

        try {
            await Promise.all(
                Array.from({ length: Math.min(concurrency, selectedDashboardIds.length) }, () => worker()),
            );
            invalidateRepositoryStatuses(selectedDashboardIds);
            const actionLabel = $t.git?.[`bulk_action_${actionToken}`];
            addToast(
                $t.git?.bulk_result
                    .replace("{action}", actionLabel)
                    .replace("{success}", String(successCount))
                    .replace("{failed}", String(failedCount)),
                failedCount > 0 ? "warning" : "success",
            );
        } finally {
            bulkActionRunning = false;
        }
    }
    // [/DEF:runBulkGitAction:Function]

    // [DEF:handleBulkSync:Function]
    async function handleBulkSync(): Promise<void> {
        await runBulkGitAction("sync", (dashboardId) => gitService.sync(dashboardId));
    }
    // [/DEF:handleBulkSync:Function]

    // [DEF:handleBulkCommit:Function]
    async function handleBulkCommit(): Promise<void> {
        const message = prompt($t.git?.commit_message);
        if (!message?.trim()) return;
        await runBulkGitAction("commit", (dashboardId) =>
            gitService.commit(dashboardId, message.trim(), []),
        );
    }
    // [/DEF:handleBulkCommit:Function]

    // [DEF:handleBulkPull:Function]
    async function handleBulkPull(): Promise<void> {
        await runBulkGitAction("pull", (dashboardId) => gitService.pull(dashboardId));
    }
    // [/DEF:handleBulkPull:Function]

    // [DEF:handleBulkPush:Function]
    async function handleBulkPush(): Promise<void> {
        await runBulkGitAction("push", (dashboardId) => gitService.push(dashboardId));
    }
    // [/DEF:handleBulkPush:Function]

    // [DEF:handleManageSelected:Function]
    // @PURPOSE: Opens Git manager for exactly one selected dashboard.
    async function handleManageSelected(): Promise<void> {
        if (selectedIds.length !== 1) {
            addToast($t.git?.select_single_for_manage, "warning");
            return;
        }

        const selectedDashboardId = selectedIds[0];
        const selectedDashboard = dashboards.find(
            (dashboard) => dashboard.id === selectedDashboardId,
        );

        gitDashboardId = String(selectedDashboard?.slug || selectedDashboardId);
        gitDashboardTitle = selectedDashboard?.title || "";
        showGitManager = true;
    }
    // [/DEF:handleManageSelected:Function]

    // [DEF:getSortStatusValue:Function]
    /**
     * @purpose Returns sort value for status column based on mode.
     */
    function getSortStatusValue(dashboard: DashboardMetadata): string {
        if (statusMode === "repository") {
            return getRepositoryStatusToken(dashboard.id);
        }
        return String(dashboard.status || "").toLowerCase();
    }
    // [/DEF:getSortStatusValue:Function]

    // [DEF:getStatusLabel:Function]
    /**
     * @purpose Returns localized label for status column.
     */
    function getStatusLabel(dashboard: DashboardMetadata): string {
        if (statusMode !== "repository") {
            return String(dashboard.status || "");
        }
        const token = getRepositoryStatusToken(dashboard.id);
        return $t.git?.repo_status?.[token] || token;
    }
    // [/DEF:getStatusLabel:Function]

    // [DEF:getStatusBadgeClass:Function]
    /**
     * @purpose Returns badge style for status column.
     */
    function getStatusBadgeClass(dashboard: DashboardMetadata): string {
        if (statusMode !== "repository") {
            return dashboard.status === "published"
                ? "bg-green-100 text-green-800"
                : "bg-gray-100 text-gray-800";
        }

        const token = getRepositoryStatusToken(dashboard.id);
        if (token === "loading") return "bg-slate-100 text-slate-500";
        if (token === "no_repo") return "bg-gray-100 text-gray-800";
        if (token === "synced") return "bg-green-100 text-green-800";
        if (token === "changes") return "bg-amber-100 text-amber-800";
        if (token === "behind_remote") return "bg-blue-100 text-blue-800";
        if (token === "ahead_remote") return "bg-indigo-100 text-indigo-800";
        if (token === "diverged") return "bg-purple-100 text-purple-800";
        return "bg-rose-100 text-rose-800";
    }
    // [/DEF:getStatusBadgeClass:Function]

    $effect(() => {
        dashboards;
        statusMode;
        currentPage;
        pageSize;
        filterText;
        sortColumn;
        sortDirection;
        untrack(() => {
            void loadRepositoryStatuses();
        });
    });
</script>

<!-- [SECTION: TEMPLATE] -->
<div class="dashboard-grid">
  <!-- Filter Input -->
  <div class="mb-6">
    <Input bind:value={filterText} placeholder={$t.dashboard.search} />
  </div>

  {#if selectedIds.length > 0}
    <div class="mb-4 flex flex-wrap items-center gap-2 rounded-lg border border-blue-100 bg-blue-50/60 px-3 py-2">
      <Button
        size="sm"
        variant="secondary"
        onclick={handleManageSelected}
        disabled={bulkActionRunning || selectedIds.length !== 1}
        class="border-blue-200 bg-white text-blue-700 hover:bg-blue-50 disabled:opacity-40"
      >
        {$t.git?.manage_selected}
      </Button>
      <Button size="sm" variant="secondary" onclick={handleBulkSync} disabled={bulkActionRunning} class="border-blue-200 bg-white text-blue-700 hover:bg-blue-50">
        {$t.git?.bulk_sync}
      </Button>
      <Button size="sm" variant="secondary" onclick={handleBulkCommit} disabled={bulkActionRunning} class="border-amber-200 bg-white text-amber-700 hover:bg-amber-50">
        {$t.git?.bulk_commit}
      </Button>
      <Button size="sm" variant="secondary" onclick={handleBulkPull} disabled={bulkActionRunning} class="border-cyan-200 bg-white text-cyan-700 hover:bg-cyan-50">
        {$t.git?.bulk_pull}
      </Button>
      <Button size="sm" variant="secondary" onclick={handleBulkPush} disabled={bulkActionRunning} class="border-indigo-200 bg-white text-indigo-700 hover:bg-indigo-50">
        {$t.git?.bulk_push}
      </Button>
      <span class="ml-1 text-xs font-medium text-slate-600">
        {$t.git?.selected_count?.replace(
          "{count}",
          String(selectedIds.length),
        )}
      </span>
    </div>
  {/if}

  <!-- Grid/Table -->
  <div class="overflow-x-auto rounded-lg border border-gray-200">
    <table class="min-w-full divide-y divide-gray-200">
      <thead class="bg-gray-50">
        <tr>
          <th class="px-6 py-3 text-left">
            <input
              type="checkbox"
              checked={allSelected}
              indeterminate={someSelected && !allSelected}
              onchange={(e) =>
                handleSelectAll((e.target as HTMLInputElement).checked)}
              class="h-4 w-4 text-blue-600 border-gray-300 rounded focus:ring-blue-500"
            />
          </th>
          <th
            class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider cursor-pointer hover:text-gray-700 transition-colors"
            onclick={() => handleSort("title")}
          >
            {$t.dashboard.title}
            {sortColumn === "title"
              ? sortDirection === "asc"
                ? "↑"
                : "↓"
              : ""}
          </th>
          <th
            class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider cursor-pointer hover:text-gray-700 transition-colors"
            onclick={() => handleSort("last_modified")}
          >
            {$t.dashboard.last_modified}
            {sortColumn === "last_modified"
              ? sortDirection === "asc"
                ? "↑"
                : "↓"
              : ""}
          </th>
          <th
            class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider cursor-pointer hover:text-gray-700 transition-colors"
            onclick={() => handleSort("status")}
          >
            {$t.dashboard.status}
            {sortColumn === "status"
              ? sortDirection === "asc"
                ? "↑"
                : "↓"
              : ""}
          </th>
        </tr>
      </thead>
      <tbody class="bg-white divide-y divide-gray-200">
        {#each paginatedDashboards as dashboard (dashboard.id)}
          <tr class="hover:bg-gray-50 transition-colors">
            <td class="px-6 py-4 whitespace-nowrap">
              <input
                type="checkbox"
                checked={selectedIds.includes(dashboard.id)}
                onchange={(e) =>
                  handleSelectionChange(
                    dashboard.id,
                    (e.target as HTMLInputElement).checked,
                  )}
                class="h-4 w-4 text-blue-600 border-gray-300 rounded focus:ring-blue-500"
              />
            </td>
            <td
              class="px-6 py-4 whitespace-nowrap text-sm font-medium text-gray-900"
              >{dashboard.title}</td
            >
            <td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500"
              >{new Date(dashboard.last_modified).toLocaleDateString()}</td
            >
            <td class="px-6 py-4 whitespace-nowrap text-sm text-gray-500">
              <span
                class="px-2 py-1 text-xs font-medium rounded-full {getStatusBadgeClass(dashboard)}"
              >
                {getStatusLabel(dashboard)}
              </span>
            </td>
          </tr>
        {/each}
      </tbody>
    </table>
  </div>

  <!-- Pagination Controls -->
  <div class="flex items-center justify-between mt-6">
    <div class="text-sm text-gray-500">
      {($t.dashboard?.showing || "")
        .replace("{start}", (currentPage * pageSize + 1).toString())
        .replace(
          "{end}",
          Math.min(
            (currentPage + 1) * pageSize,
            sortedDashboards.length,
          ).toString(),
        )
        .replace("{total}", sortedDashboards.length.toString())}
    </div>
    <div class="flex gap-2">
      <Button
        variant="secondary"
        size="sm"
        disabled={currentPage === 0}
        onclick={() => goToPage(currentPage - 1)}
      >
        {$t.dashboard.previous}
      </Button>
      <Button
        variant="secondary"
        size="sm"
        disabled={currentPage >= totalPages - 1}
        onclick={() => goToPage(currentPage + 1)}
      >
        {$t.dashboard.next}
      </Button>
    </div>
  </div>
</div>

{#if showGitManager && gitDashboardId}
  <GitManager
    dashboardId={gitDashboardId}
    dashboardTitle={gitDashboardTitle}
    bind:show={showGitManager}
  />
{/if}

<!-- [/SECTION] -->

<!-- [/DEF:DashboardGrid:Component] -->
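The "showing {start}–{end} of {total}" label in the pagination block derives its range from a zero-based page index. A standalone sketch of the same arithmetic (the `pageRange` function name is illustrative):

```typescript
// 1-based display range for a zero-based page index, mirroring the
// start/end arithmetic used in the pagination label.
function pageRange(currentPage: number, pageSize: number, total: number) {
  const start = currentPage * pageSize + 1;
  const end = Math.min((currentPage + 1) * pageSize, total);
  return { start, end };
}
```

Clamping `end` against `total` keeps the last page honest when it is only partially filled.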
@@ -19,21 +19,6 @@
 * @UX_STATE Error -> Shows error message with recovery option
 * @UX_FEEDBACK Auto-scroll keeps newest logs visible
 * @UX_RECOVERY Refresh button re-fetches logs from API
 *
 * @TEST_CONTRACT Component_TaskLogViewer ->
 *   {
 *     required_props: {taskId: string},
 *     optional_props: {show: boolean, inline: boolean, taskStatus: string, realTimeLogs: array},
 *     invariants: [
 *       "Fetches initial logs on mount if taskId is provided",
 *       "Updates log list when realTimeLogs prop changes without duplicating entries",
 *       "Displays Loading, Error, or Data states correctly"
 *     ]
 *   }
 * @TEST_FIXTURE valid_viewer -> {taskId: "123", show: true}
 * @TEST_EDGE no_task_id -> does not fetch, shows empty/loading indefinitely if show=true
 * @TEST_EDGE api_error -> transitions to Error state and displays retry button
 * @TEST_INVARIANT displays_logs -> verifies: [valid_viewer]
 */
import { createEventDispatcher, onDestroy } from "svelte";
import { getTaskLogs } from "../services/taskService.js";
@@ -159,7 +144,7 @@
<div
  class="w-5 h-5 border-2 border-terminal-border border-t-primary rounded-full animate-spin"
></div>
<span>{$t.tasks?.loading}</span>
<span>{$t.tasks?.loading }</span>
</div>
{:else if error}
<div
@@ -169,7 +154,7 @@
<span>{error}</span>
<button
  class="bg-terminal-surface text-terminal-text-subtle border border-terminal-border rounded-md px-3 py-1 text-xs cursor-pointer transition-all hover:bg-terminal-border hover:text-terminal-text-bright"
  onclick={handleRefresh}>{$t.common?.retry}</button
  onclick={handleRefresh}>{$t.common?.retry }</button
>
</div>
{:else}
@@ -218,7 +203,7 @@
  class="text-lg font-medium text-gray-100"
  id="modal-title"
>
  {$t.tasks?.logs_title}
  {$t.tasks?.logs_title }
</h3>
<button
  class="text-gray-500 hover:text-gray-300"
@@ -226,13 +211,13 @@
    show = false;
    dispatch("close");
  }}
  aria-label={$t.common?.close}>✕</button
  aria-label={$t.common?.close }>✕</button
>
</div>
<div class="h-[500px]">
  {#if loading && logs.length === 0}
    <p class="text-gray-500 text-center">
      {$t.tasks?.loading}
      {$t.tasks?.loading }
    </p>
  {:else if error}
    <p class="text-red-400 text-center">{error}</p>

@@ -1,5 +1,5 @@
// [DEF:frontend.src.components.__tests__.task_log_viewer:Module]
// @TIER: STANDARD
// @TIER: CRITICAL
// @SEMANTICS: tests, task-log, viewer, mount, components
// @PURPOSE: Unit tests for TaskLogViewer component by mounting it and observing the DOM.
// @LAYER: UI (Tests)

@@ -21,7 +21,6 @@
// [SECTION: PROPS]
let {
  dashboardId,
  envId = null,
  currentBranch = 'main',
} = $props();

@@ -57,7 +56,7 @@
console.log(`[BranchSelector][Action] Loading branches for dashboard ${dashboardId}`);
loading = true;
try {
  branches = await gitService.getBranches(dashboardId, envId);
  branches = await gitService.getBranches(dashboardId);
  console.log(`[BranchSelector][Coherence:OK] Loaded ${branches.length} branches`);
} catch (e) {
  console.error(`[BranchSelector][Coherence:Failed] ${e.message}`);
@@ -88,7 +87,7 @@
async function handleCheckout(branchName) {
  console.log(`[BranchSelector][Action] Checking out branch ${branchName}`);
  try {
    await gitService.checkoutBranch(dashboardId, branchName, envId);
    await gitService.checkoutBranch(dashboardId, branchName);
    currentBranch = branchName;
    dispatch('change', { branch: branchName });
    toast($t.git?.switched_to?.replace('{branch}', branchName), 'success');
@@ -110,7 +109,7 @@
if (!newBranchName) return;
console.log(`[BranchSelector][Action] Creating branch ${newBranchName} from ${currentBranch}`);
try {
  await gitService.createBranch(dashboardId, newBranchName, currentBranch, envId);
  await gitService.createBranch(dashboardId, newBranchName, currentBranch);
  toast($t.git?.created_branch?.replace('{branch}', newBranchName), 'success');
  showCreate = false;
  newBranchName = '';

@@ -18,7 +18,6 @@
// [SECTION: PROPS]
let {
  dashboardId,
  envId = null,
} = $props();

// [/SECTION]
@@ -49,7 +48,7 @@
console.log(`[CommitHistory][Action] Loading history for dashboard ${dashboardId}`);
loading = true;
try {
  history = await gitService.getHistory(dashboardId, 50, envId);
  history = await gitService.getHistory(dashboardId);
  console.log(`[CommitHistory][Coherence:OK] Loaded ${history.length} commits`);
} catch (e) {
  console.error(`[CommitHistory][Coherence:Failed] ${e.message}`);

@@ -20,7 +20,7 @@
// [/SECTION]

// [SECTION: PROPS]
let { dashboardId, envId = null, show = false } = $props();
let { dashboardId, show = false } = $props();

// [/SECTION]

@@ -47,7 +47,7 @@
);
// postApi returns the JSON data directly or throws an error
const data = await api.postApi(
  `/git/repositories/${encodeURIComponent(String(dashboardId))}/generate-message${envId ? `?env_id=${encodeURIComponent(String(envId))}` : ""}`,
  `/git/repositories/${dashboardId}/generate-message`,
);
message = data.message;
toast($t.git?.commit_message_generated, "success");
@@ -72,19 +72,17 @@
console.log(
  `[CommitModal][Action] Loading status and diff for ${dashboardId}`,
);
status = await gitService.getStatus(dashboardId, envId);
status = await gitService.getStatus(dashboardId);
// Fetch both unstaged and staged diffs to show complete picture
const unstagedDiff = await gitService.getDiff(
  dashboardId,
  null,
  false,
  envId,
);
const stagedDiff = await gitService.getDiff(
  dashboardId,
  null,
  true,
  envId,
);

diff = "";
@@ -116,7 +114,7 @@
);
committing = true;
try {
  await gitService.commit(dashboardId, message, [], envId);
  await gitService.commit(dashboardId, message, []);
  toast($t.git?.commit_success, "success");
  dispatch("commit");
  show = false;

@@ -11,15 +11,14 @@

<script>
// [SECTION: IMPORTS]
import { createEventDispatcher } from "svelte";
import { onMount, createEventDispatcher } from "svelte";
import { gitService } from "../../services/gitService";
import { api } from "../../lib/api.js";
import { addToast as toast } from "../../lib/toasts.js";
import { t } from "../../lib/i18n";
// [/SECTION]

// [SECTION: PROPS]
let { dashboardId, envId = null, show = false, preferredTargetStage = "" } = $props();
let { dashboardId, show = false } = $props();

// [/SECTION]

@@ -31,13 +30,6 @@
// [/SECTION]

const dispatch = createEventDispatcher();
const normalizedPreferredStage = $derived(String(preferredTargetStage || "").toUpperCase());
const deploymentCandidates = $derived.by(() => {
  const all = environments.filter((env) => env?.id !== envId);
  if (!normalizedPreferredStage) return all;
  const stageMatched = all.filter((env) => normalizeEnvStage(env) === normalizedPreferredStage);
  return stageMatched.length > 0 ? stageMatched : all;
});

// [DEF:loadStatus:Watcher]
$effect(() => {
@@ -45,29 +37,6 @@
});
// [/DEF:loadStatus:Watcher]

// [DEF:normalizeEnvStage:Function]
/**
 * @purpose Normalize environment stage with legacy production fallback.
 * @post Returns DEV/PREPROD/PROD.
 */
function normalizeEnvStage(env) {
  if (env?.is_production) return "PROD";
  const stage = String(env?.stage || "").trim().toUpperCase();
  if (stage === "PROD" || stage === "PREPROD") return stage;
  return "DEV";
}
// [/DEF:normalizeEnvStage:Function]
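`normalizeEnvStage` is pure and easy to exercise outside the component. A standalone TypeScript rendering of the same logic (the `Env` interface is an assumption inferred from the fields the function reads):

```typescript
// Fields actually consulted by the component's normalizeEnvStage.
interface Env {
  is_production?: boolean;
  stage?: string;
}

// Explicit production flag wins; then a recognized stage string; else DEV.
function normalizeEnvStage(env: Env | null | undefined): string {
  if (env?.is_production) return "PROD";
  const stage = String(env?.stage || "").trim().toUpperCase();
  if (stage === "PROD" || stage === "PREPROD") return stage;
  return "DEV";
}
```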

// [DEF:resolveEnvUrl:Function]
/**
 * @purpose Resolve environment URL from consolidated or git-specific payload shape.
 * @post Returns stable URL string.
 */
function resolveEnvUrl(env) {
  return String(env?.superset_url || env?.url || "");
}
// [/DEF:resolveEnvUrl:Function]

// [DEF:loadEnvironments:Function]
/**
 * @purpose Fetch available environments from API.
@@ -78,13 +47,9 @@
console.log(`[DeploymentModal][Action] Loading environments`);
loading = true;
try {
  environments = await api.getEnvironmentsList();
  const candidates = (environments || []).filter((env) => env?.id !== envId);
  if (normalizedPreferredStage) {
    const stageMatched = candidates.filter((env) => normalizeEnvStage(env) === normalizedPreferredStage);
    selectedEnv = (stageMatched[0]?.id) || (candidates[0]?.id) || "";
  } else {
    selectedEnv = (candidates[0]?.id) || "";
  environments = await gitService.getEnvironments();
  if (environments.length > 0) {
    selectedEnv = environments[0].id;
  }
  console.log(
    `[DeploymentModal][Coherence:OK] Loaded ${environments.length} environments`,
@@ -110,7 +75,7 @@
console.log(`[DeploymentModal][Action] Deploying to ${selectedEnv}`);
deploying = true;
try {
  const result = await gitService.deploy(dashboardId, selectedEnv, envId);
  const result = await gitService.deploy(dashboardId, selectedEnv);
  toast(
    result.message || $t.git?.deploy_success,
    "success",
@@ -138,7 +103,7 @@

{#if loading}
  <p class="text-gray-500">{$t.migration?.loading_envs}</p>
{:else if deploymentCandidates.length === 0}
{:else if environments.length === 0}
  <p class="text-red-500 mb-4">
    {$t.git?.no_deploy_envs}
  </p>
@@ -151,11 +116,6 @@
    </button>
  </div>
{:else}
  {#if normalizedPreferredStage}
    <p class="mb-3 rounded border border-amber-200 bg-amber-50 px-3 py-2 text-xs text-amber-800">
      GitFlow target stage: {normalizedPreferredStage}
    </p>
  {/if}
  <div class="mb-6">
    <label class="block text-sm font-medium text-gray-700 mb-2"
      >{$t.migration?.target_env}</label
@@ -164,9 +124,9 @@
      bind:value={selectedEnv}
      class="w-full border rounded p-2 focus:ring-2 focus:ring-blue-500 outline-none bg-white"
    >
      {#each deploymentCandidates as env}
      {#each environments as env}
        <option value={env.id}
          >{env.name} [{normalizeEnvStage(env)}] ({resolveEnvUrl(env)})</option
          >{env.name} ({env.superset_url})</option
        >
      {/each}
    </select>

@@ -15,7 +15,6 @@
// [SECTION: IMPORTS]
import { onMount } from 'svelte';
import { gitService } from '../../services/gitService';
import { api } from '../../lib/api.js';
import { addToast as toast } from '../../lib/toasts.js';
import { t } from '../../lib/i18n';
import { Button, Card, PageHeader, Select, Input } from '../../lib/ui';
@@ -29,7 +28,6 @@
// [SECTION: PROPS]
let {
  dashboardId,
  envId = null,
  dashboardTitle = "",
  show = false,
} = $props();
@@ -51,94 +49,8 @@
let configs = $state([]);
let selectedConfigId = $state("");
let remoteUrl = $state("");
let creatingRemoteRepo = $state(false);
let promoting = $state(false);
let promoteFromBranch = $state("dev");
let promoteToBranch = $state("preprod");
let promoteMode = $state("mr");
let promoteReason = $state("");
let currentEnvStage = $state("");
let preferredDeployTargetStage = $state("");
// [/SECTION]

// [DEF:normalizeEnvStage:Function]
/**
 * @purpose Normalize environment stage with legacy fallback.
 * @post Returns DEV/PREPROD/PROD.
 */
function normalizeEnvStage(env) {
  if (env?.is_production) return 'PROD';
  const stage = String(env?.stage || '').trim().toUpperCase();
  if (stage === 'PROD' || stage === 'PREPROD') return stage;
  return 'DEV';
}
// [/DEF:normalizeEnvStage:Function]

// [DEF:resolveCurrentEnvironmentId:Function]
/**
 * @purpose Resolve active environment id for current dashboard view.
 * @post Returns env id from prop or selected_env_id in localStorage.
 */
function resolveCurrentEnvironmentId() {
  if (envId) return String(envId);
  if (typeof window === 'undefined') return null;
  return localStorage.getItem('selected_env_id');
}
// [/DEF:resolveCurrentEnvironmentId:Function]

// [DEF:applyGitflowStageDefaults:Function]
/**
 * @purpose Apply branch promotion/deploy defaults based on environment stage.
 * @post Promote fields and deploy target stage are updated.
 */
function applyGitflowStageDefaults(stage) {
  const normalizedStage = String(stage || '').toUpperCase();
  if (normalizedStage === 'DEV') {
    promoteFromBranch = 'dev';
    promoteToBranch = 'preprod';
    preferredDeployTargetStage = 'PREPROD';
    return;
  }
  if (normalizedStage === 'PREPROD') {
    promoteFromBranch = 'preprod';
    promoteToBranch = 'main';
    preferredDeployTargetStage = 'PROD';
    return;
  }
  preferredDeployTargetStage = '';
}
// [/DEF:applyGitflowStageDefaults:Function]

// [DEF:loadCurrentEnvironmentStage:Function]
/**
 * @purpose Detect current environment stage and bind Gitflow defaults.
 * @post currentEnvStage and defaults are set when environment is found.
 */
async function loadCurrentEnvironmentStage() {
  try {
    const currentEnvId = resolveCurrentEnvironmentId();
    if (!currentEnvId) return;
    const environments = await api.getEnvironmentsList();
    const currentEnv = (environments || []).find((item) => item.id === currentEnvId);
    if (!currentEnv) return;
    currentEnvStage = normalizeEnvStage(currentEnv);
    applyGitflowStageDefaults(currentEnvStage);
  } catch (e) {
    console.error(`[GitManager][Coherence:Failed] Failed to resolve environment stage: ${e.message}`);
  }
}
// [/DEF:loadCurrentEnvironmentStage:Function]

// [DEF:isNumericDashboardRef:Function]
/**
 * @purpose Checks whether current dashboard reference is numeric ID.
 * @post Returns true when dashboardId is digits-only.
 */
function isNumericDashboardRef() {
  return /^\d+$/.test(String(dashboardId || "").trim());
}
// [/DEF:isNumericDashboardRef:Function]

// [DEF:checkStatus:Function]
/**
 * @purpose Checks whether the repository is initialized for the given dashboard.
@@ -146,23 +58,16 @@
 * @post initialized state is set; configs loaded if not initialized.
 */
async function checkStatus() {
  if (isNumericDashboardRef()) {
    checkingStatus = false;
    initialized = false;
    toast('GitManager requires dashboard slug. Numeric ID is forbidden.', 'error');
    return;
  }
  checkingStatus = true;
  try {
    // If we can get branches, it means repo exists
    await gitService.getBranches(dashboardId, envId);
    await gitService.getBranches(dashboardId);
    initialized = true;
  } catch (e) {
    initialized = false;
    // Load configs if not initialized
    configs = await gitService.getConfigs();
    const defaultConfig = resolveDefaultConfig(configs);
    if (defaultConfig?.id) selectedConfigId = defaultConfig.id;
    if (configs.length > 0) selectedConfigId = configs[0].id;
  } finally {
    checkingStatus = false;
  }
@@ -176,17 +81,13 @@
 * @post Repository is created on backend; initialized set to true.
 */
async function handleInit() {
  if (isNumericDashboardRef()) {
    toast('GitManager requires dashboard slug. Numeric ID is forbidden.', 'error');
    return;
  }
  if (!selectedConfigId || !remoteUrl) {
    toast($t.git?.init_validation_error, 'error');
    return;
  }
  loading = true;
  try {
    await gitService.initRepository(dashboardId, selectedConfigId, remoteUrl, envId);
    await gitService.initRepository(dashboardId, selectedConfigId, remoteUrl);
    toast($t.git?.init_success, 'success');
    initialized = true;
  } catch (e) {
@@ -197,91 +98,6 @@
}
// [/DEF:handleInit:Function]

// [DEF:getSelectedConfig:Function]
/**
 * @purpose Returns currently selected Git server config.
 * @post Config object or null.
 */
function getSelectedConfig() {
  return configs.find((item) => item.id === selectedConfigId) || null;
}
// [/DEF:getSelectedConfig:Function]

// [DEF:resolveDefaultConfig:Function]
/**
 * @purpose Resolves default Git config for current dashboard session.
 * @post Returns config by priority: selected -> is_default -> CONNECTED -> first.
 */
function resolveDefaultConfig(configList) {
  if (!Array.isArray(configList) || configList.length === 0) return null;
  const selected = configList.find((item) => item.id === selectedConfigId);
  if (selected) return selected;
  const explicitDefault = configList.find((item) => item?.is_default);
  if (explicitDefault) return explicitDefault;
  const connected = configList.find((item) => item?.status === 'CONNECTED');
  return connected || configList[0];
}
// [/DEF:resolveDefaultConfig:Function]
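The priority chain in `resolveDefaultConfig` (selected id → `is_default` → first `CONNECTED` → first entry) can be verified standalone. A TypeScript sketch, with the `GitConfig` shape inferred from the fields the function reads:

```typescript
// Minimal shape of a Git server config, inferred from usage above.
interface GitConfig {
  id: string;
  is_default?: boolean;
  status?: string;
}

// Priority: explicitly selected id, then is_default, then first CONNECTED, then first entry.
function resolveDefaultConfig(configList: GitConfig[], selectedConfigId = ""): GitConfig | null {
  if (!Array.isArray(configList) || configList.length === 0) return null;
  const selected = configList.find((item) => item.id === selectedConfigId);
  if (selected) return selected;
  const explicitDefault = configList.find((item) => item?.is_default);
  if (explicitDefault) return explicitDefault;
  const connected = configList.find((item) => item?.status === "CONNECTED");
  return connected ?? configList[0];
}
```

Note that an `is_default` entry outranks a merely `CONNECTED` one, so an operator-chosen default survives transient connection state.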

// [DEF:buildSuggestedRepoName:Function]
/**
 * @purpose Builds deterministic repository name from dashboard title/id.
 * @post Returns kebab-case name.
 */
function buildSuggestedRepoName() {
  const source = (dashboardTitle || `dashboard-${dashboardId}`).toLowerCase();
  const slug = source
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-+|-+$/g, '')
    .slice(0, 48);
  return `${slug || 'dashboard'}-${dashboardId}`;
}
// [/DEF:buildSuggestedRepoName:Function]
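The slug derivation above can be tried in isolation. A standalone TypeScript sketch of the same kebab-case logic, taking the component's props as explicit parameters:

```typescript
// Deterministic repo-name suggestion: lowercase, collapse non-alphanumeric
// runs to single dashes, trim edge dashes, cap at 48 chars, append the id.
function buildSuggestedRepoName(dashboardTitle: string, dashboardId: string): string {
  const source = (dashboardTitle || `dashboard-${dashboardId}`).toLowerCase();
  const slug = source
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "")
    .slice(0, 48);
  return `${slug || "dashboard"}-${dashboardId}`;
}
```

Trimming edge dashes before appending the id keeps names like `sales-kpi-q3-<id>` free of double dashes even when the title ends in punctuation.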

// [DEF:handleCreateRemoteRepo:Function]
/**
 * @purpose Creates remote repository on selected Git server and fills remote URL.
 * @pre selectedConfigId is selected.
 * @post remoteUrl is set from created repository clone URL.
 */
async function handleCreateRemoteRepo() {
  const config = getSelectedConfig() || resolveDefaultConfig(configs);
  if (!config) {
    toast($t.git?.init_validation_error || 'Select Git server first', 'error');
    return;
  }
  if (!selectedConfigId && config.id) selectedConfigId = config.id;
  const suggestedName = buildSuggestedRepoName();
  const inputName = prompt(
    `Repository name for ${config.provider}:`,
    suggestedName,
  );
  const repoName = String(inputName || '').trim();
  if (!repoName) return;

  creatingRemoteRepo = true;
  try {
    const repo = await gitService.createRemoteRepository(config.id, {
      name: repoName,
      private: true,
      description: `Superset dashboard ${dashboardId}: ${dashboardTitle || repoName}`,
      auto_init: true,
      default_branch: 'main',
    });
    const resolvedRemoteUrl = repo?.clone_url || repo?.html_url || '';
    if (!resolvedRemoteUrl) {
      throw new Error('Remote repository created, but URL is empty');
    }
    remoteUrl = resolvedRemoteUrl;
    toast(`Repository created on ${config.provider}`, 'success');
  } catch (e) {
    toast(e.message, 'error');
  } finally {
    creatingRemoteRepo = false;
  }
}
// [/DEF:handleCreateRemoteRepo:Function]

// [DEF:handleSync:Function]
/**
 * @purpose Syncs Superset state with the local Git repository.
@@ -289,15 +105,11 @@
 * @post Dashboard YAMLs are exported to Git and staged.
 */
async function handleSync() {
  if (isNumericDashboardRef()) {
    toast('GitManager requires dashboard slug. Numeric ID is forbidden.', 'error');
    return;
  }
  loading = true;
  try {
    // Try to get selected environment from localStorage (set by EnvSelector)
    const sourceEnvId = localStorage.getItem('selected_env_id');
    await gitService.sync(dashboardId, sourceEnvId, envId);
    await gitService.sync(dashboardId, sourceEnvId);
    toast($t.git?.sync_success, 'success');
  } catch (e) {
    toast(e.message, 'error');
@@ -314,13 +126,9 @@
 * @post Changes are pushed to origin.
 */
async function handlePush() {
  if (isNumericDashboardRef()) {
    toast('GitManager requires dashboard slug. Numeric ID is forbidden.', 'error');
    return;
  }
  loading = true;
  try {
    await gitService.push(dashboardId, envId);
    await gitService.push(dashboardId);
    toast($t.git?.push_success, 'success');
  } catch (e) {
    toast(e.message, 'error');
@@ -337,13 +145,9 @@
 * @post Local branch is updated with remote changes.
 */
async function handlePull() {
  if (isNumericDashboardRef()) {
    toast('GitManager requires dashboard slug. Numeric ID is forbidden.', 'error');
    return;
  }
  loading = true;
  try {
    await gitService.pull(dashboardId, envId);
    await gitService.pull(dashboardId);
    toast($t.git?.pull_success, 'success');
  } catch (e) {
    toast(e.message, 'error');
@@ -353,113 +157,17 @@
}
// [/DEF:handlePull:Function]

// [DEF:handlePromote:Function]
/**
 * @purpose Promotes changes between branches via MR (default) or direct merge (unsafe).
 * @pre Repository is initialized and source/target branches are provided.
 * @post Promotion request is sent to backend and user receives result feedback.
 */
async function handlePromote() {
  if (isNumericDashboardRef()) {
    toast('GitManager requires dashboard slug. Numeric ID is forbidden.', 'error');
    return;
  }
  const fromBranch = String(promoteFromBranch || '').trim();
  const toBranch = String(promoteToBranch || '').trim();
  if (!fromBranch || !toBranch || fromBranch === toBranch) {
    toast('Select different source and target branches', 'error');
    return;
  }
  if (promoteMode === 'direct' && !String(promoteReason || '').trim()) {
    toast('Unsafe direct promote requires explicit reason', 'error');
    return;
  }

  promoting = true;
  try {
    const response = await gitService.promote(
      dashboardId,
      {
        from_branch: fromBranch,
        to_branch: toBranch,
        mode: promoteMode,
        title: `Promote ${fromBranch} -> ${toBranch}: ${dashboardTitle || dashboardId}`,
        description: promoteMode === 'direct'
          ? `Unsafe direct promote requested.\nReason: ${promoteReason}`
          : undefined,
        reason: promoteMode === 'direct' ? promoteReason : undefined,
      },
      envId,
    );

    if (promoteMode === 'direct') {
      toast('Unsafe direct promote completed. Policy violation logged.', 'warning');
    } else {
      if (response?.url) {
        window.open(response.url, '_blank', 'noopener,noreferrer');
      }
      toast('Merge request created on Git server', 'success');
    }
  } catch (e) {
    toast(e.message, 'error');
  } finally {
    promoting = false;
  }
}
// [/DEF:handlePromote:Function]

// [DEF:closeModal:Function]
/**
 * @purpose Closes the Git management modal.
 * @post show=false.
 */
function closeModal() {
  show = false;
}
// [/DEF:closeModal:Function]

// [DEF:handleBackdropClick:Function]
/**
 * @purpose Closes the modal when the backdrop is clicked.
 * @pre Event originated on the overlay.
 * @post show=false.
 */
function handleBackdropClick(event) {
  if (event.target === event.currentTarget) {
    closeModal();
  }
}
// [/DEF:handleBackdropClick:Function]

onMount(async () => {
  await Promise.all([checkStatus(), loadCurrentEnvironmentStage()]);
});
onMount(checkStatus);
|
||||
</script>
|
||||
|
||||
<!-- [SECTION: TEMPLATE] -->
{#if show}
  <div
    class="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50"
    onclick={handleBackdropClick}
  >
    <div
      class="relative bg-white p-6 rounded-lg shadow-2xl w-full max-w-4xl max-h-[90vh] overflow-y-auto"
      onclick={(event) => event.stopPropagation()}
    >
      <button
        type="button"
        onclick={closeModal}
        class="absolute right-4 top-4 z-10 rounded p-1 text-gray-400 hover:text-gray-700 hover:bg-gray-100 transition-colors"
        aria-label={$t.common?.close || "Close"}
      >
        <svg xmlns="http://www.w3.org/2000/svg" class="h-6 w-6" fill="none" viewBox="0 0 24 24" stroke="currentColor">
          <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />
        </svg>
      </button>
      <PageHeader title={`${$t.git?.management}: ${dashboardTitle}`}>
        <div slot="subtitle" class="text-sm text-gray-500">{$t.common?.id}: {dashboardId}</div>
        <div slot="actions">
          <button type="button" onclick={closeModal} class="text-gray-400 hover:text-gray-600 transition-colors">
            <svg xmlns="http://www.w3.org/2000/svg" class="h-6 w-6" fill="none" viewBox="0 0 24 24" stroke="currentColor">
              <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />
            </svg>
@@ -493,19 +201,10 @@
            bind:value={remoteUrl}
            placeholder={$t.git?.remote_url_placeholder}
          />
          <Button
            variant="secondary"
            onclick={handleCreateRemoteRepo}
            disabled={creatingRemoteRepo || configs.length === 0 || !selectedConfigId}
            isLoading={creatingRemoteRepo}
            class="w-full"
          >
            Create repo
          </Button>

          <Button
            onclick={handleInit}
            disabled={loading || configs.length === 0 || creatingRemoteRepo}
            isLoading={loading}
            class="w-full"
          >
@@ -520,7 +219,7 @@
        <div class="md:col-span-1 space-y-6">
          <section>
            <h3 class="text-sm font-semibold text-gray-400 uppercase tracking-wider mb-3">{$t.git.branch}</h3>
            <BranchSelector {dashboardId} {envId} bind:currentBranch />
          </section>

          <section class="space-y-3">
@@ -560,59 +259,6 @@
          </div>
        </section>

        <section class="space-y-3">
          <h3 class="text-sm font-semibold text-gray-400 uppercase tracking-wider mb-3">Promote</h3>
          {#if currentEnvStage}
            <div class="rounded-lg border border-amber-200 bg-amber-50 px-3 py-2 text-xs text-amber-800">
              Stage: {currentEnvStage}
              {#if preferredDeployTargetStage}
                | Next deploy target: {preferredDeployTargetStage}
              {/if}
            </div>
          {/if}
          <div class="grid grid-cols-2 gap-2">
            <Input
              label="From branch"
              bind:value={promoteFromBranch}
              placeholder="dev"
            />
            <Input
              label="To branch"
              bind:value={promoteToBranch}
              placeholder="preprod"
            />
          </div>
          <Select
            label="Promotion mode"
            bind:value={promoteMode}
            options={[
              { value: 'mr', label: 'Create MR/PR (Safe)' },
              { value: 'direct', label: 'Direct merge without MR (Unsafe)' },
            ]}
          />
          {#if promoteMode === 'direct'}
            <div class="rounded-lg border border-red-300 bg-red-50 p-3 text-sm text-red-800">
              <div class="font-semibold">Warning: unsafe direct promote</div>
              <div class="mt-1">This bypasses MR approval rules and records a policy violation in the logs.</div>
            </div>
            <Input
              label="Reason (required)"
              bind:value={promoteReason}
              placeholder="Why is the MR being bypassed?"
            />
          {/if}
          <Button
            onclick={handlePromote}
            disabled={promoting || loading}
            isLoading={promoting}
            class={`w-full ${promoteMode === 'direct' ? 'bg-red-600 hover:bg-red-700 focus-visible:ring-red-500' : ''}`}
          >
            {promoteMode === 'direct'
              ? 'Direct promote (unsafe)'
              : 'Create MR/PR for promote'}
          </Button>
        </section>

        <section>
          <h3 class="text-sm font-semibold text-gray-400 uppercase tracking-wider mb-3">{$t.git.deployment}</h3>
          <Button
@@ -628,7 +274,7 @@

      <!-- Right Column: History -->
      <div class="md:col-span-2 border-l pl-6">
        <CommitHistory {dashboardId} {envId} />
      </div>
    </div>
  {/if}
@@ -638,15 +284,12 @@

<CommitModal
  {dashboardId}
  {envId}
  bind:show={showCommitModal}
  on:commit={() => { /* Refresh history */ }}
/>

<DeploymentModal
  {dashboardId}
  {envId}
  preferredTargetStage={preferredDeployTargetStage}
  bind:show={showDeployModal}
/>

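The backdrop-close pattern used in the component above boils down to comparing `event.target` with `event.currentTarget`. A minimal standalone sketch (the `makeBackdropHandler` factory name is hypothetical, introduced only for illustration):

```javascript
// Close only when the click lands on the overlay itself,
// not on a child element such as the inner dialog.
function makeBackdropHandler(close) {
  return (event) => {
    if (event.target === event.currentTarget) {
      close();
    }
  };
}

let closed = false;
const handler = makeBackdropHandler(() => { closed = true; });
const overlay = {};

handler({ target: overlay, currentTarget: overlay }); // click on the backdrop
console.log(closed); // true

closed = false;
handler({ target: {}, currentTarget: overlay }); // click inside the dialog
console.log(closed); // false
```

Because a click inside the dialog bubbles up with a different `target`, the inner `stopPropagation` handler in the template is a belt-and-suspenders measure rather than a strict requirement.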
@@ -1,12 +1,12 @@
// [DEF:api_module:Module]
// @TIER: STANDARD
// @SEMANTICS: api, client, fetch, rest
// @PURPOSE: Handles all communication with the backend API.
// @LAYER: Infra-API

import { addToast } from './toasts.js';
import { PUBLIC_WS_URL } from '$env/static/public';

const API_BASE_URL = '/api';

// [DEF:buildApiError:Function]
@@ -40,69 +40,52 @@ function notifyApiError(error) {
  addToast(error.message, 'error');
}
// [/DEF:notifyApiError:Function]

// [DEF:shouldSuppressApiErrorToast:Function]
// @PURPOSE: Avoid noisy toasts for expected non-critical API failures.
// @PRE: endpoint can be empty; error can be null.
// @POST: Returns true only for explicitly allowed suppressed scenarios.
function shouldSuppressApiErrorToast(endpoint, error) {
  const isGitStatusEndpoint =
    typeof endpoint === 'string' &&
    endpoint.startsWith('/git/repositories/') &&
    endpoint.endsWith('/status');
  const isNoRepoError =
    (error?.status === 400 || error?.status === 404) &&
    /Repository for dashboard .* not found/i.test(String(error?.message || ''));

  return isGitStatusEndpoint && isNoRepoError;
}
// [/DEF:shouldSuppressApiErrorToast:Function]
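The suppression predicate can be exercised with representative inputs. Restated standalone below (same logic as in the module; the sample error objects are illustrative):

```javascript
// Standalone copy of the suppression predicate, for illustration only.
function shouldSuppressApiErrorToast(endpoint, error) {
  const isGitStatusEndpoint =
    typeof endpoint === 'string' &&
    endpoint.startsWith('/git/repositories/') &&
    endpoint.endsWith('/status');
  const isNoRepoError =
    (error?.status === 400 || error?.status === 404) &&
    /Repository for dashboard .* not found/i.test(String(error?.message || ''));
  return isGitStatusEndpoint && isNoRepoError;
}

// Expected "no repository yet" probe on the git status endpoint: suppressed.
console.log(shouldSuppressApiErrorToast(
  '/git/repositories/42/status',
  { status: 404, message: 'Repository for dashboard 42 not found' },
)); // true

// The same error on an unrelated endpoint is still surfaced to the user.
console.log(shouldSuppressApiErrorToast(
  '/dashboards',
  { status: 404, message: 'Repository for dashboard 42 not found' },
)); // false
```

Both the endpoint shape and the error message must match, so the suppression cannot accidentally hide unrelated 404s.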

// [DEF:getWsUrl:Function]
// @PURPOSE: Returns the WebSocket URL for a specific task, with fallback logic.
// @PRE: taskId is provided.
// @POST: Returns a valid WebSocket URL string.
// @PARAM: taskId (string) - The ID of the task.
// @RETURN: string - The WebSocket URL.
export const getWsUrl = (taskId) => {
  let baseUrl = PUBLIC_WS_URL;
  if (!baseUrl) {
    const protocol = window.location.protocol === 'https:' ? 'wss:' : 'ws:';
    // Use the current host and port so the Vite proxy can handle the connection
    baseUrl = `${protocol}//${window.location.host}`;
  }
  return `${baseUrl}/ws/logs/${taskId}`;
};
// [/DEF:getWsUrl:Function]

// [DEF:getAuthHeaders:Function]
// @PURPOSE: Returns headers with Authorization if a token exists.
function getAuthHeaders() {
  const headers = {
    'Content-Type': 'application/json',
  };
  if (typeof window !== 'undefined') {
    const token = localStorage.getItem('auth_token');
    if (token) {
      headers['Authorization'] = `Bearer ${token}`;
    }
  }
  return headers;
}
// [/DEF:getAuthHeaders:Function]
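The header-building logic in `getAuthHeaders` depends on `window` and `localStorage`. A standalone sketch with the token source injected so it runs anywhere (`buildAuthHeaders` is a hypothetical name used only for this illustration):

```javascript
// Same shape as getAuthHeaders above, but the token lookup is injected
// instead of read from localStorage, so it works outside the browser.
function buildAuthHeaders(getToken) {
  const headers = { 'Content-Type': 'application/json' };
  const token = getToken();
  if (token) {
    headers['Authorization'] = `Bearer ${token}`;
  }
  return headers;
}

console.log(buildAuthHeaders(() => 'abc123'));
// Content-Type plus Authorization: 'Bearer abc123'
console.log(buildAuthHeaders(() => null));
// Content-Type only; no Authorization header
```

Keeping the `typeof window !== 'undefined'` guard in the real module is what makes it safe under SvelteKit server-side rendering.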

// [DEF:fetchApi:Function]
// @PURPOSE: Generic GET request wrapper.
// @PRE: endpoint string is provided.
// @POST: Returns Promise resolving to JSON data or throws on error.
// @PARAM: endpoint (string) - API endpoint.
// @RETURN: Promise<any> - JSON response.
async function fetchApi(endpoint) {
  try {
    console.log(`[api.fetchApi][Action] Fetching from context={{'endpoint': '${endpoint}'}}`);
    const response = await fetch(`${API_BASE_URL}${endpoint}`, {
      headers: getAuthHeaders()
    });
    console.log(`[api.fetchApi][Action] Received response context={{'status': ${response.status}, 'ok': ${response.ok}}}`);
    if (!response.ok) {
      throw await buildApiError(response);
@@ -146,22 +129,22 @@ async function fetchApiBlob(endpoint, options = {}) {
  }
}
// [/DEF:fetchApiBlob:Function]

// [DEF:postApi:Function]
// @PURPOSE: Generic POST request wrapper.
// @PRE: endpoint and body are provided.
// @POST: Returns Promise resolving to JSON data or throws on error.
// @PARAM: endpoint (string) - API endpoint.
// @PARAM: body (object) - Request payload.
// @RETURN: Promise<any> - JSON response.
async function postApi(endpoint, body) {
  try {
    console.log(`[api.postApi][Action] Posting to context={{'endpoint': '${endpoint}'}}`);
    const response = await fetch(`${API_BASE_URL}${endpoint}`, {
      method: 'POST',
      headers: getAuthHeaders(),
      body: JSON.stringify(body),
    });
    console.log(`[api.postApi][Action] Received response context={{'status': ${response.status}, 'ok': ${response.ok}}}`);
    if (!response.ok) {
      throw await buildApiError(response);
@@ -174,22 +157,22 @@ async function postApi(endpoint, body) {
    throw error;
  }
}
// [/DEF:postApi:Function]

// [DEF:requestApi:Function]
// @PURPOSE: Generic request wrapper.
// @PRE: endpoint and method are provided.
// @POST: Returns Promise resolving to JSON data or throws on error.
async function requestApi(endpoint, method = 'GET', body = null) {
  try {
    console.log(`[api.requestApi][Action] ${method} to context={{'endpoint': '${endpoint}'}}`);
    const options = {
      method,
      headers: getAuthHeaders(),
    };
    if (body) {
      options.body = JSON.stringify(body);
    }
    const response = await fetch(`${API_BASE_URL}${endpoint}`, options);
    console.log(`[api.requestApi][Action] Received response context={{'status': ${response.status}, 'ok': ${response.ok}}}`);
    if (!response.ok) {
@@ -204,129 +187,124 @@ async function requestApi(endpoint, method = 'GET', body = null) {
    return await response.json();
  } catch (error) {
    console.error(`[api.requestApi][Coherence:Failed] Error ${method} to ${endpoint}:`, error);
    if (!shouldSuppressApiErrorToast(endpoint, error)) {
      notifyApiError(error);
    }
    throw error;
  }
}
// [/DEF:requestApi:Function]

// [DEF:api:Data]
// @PURPOSE: API client object with specific methods.
export const api = {
  fetchApi,
  postApi,
  requestApi,
  getPlugins: () => fetchApi('/plugins'),
  getTasks: (options = {}) => {
    const params = new URLSearchParams();
    if (options.limit != null) params.append('limit', String(options.limit));
    if (options.offset != null) params.append('offset', String(options.offset));
    if (options.status) params.append('status', options.status);
    if (options.task_type) params.append('task_type', options.task_type);
    if (options.completed_only != null) params.append('completed_only', String(Boolean(options.completed_only)));
    if (Array.isArray(options.plugin_id)) {
      options.plugin_id.forEach((pluginId) => params.append('plugin_id', pluginId));
    }
    const query = params.toString();
    return fetchApi(`/tasks${query ? `?${query}` : ''}`);
  },
  getTask: (taskId) => fetchApi(`/tasks/${taskId}`),
  getTaskLogs: (taskId, options = {}) => {
    const params = new URLSearchParams();
    if (options.level) params.append('level', options.level);
    if (options.source) params.append('source', options.source);
    if (options.search) params.append('search', options.search);
    if (options.offset != null) params.append('offset', String(options.offset));
    if (options.limit != null) params.append('limit', String(options.limit));
    const query = params.toString();
    return fetchApi(`/tasks/${taskId}/logs${query ? `?${query}` : ''}`);
  },
  createTask: (pluginId, params) => postApi('/tasks', { plugin_id: pluginId, params }),

  // Settings
  getSettings: () => fetchApi('/settings'),
  updateGlobalSettings: (settings) => requestApi('/settings/global', 'PATCH', settings),
  getEnvironments: () => fetchApi('/settings/environments'),
  addEnvironment: (env) => postApi('/settings/environments', env),
  updateEnvironment: (id, env) => requestApi(`/settings/environments/${id}`, 'PUT', env),
  deleteEnvironment: (id) => requestApi(`/settings/environments/${id}`, 'DELETE'),
  testEnvironmentConnection: (id) => postApi(`/settings/environments/${id}/test`, {}),
  updateEnvironmentSchedule: (id, schedule) => requestApi(`/environments/${id}/schedule`, 'PUT', schedule),
  getStorageSettings: () => fetchApi('/settings/storage'),
  updateStorageSettings: (storage) => requestApi('/settings/storage', 'PUT', storage),
  getEnvironmentsList: () => fetchApi('/environments'),
  getLlmStatus: () => fetchApi('/llm/status'),
  getEnvironmentDatabases: (id) => fetchApi(`/environments/${id}/databases`),
  getStorageFileBlob: (path) =>
    fetchApiBlob(`/storage/file?path=${encodeURIComponent(path)}`),

  // Dashboards
  getDashboards: (envId, options = {}) => {
    const params = new URLSearchParams({ env_id: envId });
    if (options.search) params.append('search', options.search);
    if (options.page) params.append('page', options.page);
    if (options.page_size) params.append('page_size', options.page_size);
    return fetchApi(`/dashboards?${params.toString()}`);
  },
  getDashboardDetail: (envId, dashboardRef) => fetchApi(`/dashboards/${encodeURIComponent(String(dashboardRef))}?env_id=${envId}`),
  getDashboardTaskHistory: (envId, dashboardRef, options = {}) => {
    const params = new URLSearchParams();
    if (envId) params.append('env_id', envId);
    if (options.limit) params.append('limit', options.limit);
    return fetchApi(`/dashboards/${encodeURIComponent(String(dashboardRef))}/tasks?${params.toString()}`);
  },
  getDashboardThumbnail: (envId, dashboardRef, options = {}) => {
    const params = new URLSearchParams();
    params.append('env_id', envId);
    if (options.force != null) params.append('force', String(Boolean(options.force)));
    return fetchApiBlob(`/dashboards/${encodeURIComponent(String(dashboardRef))}/thumbnail?${params.toString()}`, { notifyError: false });
  },
  getDatabaseMappings: (sourceEnvId, targetEnvId) => fetchApi(`/dashboards/db-mappings?source_env_id=${sourceEnvId}&target_env_id=${targetEnvId}`),
  calculateMigrationDryRun: (payload) => postApi('/migration/dry-run', payload),

  // Datasets
  getDatasets: (envId, options = {}) => {
    const params = new URLSearchParams({ env_id: envId });
    if (options.search) params.append('search', options.search);
    if (options.page) params.append('page', options.page);
    if (options.page_size) params.append('page_size', options.page_size);
    return fetchApi(`/datasets?${params.toString()}`);
  },
  getDatasetIds: (envId, options = {}) => {
    const params = new URLSearchParams({ env_id: envId });
    if (options.search) params.append('search', options.search);
    return fetchApi(`/datasets/ids?${params.toString()}`);
  },
  getDatasetDetail: (envId, datasetId) => fetchApi(`/datasets/${datasetId}?env_id=${envId}`),

  // Settings
  getConsolidatedSettings: () => fetchApi('/settings/consolidated'),
  updateConsolidatedSettings: (settings) => requestApi('/settings/consolidated', 'PATCH', settings),
};
// [/DEF:api:Data]

// [/DEF:api_module:Module]

// Export individual functions for easier use in components
export { requestApi };
export const getPlugins = api.getPlugins;
export const getTasks = api.getTasks;
export const getTask = api.getTask;
export const createTask = api.createTask;
export const getSettings = api.getSettings;
export const updateGlobalSettings = api.updateGlobalSettings;
export const getEnvironments = api.getEnvironments;
export const addEnvironment = api.addEnvironment;
export const updateEnvironment = api.updateEnvironment;
export const deleteEnvironment = api.deleteEnvironment;
export const testEnvironmentConnection = api.testEnvironmentConnection;
export const updateEnvironmentSchedule = api.updateEnvironmentSchedule;
export const getEnvironmentsList = api.getEnvironmentsList;
export const getStorageSettings = api.getStorageSettings;
export const updateStorageSettings = api.updateStorageSettings;
export const getDashboards = api.getDashboards;
export const getDatasets = api.getDatasets;
export const getConsolidatedSettings = api.getConsolidatedSettings;
export const updateConsolidatedSettings = api.updateConsolidatedSettings;
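The query-string building used by methods such as `api.getTasks` can be sketched standalone (`buildTasksQuery` is a hypothetical helper mirroring the in-object logic):

```javascript
// Mirrors the getTasks URL construction: optional scalar filters plus a
// repeatable plugin_id parameter, with the "?" omitted when nothing is set.
function buildTasksQuery(options = {}) {
  const params = new URLSearchParams();
  if (options.limit != null) params.append('limit', String(options.limit));
  if (options.offset != null) params.append('offset', String(options.offset));
  if (options.status) params.append('status', options.status);
  if (Array.isArray(options.plugin_id)) {
    options.plugin_id.forEach((pluginId) => params.append('plugin_id', pluginId));
  }
  const query = params.toString();
  return `/tasks${query ? `?${query}` : ''}`;
}

console.log(buildTasksQuery({ limit: 10, plugin_id: ['a', 'b'] }));
// /tasks?limit=10&plugin_id=a&plugin_id=b
console.log(buildTasksQuery()); // /tasks
```

Using `URLSearchParams` keeps encoding correct and lets `plugin_id` repeat, which most backends parse as a list.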
@@ -1,5 +1,5 @@
// [DEF:frontend.src.lib.api.__tests__.reports_api:Module]
// @TIER: STANDARD
// @SEMANTICS: tests, reports, api-client, query-string, error-normalization
// @PURPOSE: Unit tests for reports API client functions: query string building, error normalization, and fetch wrappers.
// @LAYER: Infra (Tests)

@@ -75,14 +75,4 @@ export function getAssistantConversations(
  return requestApi(`/assistant/conversations?${params.toString()}`, 'GET');
}
// [/DEF:getAssistantConversations:Function]

// [DEF:deleteAssistantConversation:Function]
// @PURPOSE: Soft-delete or hard-delete a conversation.
// @PRE: conversationId string is provided.
// @POST: Returns success status.
export function deleteAssistantConversation(conversationId) {
  return requestApi(`/assistant/conversations/${conversationId}`, 'DELETE');
}
// [/DEF:deleteAssistantConversation:Function]

// [/DEF:frontend.src.lib.api.assistant:Module]

@@ -59,20 +59,6 @@ export function normalizeApiError(error) {
// @PURPOSE: Fetch unified report list using existing request wrapper.
// @PRE: valid auth context for protected endpoint.
// @POST: Returns parsed payload or structured error for UI-state mapping.
//
// @TEST_CONTRACT: GetReportsApi ->
// {
//   required_fields: {},
//   optional_fields: {options: Object},
//   invariants: [
//     "Fetches from /reports with built query string",
//     "Returns response payload on success",
//     "Catches and normalizes errors using normalizeApiError"
//   ]
// }
// @TEST_FIXTURE: valid_get_reports -> {"options": {"page": 1}}
// @TEST_EDGE: api_fetch_failure -> api.fetchApi throws error
// @TEST_INVARIANT: error_normalization -> verifies: [api_fetch_failure]
export async function getReports(options = {}) {
  try {
    console.log("[reports][api][getReports:STARTED]", options);

@@ -23,21 +23,6 @@
 * @UX_TEST: NeedsConfirmation -> {click: confirm action, expected: started response with task_id}
 * @TEST_DATA: assistant_llm_ready -> {"llmStatus":{"configured":true,"reason":"ok"},"messages":[{"role":"assistant","text":"Ready","state":"success"}]}
 * @TEST_DATA: assistant_llm_not_configured -> {"llmStatus":{"configured":false,"reason":"invalid_api_key"}}
 *
 * @TEST_CONTRACT Component_AssistantChatPanel ->
 * {
 *   required_props: {},
 *   optional_props: {},
 *   invariants: [
 *     "Loads history and LLM status on mount/open",
 *     "Appends messages and sends to API correctly",
 *     "Handles action buttons (confirm, open task) properly"
 *   ]
 * }
 * @TEST_FIXTURE chat_open -> {"isOpen": true, "messages": [{"role": "assistant", "text": "Hello"}]}
 * @TEST_EDGE server_error -> Appends error message with "failed" state
 * @TEST_EDGE llm_not_ready -> Renders LLM config warning banner
 * @TEST_INVARIANT action_handling -> verifies: [chat_open]
 */

import { onMount } from "svelte";
@@ -56,7 +41,6 @@
  cancelAssistantOperation,
  getAssistantHistory,
  getAssistantConversations,
  deleteAssistantConversation,
} from "$lib/api/assistant.js";
import { api } from "$lib/api.js";
import { gitService } from "../../../services/gitService.js";
@@ -176,32 +160,6 @@
  }
  // [/DEF:loadConversations:Function]

  // [DEF:removeConversation:Function]
  // @PURPOSE: Removes a conversation from the list and deletes it from the backend.
  // @PRE: conversationId string is provided.
  // @POST: The conversation is soft-deleted via the API and removed from the local UI; if it was active, chat state is reset.
  async function removeConversation(e, conversationIdTemp) {
    if (e) {
      e.stopPropagation();
      e.preventDefault();
    }
    try {
      await deleteAssistantConversation(conversationIdTemp);
      conversations = conversations.filter(
        (c) => c.conversation_id !== conversationIdTemp,
      );
      if (conversationId === conversationIdTemp) {
        $assistantChatStore.conversationId = null;
        $assistantChatStore.messages = [];
        $assistantChatStore.state = "idle";
      }
      addToast("Conversation deleted", "success");
    } catch (err) {
      addToast("Failed to delete conversation: " + err.message, "error");
    }
  }
  // [/DEF:removeConversation:Function]
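The list-update logic in `removeConversation` can be captured as a pure function (the `applyConversationRemoval` name and its state shape are hypothetical, for illustration): drop the deleted conversation, and reset the active state only when the deleted one was active.

```javascript
// Pure version of the state transition performed by removeConversation:
// filter the deleted conversation out, and clear active-chat state only
// when the deleted conversation was the active one.
function applyConversationRemoval(state, deletedId) {
  const conversations = state.conversations.filter(
    (c) => c.conversation_id !== deletedId,
  );
  const wasActive = state.activeId === deletedId;
  return {
    conversations,
    activeId: wasActive ? null : state.activeId,
    messages: wasActive ? [] : state.messages,
  };
}

const next = applyConversationRemoval(
  {
    conversations: [{ conversation_id: 'a' }, { conversation_id: 'b' }],
    activeId: 'a',
    messages: [{ role: 'user', text: 'hi' }],
  },
  'a',
);
console.log(next.conversations.length); // 1
console.log(next.activeId); // null
```

Factoring the transition out this way makes the reset-if-active rule unit-testable without touching the Svelte store.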
// [DEF:loadOlderMessages:Function]
/**
 * @PURPOSE: Lazy-load older messages for the active conversation when the user scrolls to the top.
@@ -593,22 +551,15 @@

<div class="flex h-[calc(100%-56px)] flex-col">
  {#if !llmReady}
    <div
      class="mx-3 mt-3 rounded-lg border border-rose-300 bg-rose-50 px-3 py-2 text-xs text-rose-800"
    >
      <div class="font-semibold">
        {$t.dashboard?.llm_not_configured || "LLM is not configured"}
      </div>
      <div class="mt-1 text-rose-700">
        {#if llmStatusReason === "no_active_provider"}
          {$t.dashboard?.llm_configure_provider ||
            "No active LLM provider. Configure it in Admin -> LLM Settings."}
        {:else if llmStatusReason === "invalid_api_key"}
          {$t.dashboard?.llm_configure_key ||
            "Invalid LLM API key. Update and save a real key in Admin -> LLM Settings."}
        {:else}
          {$t.dashboard?.llm_status_unavailable ||
            "LLM status is unavailable. Check settings and backend logs."}
        {/if}
      </div>
    </div>
@@ -649,30 +600,21 @@
 </div>
 <div class="flex gap-2 overflow-x-auto pb-1">
   {#each conversations as convo (convo.conversation_id)}
-    <div class="relative group min-w-[140px] max-w-[220px]">
-      <button
-        class="w-full rounded-lg border px-2.5 py-1.5 text-left text-xs transition {convo.conversation_id ===
-        conversationId
-          ? 'border-sky-300 bg-sky-50 text-sky-900'
-          : 'border-slate-200 bg-white text-slate-700 hover:bg-slate-50'}"
-        on:click={() => selectConversation(convo)}
-        title={formatConversationTime(convo.updated_at)}
-      >
-        <div class="truncate font-semibold pr-4">
-          {buildConversationTitle(convo)}
-        </div>
-        <div class="truncate text-[10px] text-slate-500 pr-4">
-          {convo.last_message || ""}
-        </div>
-      </button>
-      <button
-        class="absolute right-1.5 top-1.5 hidden group-hover:block p-1 text-slate-400 hover:text-red-500 rounded bg-white/80 hover:bg-red-50"
-        on:click={(e) => removeConversation(e, convo.conversation_id)}
-        title="Delete conversation"
-      >
-        <Icon name="trash" size={12} />
-      </button>
-    </div>
+    <button
+      class="min-w-[140px] max-w-[220px] rounded-lg border px-2.5 py-1.5 text-left text-xs transition {convo.conversation_id ===
+      conversationId
+        ? 'border-sky-300 bg-sky-50 text-sky-900'
+        : 'border-slate-200 bg-white text-slate-700 hover:bg-slate-50'}"
+      on:click={() => selectConversation(convo)}
+      title={formatConversationTime(convo.updated_at)}
+    >
+      <div class="truncate font-semibold">
+        {buildConversationTitle(convo)}
+      </div>
+      <div class="truncate text-[10px] text-slate-500">
+        {convo.last_message || ""}
+      </div>
+    </button>
   {/each}
   {#if loadingConversations}
     <div
@@ -816,9 +758,7 @@
   bind:value={input}
   rows="2"
   placeholder={$t.assistant?.input_placeholder}
-  class="min-h-[52px] w-full resize-y rounded-lg border px-3 py-2 text-sm outline-none transition {llmReady
-    ? 'border-slate-300 focus:border-sky-400 focus:ring-2 focus:ring-sky-100'
-    : 'border-rose-300 bg-rose-50 focus:border-rose-400 focus:ring-2 focus:ring-rose-100'}"
+  class="min-h-[52px] w-full resize-y rounded-lg border px-3 py-2 text-sm outline-none transition {llmReady ? 'border-slate-300 focus:border-sky-400 focus:ring-2 focus:ring-sky-100' : 'border-rose-300 bg-rose-50 focus:border-rose-400 focus:ring-2 focus:ring-rose-100'}"
   on:keydown={handleKeydown}
 ></textarea>
 <button
@@ -13,12 +13,6 @@ vi.mock('$lib/api/assistant', () => ({
   sendAssistantMessage: vi.fn()
 }));

-vi.mock('$lib/api', () => ({
-  api: {
-    getLlmStatus: vi.fn(() => Promise.resolve({ configured: true }))
-  }
-}));
-
 vi.mock('$lib/toasts', () => ({
   addToast: vi.fn()
 }));
@@ -55,16 +49,15 @@ vi.mock('$lib/i18n', () => ({

 describe('AssistantChatPanel confirmation functional tests', () => {
   const mockMessage = {
-    message_id: 'msg-123',
+    id: 'msg-123',
     role: 'assistant',
     text: 'Confirm migration?',
     created_at: new Date().toISOString(),
     conversation_id: 'conv-1',
-    confirmation_id: 'conf-123',
-    actions: [
-      { type: 'confirm', label: 'Confirm' },
-      { type: 'cancel', label: 'Cancel' }
-    ]
+    confirmation: {
+      id: 'conf-123',
+      type: 'migration_execute',
+      status: 'pending'
+    }
   };

   beforeEach(() => {
@@ -73,16 +66,20 @@ describe('AssistantChatPanel confirmation functional tests', () => {

   it('renders action buttons and triggers confirm API call', async () => {
-    // Mock getAssistantHistory to return our message
-    api.getAssistantHistory.mockImplementation(async () => ({
+    api.getAssistantHistory.mockResolvedValue({
       items: [mockMessage],
       total: 1,
       has_next: false
-    }));
+    });

     render(AssistantChatPanel);

-    // Wait for message to render
-    const confirmBtn = await screen.findByText('Confirm', {}, { timeout: 3000 });
+    await waitFor(() => {
+      expect(screen.getByText('Confirm migration?')).toBeTruthy();
+    });
+
+    const confirmBtn = screen.getByText('Confirm');
+    expect(confirmBtn).toBeTruthy();

     await fireEvent.click(confirmBtn);
@@ -91,38 +88,40 @@
   });

   it('triggers cancel API call when cancel button is clicked', async () => {
-    api.getAssistantHistory.mockImplementation(async () => ({
+    api.getAssistantHistory.mockResolvedValue({
       items: [mockMessage],
       total: 1,
       has_next: false
-    }));
+    });

     render(AssistantChatPanel);

-    const cancelBtn = await screen.findByText('Cancel', {}, { timeout: 3000 });
+    await waitFor(() => {
+      expect(screen.getByText('Cancel')).toBeTruthy();
+    });
+
+    const cancelBtn = screen.getByText('Cancel');
     await fireEvent.click(cancelBtn);

     expect(api.cancelAssistantOperation).toHaveBeenCalledWith('conf-123');
   });

   it('shows toast error when action fails', async () => {
-    api.getAssistantHistory.mockImplementation(async () => ({
+    api.getAssistantHistory.mockResolvedValue({
       items: [mockMessage],
       total: 1,
       has_next: false
-    }));
-    api.confirmAssistantOperation.mockImplementation(async () => {
-      throw new Error('Network error');
-    });
+    });
+    api.confirmAssistantOperation.mockRejectedValue(new Error('Network error'));

     render(AssistantChatPanel);

-    const confirmBtn = await screen.findByText('Confirm', {}, { timeout: 3000 });
-    await fireEvent.click(confirmBtn);
+    await waitFor(() => screen.getByText('Confirm'));
+    await fireEvent.click(screen.getByText('Confirm'));

     await waitFor(() => {
       // The component appends a failed message to the chat
       expect(screen.getAllByText(/Network error/)).toBeTruthy();
     }, { timeout: 3000 });
   });
 });
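The test refactors above repeatedly replace `mockImplementation(async () => v)` with `mockResolvedValue(v)`, and a thrown-error implementation with `mockRejectedValue(err)`. In vitest (as in jest), these are shorthands for an async implementation that resolves to, or rejects with, the given value. A hand-rolled stub — hypothetical, not the real `vi.fn()` — makes the equivalence concrete:

```javascript
// Minimal stub illustrating why mockResolvedValue(v) and
// mockImplementation(async () => v) behave the same: both make every
// call return a Promise that resolves to v.
function makeStub() {
  let impl = () => undefined;
  const stub = (...args) => impl(...args);
  stub.mockImplementation = (fn) => { impl = fn; return stub; };
  stub.mockResolvedValue = (v) => stub.mockImplementation(async () => v);
  stub.mockRejectedValue = (e) => stub.mockImplementation(async () => { throw e; });
  return stub;
}

// Both spellings produce the same resolved value.
const a = makeStub().mockResolvedValue({ total: 1 });
const b = makeStub().mockImplementation(async () => ({ total: 1 }));
Promise.all([a(), b()]).then(([x, y]) => {
  console.log(x.total === y.total); // true
});
```

The shorthand is not just shorter: it states the intent (a stubbed resolved value) directly, which is why the diff prefers it wherever the mock body held no logic.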
@@ -12,20 +12,6 @@
  * @UX_STATE: Toggling -> Animation plays for 200ms
  * @UX_FEEDBACK: Active item highlighted with different background
  * @UX_RECOVERY: Click outside on mobile closes overlay
- *
- * @TEST_CONTRACT Component_Sidebar ->
- *   {
- *     required_props: {},
- *     optional_props: {},
- *     invariants: [
- *       "Highlights active category and sub-item based on current page URL",
- *       "Toggles sidebar via toggleSidebar store action",
- *       "Closes mobile overlay on click outside"
- *     ]
- *   }
- * @TEST_FIXTURE idle_state -> {}
- * @TEST_EDGE mobile_open -> shows mobile overlay mask
- * @TEST_INVARIANT navigation -> verifies: [idle_state]
  */

 import { onMount } from "svelte";
@@ -44,52 +30,56 @@
   return [
     {
       id: "dashboards",
-      label: $t.nav?.dashboards,
+      label: $t.nav?.dashboards ,
       icon: "dashboard",
       tone: "from-sky-100 to-sky-200 text-sky-700 ring-sky-200",
       path: "/dashboards",
-      subItems: [{ label: $t.nav?.overview, path: "/dashboards" }],
+      subItems: [
+        { label: $t.nav?.overview , path: "/dashboards" },
+      ],
     },
     {
       id: "datasets",
-      label: $t.nav?.datasets,
+      label: $t.nav?.datasets ,
       icon: "database",
       tone: "from-emerald-100 to-emerald-200 text-emerald-700 ring-emerald-200",
       path: "/datasets",
-      subItems: [{ label: $t.nav?.all_datasets, path: "/datasets" }],
+      subItems: [
+        { label: $t.nav?.all_datasets , path: "/datasets" },
+      ],
     },
     {
       id: "storage",
-      label: $t.nav?.storage,
+      label: $t.nav?.storage ,
       icon: "storage",
       tone: "from-amber-100 to-amber-200 text-amber-800 ring-amber-200",
       path: "/storage",
       subItems: [
-        { label: $t.nav?.backups, path: "/storage/backups" },
+        { label: $t.nav?.backups , path: "/storage/backups" },
         {
-          label: $t.nav?.repositories,
+          label: $t.nav?.repositories ,
           path: "/storage/repos",
         },
       ],
     },
     {
       id: "reports",
-      label: $t.nav?.reports,
+      label: $t.nav?.reports ,
       icon: "reports",
       tone: "from-violet-100 to-fuchsia-100 text-violet-700 ring-violet-200",
       path: "/reports",
-      subItems: [{ label: $t.nav?.reports, path: "/reports" }],
+      subItems: [{ label: $t.nav?.reports , path: "/reports" }],
     },
     {
       id: "admin",
-      label: $t.nav?.admin,
+      label: $t.nav?.admin ,
       icon: "admin",
       tone: "from-rose-100 to-rose-200 text-rose-700 ring-rose-200",
       path: "/admin",
       subItems: [
-        { label: $t.nav?.admin_users, path: "/admin/users" },
-        { label: $t.nav?.admin_roles, path: "/admin/roles" },
-        { label: $t.nav?.settings, path: "/settings" },
+        { label: $t.nav?.admin_users , path: "/admin/users" },
+        { label: $t.nav?.admin_roles , path: "/admin/roles" },
+        { label: $t.nav?.settings , path: "/settings" },
       ],
     },
   ];
@@ -208,12 +198,10 @@
 >
   {#if isExpanded}
     <span class="font-semibold text-gray-800 flex items-center gap-2">
-      <span
-        class="inline-flex h-6 w-6 items-center justify-center rounded-md bg-gradient-to-br from-slate-100 to-slate-200 text-slate-700 ring-1 ring-slate-200"
-      >
+      <span class="inline-flex h-6 w-6 items-center justify-center rounded-md bg-gradient-to-br from-slate-100 to-slate-200 text-slate-700 ring-1 ring-slate-200">
         <Icon name="layers" size={14} />
       </span>
-      {$t.nav?.menu}
+      {$t.nav?.menu }
     </span>
   {:else}
     <span class="text-xs text-gray-500">M</span>
@@ -240,9 +228,7 @@
   aria-expanded={expandedCategories.has(category.id)}
 >
   <div class="flex items-center">
-    <span
-      class="inline-flex h-8 w-8 shrink-0 items-center justify-center rounded-lg bg-gradient-to-br ring-1 transition-all {category.tone}"
-    >
+    <span class="inline-flex h-8 w-8 shrink-0 items-center justify-center rounded-lg bg-gradient-to-br ring-1 transition-all {category.tone}">
       <Icon name={category.icon} size={16} strokeWidth={2} />
     </span>
     {#if isExpanded}
@@ -296,12 +282,10 @@
   class="flex items-center justify-center w-full px-4 py-2 text-sm text-gray-600 hover:bg-gray-100 rounded-lg transition-colors"
   on:click={handleToggleClick}
 >
-  <span
-    class="mr-2 inline-flex h-6 w-6 items-center justify-center rounded-md bg-slate-100 text-slate-600"
-  >
+  <span class="mr-2 inline-flex h-6 w-6 items-center justify-center rounded-md bg-slate-100 text-slate-600">
     <Icon name="chevronLeft" size={14} />
   </span>
-  {$t.nav?.collapse}
+  {$t.nav?.collapse }
 </button>
 </div>
 {:else}
@@ -309,10 +293,10 @@
 <button
   class="flex items-center justify-center w-full px-4 py-2 text-sm text-gray-600 hover:bg-gray-100 rounded-lg transition-colors"
   on:click={handleToggleClick}
-  aria-label={$t.nav?.expand_sidebar}
+  aria-label={$t.nav?.expand_sidebar }
 >
   <Icon name="chevronRight" size={16} />
-  <span class="ml-2">{$t.nav?.expand}</span>
+  <span class="ml-2">{$t.nav?.expand }</span>
 </button>
 </div>
 {/if}
Some files were not shown because too many files have changed in this diff.