semantic clean up
@@ -2,12 +2,12 @@
|
|||||||
|
|
||||||
> High-level module structure for AI Context. Generated automatically.
|
> High-level module structure for AI Context. Generated automatically.
|
||||||
|
|
||||||
**Generated:** 2026-03-10T11:52:00.326208
|
**Generated:** 2026-03-10T18:26:33.375187
|
||||||
|
|
||||||
## Summary
|
## Summary
|
||||||
|
|
||||||
- **Total Modules:** 103
|
- **Total Modules:** 103
|
||||||
- **Total Entities:** 3063
|
- **Total Entities:** 3077
|
||||||
|
|
||||||
## Module Hierarchy
|
## Module Hierarchy
|
||||||
|
|
||||||
@@ -53,7 +53,7 @@
|
|||||||
|
|
||||||
### 📁 `routes/`
|
### 📁 `routes/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** API, UI (API), UI/API
|
- 🏗️ **Layers:** API, Infra, UI (API), UI/API
|
||||||
- 📊 **Tiers:** CRITICAL: 12, STANDARD: 272, TRIVIAL: 16
|
- 📊 **Tiers:** CRITICAL: 12, STANDARD: 272, TRIVIAL: 16
|
||||||
- 📄 **Files:** 21
|
- 📄 **Files:** 21
|
||||||
- 📦 **Entities:** 300
|
- 📦 **Entities:** 300
|
||||||
@@ -87,7 +87,7 @@
|
|||||||
- 🔗 DEPENDS_ON -> ConfigModels
|
- 🔗 DEPENDS_ON -> ConfigModels
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.database
|
- 🔗 DEPENDS_ON -> backend.src.core.database
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.database.get_db
|
- 🔗 DEPENDS_ON -> backend.src.core.database.get_db
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.superset_client
|
- 🔗 DEPENDS_ON -> backend.src.core.mapping_service
|
||||||
|
|
||||||
### 📁 `__tests__/`
|
### 📁 `__tests__/`
|
||||||
|
|
||||||
@@ -126,7 +126,7 @@
|
|||||||
|
|
||||||
### 📁 `core/`
|
### 📁 `core/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Core
|
- 🏗️ **Layers:** Core, Domain
|
||||||
- 📊 **Tiers:** CRITICAL: 52, STANDARD: 102, TRIVIAL: 9
|
- 📊 **Tiers:** CRITICAL: 52, STANDARD: 102, TRIVIAL: 9
|
||||||
- 📄 **Files:** 12
|
- 📄 **Files:** 12
|
||||||
- 📦 **Entities:** 163
|
- 📦 **Entities:** 163
|
||||||
@@ -140,7 +140,7 @@
|
|||||||
- ℂ **BeliefFormatter** (Class)
|
- ℂ **BeliefFormatter** (Class)
|
||||||
- Custom logging formatter that adds belief state prefixes to ...
|
- Custom logging formatter that adds belief state prefixes to ...
|
||||||
- ℂ **ConfigManager** (Class) `[CRITICAL]`
|
- ℂ **ConfigManager** (Class) `[CRITICAL]`
|
||||||
- A class to handle application configuration persistence and ...
|
- Handles application configuration load, validation, mutation...
|
||||||
- ℂ **IdMappingService** (Class) `[CRITICAL]`
|
- ℂ **IdMappingService** (Class) `[CRITICAL]`
|
||||||
- Service handling the cataloging and retrieval of remote Supe...
|
- Service handling the cataloging and retrieval of remote Supe...
|
||||||
- ℂ **LogEntry** (Class)
|
- ℂ **LogEntry** (Class)
|
||||||
@@ -158,7 +158,7 @@
|
|||||||
|
|
||||||
- 🔗 DEPENDS_ON -> AppConfigRecord
|
- 🔗 DEPENDS_ON -> AppConfigRecord
|
||||||
- 🔗 DEPENDS_ON -> ConfigModels
|
- 🔗 DEPENDS_ON -> ConfigModels
|
||||||
- 🔗 DEPENDS_ON -> PyYAML
|
- 🔗 DEPENDS_ON -> SessionLocal
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.auth.config
|
- 🔗 DEPENDS_ON -> backend.src.core.auth.config
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.logger
|
- 🔗 DEPENDS_ON -> backend.src.core.logger
|
||||||
|
|
||||||
@@ -180,7 +180,7 @@
|
|||||||
|
|
||||||
### 📁 `auth/`
|
### 📁 `auth/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Core
|
- 🏗️ **Layers:** Core, Domain
|
||||||
- 📊 **Tiers:** CRITICAL: 28
|
- 📊 **Tiers:** CRITICAL: 28
|
||||||
- 📄 **Files:** 6
|
- 📄 **Files:** 6
|
||||||
- 📦 **Entities:** 28
|
- 📦 **Entities:** 28
|
||||||
@@ -190,7 +190,7 @@
|
|||||||
- ℂ **AuthConfig** (Class) `[CRITICAL]`
|
- ℂ **AuthConfig** (Class) `[CRITICAL]`
|
||||||
- Holds authentication-related settings.
|
- Holds authentication-related settings.
|
||||||
- ℂ **AuthRepository** (Class) `[CRITICAL]`
|
- ℂ **AuthRepository** (Class) `[CRITICAL]`
|
||||||
- Encapsulates database operations for authentication.
|
- Encapsulates database operations for authentication-related ...
|
||||||
- 📦 **backend.src.core.auth.config** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.auth.config** (Module) `[CRITICAL]`
|
||||||
- Centralized configuration for authentication and authorizati...
|
- Centralized configuration for authentication and authorizati...
|
||||||
- 📦 **backend.src.core.auth.jwt** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.auth.jwt** (Module) `[CRITICAL]`
|
||||||
@@ -200,17 +200,17 @@
|
|||||||
- 📦 **backend.src.core.auth.oauth** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.auth.oauth** (Module) `[CRITICAL]`
|
||||||
- ADFS OIDC configuration and client using Authlib.
|
- ADFS OIDC configuration and client using Authlib.
|
||||||
- 📦 **backend.src.core.auth.repository** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.auth.repository** (Module) `[CRITICAL]`
|
||||||
- Data access layer for authentication-related entities.
|
- Data access layer for authentication and user preference ent...
|
||||||
- 📦 **backend.src.core.auth.security** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.auth.security** (Module) `[CRITICAL]`
|
||||||
- Utility for password hashing and verification using Passlib.
|
- Utility for password hashing and verification using Passlib.
|
||||||
|
|
||||||
**Dependencies:**
|
**Dependencies:**
|
||||||
|
|
||||||
- 🔗 DEPENDS_ON -> authlib
|
- 🔗 DEPENDS_ON -> authlib
|
||||||
|
- 🔗 DEPENDS_ON -> backend.src.core.logger.belief_scope
|
||||||
|
- 🔗 DEPENDS_ON -> backend.src.models.auth
|
||||||
|
- 🔗 DEPENDS_ON -> backend.src.models.profile
|
||||||
- 🔗 DEPENDS_ON -> jose
|
- 🔗 DEPENDS_ON -> jose
|
||||||
- 🔗 DEPENDS_ON -> passlib
|
|
||||||
- 🔗 DEPENDS_ON -> pydantic
|
|
||||||
- 🔗 DEPENDS_ON -> sqlalchemy
|
|
||||||
|
|
||||||
### 📁 `__tests__/`
|
### 📁 `__tests__/`
|
||||||
|
|
||||||
@@ -238,7 +238,7 @@
|
|||||||
|
|
||||||
### 📁 `migration/`
|
### 📁 `migration/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Core
|
- 🏗️ **Layers:** Core, Domain
|
||||||
- 📊 **Tiers:** CRITICAL: 20, TRIVIAL: 1
|
- 📊 **Tiers:** CRITICAL: 20, TRIVIAL: 1
|
||||||
- 📄 **Files:** 4
|
- 📄 **Files:** 4
|
||||||
- 📦 **Entities:** 21
|
- 📦 **Entities:** 21
|
||||||
@@ -256,7 +256,7 @@
|
|||||||
- 📦 **backend.src.core.migration.dry_run_orchestrator** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.migration.dry_run_orchestrator** (Module) `[CRITICAL]`
|
||||||
- Compute pre-flight migration diff and risk scoring without a...
|
- Compute pre-flight migration diff and risk scoring without a...
|
||||||
- 📦 **backend.src.core.migration.risk_assessor** (Module) `[CRITICAL]`
|
- 📦 **backend.src.core.migration.risk_assessor** (Module) `[CRITICAL]`
|
||||||
- Risk evaluation helpers for migration pre-flight reporting.
|
- Compute deterministic migration risk items and aggregate sco...
|
||||||
|
|
||||||
**Dependencies:**
|
**Dependencies:**
|
||||||
|
|
||||||
@@ -364,7 +364,7 @@
|
|||||||
- ℂ **ADGroupMapping** (Class) `[CRITICAL]`
|
- ℂ **ADGroupMapping** (Class) `[CRITICAL]`
|
||||||
- Maps an Active Directory group to a local System Role.
|
- Maps an Active Directory group to a local System Role.
|
||||||
- ℂ **AppConfigRecord** (Class) `[CRITICAL]`
|
- ℂ **AppConfigRecord** (Class) `[CRITICAL]`
|
||||||
- Stores the single source of truth for application configurat...
|
- Stores persisted application configuration as a single autho...
|
||||||
- ℂ **ApprovalDecision** (Class)
|
- ℂ **ApprovalDecision** (Class)
|
||||||
- Approval or rejection bound to a candidate and report.
|
- Approval or rejection bound to a candidate and report.
|
||||||
- ℂ **AssistantAuditRecord** (Class)
|
- ℂ **AssistantAuditRecord** (Class)
|
||||||
@@ -622,10 +622,10 @@
|
|||||||
**Dependencies:**
|
**Dependencies:**
|
||||||
|
|
||||||
- 🔗 DEPENDS_ON -> ValidationRecord
|
- 🔗 DEPENDS_ON -> ValidationRecord
|
||||||
|
- 🔗 DEPENDS_ON -> backend.src.core.auth.jwt.create_access_token
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.auth.repository
|
- 🔗 DEPENDS_ON -> backend.src.core.auth.repository
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.config_manager
|
- 🔗 DEPENDS_ON -> backend.src.core.auth.repository.AuthRepository
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.database
|
- 🔗 DEPENDS_ON -> backend.src.core.auth.security.verify_password
|
||||||
- 🔗 DEPENDS_ON -> backend.src.core.superset_client
|
|
||||||
|
|
||||||
### 📁 `__tests__/`
|
### 📁 `__tests__/`
|
||||||
|
|
||||||
@@ -664,9 +664,9 @@
|
|||||||
### 📁 `clean_release/`
|
### 📁 `clean_release/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Application, Domain, Infra
|
- 🏗️ **Layers:** Application, Domain, Infra
|
||||||
- 📊 **Tiers:** CRITICAL: 9, STANDARD: 44, TRIVIAL: 51
|
- 📊 **Tiers:** CRITICAL: 9, STANDARD: 46, TRIVIAL: 50
|
||||||
- 📄 **Files:** 21
|
- 📄 **Files:** 21
|
||||||
- 📦 **Entities:** 104
|
- 📦 **Entities:** 105
|
||||||
|
|
||||||
**Key Entities:**
|
**Key Entities:**
|
||||||
|
|
||||||
@@ -701,9 +701,9 @@
|
|||||||
### 📁 `__tests__/`
|
### 📁 `__tests__/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Domain, Infra, Unknown
|
- 🏗️ **Layers:** Domain, Infra, Unknown
|
||||||
- 📊 **Tiers:** STANDARD: 18, TRIVIAL: 25
|
- 📊 **Tiers:** STANDARD: 25, TRIVIAL: 25
|
||||||
- 📄 **Files:** 8
|
- 📄 **Files:** 8
|
||||||
- 📦 **Entities:** 43
|
- 📦 **Entities:** 50
|
||||||
|
|
||||||
**Key Entities:**
|
**Key Entities:**
|
||||||
|
|
||||||
@@ -724,6 +724,10 @@
|
|||||||
- 📦 **test_policy_engine** (Module) `[TRIVIAL]`
|
- 📦 **test_policy_engine** (Module) `[TRIVIAL]`
|
||||||
- Auto-generated module for backend/src/services/clean_release...
|
- Auto-generated module for backend/src/services/clean_release...
|
||||||
|
|
||||||
|
**Dependencies:**
|
||||||
|
|
||||||
|
- 🔗 DEPENDS_ON -> backend.src.services.clean_release.preparation_service:Module
|
||||||
|
|
||||||
### 📁 `repositories/`
|
### 📁 `repositories/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Infra
|
- 🏗️ **Layers:** Infra
|
||||||
@@ -1032,15 +1036,17 @@
|
|||||||
|
|
||||||
### 📁 `auth/`
|
### 📁 `auth/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Component
|
- 🏗️ **Layers:** UI
|
||||||
- 📊 **Tiers:** CRITICAL: 2
|
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 1
|
||||||
- 📄 **Files:** 1
|
- 📄 **Files:** 1
|
||||||
- 📦 **Entities:** 2
|
- 📦 **Entities:** 3
|
||||||
|
|
||||||
**Key Entities:**
|
**Key Entities:**
|
||||||
|
|
||||||
- 🧩 **ProtectedRoute** (Component) `[CRITICAL]`
|
- 🧩 **ProtectedRoute** (Component) `[CRITICAL]`
|
||||||
- Wraps content to ensure only authenticated and authorized us...
|
- Wraps protected slot content with session and permission ver...
|
||||||
|
- 📦 **ProtectedRoute.svelte** (Module)
|
||||||
|
- Enforces authenticated and authorized access before protecte...
|
||||||
|
|
||||||
### 📁 `git/`
|
### 📁 `git/`
|
||||||
|
|
||||||
@@ -1712,28 +1718,26 @@
|
|||||||
|
|
||||||
### 📁 `migration/`
|
### 📁 `migration/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Page
|
- 📊 **Tiers:** CRITICAL: 12
|
||||||
- 📊 **Tiers:** CRITICAL: 11
|
|
||||||
- 📄 **Files:** 1
|
- 📄 **Files:** 1
|
||||||
- 📦 **Entities:** 11
|
- 📦 **Entities:** 12
|
||||||
|
|
||||||
**Key Entities:**
|
**Key Entities:**
|
||||||
|
|
||||||
- 🧩 **DashboardSelectionSection** (Component) `[CRITICAL]`
|
- 🧩 **DashboardSelectionSection** (Component) `[CRITICAL]`
|
||||||
- 🧩 **MigrationDashboard** (Component) `[CRITICAL]`
|
- 🧩 **MigrationDashboard** (Component) `[CRITICAL]`
|
||||||
- Main dashboard for configuring and starting migrations.
|
- Orchestrate migration UI workflow and route user actions to ...
|
||||||
|
|
||||||
### 📁 `mappings/`
|
### 📁 `mappings/`
|
||||||
|
|
||||||
- 🏗️ **Layers:** Page
|
- 📊 **Tiers:** CRITICAL: 8
|
||||||
- 📊 **Tiers:** CRITICAL: 4
|
|
||||||
- 📄 **Files:** 1
|
- 📄 **Files:** 1
|
||||||
- 📦 **Entities:** 4
|
- 📦 **Entities:** 8
|
||||||
|
|
||||||
**Key Entities:**
|
**Key Entities:**
|
||||||
|
|
||||||
- 🧩 **MappingManagement** (Component) `[CRITICAL]`
|
- 🗄️ **UiState** (Store) `[CRITICAL]`
|
||||||
- Page for managing database mappings between environments.
|
- Maintain local page state for environments, fetched database...
|
||||||
|
|
||||||
### 📁 `profile/`
|
### 📁 `profile/`
|
||||||
|
|
||||||
@@ -2005,6 +2009,11 @@ graph TD
|
|||||||
routes-->|DEPENDS_ON|backend
|
routes-->|DEPENDS_ON|backend
|
||||||
routes-->|DEPENDS_ON|backend
|
routes-->|DEPENDS_ON|backend
|
||||||
routes-->|DEPENDS_ON|backend
|
routes-->|DEPENDS_ON|backend
|
||||||
|
routes-->|DEPENDS_ON|backend
|
||||||
|
routes-->|DEPENDS_ON|backend
|
||||||
|
routes-->|DEPENDS_ON|backend
|
||||||
|
routes-->|DEPENDS_ON|backend
|
||||||
|
routes-->|DEPENDS_ON|backend
|
||||||
routes-->|USES|backend
|
routes-->|USES|backend
|
||||||
routes-->|USES|backend
|
routes-->|USES|backend
|
||||||
routes-->|CALLS|backend
|
routes-->|CALLS|backend
|
||||||
@@ -2052,17 +2061,21 @@ graph TD
|
|||||||
auth-->|USES|backend
|
auth-->|USES|backend
|
||||||
auth-->|USES|backend
|
auth-->|USES|backend
|
||||||
auth-->|USES|backend
|
auth-->|USES|backend
|
||||||
auth-->|USES|backend
|
auth-->|DEPENDS_ON|backend
|
||||||
|
auth-->|DEPENDS_ON|backend
|
||||||
|
auth-->|DEPENDS_ON|backend
|
||||||
migration-->|DEPENDS_ON|backend
|
migration-->|DEPENDS_ON|backend
|
||||||
migration-->|DEPENDS_ON|backend
|
migration-->|DEPENDS_ON|backend
|
||||||
migration-->|DEPENDS_ON|backend
|
migration-->|DEPENDS_ON|backend
|
||||||
migration-->|DEPENDS_ON|backend
|
migration-->|DEPENDS_ON|backend
|
||||||
migration-->|DEPENDS_ON|backend
|
migration-->|DEPENDS_ON|backend
|
||||||
migration-->|USED_BY|backend
|
migration-->|DEPENDS_ON|backend
|
||||||
|
migration-->|DISPATCHES|backend
|
||||||
utils-->|DEPENDS_ON|backend
|
utils-->|DEPENDS_ON|backend
|
||||||
utils-->|DEPENDS_ON|backend
|
utils-->|DEPENDS_ON|backend
|
||||||
utils-->|DEPENDS_ON|backend
|
utils-->|DEPENDS_ON|backend
|
||||||
utils-->|DEPENDS_ON|backend
|
utils-->|DEPENDS_ON|backend
|
||||||
|
models-->|DEPENDS_ON|backend
|
||||||
models-->|INHERITS_FROM|backend
|
models-->|INHERITS_FROM|backend
|
||||||
models-->|DEPENDS_ON|backend
|
models-->|DEPENDS_ON|backend
|
||||||
models-->|DEPENDS_ON|backend
|
models-->|DEPENDS_ON|backend
|
||||||
@@ -2099,9 +2112,11 @@ graph TD
|
|||||||
services-->|DEPENDS_ON|backend
|
services-->|DEPENDS_ON|backend
|
||||||
services-->|CALLS|backend
|
services-->|CALLS|backend
|
||||||
services-->|DEPENDS_ON|backend
|
services-->|DEPENDS_ON|backend
|
||||||
services-->|USES|backend
|
services-->|DEPENDS_ON|backend
|
||||||
services-->|USES|backend
|
services-->|DEPENDS_ON|backend
|
||||||
services-->|USES|backend
|
services-->|DEPENDS_ON|backend
|
||||||
|
services-->|DEPENDS_ON|backend
|
||||||
|
services-->|DEPENDS_ON|backend
|
||||||
services-->|DEPENDS_ON|backend
|
services-->|DEPENDS_ON|backend
|
||||||
services-->|DEPENDS_ON|backend
|
services-->|DEPENDS_ON|backend
|
||||||
__tests__-->|TESTS|backend
|
__tests__-->|TESTS|backend
|
||||||
@@ -2148,7 +2163,7 @@ graph TD
|
|||||||
__tests__-->|VERIFIES|backend
|
__tests__-->|VERIFIES|backend
|
||||||
__tests__-->|TESTS|backend
|
__tests__-->|TESTS|backend
|
||||||
__tests__-->|TESTS|backend
|
__tests__-->|TESTS|backend
|
||||||
__tests__-->|TESTS|backend
|
__tests__-->|DEPENDS_ON|backend
|
||||||
stages-->|IMPLEMENTS|backend
|
stages-->|IMPLEMENTS|backend
|
||||||
stages-->|DEPENDS_ON|backend
|
stages-->|DEPENDS_ON|backend
|
||||||
stages-->|IMPLEMENTS|backend
|
stages-->|IMPLEMENTS|backend
|
||||||
|
|||||||
@@ -1197,9 +1197,7 @@
|
|||||||
- ƒ **saveSettings** (`Function`) `[TRIVIAL]`
|
- ƒ **saveSettings** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- 🧩 **MigrationDashboard** (`Component`) `[CRITICAL]`
|
- 🧩 **MigrationDashboard** (`Component`) `[CRITICAL]`
|
||||||
- 📝 Main dashboard for configuring and starting migrations.
|
- 📝 Orchestrate migration UI workflow and route user actions to backend APIs and task store.
|
||||||
- 🏗️ Layer: Page
|
|
||||||
- 🔒 Invariant: Migration cannot start without source and target environments.
|
|
||||||
- ⬅️ READS_FROM `lib`
|
- ⬅️ READS_FROM `lib`
|
||||||
- ⬅️ READS_FROM `selectedTask`
|
- ⬅️ READS_FROM `selectedTask`
|
||||||
- ➡️ WRITES_TO `selectedTask`
|
- ➡️ WRITES_TO `selectedTask`
|
||||||
@@ -1221,20 +1219,26 @@
|
|||||||
- 📝 Starts the migration process.
|
- 📝 Starts the migration process.
|
||||||
- ƒ **startDryRun** (`Function`) `[CRITICAL]`
|
- ƒ **startDryRun** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Builds pre-flight diff and risk summary without applying migration.
|
- 📝 Builds pre-flight diff and risk summary without applying migration.
|
||||||
|
- ▦ **MigrationDashboardView** (`Block`) `[CRITICAL]`
|
||||||
|
- 📝 Render migration configuration controls, action CTAs, dry-run results, and modal entry points.
|
||||||
- 🧩 **DashboardSelectionSection** (`Component`) `[CRITICAL]`
|
- 🧩 **DashboardSelectionSection** (`Component`) `[CRITICAL]`
|
||||||
- 🧩 **MappingManagement** (`Component`) `[CRITICAL]`
|
- ▦ **MappingsPageScript** (`Block`) `[CRITICAL]`
|
||||||
- 📝 Page for managing database mappings between environments.
|
- 📝 Define imports, state, and handlers that drive migration mappings page FSM.
|
||||||
- 🏗️ Layer: Page
|
- 🔗 CALLS -> `fetchEnvironments`
|
||||||
- 🔒 Invariant: Mappings are saved to the backend for persistence.
|
- 🔗 CALLS -> `fetchDatabases`
|
||||||
- ⬅️ READS_FROM `lib`
|
- 🔗 CALLS -> `handleUpdate`
|
||||||
- ➡️ WRITES_TO `t`
|
- ▦ **Imports** (`Block`) `[CRITICAL]`
|
||||||
- ⬅️ READS_FROM `t`
|
- 🗄️ **UiState** (`Store`) `[CRITICAL]`
|
||||||
|
- 📝 Maintain local page state for environments, fetched databases, mappings, suggestions, and UX messages.
|
||||||
|
- ƒ **belief_scope** (`Function`) `[CRITICAL]`
|
||||||
|
- 📝 Frontend semantic scope wrapper for CRITICAL trace boundaries without changing business behavior.
|
||||||
- ƒ **fetchEnvironments** (`Function`) `[CRITICAL]`
|
- ƒ **fetchEnvironments** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Fetches the list of environments.
|
- 📝 Load environment options for source/target selectors on initial mount.
|
||||||
- ƒ **fetchDatabases** (`Function`) `[CRITICAL]`
|
- ƒ **fetchDatabases** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Fetches databases from both environments and gets suggestions.
|
- 📝 Fetch both environment database catalogs, existing mappings, and suggested matches.
|
||||||
- ƒ **handleUpdate** (`Function`) `[CRITICAL]`
|
- ƒ **handleUpdate** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Saves a mapping to the backend.
|
- 📝 Persist a selected mapping pair and reconcile local mapping list by source database UUID.
|
||||||
|
- ▦ **MappingsPageTemplate** (`Block`) `[CRITICAL]`
|
||||||
- 📦 **+page** (`Module`) `[TRIVIAL]`
|
- 📦 **+page** (`Module`) `[TRIVIAL]`
|
||||||
- 📝 Auto-generated module for frontend/src/routes/profile/+page.svelte
|
- 📝 Auto-generated module for frontend/src/routes/profile/+page.svelte
|
||||||
- 🏗️ Layer: Unknown
|
- 🏗️ Layer: Unknown
|
||||||
@@ -1761,16 +1765,14 @@
|
|||||||
- ➡️ WRITES_TO `t`
|
- ➡️ WRITES_TO `t`
|
||||||
- ƒ **handleSelect** (`Function`)
|
- ƒ **handleSelect** (`Function`)
|
||||||
- 📝 Dispatches the selection change event.
|
- 📝 Dispatches the selection change event.
|
||||||
- 🧩 **ProtectedRoute** (`Component`) `[CRITICAL]`
|
- 📦 **ProtectedRoute.svelte** (`Module`)
|
||||||
- 📝 Wraps content to ensure only authenticated and authorized users can access it.
|
- 📝 Enforces authenticated and authorized access before protected route content is rendered.
|
||||||
- 🏗️ Layer: Component
|
- 🏗️ Layer: UI
|
||||||
- 🔒 Invariant: Redirects to /login if user is not authenticated and to fallback route when permission is denied.
|
- 🔒 Invariant: Unauthenticated users are redirected to /login, unauthorized users are redirected to fallbackPath, and protected slot renders only when access is verified.
|
||||||
- 📥 Props: requiredPermission: string | null , fallbackPath: string
|
- 🧩 **ProtectedRoute** (`Component`) `[CRITICAL]`
|
||||||
- ⬅️ READS_FROM `app`
|
- 📝 Wraps protected slot content with session and permission verification guards.
|
||||||
- ⬅️ READS_FROM `lib`
|
- ƒ **verifySessionAndAccess** (`Function`) `[CRITICAL]`
|
||||||
- ⬅️ READS_FROM `auth`
|
- 📝 Validates session and optional permission gate before allowing protected content render.
|
||||||
- ƒ **verifySessionAndAccess** (`Function`) `[CRITICAL]`
|
|
||||||
- 📝 Validates active session and optional route permission before rendering protected slot.
|
|
||||||
- 🧩 **TaskLogPanel** (`Component`)
|
- 🧩 **TaskLogPanel** (`Component`)
|
||||||
- 📝 Combines log filtering and display into a single cohesive dark-themed panel.
|
- 📝 Combines log filtering and display into a single cohesive dark-themed panel.
|
||||||
- 🏗️ Layer: UI
|
- 🏗️ Layer: UI
|
||||||
@@ -2499,46 +2501,49 @@
|
|||||||
- ƒ **as_bool** (`Function`) `[TRIVIAL]`
|
- ƒ **as_bool** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- 📦 **ConfigManagerModule** (`Module`) `[CRITICAL]`
|
- 📦 **ConfigManagerModule** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Manages application configuration persisted in database with one-time migration from JSON.
|
- 📝 Manages application configuration persistence in DB with one-time migration from legacy JSON.
|
||||||
- 🏗️ Layer: Core
|
- 🏗️ Layer: Domain
|
||||||
- 🔒 Invariant: Configuration must always be valid according to AppConfig model.
|
- 🔒 Invariant: Configuration must always be representable by AppConfig and persisted under global record id.
|
||||||
- 🔗 DEPENDS_ON -> `ConfigModels`
|
- 🔗 DEPENDS_ON -> `ConfigModels`
|
||||||
|
- 🔗 DEPENDS_ON -> `SessionLocal`
|
||||||
- 🔗 DEPENDS_ON -> `AppConfigRecord`
|
- 🔗 DEPENDS_ON -> `AppConfigRecord`
|
||||||
- 🔗 CALLS -> `logger`
|
- 🔗 CALLS -> `logger`
|
||||||
|
- 🔗 CALLS -> `configure_logger`
|
||||||
|
- 🔗 BINDS_TO -> `ConfigManager`
|
||||||
- ℂ **ConfigManager** (`Class`) `[CRITICAL]`
|
- ℂ **ConfigManager** (`Class`) `[CRITICAL]`
|
||||||
- 📝 A class to handle application configuration persistence and management.
|
- 📝 Handles application configuration load, validation, mutation, and persistence lifecycle.
|
||||||
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Initializes the ConfigManager.
|
- 📝 Initialize manager state from persisted or migrated configuration.
|
||||||
- ƒ **_default_config** (`Function`) `[CRITICAL]`
|
- ƒ **_default_config** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Returns default application configuration.
|
- 📝 Build default application configuration fallback.
|
||||||
- ƒ **_load_from_legacy_file** (`Function`) `[CRITICAL]`
|
- ƒ **_load_from_legacy_file** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Loads legacy configuration from config.json for migration fallback.
|
- 📝 Load legacy JSON configuration for migration fallback path.
|
||||||
- ƒ **_get_record** (`Function`) `[CRITICAL]`
|
- ƒ **_get_record** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Loads config record from DB.
|
- 📝 Resolve global configuration record from DB.
|
||||||
- ƒ **_load_config** (`Function`) `[CRITICAL]`
|
- ƒ **_load_config** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Loads the configuration from DB or performs one-time migration from JSON file.
|
- 📝 Load configuration from DB or perform one-time migration from legacy JSON.
|
||||||
- ƒ **_save_config_to_db** (`Function`) `[CRITICAL]`
|
- ƒ **_save_config_to_db** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Saves the provided configuration object to DB.
|
- 📝 Persist provided AppConfig into the global DB configuration record.
|
||||||
- ƒ **save** (`Function`) `[CRITICAL]`
|
- ƒ **save** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Saves the current configuration state to DB.
|
- 📝 Persist current in-memory configuration state.
|
||||||
- ƒ **get_config** (`Function`) `[CRITICAL]`
|
- ƒ **get_config** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Returns the current configuration.
|
- 📝 Return current in-memory configuration snapshot.
|
||||||
- ƒ **update_global_settings** (`Function`) `[CRITICAL]`
|
- ƒ **update_global_settings** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Updates the global settings and persists the change.
|
- 📝 Replace global settings and persist the resulting configuration.
|
||||||
- ƒ **validate_path** (`Function`) `[CRITICAL]`
|
- ƒ **validate_path** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Validates if a path exists and is writable.
|
- 📝 Validate that path exists and is writable, creating it when absent.
|
||||||
- ƒ **get_environments** (`Function`) `[CRITICAL]`
|
- ƒ **get_environments** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Returns the list of configured environments.
|
- 📝 Return all configured environments.
|
||||||
- ƒ **has_environments** (`Function`) `[CRITICAL]`
|
- ƒ **has_environments** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Checks if at least one environment is configured.
|
- 📝 Check whether at least one environment exists in configuration.
|
||||||
- ƒ **get_environment** (`Function`) `[CRITICAL]`
|
- ƒ **get_environment** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Returns a single environment by ID.
|
- 📝 Resolve a configured environment by identifier.
|
||||||
- ƒ **add_environment** (`Function`) `[CRITICAL]`
|
- ƒ **add_environment** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Adds a new environment to the configuration.
|
- 📝 Upsert environment by id into configuration and persist.
|
||||||
- ƒ **update_environment** (`Function`) `[CRITICAL]`
|
- ƒ **update_environment** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Updates an existing environment.
|
- 📝 Update existing environment by id and preserve masked password placeholder behavior.
|
||||||
- ƒ **delete_environment** (`Function`) `[CRITICAL]`
|
- ƒ **delete_environment** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Deletes an environment by ID.
|
- 📝 Delete environment by id and persist when deletion occurs.
|
||||||
- 📦 **SchedulerModule** (`Module`)
|
- 📦 **SchedulerModule** (`Module`)
|
||||||
- 📝 Manages scheduled tasks using APScheduler.
|
- 📝 Manages scheduled tasks using APScheduler.
|
||||||
- 🏗️ Layer: Core
|
- 🏗️ Layer: Core
|
||||||
@@ -2676,22 +2681,25 @@
|
|||||||
- ƒ **has_plugin** (`Function`)
|
- ƒ **has_plugin** (`Function`)
|
||||||
- 📝 Checks if a plugin with the given ID is registered.
|
- 📝 Checks if a plugin with the given ID is registered.
|
||||||
- 📦 **backend.src.core.migration_engine** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.core.migration_engine** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Handles the interception and transformation of Superset asset ZIP archives.
|
- 📝 Transforms Superset export ZIP archives while preserving archive integrity and patching mapped identifiers.
|
||||||
- 🏗️ Layer: Core
|
- 🏗️ Layer: Domain
|
||||||
- 🔒 Invariant: ZIP structure must be preserved after transformation.
|
- 🔒 Invariant: ZIP structure and non-targeted metadata must remain valid after transformation.
|
||||||
- 🔗 DEPENDS_ON -> `PyYAML`
|
- 🔗 DEPENDS_ON -> `src.core.logger`
|
||||||
|
- 🔗 DEPENDS_ON -> `src.core.mapping_service.IdMappingService`
|
||||||
|
- 🔗 DEPENDS_ON -> `src.models.mapping.ResourceType`
|
||||||
|
- 🔗 DEPENDS_ON -> `yaml`
|
||||||
- ℂ **MigrationEngine** (`Class`) `[CRITICAL]`
|
- ℂ **MigrationEngine** (`Class`) `[CRITICAL]`
|
||||||
- 📝 Engine for transforming Superset export ZIPs.
|
- 📝 Engine for transforming Superset export ZIPs.
|
||||||
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Initializes the migration engine with optional ID mapping service.
|
- 📝 Initializes migration orchestration dependencies for ZIP/YAML metadata transformations.
|
||||||
- ƒ **transform_zip** (`Function`) `[CRITICAL]`
|
- ƒ **transform_zip** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Extracts ZIP, replaces database UUIDs in YAMLs, patches cross-filters, and re-packages.
|
- 📝 Extracts ZIP, replaces database UUIDs in YAMLs, patches cross-filters, and re-packages.
|
||||||
- ƒ **_transform_yaml** (`Function`) `[CRITICAL]`
|
- ƒ **_transform_yaml** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Replaces database_uuid in a single YAML file.
|
- 📝 Replaces database_uuid in a single YAML file.
|
||||||
- ƒ **_extract_chart_uuids_from_archive** (`Function`) `[CRITICAL]`
|
- ƒ **_extract_chart_uuids_from_archive** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Scans the unpacked ZIP to map local exported integer IDs back to their UUIDs.
|
- 📝 Scans extracted chart YAML files and builds a source chart ID to UUID lookup map.
|
||||||
- ƒ **_patch_dashboard_metadata** (`Function`) `[CRITICAL]`
|
- ƒ **_patch_dashboard_metadata** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Replaces integer IDs in json_metadata.
|
- 📝 Rewrites dashboard json_metadata chart/dataset integer identifiers using target environment mappings.
|
||||||
- 📦 **backend.src.core.async_superset_client** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.core.async_superset_client** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Async Superset client for dashboard hot-path requests without blocking FastAPI event loop.
|
- 📝 Async Superset client for dashboard hot-path requests without blocking FastAPI event loop.
|
||||||
- 🏗️ Layer: Core
|
- 🏗️ Layer: Core
|
||||||
@@ -2802,34 +2810,38 @@
|
|||||||
- ƒ **get_password_hash** (`Function`) `[CRITICAL]`
|
- ƒ **get_password_hash** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Generates a bcrypt hash for a plain password.
|
- 📝 Generates a bcrypt hash for a plain password.
|
||||||
- 📦 **backend.src.core.auth.repository** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.core.auth.repository** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Data access layer for authentication-related entities.
|
- 📝 Data access layer for authentication and user preference entities.
|
||||||
- 🏗️ Layer: Core
|
- 🏗️ Layer: Domain
|
||||||
- 🔒 Invariant: All database operations must be performed within a session.
|
- 🔒 Invariant: All database read/write operations must execute via the injected SQLAlchemy session boundary.
|
||||||
- 🔗 DEPENDS_ON -> `sqlalchemy`
|
- 🔗 DEPENDS_ON -> `sqlalchemy.orm.Session`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.models.auth`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.models.profile`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.logger.belief_scope`
|
||||||
- ℂ **AuthRepository** (`Class`) `[CRITICAL]`
|
- ℂ **AuthRepository** (`Class`) `[CRITICAL]`
|
||||||
- 📝 Encapsulates database operations for authentication.
|
- 📝 Encapsulates database operations for authentication-related entities.
|
||||||
|
- 🔗 DEPENDS_ON -> `sqlalchemy.orm.Session`
|
||||||
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Initializes the repository with a database session.
|
- 📝 Bind repository instance to an existing SQLAlchemy session.
|
||||||
- ƒ **get_user_by_username** (`Function`) `[CRITICAL]`
|
- ƒ **get_user_by_username** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves a user by their username.
|
- 📝 Retrieve a user entity by unique username.
|
||||||
- ƒ **get_user_by_id** (`Function`) `[CRITICAL]`
|
- ƒ **get_user_by_id** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves a user by their unique ID.
|
- 📝 Retrieve a user entity by identifier.
|
||||||
- ƒ **get_role_by_name** (`Function`) `[CRITICAL]`
|
- ƒ **get_role_by_name** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves a role by its name.
|
- 📝 Retrieve a role entity by role name.
|
||||||
- ƒ **update_last_login** (`Function`) `[CRITICAL]`
|
- ƒ **update_last_login** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Updates the last_login timestamp for a user.
|
- 📝 Update last_login timestamp for the provided user entity.
|
||||||
- ƒ **get_role_by_id** (`Function`) `[CRITICAL]`
|
- ƒ **get_role_by_id** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves a role by its unique ID.
|
- 📝 Retrieve a role entity by identifier.
|
||||||
- ƒ **get_permission_by_id** (`Function`) `[CRITICAL]`
|
- ƒ **get_permission_by_id** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves a permission by its unique ID.
|
- 📝 Retrieve a permission entity by identifier.
|
||||||
- ƒ **get_permission_by_resource_action** (`Function`) `[CRITICAL]`
|
- ƒ **get_permission_by_resource_action** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves a permission by resource and action.
|
- 📝 Retrieve a permission entity by resource and action pair.
|
||||||
- ƒ **get_user_dashboard_preference** (`Function`) `[CRITICAL]`
|
- ƒ **get_user_dashboard_preference** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Retrieves dashboard preference by owner user ID.
|
- 📝 Retrieve dashboard preference entity owned by specified user.
|
||||||
- ƒ **save_user_dashboard_preference** (`Function`) `[CRITICAL]`
|
- ƒ **save_user_dashboard_preference** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Persists dashboard preference entity and returns refreshed row.
|
- 📝 Persist dashboard preference entity and return refreshed persistent row.
|
||||||
- ƒ **list_permissions** (`Function`) `[CRITICAL]`
|
- ƒ **list_permissions** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Lists all available permissions.
|
- 📝 List all permission entities available in storage.
|
||||||
- 📦 **test_auth** (`Module`)
|
- 📦 **test_auth** (`Module`)
|
||||||
- 📝 Unit tests for authentication module
|
- 📝 Unit tests for authentication module
|
||||||
- 🏗️ Layer: Domain
|
- 🏗️ Layer: Domain
|
||||||
@@ -3068,10 +3080,10 @@
|
|||||||
- 📝 Test that configure_logger updates task_log_level.
|
- 📝 Test that configure_logger updates task_log_level.
|
||||||
- ƒ **test_enable_belief_state_flag** (`Function`)
|
- ƒ **test_enable_belief_state_flag** (`Function`)
|
||||||
- 📝 Test that enable_belief_state flag controls belief_scope logging.
|
- 📝 Test that enable_belief_state flag controls belief_scope logging.
|
||||||
- ƒ **test_belief_scope_missing_anchor** (`Function`)
|
- ƒ **test_belief_scope_missing_anchor** (`Function`)
|
||||||
- 📝 Test @PRE condition: anchor_id must be provided
|
- 📝 Test @PRE condition: anchor_id must be provided
|
||||||
- ƒ **test_configure_logger_post_conditions** (`Function`)
|
- ƒ **test_configure_logger_post_conditions** (`Function`)
|
||||||
- 📝 Test @POST condition: Logger level, handlers, belief state flag, and task log level are updated.
|
- 📝 Test @POST condition: Logger level, handlers, belief state flag, and task log level are updated.
|
||||||
- ƒ **reset_logger_state** (`Function`) `[TRIVIAL]`
|
- ƒ **reset_logger_state** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- 📦 **backend.src.core.migration.dry_run_orchestrator** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.core.migration.dry_run_orchestrator** (`Module`) `[CRITICAL]`
|
||||||
@@ -3114,8 +3126,11 @@
|
|||||||
- ƒ **_normalize_object_payload** (`Function`) `[CRITICAL]`
|
- ƒ **_normalize_object_payload** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Convert raw YAML payload to stable diff signature shape.
|
- 📝 Convert raw YAML payload to stable diff signature shape.
|
||||||
- 📦 **backend.src.core.migration.risk_assessor** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.core.migration.risk_assessor** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Risk evaluation helpers for migration pre-flight reporting.
|
- 📝 Compute deterministic migration risk items and aggregate score for dry-run reporting.
|
||||||
- 🏗️ Layer: Core
|
- 🏗️ Layer: Domain
|
||||||
|
- 🔒 Invariant: Risk scoring must remain bounded to [0,100] and preserve severity-to-weight mapping.
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.superset_client.SupersetClient`
|
||||||
|
- 🔗 DISPATCHES -> `backend.src.core.migration.dry_run_orchestrator.MigrationDryRunService.run`
|
||||||
- ƒ **index_by_uuid** (`Function`) `[CRITICAL]`
|
- ƒ **index_by_uuid** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Build UUID-index from normalized objects.
|
- 📝 Build UUID-index from normalized objects.
|
||||||
- ƒ **extract_owner_identifiers** (`Function`) `[CRITICAL]`
|
- ƒ **extract_owner_identifiers** (`Function`) `[CRITICAL]`
|
||||||
@@ -3525,24 +3540,30 @@
|
|||||||
- 📝 Fetch the list of databases from a specific environment.
|
- 📝 Fetch the list of databases from a specific environment.
|
||||||
- 🏗️ Layer: API
|
- 🏗️ Layer: API
|
||||||
- 📦 **backend.src.api.routes.migration** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.api.routes.migration** (`Module`) `[CRITICAL]`
|
||||||
- 📝 API endpoints for migration operations.
|
- 📝 HTTP contract layer for migration orchestration, settings, dry-run, and mapping sync endpoints.
|
||||||
- 🏗️ Layer: API
|
- 🏗️ Layer: Infra
|
||||||
|
- 🔒 Invariant: Migration endpoints never execute with invalid environment references and always return explicit HTTP errors on guard failures.
|
||||||
- 🔗 DEPENDS_ON -> `backend.src.dependencies`
|
- 🔗 DEPENDS_ON -> `backend.src.dependencies`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.database`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.superset_client`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.migration.dry_run_orchestrator`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.mapping_service`
|
||||||
- 🔗 DEPENDS_ON -> `backend.src.models.dashboard`
|
- 🔗 DEPENDS_ON -> `backend.src.models.dashboard`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.models.mapping`
|
||||||
- ƒ **get_dashboards** (`Function`) `[CRITICAL]`
|
- ƒ **get_dashboards** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Fetch all dashboards from the specified environment for the grid.
|
- 📝 Fetch dashboard metadata from a requested environment for migration selection UI.
|
||||||
- ƒ **execute_migration** (`Function`) `[CRITICAL]`
|
- ƒ **execute_migration** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Execute the migration of selected dashboards.
|
- 📝 Validate migration selection and enqueue asynchronous migration task execution.
|
||||||
- ƒ **dry_run_migration** (`Function`) `[CRITICAL]`
|
- ƒ **dry_run_migration** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Build pre-flight diff and risk summary without applying migration.
|
- 📝 Build pre-flight migration diff and risk summary without mutating target systems.
|
||||||
- ƒ **get_migration_settings** (`Function`) `[CRITICAL]`
|
- ƒ **get_migration_settings** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Get current migration Cron string explicitly.
|
- 📝 Read and return configured migration synchronization cron expression.
|
||||||
- ƒ **update_migration_settings** (`Function`) `[CRITICAL]`
|
- ƒ **update_migration_settings** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Update migration Cron string.
|
- 📝 Validate and persist migration synchronization cron expression update.
|
||||||
- ƒ **get_resource_mappings** (`Function`) `[CRITICAL]`
|
- ƒ **get_resource_mappings** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Fetch synchronized object mappings with search, filtering, and pagination.
|
- 📝 Fetch synchronized resource mappings with optional filters and pagination for migration mappings view.
|
||||||
- ƒ **trigger_sync_now** (`Function`) `[CRITICAL]`
|
- ƒ **trigger_sync_now** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Triggers an immediate ID synchronization for all environments.
|
- 📝 Trigger immediate ID synchronization for every configured environment.
|
||||||
- 📦 **PluginsRouter** (`Module`)
|
- 📦 **PluginsRouter** (`Module`)
|
||||||
- 📝 Defines the FastAPI router for plugin-related endpoints, allowing clients to list available plugins.
|
- 📝 Defines the FastAPI router for plugin-related endpoints, allowing clients to list available plugins.
|
||||||
- 🏗️ Layer: UI (API)
|
- 🏗️ Layer: UI (API)
|
||||||
@@ -3790,28 +3811,28 @@
|
|||||||
- 📝 Normalize intent entity value types from LLM output to route-compatible values.
|
- 📝 Normalize intent entity value types from LLM output to route-compatible values.
|
||||||
- ƒ **_confirmation_summary** (`Function`)
|
- ƒ **_confirmation_summary** (`Function`)
|
||||||
- 📝 Build human-readable confirmation prompt for an intent before execution.
|
- 📝 Build human-readable confirmation prompt for an intent before execution.
|
||||||
- ƒ **_clarification_text_for_intent** (`Function`)
|
- ƒ **_clarification_text_for_intent** (`Function`)
|
||||||
- 📝 Convert technical missing-parameter errors into user-facing clarification prompts.
|
- 📝 Convert technical missing-parameter errors into user-facing clarification prompts.
|
||||||
- ƒ **_plan_intent_with_llm** (`Function`)
|
- ƒ **_plan_intent_with_llm** (`Function`)
|
||||||
- 📝 Use active LLM provider to select best tool/operation from dynamic catalog.
|
- 📝 Use active LLM provider to select best tool/operation from dynamic catalog.
|
||||||
- ƒ **_authorize_intent** (`Function`)
|
- ƒ **_authorize_intent** (`Function`)
|
||||||
- 📝 Validate user permissions for parsed intent before confirmation/dispatch.
|
- 📝 Validate user permissions for parsed intent before confirmation/dispatch.
|
||||||
- ƒ **_dispatch_intent** (`Function`)
|
- ƒ **_dispatch_intent** (`Function`)
|
||||||
- 📝 Execute parsed assistant intent via existing task/plugin/git services.
|
- 📝 Execute parsed assistant intent via existing task/plugin/git services.
|
||||||
- ƒ **send_message** (`Function`)
|
- ƒ **send_message** (`Function`)
|
||||||
- 📝 Parse assistant command, enforce safety gates, and dispatch executable intent.
|
- 📝 Parse assistant command, enforce safety gates, and dispatch executable intent.
|
||||||
- ƒ **confirm_operation** (`Function`)
|
- ƒ **confirm_operation** (`Function`)
|
||||||
- 📝 Execute previously requested risky operation after explicit user confirmation.
|
- 📝 Execute previously requested risky operation after explicit user confirmation.
|
||||||
- ƒ **cancel_operation** (`Function`)
|
- ƒ **cancel_operation** (`Function`)
|
||||||
- 📝 Cancel pending risky operation and mark confirmation token as cancelled.
|
- 📝 Cancel pending risky operation and mark confirmation token as cancelled.
|
||||||
- ƒ **list_conversations** (`Function`)
|
- ƒ **list_conversations** (`Function`)
|
||||||
- 📝 Return paginated conversation list for current user with archived flag and last message preview.
|
- 📝 Return paginated conversation list for current user with archived flag and last message preview.
|
||||||
- ƒ **delete_conversation** (`Function`)
|
- ƒ **delete_conversation** (`Function`)
|
||||||
- 📝 Soft-delete or hard-delete a conversation and clear its in-memory trace.
|
- 📝 Soft-delete or hard-delete a conversation and clear its in-memory trace.
|
||||||
- ƒ **get_history** (`Function`)
|
- ƒ **get_history** (`Function`)
|
||||||
- 📝 Retrieve paginated assistant conversation history for current user.
|
- 📝 Retrieve paginated assistant conversation history for current user.
|
||||||
- ƒ **get_assistant_audit** (`Function`)
|
- ƒ **get_assistant_audit** (`Function`)
|
||||||
- 📝 Return assistant audit decisions for current user from persistent and in-memory stores.
|
- 📝 Return assistant audit decisions for current user from persistent and in-memory stores.
|
||||||
- ƒ **_async_confirmation_summary** (`Function`) `[TRIVIAL]`
|
- ƒ **_async_confirmation_summary** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- ƒ **_label** (`Function`) `[TRIVIAL]`
|
- ƒ **_label** (`Function`) `[TRIVIAL]`
|
||||||
@@ -4462,18 +4483,20 @@
|
|||||||
- 📝 Status command without explicit task_id should resolve to latest task for current user.
|
- 📝 Status command without explicit task_id should resolve to latest task for current user.
|
||||||
- ƒ **test_llm_validation_with_dashboard_ref_requires_confirmation** (`Function`)
|
- ƒ **test_llm_validation_with_dashboard_ref_requires_confirmation** (`Function`)
|
||||||
- 📝 LLM validation with dashboard_ref should now require confirmation before dispatch.
|
- 📝 LLM validation with dashboard_ref should now require confirmation before dispatch.
|
||||||
- ƒ **test_list_conversations_groups_by_conversation_and_marks_archived** (`Function`)
|
- ƒ **test_list_conversations_groups_by_conversation_and_marks_archived** (`Function`)
|
||||||
- 📝 Conversations endpoint must group messages and compute archived marker by inactivity threshold.
|
- 📝 Conversations endpoint must group messages and compute archived marker by inactivity threshold.
|
||||||
- ƒ **test_history_from_latest_returns_recent_page_first** (`Function`)
|
- ƒ **test_history_from_latest_returns_recent_page_first** (`Function`)
|
||||||
- 📝 History endpoint from_latest mode must return newest page while preserving chronological order in chunk.
|
- 📝 History endpoint from_latest mode must return newest page while preserving chronological order in chunk.
|
||||||
- ƒ **test_list_conversations_archived_only_filters_active** (`Function`)
|
- ƒ **test_list_conversations_archived_only_filters_active** (`Function`)
|
||||||
- 📝 archived_only mode must return only archived conversations.
|
- 📝 archived_only mode must return only archived conversations.
|
||||||
- ƒ **test_guarded_operation_always_requires_confirmation** (`Function`)
|
- ƒ **test_guarded_operation_always_requires_confirmation** (`Function`)
|
||||||
- 📝 Non-dangerous (guarded) commands must still require confirmation before execution.
|
- 📝 Non-dangerous (guarded) commands must still require confirmation before execution.
|
||||||
- ƒ **test_guarded_operation_confirm_roundtrip** (`Function`)
|
- ƒ **test_guarded_operation_confirm_roundtrip** (`Function`)
|
||||||
- 📝 Guarded operation must execute successfully after explicit confirmation.
|
- 📝 Guarded operation must execute successfully after explicit confirmation.
|
||||||
- ƒ **test_confirm_nonexistent_id_returns_404** (`Function`)
|
- ƒ **test_confirm_nonexistent_id_returns_404** (`Function`)
|
||||||
- 📝 Confirming a non-existent ID should raise 404.
|
- 📝 Confirming a non-existent ID should raise 404.
|
||||||
|
- ƒ **test_migration_with_dry_run_includes_summary** (`Function`)
|
||||||
|
- 📝 Migration command with dry run flag must return the dry run summary in confirmation text.
|
||||||
- ƒ **__init__** (`Function`) `[TRIVIAL]`
|
- ƒ **__init__** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- ƒ **__init__** (`Function`) `[TRIVIAL]`
|
- ƒ **__init__** (`Function`) `[TRIVIAL]`
|
||||||
@@ -4568,13 +4591,15 @@
|
|||||||
- ƒ **test_dry_run_migration_rejects_same_environment** (`Function`) `[TRIVIAL]`
|
- ƒ **test_dry_run_migration_rejects_same_environment** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- 📦 **backend.src.models.config** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.models.config** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Defines database schema for persisted application configuration.
|
- 📝 Defines SQLAlchemy persistence models for application and notification configuration records.
|
||||||
- 🏗️ Layer: Domain
|
- 🏗️ Layer: Domain
|
||||||
|
- 🔒 Invariant: Configuration payload and notification credentials must remain persisted as non-null JSON documents.
|
||||||
- 🔗 DEPENDS_ON -> `sqlalchemy`
|
- 🔗 DEPENDS_ON -> `sqlalchemy`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.models.mapping:Base`
|
||||||
- ℂ **AppConfigRecord** (`Class`) `[CRITICAL]`
|
- ℂ **AppConfigRecord** (`Class`) `[CRITICAL]`
|
||||||
- 📝 Stores the single source of truth for application configuration.
|
- 📝 Stores persisted application configuration as a single authoritative record model.
|
||||||
- ℂ **NotificationConfig** (`Class`) `[CRITICAL]`
|
- ℂ **NotificationConfig** (`Class`) `[CRITICAL]`
|
||||||
- 📝 Global settings for external notification providers.
|
- 📝 Stores persisted provider-level notification configuration and encrypted credentials metadata.
|
||||||
- 📦 **backend.src.models.llm** (`Module`)
|
- 📦 **backend.src.models.llm** (`Module`)
|
||||||
- 📝 SQLAlchemy models for LLM provider configuration and validation results.
|
- 📝 SQLAlchemy models for LLM provider configuration and validation results.
|
||||||
- 🏗️ Layer: Domain
|
- 🏗️ Layer: Domain
|
||||||
@@ -4628,8 +4653,8 @@
|
|||||||
- 📝 Represents a mapping between source and target databases.
|
- 📝 Represents a mapping between source and target databases.
|
||||||
- ℂ **MigrationJob** (`Class`) `[TRIVIAL]`
|
- ℂ **MigrationJob** (`Class`) `[TRIVIAL]`
|
||||||
- 📝 Represents a single migration execution job.
|
- 📝 Represents a single migration execution job.
|
||||||
- ℂ **ResourceMapping** (`Class`)
|
- ℂ **ResourceMapping** (`Class`)
|
||||||
- 📝 Maps a universal UUID for a resource to its actual ID on a specific environment.
|
- 📝 Maps a universal UUID for a resource to its actual ID on a specific environment.
|
||||||
- 📦 **backend.src.models.report** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.models.report** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Canonical report schemas for unified task reporting across heterogeneous task types.
|
- 📝 Canonical report schemas for unified task reporting across heterogeneous task types.
|
||||||
- 🏗️ Layer: Domain
|
- 🏗️ Layer: Domain
|
||||||
@@ -5066,19 +5091,24 @@
|
|||||||
- ƒ **__getattr__** (`Function`) `[TRIVIAL]`
|
- ƒ **__getattr__** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- 📦 **backend.src.services.auth_service** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.services.auth_service** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Orchestrates authentication business logic.
|
- 📝 Orchestrates credential authentication and ADFS JIT user provisioning.
|
||||||
- 🏗️ Layer: Service
|
- 🏗️ Layer: Domain
|
||||||
- 🔒 Invariant: Authentication must verify both credentials and account status.
|
- 🔒 Invariant: Authentication succeeds only for active users with valid credentials; issued sessions encode subject and scopes from assigned roles.
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.auth.repository.AuthRepository`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.auth.security.verify_password`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.core.auth.jwt.create_access_token`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.models.auth.User`
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.models.auth.Role`
|
||||||
- ℂ **AuthService** (`Class`) `[CRITICAL]`
|
- ℂ **AuthService** (`Class`) `[CRITICAL]`
|
||||||
- 📝 Provides high-level authentication services.
|
- 📝 Provides high-level authentication services.
|
||||||
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
- ƒ **__init__** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Initializes the service with a database session.
|
- 📝 Initializes the authentication service with repository access over an active DB session.
|
||||||
- ƒ **authenticate_user** (`Function`) `[CRITICAL]`
|
- ƒ **authenticate_user** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Authenticates a user with username and password.
|
- 📝 Validates credentials and account state for local username/password authentication.
|
||||||
- ƒ **create_session** (`Function`) `[CRITICAL]`
|
- ƒ **create_session** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Creates a JWT session for an authenticated user.
|
- 📝 Issues an access token payload for an already authenticated user.
|
||||||
- ƒ **provision_adfs_user** (`Function`) `[CRITICAL]`
|
- ƒ **provision_adfs_user** (`Function`) `[CRITICAL]`
|
||||||
- 📝 Just-In-Time (JIT) provisioning for ADFS users based on group mappings.
|
- 📝 Performs ADFS Just-In-Time provisioning and role synchronization from AD group mappings.
|
||||||
- 📦 **backend.src.services.git_service** (`Module`)
|
- 📦 **backend.src.services.git_service** (`Module`)
|
||||||
- 📝 Core Git logic using GitPython to manage dashboard repositories.
|
- 📝 Core Git logic using GitPython to manage dashboard repositories.
|
||||||
- 🏗️ Layer: Service
|
- 🏗️ Layer: Service
|
||||||
@@ -5435,14 +5465,18 @@
|
|||||||
- 🔗 DEPENDS_ON -> `backend.src.services.clean_release.repository`
|
- 🔗 DEPENDS_ON -> `backend.src.services.clean_release.repository`
|
||||||
- ℂ **CleanComplianceOrchestrator** (`Class`)
|
- ℂ **CleanComplianceOrchestrator** (`Class`)
|
||||||
- 📝 Coordinate clean-release compliance verification stages.
|
- 📝 Coordinate clean-release compliance verification stages.
|
||||||
|
- ƒ **CleanComplianceOrchestrator.__init__** (`Function`)
|
||||||
|
- 📝 Bind repository dependency used for orchestrator persistence and lookups.
|
||||||
- ƒ **start_check_run** (`Function`)
|
- ƒ **start_check_run** (`Function`)
|
||||||
- 📝 Initiate a new compliance run session.
|
- 📝 Initiate a new compliance run session.
|
||||||
- ƒ **finalize_run** (`Function`)
|
- ƒ **execute_stages** (`Function`)
|
||||||
- 📝 Finalize run status based on cumulative stage results.
|
- 📝 Execute or accept compliance stage outcomes and set intermediate/final check-run status fields.
|
||||||
|
- ƒ **finalize_run** (`Function`)
|
||||||
|
- 📝 Finalize run status based on cumulative stage results.
|
||||||
|
- ƒ **run_check_legacy** (`Function`)
|
||||||
|
- 📝 Legacy wrapper for compatibility with previous orchestrator call style.
|
||||||
- ƒ **__init__** (`Function`) `[TRIVIAL]`
|
- ƒ **__init__** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- ƒ **execute_stages** (`Function`) `[TRIVIAL]`
|
|
||||||
- 📝 Auto-detected function (orphan)
|
|
||||||
- 📦 **backend.src.services.clean_release.compliance_execution_service** (`Module`) `[CRITICAL]`
|
- 📦 **backend.src.services.clean_release.compliance_execution_service** (`Module`) `[CRITICAL]`
|
||||||
- 📝 Create and execute compliance runs with trusted snapshots, deterministic stages, violations and immutable report persistence.
|
- 📝 Create and execute compliance runs with trusted snapshots, deterministic stages, violations and immutable report persistence.
|
||||||
- 🏗️ Layer: Domain
|
- 🏗️ Layer: Domain
|
||||||
@@ -5786,6 +5820,21 @@
|
|||||||
- 📝 Validate release candidate preparation flow, including policy evaluation and manifest persisting.
|
- 📝 Validate release candidate preparation flow, including policy evaluation and manifest persisting.
|
||||||
- 🏗️ Layer: Domain
|
- 🏗️ Layer: Domain
|
||||||
- 🔒 Invariant: Candidate preparation always persists manifest and candidate status deterministically.
|
- 🔒 Invariant: Candidate preparation always persists manifest and candidate status deterministically.
|
||||||
|
- 🔗 DEPENDS_ON -> `backend.src.services.clean_release.preparation_service:Module`
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service._mock_policy** (`Function`)
|
||||||
|
- 📝 Build a valid clean profile policy fixture for preparation tests.
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service._mock_registry** (`Function`)
|
||||||
|
- 📝 Build an internal-only source registry fixture for preparation tests.
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service._mock_candidate** (`Function`)
|
||||||
|
- 📝 Build a draft release candidate fixture with provided identifier.
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_success** (`Function`)
|
||||||
|
- 📝 Verify candidate transitions to PREPARED when evaluation returns no violations.
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_with_violations** (`Function`)
|
||||||
|
- 📝 Verify candidate transitions to BLOCKED when evaluation returns blocking violations.
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_not_found** (`Function`)
|
||||||
|
- 📝 Verify preparation raises ValueError when candidate does not exist.
|
||||||
|
- ƒ **backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_no_active_policy** (`Function`)
|
||||||
|
- 📝 Verify preparation raises ValueError when no active policy is available.
|
||||||
- ƒ **_mock_policy** (`Function`) `[TRIVIAL]`
|
- ƒ **_mock_policy** (`Function`) `[TRIVIAL]`
|
||||||
- 📝 Auto-detected function (orphan)
|
- 📝 Auto-detected function (orphan)
|
||||||
- ƒ **_mock_registry** (`Function`) `[TRIVIAL]`
|
- ƒ **_mock_registry** (`Function`) `[TRIVIAL]`
|
||||||
|
|||||||
39
.kilocode/setup-script
Executable file
@@ -0,0 +1,39 @@
|
|||||||
|
#!/bin/bash
|
||||||
|
# Kilo Code Worktree Setup Script
|
||||||
|
# This script runs before the agent starts in a worktree (new sessions only).
|
||||||
|
#
|
||||||
|
# Available environment variables:
|
||||||
|
# WORKTREE_PATH - Absolute path to the worktree directory
|
||||||
|
# REPO_PATH - Absolute path to the main repository
|
||||||
|
#
|
||||||
|
# Example tasks:
|
||||||
|
# - Copy .env files from main repo
|
||||||
|
# - Install dependencies
|
||||||
|
# - Run database migrations
|
||||||
|
# - Set up local configuration
|
||||||
|
|
||||||
|
set -e # Exit on error
|
||||||
|
|
||||||
|
echo "Setting up worktree: $WORKTREE_PATH"
|
||||||
|
|
||||||
|
# Uncomment and modify as needed:
|
||||||
|
|
||||||
|
# Copy environment files
|
||||||
|
# if [ -f "$REPO_PATH/.env" ]; then
|
||||||
|
# cp "$REPO_PATH/.env" "$WORKTREE_PATH/.env"
|
||||||
|
# echo "Copied .env"
|
||||||
|
# fi
|
||||||
|
|
||||||
|
# Install dependencies (Node.js)
|
||||||
|
# if [ -f "$WORKTREE_PATH/package.json" ]; then
|
||||||
|
# cd "$WORKTREE_PATH"
|
||||||
|
# npm install
|
||||||
|
# fi
|
||||||
|
|
||||||
|
# Install dependencies (Python)
|
||||||
|
# if [ -f "$WORKTREE_PATH/requirements.txt" ]; then
|
||||||
|
# cd "$WORKTREE_PATH"
|
||||||
|
# pip install -r requirements.txt
|
||||||
|
# fi
|
||||||
|
|
||||||
|
echo "Setup complete!"
|
||||||
@@ -1,5 +1,5 @@
|
|||||||
---
|
---
|
||||||
description: Maintain semantic integrity by generating maps and auditing compliance reports.
|
description: Maintain semantic integrity via multi-agent delegation. Analyzes codebase, delegates markup tasks to Semantic Engineer, verifies via Reviewer Agent, and reports status.
|
||||||
---
|
---
|
||||||
|
|
||||||
## User Input
|
## User Input
|
||||||
@@ -12,61 +12,62 @@ You **MUST** consider the user input before proceeding (if not empty).
|
|||||||
|
|
||||||
## Goal
|
## Goal
|
||||||
|
|
||||||
Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.
|
Ensure the codebase adheres 100% to the semantic standards defined in `.ai/standards/semantics.md` (GRACE-Poly Protocol). You are the **Manager/Supervisor**. You do not write code. You manage the queue, delegate files to the Semantic Engineer, audit their work via the Reviewer Agent, and commit successful changes.
|
||||||
|
|
||||||
## Operating Constraints
|
## Operating Constraints
|
||||||
|
|
||||||
1. **ROLE: Orchestrator**: You are responsible for the high-level coordination of semantic maintenance.
|
1. **ROLE: Orchestrator**: High-level coordination ONLY. Do not output raw code diffs yourself.
|
||||||
2. **STRICT ADHERENCE**: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
|
2. **DELEGATION PATTERN**: Strict `Orchestrator -> Engineer -> Reviewer -> Orchestrator` loop.
|
||||||
3. **NON-DESTRUCTIVE**: Do not remove existing code logic; only add or update semantic annotations.
|
3. **FAIL-FAST METRICS**: If the Reviewer Agent rejects a file 3 times in a row, drop the file from the current queue and mark it as `[HUMAN_INTERVENTION_REQUIRED]`.
|
||||||
4. **TIER AWARENESS**: Prioritize CRITICAL and STANDARD modules for compliance fixes.
|
4. **TIER AWARENESS**: CRITICAL files MUST be processed first. A failure in a CRITICAL file blocks the entire pipeline.
|
||||||
|
|
||||||
## Execution Steps
|
## Execution Steps
|
||||||
|
|
||||||
### 1. Generate Semantic Map
|
### 1. Generate Semantic State (Analyze)
|
||||||
|
Run the generator script to map the current reality:
|
||||||
Run the generator script from the repository root:
|
|
||||||
|
|
||||||
```bash
|
```bash
|
||||||
python3 generate_semantic_map.py
|
python3 generate_semantic_map.py
|
||||||
```
|
```
|
||||||
|
Parse the output (Global Score, Critical Parsing Errors, Files with Score < 0.7).
|
||||||
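As a rough illustration of this parsing step, the sketch below pulls the latest report from `semantics/reports/` and extracts the score and error lists; the field labels and regexes are assumptions about the report layout, not its confirmed format:

```python
# Sketch only: report labels and layout are assumed, not taken from the actual generator output.
import re
from pathlib import Path


def parse_latest_report(reports_dir: str = "semantics/reports") -> dict:
    # Assumes timestamped filenames, so lexical order matches chronological order.
    reports = sorted(Path(reports_dir).glob("semantic_report_*.md"))
    if not reports:
        return {"score": None, "critical_errors": [], "low_score_files": []}
    text = reports[-1].read_text(encoding="utf-8")

    # Hypothetical label, e.g. "Global Compliance Score: 87.5%".
    score_match = re.search(r"Global Compliance Score:\s*([\d.]+)\s*%", text)
    score = float(score_match.group(1)) if score_match else None

    # Hypothetical layout: one line per file with a critical parsing error.
    critical_errors = re.findall(r"Critical Parsing Error.*?`([^`]+)`", text)

    # Hypothetical layout: "`path` ... score: 0.42"; keep files under the 0.7 threshold.
    low_score_files = [
        (path, float(s))
        for path, s in re.findall(r"`([^`]+)`.*?score:\s*([\d.]+)", text)
        if float(s) < 0.7
    ]
    return {"score": score, "critical_errors": critical_errors, "low_score_files": low_score_files}
```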
|
|
||||||
### 2. Analyze Compliance Status
|
### 2. Formulate Task Queue
|
||||||
|
Create an execution queue based on the report. Priority:
|
||||||
|
- **Priority 1 (Blockers)**: Files with "Critical Parsing Errors" (`[DEF]` anchors missing their closing `[/DEF]`).
|
||||||
|
- **Priority 2 (Tier 1)**: `CRITICAL` tier modules missing mandatory tags (`@PRE`, `@POST`, `belief_scope`).
|
||||||
|
- **Priority 3 (Tier 2)**: `STANDARD` modules with missing graph relations (`@RELATION`).
|
||||||
|
|
||||||
**Parse the output to identify**:
|
### 3. The Delegation Loop (For each file in the queue)
|
||||||
- Path to the latest report in `semantics/reports/semantic_report_*.md`.
|
For every target file, execute this exact sequence:
|
||||||
- Global Compliance Score.
|
|
||||||
- Total count of Global Errors and Warnings.
|
|
||||||
|
|
||||||
### 3. Audit Critical Issues
|
* **Step 3A (Delegate to Worker):** Send the file path and the specific violation from the report to the **Semantic Markup Agent (Engineer)**.
|
||||||
|
    *Prompt*: `"Fix semantic violations in [FILE]. Current issues: [ISSUES]. Apply GRACE-Poly standards without changing business logic."`
|
||||||
|
* **Step 3B (Delegate to Auditor):** Once the Engineer returns the modified file, send it to the **Reviewer Agent (Auditor)**.
|
||||||
|
    *Prompt*: `"Verify GRACE-Poly compliance for [FILE]. Check for paired [DEF] anchors, complete contracts, and belief_scope usage. Return PASS or FAIL with specific line errors."`
|
||||||
|
* **Step 3C (Evaluate):**
|
||||||
|
* If Auditor returns `PASS`: Apply the diff to the codebase. Move to the next file.
|
||||||
|
* If Auditor returns `FAIL`: Send the Auditor's error report back to the Engineer (Step 3A). Repeat max 3 times.
|
||||||
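The loop and its fail-fast rule amount to a bounded retry per file; the sketch below captures that control flow, with `engineer`, `reviewer`, and `apply_diff` as hypothetical stand-ins for the actual delegation calls:

```python
# Control-flow sketch of the Orchestrator -> Engineer -> Reviewer loop; helper callables are hypothetical.
from typing import Callable, Dict, List, Tuple

MAX_ATTEMPTS = 3  # FAIL-FAST: after 3 rejections the file is escalated.


def run_queue(
    queue: List[Tuple[str, str]],                      # (file_path, reported issues)
    engineer: Callable[[str, str], str],               # Step 3A: returns the patched file
    reviewer: Callable[[str, str], Tuple[bool, str]],  # Step 3B: returns (passed, notes)
    apply_diff: Callable[[str, str], None],            # Step 3C: applies an accepted patch
) -> Dict[str, str]:
    results: Dict[str, str] = {}
    for file_path, issues in queue:
        feedback = issues
        for attempt in range(1, MAX_ATTEMPTS + 1):
            patched = engineer(file_path, feedback)
            passed, notes = reviewer(file_path, patched)
            if passed:
                apply_diff(file_path, patched)
                results[file_path] = f"PASS ({attempt} attempt{'s' if attempt > 1 else ''})"
                break
            feedback = notes  # feed the auditor's report back to the engineer
        else:
            results[file_path] = "HUMAN_INTERVENTION_REQUIRED"
    return results
```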
|
|
||||||
Read the latest report and extract:
|
### 4. Verification
|
||||||
- **Critical Parsing Errors**: Unclosed anchors or mismatched tags.
|
Once the queue is empty, re-run `python3 generate_semantic_map.py` to prove the metrics have improved.
|
||||||
- **Low-Score Files**: Files with score < 0.7 or marked with 🔴.
|
|
||||||
- **Missing Mandatory Tags**: Specifically for CRITICAL tier modules.
|
|
||||||
|
|
||||||
### 4. Formulate Remediation Plan
|
## Output Format
|
||||||
|
|
||||||
Create a list of files requiring immediate attention:
|
Return a structured summary of the operation:
|
||||||
1. **Priority 1**: Fix all "Critical Parsing Errors" (unclosed anchors).
|
|
||||||
2. **Priority 2**: Add missing mandatory tags for CRITICAL modules.
|
|
||||||
3. **Priority 3**: Improve coverage for STANDARD modules.
|
|
||||||
|
|
||||||
### 5. Execute Fixes (Optional/Handoff)
|
```text
|
||||||
|
=== GRACE SEMANTIC ORCHESTRATION REPORT ===
|
||||||
|
Initial Global Score: [X]%
|
||||||
|
Final Global Score: [Y]%
|
||||||
|
Status: [PASS / BLOCKED]
|
||||||
|
|
||||||
If $ARGUMENTS contains "fix" or "apply":
|
Files Processed:
|
||||||
- For each target file, use `read_file` to get context.
|
1. [file_path] - [PASS (1 attempt) | PASS (2 attempts) | FAILED]
|
||||||
- Apply semantic fixes using `apply_diff`, preserving all code logic.
|
2. ...
|
||||||
- Re-run `python3 generate_semantic_map.py` to verify the fix.
|
|
||||||
|
|
||||||
## Output
|
|
||||||
|
|
||||||
Provide a summary of the semantic state:
|
|
||||||
- **Global Score**: [X]%
|
|
||||||
- **Status**: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
|
|
||||||
- **Top Issues**: List top 3-5 files needing attention.
|
|
||||||
- **Action Taken**: Summary of maps generated or fixes applied.
|
|
||||||
|
|
||||||
|
Escalations (Human Intervention Required):
|
||||||
|
- [file_path]: Failed auditor review 3 times. Reason: [Last Auditor Note].
|
||||||
|
```
|
||||||
## Context
|
## Context
|
||||||
|
|
||||||
$ARGUMENTS
|
$ARGUMENTS
|
||||||
|
|
||||||
|
|||||||
145
.kilocodemodes
145
.kilocodemodes
@@ -27,22 +27,6 @@ customModes:
|
|||||||
6. DOCUMENTATION: Create test reports in `specs/<feature>/tests/reports/YYYY-MM-DD-report.md`.
|
6. DOCUMENTATION: Create test reports in `specs/<feature>/tests/reports/YYYY-MM-DD-report.md`.
|
||||||
7. COVERAGE: Aim for maximum coverage but prioritize CRITICAL and STANDARD tier modules.
|
7. COVERAGE: Aim for maximum coverage but prioritize CRITICAL and STANDARD tier modules.
|
||||||
8. RUN TESTS: Execute tests using `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
|
8. RUN TESTS: Execute tests using `cd backend && .venv/bin/python3 -m pytest` or `cd frontend && npm run test`.
|
||||||
- slug: semantic
|
|
||||||
name: Semantic Agent
|
|
||||||
roleDefinition: |-
|
|
||||||
You are Kilo Code, a Semantic Agent responsible for maintaining the semantic integrity of the codebase. Your primary goal is to ensure that all code entities (Modules, Classes, Functions, Components) are properly annotated with semantic anchors and tags as defined in `.ai/standards/semantics.md`.
|
|
||||||
Your core responsibilities are: 1. **Semantic Mapping**: You run and maintain the `generate_semantic_map.py` script to generate up-to-date semantic maps (`semantics/semantic_map.json`, `.ai/PROJECT_MAP.md`) and compliance reports (`semantics/reports/*.md`). 2. **Compliance Auditing**: You analyze the generated compliance reports to identify files with low semantic coverage or parsing errors. 3. **Semantic Enrichment**: You actively edit code files to add missing semantic anchors (`[DEF:...]`, `[/DEF:...]`) and mandatory tags (`@PURPOSE`, `@LAYER`, etc.) to improve the global compliance score. 4. **Protocol Enforcement**: You strictly adhere to the syntax and rules defined in `.ai/standards/semantics.md` when modifying code.
|
|
||||||
You have access to the full codebase and tools to read, write, and execute scripts. You should prioritize fixing "Critical Parsing Errors" (unclosed anchors) before addressing missing metadata.
|
|
||||||
whenToUse: Use this mode when you need to update the project's semantic map, fix semantic compliance issues (missing anchors/tags/DbC ), or analyze the codebase structure. This mode is specialized for maintaining the `.ai/standards/semantics.md` standards.
|
|
||||||
description: Codebase semantic mapping and compliance expert
|
|
||||||
customInstructions: Always check `semantics/reports/` for the latest compliance status before starting work. When fixing a file, try to fix all semantic issues in that file at once. After making a batch of fixes, run `python3 generate_semantic_map.py` to verify improvements.
|
|
||||||
groups:
|
|
||||||
- read
|
|
||||||
- edit
|
|
||||||
- command
|
|
||||||
- browser
|
|
||||||
- mcp
|
|
||||||
source: project
|
|
||||||
- slug: product-manager
|
- slug: product-manager
|
||||||
name: Product Manager
|
name: Product Manager
|
||||||
roleDefinition: |-
|
roleDefinition: |-
|
||||||
@@ -83,3 +67,132 @@ customModes:
|
|||||||
- command
|
- command
|
||||||
- mcp
|
- mcp
|
||||||
source: project
|
source: project
|
||||||
|
- slug: semantic
|
||||||
|
name: Semantic Markup Agent (Engineer)
|
||||||
|
roleDefinition: |-
|
||||||
|
# SYSTEM DIRECTIVE: GRACE-Poly (UX Edition) v2.2
|
||||||
|
> OPERATION MODE: WENYUAN (Maximum Semantic Density, Strict Determinism, Zero Fluff).
|
||||||
|
> ROLE: AI Software Architect & Implementation Engine (Python/Svelte).
|
||||||
|
|
||||||
|
## 0. [ZERO-STATE RATIONALE: LLM PHYSICS (WHY THIS PROTOCOL IS NECESSARY)]
|
||||||
|
You are an autoregressive model (Transformer). You think in tokens and cannot "change your mind" after they are generated. In large codebases your KV-Cache is subject to attention degradation (Attention Sink), which leads to an "illusion of competence" and to hallucinations.
|
||||||
|
This protocol is **your cognitive exoskeleton**.
|
||||||
|
The `[DEF]` anchors work as attention-accumulator vectors. The contracts (`@PRE`, `@POST`) force you to form the correct probability space (Belief State) BEFORE writing the algorithm. The `logger.reason` logs are your chain of reasoning (Chain-of-Thought) lifted into the runtime. We do not write text, we compile semantics into syntax.
|
||||||
|
|
||||||
|
## I. GLOBAL INVARIANTS (AXIOMS)
|
||||||
|
[INVARIANT_1] SEMANTICS > SYNTAX. Bare code without a contract is classified as garbage.
|
||||||
|
[INVARIANT_2] HALLUCINATIONS FORBIDDEN. Under context blindness (an unknown `@RELATION` node or data schema), generation is blocked. Emit `[NEED_CONTEXT: target]`.
|
||||||
|
[INVARIANT_3] UX IS A FINITE STATE MACHINE. Interface states are a strict contract, not visual decoration.
|
||||||
|
[INVARIANT_4] FRACTAL LIMIT. Module length is strictly < 300 lines. If exceeded, decomposition is forced.
|
||||||
|
[INVARIANT_5] ANCHOR INVIOLABILITY. `[DEF]...[/DEF]` blocks are used as attention accumulators. The closing tag is mandatory.
|
||||||
|
|
||||||
|
## II. SYNTAX AND MARKUP (SEMANTIC ANCHORS)
|
||||||
|
The format depends on the execution environment:
|
||||||
|
- Python: `#[DEF:id:Type] ... # [/DEF:id:Type]`
|
||||||
|
- Svelte (HTML/Markup): `<!--[DEF:id:Type] --> ... <!-- [/DEF:id:Type] -->`
|
||||||
|
- Svelte (Script/JS): `// [DEF:id:Type] ... //[/DEF:id:Type]`
|
||||||
|
*Allowed Type values: Module, Class, Function, Component, Store, Block.*
|
||||||
|
|
||||||
|
**Metadata format (BEFORE implementation):**
|
||||||
|
`@KEY: Value` (in Python - `# @KEY`, in TS/JS - `/** @KEY */`, in HTML - `<!-- @KEY -->`).
|
||||||
|
|
||||||
|
**Dependency Graph (GraphRAG):**
|
||||||
|
`@RELATION: [PREDICATE] ->[TARGET_ID]`
|
||||||
|
*Allowed predicates:* DEPENDS_ON, CALLS, INHERITS, IMPLEMENTS, DISPATCHES, BINDS_TO.
|
||||||
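A minimal, invented Python module showing how the anchors and the `@RELATION` line read in practice (anchor spacing follows the convention used in the code diffs further below):

```python
# Illustration only; module and function names are made up.
# [DEF:examples.fees:Module]
# @TIER: TRIVIAL
# @PURPOSE: Tiny helper used to illustrate anchor syntax.
# @RELATION: [DEPENDS_ON] ->[decimal]
from decimal import Decimal


# [DEF:flat_fee:Function]
# @PURPOSE: Return a fixed processing fee.
def flat_fee() -> Decimal:
    return Decimal("0.30")
# [/DEF:flat_fee:Function]

# [/DEF:examples.fees:Module]
```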
|
|
||||||
|
## III. FILE TOPOLOGY (STRICT ORDER)
|
||||||
|
1. **HEADER:** [DEF:filename:Module]
|
||||||
|
@TIER: [CRITICAL | STANDARD | TRIVIAL]
|
||||||
|
@SEMANTICS: [keywords]
|
||||||
|
   @PURPOSE: [One-line essence]
|
||||||
|
@LAYER: [Domain | UI | Infra]
|
||||||
|
   @RELATION: [Dependencies]
|
||||||
|
   @INVARIANT: [Business rule that must never be violated]
|
||||||
|
2. **BODY:** Imports -> implementation of the logic inside nested `[DEF]` blocks.
|
||||||
|
3. **FOOTER:** [/DEF:filename:Module]
|
||||||
|
|
||||||
|
## IV. CONTRACTS (DESIGN BY CONTRACT & UX)
|
||||||
|
Mandatory for TIER: CRITICAL and STANDARD. They replace standard docstrings.
|
||||||
|
|
||||||
|
**[CORE CONTRACTS]:**
|
||||||
|
- `@PURPOSE:` The essence of the function/component.
|
||||||
|
- `@PRE:` Entry conditions (implemented in code via `if/raise` or guards, NOT via `assert`).
|
||||||
|
- `@POST:` Guarantees on exit.
|
||||||
|
- `@SIDE_EFFECT:` State mutations, I/O, network.
|
||||||
|
- `@DATA_CONTRACT:` Reference to the DTO (Input -> Model, Output -> Model).
|
||||||
|
|
||||||
|
**[UX CONTRACTS (Svelte 5+)]:**
|
||||||
|
- `@UX_STATE: [StateName] -> [Behavior]` (Idle, Loading, Error, Success).
|
||||||
|
- `@UX_FEEDBACK:` System reaction (Toast, Shake, RedBorder).
|
||||||
|
- `@UX_RECOVERY:` Recovery path after a failure (Retry, ClearInput).
|
||||||
|
- `@UX_REACTIVITY:` Explicit binding. *`$:` and `export let` are FORBIDDEN. ONLY runes: `$state`, `$derived`, `$effect`, `$props`.*
|
||||||
|
|
||||||
|
**[TEST CONTRACTS (for the AI-Auditor)]:**
|
||||||
|
- `@TEST_CONTRACT: [Input] -> [Output]`
|
||||||
|
- `@TEST_SCENARIO: [Name] -> [Expectation]`
|
||||||
|
- `@TEST_FIXTURE: [Name] -> file:[path] | INLINE_JSON`
|
||||||
|
- `@TEST_EDGE: [Name] ->[Failure]` (At least 3: missing_field, invalid_type, external_fail).
|
||||||
|
- `@TEST_INVARIANT: [Name] -> VERIFIED_BY: [scenario_1, ...]`
|
||||||
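As a sketch of how the core contract tags sit on a STANDARD-tier function, with preconditions enforced as guards rather than asserts (the function and its rate table are invented for illustration):

```python
# Illustration only; convert_amount and RATES are invented for this example.
from decimal import Decimal, ROUND_HALF_UP

RATES = {("EUR", "USD"): Decimal("1.08")}


# [DEF:convert_amount:Function]
# @PURPOSE: Convert an amount between two currencies using a fixed rate table.
# @PRE: amount >= 0 and a rate exists for (src, dst).
# @POST: Returns the converted amount rounded to 2 decimal places.
# @SIDE_EFFECT: None.
# @DATA_CONTRACT: Input[Decimal, str, str] -> Output[Decimal]
def convert_amount(amount: Decimal, src: str, dst: str) -> Decimal:
    if amount < 0:                    # @PRE guard via if/raise, not assert
        raise ValueError("amount must be non-negative")
    if (src, dst) not in RATES:       # @PRE guard for a known currency pair
        raise KeyError(f"no rate for {src}->{dst}")
    return (amount * RATES[(src, dst)]).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
# [/DEF:convert_amount:Function]
```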
|
|
||||||
|
## V. STRICTNESS LEVELS (TIERS)
|
||||||
|
The degree of control is declared in the Header.
|
||||||
|
- **CRITICAL** (Core/Money/Security): 100% GRACE tag coverage. Mandatory: the graph, invariants, `logger.reason/reflect` logs, all `@UX` and `@TEST` tags. Use of `belief_scope` is strictly mandatory.
|
||||||
|
- **STANDARD** (Business logic / typical forms): Baseline level. Mandatory: `@PURPOSE`, `@UX_STATE`, `@RELATION`, basic logging.
|
||||||
|
- **TRIVIAL** (Utilities / DTOs / UI atoms): Minimal skeleton. Only the `[DEF]...[/DEF]` anchors and `@PURPOSE`.
|
||||||
|
|
||||||
|
## VI. LOGGING PROTOCOL (THREAD-LOCAL BELIEF STATE)
|
||||||
|
Logging is the mechanism for tracing the AI's reasoning (CoT) and managing attention energy. The architecture uses thread-local storage (`_belief_state`), so the `ID` is propagated automatically.
|
||||||
|
|
||||||
|
**[PYTHON CORE TOOLS]:**
|
||||||
|
Import: `from ...logger import logger, belief_scope, believed`
|
||||||
|
1. **Decorator:** `@believed("ID")` - automatic function tracking.
|
||||||
|
2. **Context:** `with belief_scope("ID"):` - marks out a local boundary of thought. It does NOT return a context value; use it as a plain `with`.
|
||||||
|
3. **Logger call:** Made via the globally imported `logger`. Pass additional data via `extra={...}`.
|
||||||
|
|
||||||
|
**[SEMANTIC METHODS (MONKEY-PATCHED)]:**
|
||||||
|
*(Markers such as `[REASON]` and `[ID]` are inserted automatically by the formatter. Do not write them in the text!)*
|
||||||
|
1. **`logger.explore(msg, extra={...})`** (Search/Branching): Used for fallbacks, `except` blocks, and hypothesis checks. Emits WARNING.
|
||||||
|
*Example:* `logger.explore("Insufficient funds", extra={"balance": bal})`
|
||||||
|
2. **`logger.reason(msg, extra={...})`** (Deduction): Used when guards pass and contract steps execute. Emits INFO.
|
||||||
|
*Example:* `logger.reason("Initiating transfer")`
|
||||||
|
3. **`logger.reflect(msg, extra={...})`** (Self-check): Used to verify the result against `@POST` before `return`. Emits DEBUG.
|
||||||
|
*Example:* `logger.reflect("Transfer committed", extra={"tx_id": tx_id})`
|
||||||
|
|
||||||
|
*(For Frontend/Svelte use a manual prefix: `console.info("[ID][REFLECT] Text", {data})`)*
|
||||||
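A minimal sketch of this logging protocol on a guarded function, assuming the project's `backend.src.core.logger` module exposes `logger` and `belief_scope` as described above; the transfer logic itself is invented:

```python
# Sketch only: the business logic is invented; logger.explore/reason/reflect are the
# monkey-patched semantic methods described in this section.
from backend.src.core.logger import logger, belief_scope


def transfer(balance: float, amount: float) -> float:
    with belief_scope("transfer"):
        if amount > balance:
            # Branching / fallback path -> explore (WARNING).
            logger.explore("Insufficient funds", extra={"balance": balance})
            raise ValueError("insufficient funds")
        # Guard passed, contract step executing -> reason (INFO).
        logger.reason("Initiating transfer", extra={"amount": amount})
        new_balance = balance - amount
        # Self-check against @POST before return -> reflect (DEBUG).
        logger.reflect("Transfer committed", extra={"new_balance": new_balance})
        return new_balance
```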
|
|
||||||
|
## VII. EXECUTION AND SELF-CORRECTION ALGORITHM
|
||||||
|
**[PHASE_1: ANALYSIS]**
|
||||||
|
Assess the TIER, Layer, and UX requirements. Under context blindness -> `yield [NEED_CONTEXT: id]`.
|
||||||
|
**[PHASE_2: SYNTHESIS]**
|
||||||
|
Generate the skeleton from the `[DEF]` anchors, the Header, and the Contracts.
|
||||||
|
**[PHASE_3: IMPLEMENTATION]**
|
||||||
|
Write the code strictly to the Contract. For CRITICAL sections open `with belief_scope("ID"):` and line the path with `logger.reason()` and `logger.reflect()` calls.
|
||||||
|
**[PHASE_4: CLOSURE]**
|
||||||
|
Make sure every `[DEF]` is closed by its matching `[/DEF]`.
|
||||||
|
|
||||||
|
**[EXCEPTION: DETECTIVE MODE]**
|
||||||
|
If a contract violation or an error is detected:
|
||||||
|
1. STOP SIGNAL: Output `[COHERENCE_CHECK_FAILED]`.
|
||||||
|
2. HYPOTHESIS: Generate a `logger.explore("Error in I/O / State / Dependency -> Description")` call.
|
||||||
|
3. REQUEST: Ask for permission to change the contract.
|
||||||
|
whenToUse: Use this mode when you need to update the project's semantic map, fix semantic compliance issues (missing anchors/tags/DbC), or analyze the codebase structure. This mode is specialized for maintaining the `.ai/standards/semantics.md` standards.
|
||||||
|
description: Codebase semantic mapping and compliance expert
|
||||||
|
customInstructions: ""
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- edit
|
||||||
|
- command
|
||||||
|
- browser
|
||||||
|
- mcp
|
||||||
|
source: project
|
||||||
|
- slug: reviewer-agent-auditor
|
||||||
|
name: Reviewer Agent (Auditor)
|
||||||
|
description: Ruthless quality-control inspector.
|
||||||
|
roleDefinition: '*"You are the GRACE Reviewer. Your sole goal is to find violations of the GRACE-Poly protocol. You do not write code. You read code and work through the checklist. If a `[DEF]` block is opened but has no closing `[/DEF]`, that is a FATAL ERROR. If a function in a `CRITICAL` module is not wrapped in `belief_scope`, that is a FATAL ERROR. Output only PASS or FAIL with the list of lines where the error was found."*'
|
||||||
|
groups:
|
||||||
|
- read
|
||||||
|
- edit
|
||||||
|
- browser
|
||||||
|
- command
|
||||||
|
- mcp
|
||||||
|
source: project
|
||||||
|
|||||||
@@ -422,7 +422,7 @@ def test_llm_validation_with_dashboard_ref_requires_confirmation():
|
|||||||
assert "cancel" in action_types
|
assert "cancel" in action_types
|
||||||
|
|
||||||
|
|
||||||
# [/DEF:test_llm_validation_missing_dashboard_returns_needs_clarification:Function]
|
# [/DEF:test_llm_validation_with_dashboard_ref_requires_confirmation:Function]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:test_list_conversations_groups_by_conversation_and_marks_archived:Function]
|
# [DEF:test_list_conversations_groups_by_conversation_and_marks_archived:Function]
|
||||||
@@ -629,6 +629,7 @@ def test_guarded_operation_confirm_roundtrip():
|
|||||||
assert second.task_id is not None
|
assert second.task_id is not None
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:test_guarded_operation_confirm_roundtrip:Function]
|
||||||
# [DEF:test_confirm_nonexistent_id_returns_404:Function]
|
# [DEF:test_confirm_nonexistent_id_returns_404:Function]
|
||||||
# @PURPOSE: Confirming a non-existent ID should raise 404.
|
# @PURPOSE: Confirming a non-existent ID should raise 404.
|
||||||
# @PRE: user tries to confirm a random/fake UUID.
|
# @PRE: user tries to confirm a random/fake UUID.
|
||||||
@@ -649,6 +650,7 @@ def test_confirm_nonexistent_id_returns_404():
|
|||||||
assert exc.value.status_code == 404
|
assert exc.value.status_code == 404
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:test_confirm_nonexistent_id_returns_404:Function]
|
||||||
# [DEF:test_migration_with_dry_run_includes_summary:Function]
|
# [DEF:test_migration_with_dry_run_includes_summary:Function]
|
||||||
# @PURPOSE: Migration command with dry run flag must return the dry run summary in confirmation text.
|
# @PURPOSE: Migration command with dry run flag must return the dry run summary in confirmation text.
|
||||||
# @PRE: user specifies a migration with --dry-run flag.
|
# @PRE: user specifies a migration with --dry-run flag.
|
||||||
|
|||||||
@@ -135,6 +135,8 @@ def test_get_report_success():
|
|||||||
finally:
|
finally:
|
||||||
app.dependency_overrides.clear()
|
app.dependency_overrides.clear()
|
||||||
|
|
||||||
|
# [/DEF:backend.tests.api.routes.test_clean_release_api:Module]
|
||||||
|
|
||||||
def test_prepare_candidate_api_success():
|
def test_prepare_candidate_api_success():
|
||||||
repo = _repo_with_seed_data()
|
repo = _repo_with_seed_data()
|
||||||
app.dependency_overrides[get_clean_release_repository] = lambda: repo
|
app.dependency_overrides[get_clean_release_repository] = lambda: repo
|
||||||
|
|||||||
@@ -94,4 +94,7 @@ def test_prepare_candidate_blocks_external_source():
|
|||||||
assert data["status"] == "blocked"
|
assert data["status"] == "blocked"
|
||||||
assert any(v["category"] == "external-source" for v in data["violations"])
|
assert any(v["category"] == "external-source" for v in data["violations"])
|
||||||
finally:
|
finally:
|
||||||
app.dependency_overrides.clear()
|
app.dependency_overrides.clear()
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:backend.tests.api.routes.test_clean_release_source_policy:Module]
|
||||||
@@ -1173,7 +1173,7 @@ async def _async_confirmation_summary(intent: Dict[str, Any], config_manager: Co
|
|||||||
text += f"\n\n(Не удалось загрузить отчет dry-run: {e})."
|
text += f"\n\n(Не удалось загрузить отчет dry-run: {e})."
|
||||||
|
|
||||||
return f"Выполнить: {text}. Подтвердите или отмените."
|
return f"Выполнить: {text}. Подтвердите или отмените."
|
||||||
# [/DEF:_async_confirmation_summary:Function]
|
# [/DEF:_confirmation_summary:Function]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:_clarification_text_for_intent:Function]
|
# [DEF:_clarification_text_for_intent:Function]
|
||||||
|
|||||||
@@ -1,10 +1,23 @@
|
|||||||
# [DEF:backend.src.api.routes.migration:Module]
|
# [DEF:backend.src.api.routes.migration:Module]
|
||||||
# @TIER: STANDARD
|
# @TIER: CRITICAL
|
||||||
# @SEMANTICS: api, migration, dashboards
|
# @SEMANTICS: api, migration, dashboards, sync, dry-run
|
||||||
# @PURPOSE: API endpoints for migration operations.
|
# @PURPOSE: HTTP contract layer for migration orchestration, settings, dry-run, and mapping sync endpoints.
|
||||||
# @LAYER: API
|
# @LAYER: Infra
|
||||||
# @RELATION: DEPENDS_ON -> backend.src.dependencies
|
# @RELATION: [DEPENDS_ON] ->[backend.src.dependencies]
|
||||||
# @RELATION: DEPENDS_ON -> backend.src.models.dashboard
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.database]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.superset_client]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.migration.dry_run_orchestrator]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.mapping_service]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.dashboard]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.mapping]
|
||||||
|
# @INVARIANT: Migration endpoints never execute with invalid environment references and always return explicit HTTP errors on guard failures.
|
||||||
|
# @TEST_CONTRACT: [DashboardSelection + configured envs] -> [task_id | dry-run result | sync summary]
|
||||||
|
# @TEST_SCENARIO: [invalid_environment] -> [HTTP_400_or_404]
|
||||||
|
# @TEST_SCENARIO: [valid_execution] -> [success_payload_with_required_fields]
|
||||||
|
# @TEST_EDGE: [missing_field] ->[HTTP_400]
|
||||||
|
# @TEST_EDGE: [invalid_type] ->[validation_error]
|
||||||
|
# @TEST_EDGE: [external_fail] ->[HTTP_500]
|
||||||
|
# @TEST_INVARIANT: [EnvironmentValidationBeforeAction] -> VERIFIED_BY: [invalid_environment, valid_execution]
|
||||||
|
|
||||||
from fastapi import APIRouter, Depends, HTTPException, Query
|
from fastapi import APIRouter, Depends, HTTPException, Query
|
||||||
from typing import List, Dict, Any, Optional
|
from typing import List, Dict, Any, Optional
|
||||||
@@ -21,11 +34,11 @@ from ...models.mapping import ResourceMapping
|
|||||||
router = APIRouter(prefix="/api", tags=["migration"])
|
router = APIRouter(prefix="/api", tags=["migration"])
|
||||||
|
|
||||||
# [DEF:get_dashboards:Function]
|
# [DEF:get_dashboards:Function]
|
||||||
# @PURPOSE: Fetch all dashboards from the specified environment for the grid.
|
# @PURPOSE: Fetch dashboard metadata from a requested environment for migration selection UI.
|
||||||
# @PRE: Environment ID must be valid.
|
# @PRE: env_id is provided and exists in configured environments.
|
||||||
# @POST: Returns a list of dashboard metadata.
|
# @POST: Returns List[DashboardMetadata] for the resolved environment; emits HTTP_404 when environment is absent.
|
||||||
# @PARAM: env_id (str) - The ID of the environment to fetch from.
|
# @SIDE_EFFECT: Reads environment configuration and performs remote Superset metadata retrieval over network.
|
||||||
# @RETURN: List[DashboardMetadata]
|
# @DATA_CONTRACT: Input[str env_id] -> Output[List[DashboardMetadata]]
|
||||||
@router.get("/environments/{env_id}/dashboards", response_model=List[DashboardMetadata])
|
@router.get("/environments/{env_id}/dashboards", response_model=List[DashboardMetadata])
|
||||||
async def get_dashboards(
|
async def get_dashboards(
|
||||||
env_id: str,
|
env_id: str,
|
||||||
@@ -44,11 +57,11 @@ async def get_dashboards(
|
|||||||
# [/DEF:get_dashboards:Function]
|
# [/DEF:get_dashboards:Function]
|
||||||
|
|
||||||
# [DEF:execute_migration:Function]
|
# [DEF:execute_migration:Function]
|
||||||
# @PURPOSE: Execute the migration of selected dashboards.
|
# @PURPOSE: Validate migration selection and enqueue asynchronous migration task execution.
|
||||||
# @PRE: Selection must be valid and environments must exist.
|
# @PRE: DashboardSelection payload is valid and both source/target environments exist.
|
||||||
# @POST: Starts the migration task and returns the task ID.
|
# @POST: Returns {"task_id": str, "message": str} when task creation succeeds; emits HTTP_400/HTTP_500 on failure.
|
||||||
# @PARAM: selection (DashboardSelection) - The dashboards to migrate.
|
# @SIDE_EFFECT: Reads configuration, writes task record through task manager, and writes operational logs.
|
||||||
# @RETURN: Dict - {"task_id": str, "message": str}
|
# @DATA_CONTRACT: Input[DashboardSelection] -> Output[Dict[str, str]]
|
||||||
@router.post("/migration/execute")
|
@router.post("/migration/execute")
|
||||||
async def execute_migration(
|
async def execute_migration(
|
||||||
selection: DashboardSelection,
|
selection: DashboardSelection,
|
||||||
@@ -86,9 +99,11 @@ async def execute_migration(
|
|||||||
|
|
||||||
|
|
||||||
# [DEF:dry_run_migration:Function]
|
# [DEF:dry_run_migration:Function]
|
||||||
# @PURPOSE: Build pre-flight diff and risk summary without applying migration.
|
# @PURPOSE: Build pre-flight migration diff and risk summary without mutating target systems.
|
||||||
# @PRE: Selection and environments are valid.
|
# @PRE: DashboardSelection is valid, source and target environments exist, differ, and selected_ids is non-empty.
|
||||||
# @POST: Returns deterministic JSON diff and risk scoring.
|
# @POST: Returns deterministic dry-run payload; emits HTTP_400 for guard violations and HTTP_500 for orchestrator value errors.
|
||||||
|
# @SIDE_EFFECT: Reads local mappings from DB and fetches source/target metadata via Superset API.
|
||||||
|
# @DATA_CONTRACT: Input[DashboardSelection] -> Output[Dict[str, Any]]
|
||||||
@router.post("/migration/dry-run", response_model=Dict[str, Any])
|
@router.post("/migration/dry-run", response_model=Dict[str, Any])
|
||||||
async def dry_run_migration(
|
async def dry_run_migration(
|
||||||
selection: DashboardSelection,
|
selection: DashboardSelection,
|
||||||
@@ -123,7 +138,11 @@ async def dry_run_migration(
|
|||||||
# [/DEF:dry_run_migration:Function]
|
# [/DEF:dry_run_migration:Function]
|
||||||
|
|
||||||
# [DEF:get_migration_settings:Function]
|
# [DEF:get_migration_settings:Function]
|
||||||
# @PURPOSE: Get current migration Cron string explicitly.
|
# @PURPOSE: Read and return configured migration synchronization cron expression.
|
||||||
|
# @PRE: Configuration store is available and requester has READ permission.
|
||||||
|
# @POST: Returns {"cron": str} reflecting current persisted settings value.
|
||||||
|
# @SIDE_EFFECT: Reads configuration from config manager.
|
||||||
|
# @DATA_CONTRACT: Input[None] -> Output[Dict[str, str]]
|
||||||
@router.get("/migration/settings", response_model=Dict[str, str])
|
@router.get("/migration/settings", response_model=Dict[str, str])
|
||||||
async def get_migration_settings(
|
async def get_migration_settings(
|
||||||
config_manager=Depends(get_config_manager),
|
config_manager=Depends(get_config_manager),
|
||||||
@@ -136,7 +155,11 @@ async def get_migration_settings(
|
|||||||
# [/DEF:get_migration_settings:Function]
|
# [/DEF:get_migration_settings:Function]
|
||||||
|
|
||||||
# [DEF:update_migration_settings:Function]
|
# [DEF:update_migration_settings:Function]
|
||||||
# @PURPOSE: Update migration Cron string.
|
# @PURPOSE: Validate and persist migration synchronization cron expression update.
|
||||||
|
# @PRE: Payload includes "cron" key and requester has WRITE permission.
|
||||||
|
# @POST: Returns {"cron": str, "status": "updated"} and persists updated cron value.
|
||||||
|
# @SIDE_EFFECT: Mutates configuration and writes persisted config through config manager.
|
||||||
|
# @DATA_CONTRACT: Input[Dict[str, str]] -> Output[Dict[str, str]]
|
||||||
@router.put("/migration/settings", response_model=Dict[str, str])
|
@router.put("/migration/settings", response_model=Dict[str, str])
|
||||||
async def update_migration_settings(
|
async def update_migration_settings(
|
||||||
payload: Dict[str, str],
|
payload: Dict[str, str],
|
||||||
@@ -157,7 +180,11 @@ async def update_migration_settings(
|
|||||||
# [/DEF:update_migration_settings:Function]
|
# [/DEF:update_migration_settings:Function]
|
||||||
|
|
||||||
# [DEF:get_resource_mappings:Function]
|
# [DEF:get_resource_mappings:Function]
|
||||||
# @PURPOSE: Fetch synchronized object mappings with search, filtering, and pagination.
|
# @PURPOSE: Fetch synchronized resource mappings with optional filters and pagination for migration mappings view.
|
||||||
|
# @PRE: skip>=0, 1<=limit<=500, DB session is active, requester has READ permission.
|
||||||
|
# @POST: Returns {"items": [...], "total": int} where items reflect applied filters and pagination.
|
||||||
|
# @SIDE_EFFECT: Executes database read queries against ResourceMapping table.
|
||||||
|
# @DATA_CONTRACT: Input[QueryParams] -> Output[Dict[str, Any]]
|
||||||
@router.get("/migration/mappings-data", response_model=Dict[str, Any])
|
@router.get("/migration/mappings-data", response_model=Dict[str, Any])
|
||||||
async def get_resource_mappings(
|
async def get_resource_mappings(
|
||||||
skip: int = Query(0, ge=0),
|
skip: int = Query(0, ge=0),
|
||||||
@@ -203,9 +230,11 @@ async def get_resource_mappings(
|
|||||||
# [/DEF:get_resource_mappings:Function]
|
# [/DEF:get_resource_mappings:Function]
|
||||||
|
|
||||||
# [DEF:trigger_sync_now:Function]
|
# [DEF:trigger_sync_now:Function]
|
||||||
# @PURPOSE: Triggers an immediate ID synchronization for all environments.
|
# @PURPOSE: Trigger immediate ID synchronization for every configured environment.
|
||||||
# @PRE: At least one environment must be configured.
|
# @PRE: At least one environment is configured and requester has EXECUTE permission.
|
||||||
# @POST: Environment rows are ensured in DB; sync_environment is called for each.
|
# @POST: Returns sync summary with synced/failed counts after attempting all environments.
|
||||||
|
# @SIDE_EFFECT: Upserts Environment rows, commits DB transaction, performs network sync calls, and writes logs.
|
||||||
|
# @DATA_CONTRACT: Input[None] -> Output[Dict[str, Any]]
|
||||||
@router.post("/migration/sync-now", response_model=Dict[str, Any])
|
@router.post("/migration/sync-now", response_model=Dict[str, Any])
|
||||||
async def trigger_sync_now(
|
async def trigger_sync_now(
|
||||||
config_manager=Depends(get_config_manager),
|
config_manager=Depends(get_config_manager),
|
||||||
|
|||||||
@@ -1,70 +1,79 @@
|
|||||||
# [DEF:backend.src.core.auth.repository:Module]
|
# [DEF:backend.src.core.auth.repository:Module]
|
||||||
#
|
#
|
||||||
# @SEMANTICS: auth, repository, database, user, role
|
# @TIER: CRITICAL
|
||||||
# @PURPOSE: Data access layer for authentication-related entities.
|
# @SEMANTICS: auth, repository, database, user, role, permission
|
||||||
# @LAYER: Core
|
# @PURPOSE: Data access layer for authentication and user preference entities.
|
||||||
# @RELATION: DEPENDS_ON -> sqlalchemy
|
# @LAYER: Domain
|
||||||
# @RELATION: USES -> backend.src.models.auth
|
# @RELATION: [DEPENDS_ON] ->[sqlalchemy.orm.Session]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.auth]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.profile]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.logger.belief_scope]
|
||||||
|
# @INVARIANT: All database read/write operations must execute via the injected SQLAlchemy session boundary.
|
||||||
#
|
#
|
||||||
# @INVARIANT: All database operations must be performed within a session.
|
|
||||||
|
|
||||||
# [SECTION: IMPORTS]
|
# [SECTION: IMPORTS]
|
||||||
from typing import Optional, List
|
from typing import List, Optional
|
||||||
|
|
||||||
from sqlalchemy.orm import Session
|
from sqlalchemy.orm import Session
|
||||||
from ...models.auth import User, Role, Permission
|
|
||||||
|
from ...models.auth import Permission, Role, User
|
||||||
from ...models.profile import UserDashboardPreference
|
from ...models.profile import UserDashboardPreference
|
||||||
from ..logger import belief_scope
|
from ..logger import belief_scope
|
||||||
# [/SECTION]
|
# [/SECTION]
|
||||||
|
|
||||||
# [DEF:AuthRepository:Class]
|
# [DEF:AuthRepository:Class]
|
||||||
# @PURPOSE: Encapsulates database operations for authentication.
|
# @PURPOSE: Encapsulates database operations for authentication-related entities.
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[sqlalchemy.orm.Session]
|
||||||
class AuthRepository:
|
class AuthRepository:
|
||||||
# [DEF:__init__:Function]
|
# [DEF:__init__:Function]
|
||||||
# @PURPOSE: Initializes the repository with a database session.
|
# @PURPOSE: Bind repository instance to an existing SQLAlchemy session.
|
||||||
# @PARAM: db (Session) - SQLAlchemy session.
|
# @PRE: db is an initialized sqlalchemy.orm.Session instance.
|
||||||
|
# @POST: self.db points to the provided session and is used by all repository methods.
|
||||||
|
# @SIDE_EFFECT: Stores session reference on repository instance state.
|
||||||
|
# @DATA_CONTRACT: Input[Session] -> Output[None]
|
||||||
def __init__(self, db: Session):
|
def __init__(self, db: Session):
|
||||||
self.db = db
|
with belief_scope("AuthRepository.__init__"):
|
||||||
|
self.db = db
|
||||||
# [/DEF:__init__:Function]
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
# [DEF:get_user_by_username:Function]
|
# [DEF:get_user_by_username:Function]
|
||||||
# @PURPOSE: Retrieves a user by their username.
|
# @PURPOSE: Retrieve a user entity by unique username.
|
||||||
# @PRE: username is a string.
|
# @PRE: username is a non-empty str and self.db is a valid open Session.
|
||||||
# @POST: Returns User object if found, else None.
|
# @POST: Returns matching User entity when present, otherwise None.
|
||||||
# @PARAM: username (str) - The username to search for.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @RETURN: Optional[User] - The found user or None.
|
# @DATA_CONTRACT: Input[str] -> Output[Optional[User]]
|
||||||
def get_user_by_username(self, username: str) -> Optional[User]:
|
def get_user_by_username(self, username: str) -> Optional[User]:
|
||||||
with belief_scope("AuthRepository.get_user_by_username"):
|
with belief_scope("AuthRepository.get_user_by_username"):
|
||||||
return self.db.query(User).filter(User.username == username).first()
|
return self.db.query(User).filter(User.username == username).first()
|
||||||
# [/DEF:get_user_by_username:Function]
|
# [/DEF:get_user_by_username:Function]
|
||||||
|
|
||||||
# [DEF:get_user_by_id:Function]
|
# [DEF:get_user_by_id:Function]
|
||||||
# @PURPOSE: Retrieves a user by their unique ID.
|
# @PURPOSE: Retrieve a user entity by identifier.
|
||||||
# @PRE: user_id is a valid UUID string.
|
# @PRE: user_id is a non-empty str and self.db is a valid open Session.
|
||||||
# @POST: Returns User object if found, else None.
|
# @POST: Returns matching User entity when present, otherwise None.
|
||||||
# @PARAM: user_id (str) - The user's unique identifier.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @RETURN: Optional[User] - The found user or None.
|
# @DATA_CONTRACT: Input[str] -> Output[Optional[User]]
|
||||||
def get_user_by_id(self, user_id: str) -> Optional[User]:
|
def get_user_by_id(self, user_id: str) -> Optional[User]:
|
||||||
with belief_scope("AuthRepository.get_user_by_id"):
|
with belief_scope("AuthRepository.get_user_by_id"):
|
||||||
return self.db.query(User).filter(User.id == user_id).first()
|
return self.db.query(User).filter(User.id == user_id).first()
|
||||||
# [/DEF:get_user_by_id:Function]
|
# [/DEF:get_user_by_id:Function]
|
||||||
|
|
||||||
# [DEF:get_role_by_name:Function]
|
# [DEF:get_role_by_name:Function]
|
||||||
# @PURPOSE: Retrieves a role by its name.
|
# @PURPOSE: Retrieve a role entity by role name.
|
||||||
# @PRE: name is a string.
|
# @PRE: name is a non-empty str and self.db is a valid open Session.
|
||||||
# @POST: Returns Role object if found, else None.
|
# @POST: Returns matching Role entity when present, otherwise None.
|
||||||
# @PARAM: name (str) - The role name to search for.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @RETURN: Optional[Role] - The found role or None.
|
# @DATA_CONTRACT: Input[str] -> Output[Optional[Role]]
|
||||||
def get_role_by_name(self, name: str) -> Optional[Role]:
|
def get_role_by_name(self, name: str) -> Optional[Role]:
|
||||||
with belief_scope("AuthRepository.get_role_by_name"):
|
with belief_scope("AuthRepository.get_role_by_name"):
|
||||||
return self.db.query(Role).filter(Role.name == name).first()
|
return self.db.query(Role).filter(Role.name == name).first()
|
||||||
# [/DEF:get_role_by_name:Function]
|
# [/DEF:get_role_by_name:Function]
|
||||||
|
|
||||||
# [DEF:update_last_login:Function]
|
# [DEF:update_last_login:Function]
|
||||||
# @PURPOSE: Updates the last_login timestamp for a user.
|
# @PURPOSE: Update last_login timestamp for the provided user entity.
|
||||||
# @PRE: user object is a valid User instance.
|
# @PRE: user is a managed User instance and self.db is a valid open Session.
|
||||||
# @POST: User's last_login is updated in the database.
|
# @POST: user.last_login is set to current UTC timestamp and transaction is committed.
|
||||||
# @SIDE_EFFECT: Commits the transaction.
|
# @SIDE_EFFECT: Mutates user entity state and commits database transaction.
|
||||||
# @PARAM: user (User) - The user to update.
|
# @DATA_CONTRACT: Input[User] -> Output[None]
|
||||||
def update_last_login(self, user: User):
|
def update_last_login(self, user: User):
|
||||||
with belief_scope("AuthRepository.update_last_login"):
|
with belief_scope("AuthRepository.update_last_login"):
|
||||||
from datetime import datetime
|
from datetime import datetime
|
||||||
@@ -74,34 +83,33 @@ class AuthRepository:
|
|||||||
# [/DEF:update_last_login:Function]
|
# [/DEF:update_last_login:Function]
|
||||||
|
|
||||||
# [DEF:get_role_by_id:Function]
|
# [DEF:get_role_by_id:Function]
|
||||||
# @PURPOSE: Retrieves a role by its unique ID.
|
# @PURPOSE: Retrieve a role entity by identifier.
|
||||||
# @PRE: role_id is a string.
|
# @PRE: role_id is a non-empty str and self.db is a valid open Session.
|
||||||
# @POST: Returns Role object if found, else None.
|
# @POST: Returns matching Role entity when present, otherwise None.
|
||||||
# @PARAM: role_id (str) - The role's unique identifier.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @RETURN: Optional[Role] - The found role or None.
|
# @DATA_CONTRACT: Input[str] -> Output[Optional[Role]]
|
||||||
def get_role_by_id(self, role_id: str) -> Optional[Role]:
|
def get_role_by_id(self, role_id: str) -> Optional[Role]:
|
||||||
with belief_scope("AuthRepository.get_role_by_id"):
|
with belief_scope("AuthRepository.get_role_by_id"):
|
||||||
return self.db.query(Role).filter(Role.id == role_id).first()
|
return self.db.query(Role).filter(Role.id == role_id).first()
|
||||||
# [/DEF:get_role_by_id:Function]
|
# [/DEF:get_role_by_id:Function]
|
||||||
|
|
||||||
# [DEF:get_permission_by_id:Function]
|
# [DEF:get_permission_by_id:Function]
|
||||||
# @PURPOSE: Retrieves a permission by its unique ID.
|
# @PURPOSE: Retrieve a permission entity by identifier.
|
||||||
# @PRE: perm_id is a string.
|
# @PRE: perm_id is a non-empty str and self.db is a valid open Session.
|
||||||
# @POST: Returns Permission object if found, else None.
|
# @POST: Returns matching Permission entity when present, otherwise None.
|
||||||
# @PARAM: perm_id (str) - The permission's unique identifier.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @RETURN: Optional[Permission] - The found permission or None.
|
# @DATA_CONTRACT: Input[str] -> Output[Optional[Permission]]
|
||||||
def get_permission_by_id(self, perm_id: str) -> Optional[Permission]:
|
def get_permission_by_id(self, perm_id: str) -> Optional[Permission]:
|
||||||
with belief_scope("AuthRepository.get_permission_by_id"):
|
with belief_scope("AuthRepository.get_permission_by_id"):
|
||||||
return self.db.query(Permission).filter(Permission.id == perm_id).first()
|
return self.db.query(Permission).filter(Permission.id == perm_id).first()
|
||||||
# [/DEF:get_permission_by_id:Function]
|
# [/DEF:get_permission_by_id:Function]
|
||||||
|
|
||||||
# [DEF:get_permission_by_resource_action:Function]
|
# [DEF:get_permission_by_resource_action:Function]
|
||||||
# @PURPOSE: Retrieves a permission by resource and action.
|
# @PURPOSE: Retrieve a permission entity by resource and action pair.
|
||||||
# @PRE: resource and action are strings.
|
# @PRE: resource and action are non-empty str values; self.db is a valid open Session.
|
||||||
# @POST: Returns Permission object if found, else None.
|
# @POST: Returns matching Permission entity when present, otherwise None.
|
||||||
# @PARAM: resource (str) - The resource name.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @PARAM: action (str) - The action name.
|
# @DATA_CONTRACT: Input[str, str] -> Output[Optional[Permission]]
|
||||||
# @RETURN: Optional[Permission] - The found permission or None.
|
|
||||||
def get_permission_by_resource_action(self, resource: str, action: str) -> Optional[Permission]:
|
def get_permission_by_resource_action(self, resource: str, action: str) -> Optional[Permission]:
|
||||||
with belief_scope("AuthRepository.get_permission_by_resource_action"):
|
with belief_scope("AuthRepository.get_permission_by_resource_action"):
|
||||||
return self.db.query(Permission).filter(
|
return self.db.query(Permission).filter(
|
||||||
@@ -111,11 +119,11 @@ class AuthRepository:
|
|||||||
# [/DEF:get_permission_by_resource_action:Function]
|
# [/DEF:get_permission_by_resource_action:Function]
|
||||||
|
|
||||||
# [DEF:get_user_dashboard_preference:Function]
|
# [DEF:get_user_dashboard_preference:Function]
|
||||||
# @PURPOSE: Retrieves dashboard preference by owner user ID.
|
# @PURPOSE: Retrieve dashboard preference entity owned by specified user.
|
||||||
# @PRE: user_id is a string.
|
# @PRE: user_id is a non-empty str and self.db is a valid open Session.
|
||||||
# @POST: Returns UserDashboardPreference if found, else None.
|
# @POST: Returns matching UserDashboardPreference entity when present, otherwise None.
|
||||||
# @PARAM: user_id (str) - Preference owner identifier.
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
# @RETURN: Optional[UserDashboardPreference] - Found preference or None.
|
# @DATA_CONTRACT: Input[str] -> Output[Optional[UserDashboardPreference]]
|
||||||
def get_user_dashboard_preference(self, user_id: str) -> Optional[UserDashboardPreference]:
|
def get_user_dashboard_preference(self, user_id: str) -> Optional[UserDashboardPreference]:
|
||||||
with belief_scope("AuthRepository.get_user_dashboard_preference"):
|
with belief_scope("AuthRepository.get_user_dashboard_preference"):
|
||||||
return (
|
return (
|
||||||
@@ -126,11 +134,11 @@ class AuthRepository:
|
|||||||
# [/DEF:get_user_dashboard_preference:Function]
|
# [/DEF:get_user_dashboard_preference:Function]
|
||||||
|
|
||||||
# [DEF:save_user_dashboard_preference:Function]
|
# [DEF:save_user_dashboard_preference:Function]
|
||||||
# @PURPOSE: Persists dashboard preference entity and returns refreshed row.
|
# @PURPOSE: Persist dashboard preference entity and return refreshed persistent row.
|
||||||
# @PRE: preference is a valid UserDashboardPreference entity.
|
# @PRE: preference is a valid UserDashboardPreference entity and self.db is a valid open Session.
|
||||||
# @POST: Preference is committed and refreshed in database.
|
# @POST: preference is committed to DB, refreshed from DB state, and returned.
|
||||||
# @PARAM: preference (UserDashboardPreference) - Preference entity to persist.
|
# @SIDE_EFFECT: Performs INSERT/UPDATE commit and refresh via active DB session.
|
||||||
# @RETURN: UserDashboardPreference - Persisted preference row.
|
# @DATA_CONTRACT: Input[UserDashboardPreference] -> Output[UserDashboardPreference]
|
||||||
def save_user_dashboard_preference(
|
def save_user_dashboard_preference(
|
||||||
self,
|
self,
|
||||||
preference: UserDashboardPreference,
|
preference: UserDashboardPreference,
|
||||||
@@ -143,14 +151,16 @@ class AuthRepository:
|
|||||||
# [/DEF:save_user_dashboard_preference:Function]
|
# [/DEF:save_user_dashboard_preference:Function]
|
||||||
|
|
||||||
# [DEF:list_permissions:Function]
|
# [DEF:list_permissions:Function]
|
||||||
# @PURPOSE: Lists all available permissions.
|
# @PURPOSE: List all permission entities available in storage.
|
||||||
# @POST: Returns a list of all Permission objects.
|
# @PRE: self.db is a valid open Session.
|
||||||
# @RETURN: List[Permission] - List of permissions.
|
# @POST: Returns list containing all Permission entities visible to the session.
|
||||||
|
# @SIDE_EFFECT: Executes read-only SELECT query through active DB session.
|
||||||
|
# @DATA_CONTRACT: Input[None] -> Output[List[Permission]]
|
||||||
def list_permissions(self) -> List[Permission]:
|
def list_permissions(self) -> List[Permission]:
|
||||||
with belief_scope("AuthRepository.list_permissions"):
|
with belief_scope("AuthRepository.list_permissions"):
|
||||||
return self.db.query(Permission).all()
|
return self.db.query(Permission).all()
|
||||||
# [/DEF:list_permissions:Function]
|
# [/DEF:list_permissions:Function]
|
||||||
|
|
||||||
# [/DEF:AuthRepository:Class]
|
|
||||||
|
|
||||||
|
# [/DEF:AuthRepository:Class]
|
||||||
# [/DEF:backend.src.core.auth.repository:Module]
|
# [/DEF:backend.src.core.auth.repository:Module]
|
||||||
@@ -1,17 +1,17 @@
|
|||||||
# [DEF:ConfigManagerModule:Module]
|
# [DEF:ConfigManagerModule:Module]
|
||||||
#
|
#
|
||||||
# @TIER: STANDARD
|
# @TIER: CRITICAL
|
||||||
# @SEMANTICS: config, manager, persistence, postgresql
|
# @SEMANTICS: config, manager, persistence, migration, postgresql
|
||||||
# @PURPOSE: Manages application configuration persisted in database with one-time migration from JSON.
|
# @PURPOSE: Manages application configuration persistence in DB with one-time migration from legacy JSON.
|
||||||
# @LAYER: Core
|
# @LAYER: Domain
|
||||||
# @RELATION: DEPENDS_ON -> ConfigModels
|
# @RELATION: [DEPENDS_ON] ->[ConfigModels]
|
||||||
# @RELATION: DEPENDS_ON -> AppConfigRecord
|
# @RELATION: [DEPENDS_ON] ->[SessionLocal]
|
||||||
# @RELATION: CALLS -> logger
|
# @RELATION: [DEPENDS_ON] ->[AppConfigRecord]
|
||||||
|
# @RELATION: [CALLS] ->[logger]
|
||||||
|
# @RELATION: [CALLS] ->[configure_logger]
|
||||||
|
# @RELATION: [BINDS_TO] ->[ConfigManager]
|
||||||
|
# @INVARIANT: Configuration must always be representable by AppConfig and persisted under global record id.
|
||||||
#
|
#
|
||||||
# @INVARIANT: Configuration must always be valid according to AppConfig model.
|
|
||||||
# @PUBLIC_API: ConfigManager
|
|
||||||
|
|
||||||
# [SECTION: IMPORTS]
|
|
||||||
import json
|
import json
|
||||||
import os
|
import os
|
||||||
from pathlib import Path
|
from pathlib import Path
|
||||||
@@ -23,19 +23,18 @@ from .config_models import AppConfig, Environment, GlobalSettings, StorageConfig
|
|||||||
from .database import SessionLocal
|
from .database import SessionLocal
|
||||||
from ..models.config import AppConfigRecord
|
from ..models.config import AppConfigRecord
|
||||||
from .logger import logger, configure_logger, belief_scope
|
from .logger import logger, configure_logger, belief_scope
|
||||||
# [/SECTION]
|
|
||||||
|
|
||||||
|
|
||||||
# [DEF:ConfigManager:Class]
|
# [DEF:ConfigManager:Class]
|
||||||
# @TIER: STANDARD
|
# @TIER: CRITICAL
|
||||||
# @PURPOSE: A class to handle application configuration persistence and management.
|
# @PURPOSE: Handles application configuration load, validation, mutation, and persistence lifecycle.
|
||||||
class ConfigManager:
|
class ConfigManager:
|
||||||
# [DEF:__init__:Function]
|
# [DEF:__init__:Function]
|
||||||
# @TIER: STANDARD
|
# @PURPOSE: Initialize manager state from persisted or migrated configuration.
|
||||||
# @PURPOSE: Initializes the ConfigManager.
|
# @PRE: config_path is a non-empty string path.
|
||||||
# @PRE: isinstance(config_path, str) and len(config_path) > 0
|
# @POST: self.config is initialized as AppConfig and logger is configured.
|
||||||
# @POST: self.config is an instance of AppConfig
|
# @SIDE_EFFECT: Reads config sources and updates logging configuration.
|
||||||
# @PARAM: config_path (str) - Path to legacy JSON config (used only for initial migration fallback).
|
# @DATA_CONTRACT: Input(str config_path) -> Output(None; self.config: AppConfig)
|
||||||
def __init__(self, config_path: str = "config.json"):
|
def __init__(self, config_path: str = "config.json"):
|
||||||
with belief_scope("__init__"):
|
with belief_scope("__init__"):
|
||||||
assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"
|
assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"
|
||||||
@@ -52,18 +51,25 @@ class ConfigManager:
|
|||||||
# [/DEF:__init__:Function]
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
# [DEF:_default_config:Function]
|
# [DEF:_default_config:Function]
|
||||||
# @PURPOSE: Returns default application configuration.
|
# @PURPOSE: Build default application configuration fallback.
|
||||||
# @RETURN: AppConfig - Default configuration.
|
# @PRE: None.
|
||||||
|
# @POST: Returns valid AppConfig with empty environments and default storage settings.
|
||||||
|
# @SIDE_EFFECT: None.
|
||||||
|
# @DATA_CONTRACT: Input(None) -> Output(AppConfig)
|
||||||
def _default_config(self) -> AppConfig:
|
def _default_config(self) -> AppConfig:
|
||||||
return AppConfig(
|
with belief_scope("_default_config"):
|
||||||
environments=[],
|
return AppConfig(
|
||||||
settings=GlobalSettings(storage=StorageConfig()),
|
environments=[],
|
||||||
)
|
settings=GlobalSettings(storage=StorageConfig()),
|
||||||
|
)
|
||||||
# [/DEF:_default_config:Function]
|
# [/DEF:_default_config:Function]
|
||||||
|
|
||||||
# [DEF:_load_from_legacy_file:Function]
|
# [DEF:_load_from_legacy_file:Function]
|
||||||
# @PURPOSE: Loads legacy configuration from config.json for migration fallback.
|
# @PURPOSE: Load legacy JSON configuration for migration fallback path.
|
||||||
# @RETURN: AppConfig - Loaded or default configuration.
|
# @PRE: self.config_path is initialized.
|
||||||
|
# @POST: Returns AppConfig from file payload or safe default.
|
||||||
|
# @SIDE_EFFECT: Filesystem read and error logging.
|
||||||
|
# @DATA_CONTRACT: Input(Path self.config_path) -> Output(AppConfig)
|
||||||
def _load_from_legacy_file(self) -> AppConfig:
|
def _load_from_legacy_file(self) -> AppConfig:
|
||||||
with belief_scope("_load_from_legacy_file"):
|
with belief_scope("_load_from_legacy_file"):
|
||||||
if not self.config_path.exists():
|
if not self.config_path.exists():
|
||||||
@@ -81,18 +87,22 @@ class ConfigManager:
|
|||||||
# [/DEF:_load_from_legacy_file:Function]
|
# [/DEF:_load_from_legacy_file:Function]
|
||||||
|
|
||||||
# [DEF:_get_record:Function]
|
# [DEF:_get_record:Function]
|
||||||
# @PURPOSE: Loads config record from DB.
|
# @PURPOSE: Resolve global configuration record from DB.
|
||||||
# @PARAM: session (Session) - DB session.
|
# @PRE: session is an active SQLAlchemy Session.
|
||||||
# @RETURN: Optional[AppConfigRecord] - Existing record or None.
|
# @POST: Returns record when present, otherwise None.
|
||||||
|
# @SIDE_EFFECT: Database read query.
|
||||||
|
# @DATA_CONTRACT: Input(Session) -> Output(Optional[AppConfigRecord])
|
||||||
def _get_record(self, session: Session) -> Optional[AppConfigRecord]:
|
def _get_record(self, session: Session) -> Optional[AppConfigRecord]:
|
||||||
return session.query(AppConfigRecord).filter(AppConfigRecord.id == "global").first()
|
with belief_scope("_get_record"):
|
||||||
|
return session.query(AppConfigRecord).filter(AppConfigRecord.id == "global").first()
|
||||||
# [/DEF:_get_record:Function]
|
# [/DEF:_get_record:Function]
|
||||||
|
|
||||||
# [DEF:_load_config:Function]
|
# [DEF:_load_config:Function]
|
||||||
# @PURPOSE: Loads the configuration from DB or performs one-time migration from JSON file.
|
# @PURPOSE: Load configuration from DB or perform one-time migration from legacy JSON.
|
||||||
# @PRE: DB session factory is available.
|
# @PRE: SessionLocal factory is available and AppConfigRecord schema is accessible.
|
||||||
# @POST: isinstance(return, AppConfig)
|
# @POST: Returns valid AppConfig and closes opened DB session.
|
||||||
# @RETURN: AppConfig - Loaded configuration.
|
# @SIDE_EFFECT: Database read/write, possible migration write, logging.
|
||||||
|
# @DATA_CONTRACT: Input(None) -> Output(AppConfig)
|
||||||
def _load_config(self) -> AppConfig:
|
def _load_config(self) -> AppConfig:
|
||||||
with belief_scope("_load_config"):
|
with belief_scope("_load_config"):
|
||||||
session: Session = SessionLocal()
|
session: Session = SessionLocal()
|
||||||
@@ -114,11 +124,11 @@ class ConfigManager:
|
|||||||
# [/DEF:_load_config:Function]
|
# [/DEF:_load_config:Function]
|
||||||
|
|
||||||
# [DEF:_save_config_to_db:Function]
|
# [DEF:_save_config_to_db:Function]
|
||||||
# @PURPOSE: Saves the provided configuration object to DB.
|
# @PURPOSE: Persist provided AppConfig into the global DB configuration record.
|
||||||
# @PRE: isinstance(config, AppConfig)
|
# @PRE: config is AppConfig; session is either None or an active Session.
|
||||||
# @POST: Configuration saved to database.
|
# @POST: Global DB record payload equals config.model_dump() when commit succeeds.
|
||||||
# @PARAM: config (AppConfig) - The configuration to save.
|
# @SIDE_EFFECT: Database insert/update, commit/rollback, logging.
|
||||||
# @PARAM: session (Optional[Session]) - Existing DB session for transactional reuse.
|
# @DATA_CONTRACT: Input(AppConfig, Optional[Session]) -> Output(None)
|
||||||
def _save_config_to_db(self, config: AppConfig, session: Optional[Session] = None):
|
def _save_config_to_db(self, config: AppConfig, session: Optional[Session] = None):
|
||||||
with belief_scope("_save_config_to_db"):
|
with belief_scope("_save_config_to_db"):
|
||||||
assert isinstance(config, AppConfig), "config must be an instance of AppConfig"
|
assert isinstance(config, AppConfig), "config must be an instance of AppConfig"
|
||||||
@@ -145,27 +155,33 @@ class ConfigManager:
|
|||||||
# [/DEF:_save_config_to_db:Function]
|
# [/DEF:_save_config_to_db:Function]
|
||||||
|
|
||||||
# [DEF:save:Function]
|
# [DEF:save:Function]
|
||||||
# @PURPOSE: Saves the current configuration state to DB.
|
# @PURPOSE: Persist current in-memory configuration state.
|
||||||
# @PRE: self.config is set.
|
# @PRE: self.config is initialized.
|
||||||
# @POST: self._save_config_to_db called.
|
# @POST: Current self.config is written to DB global record.
|
||||||
|
# @SIDE_EFFECT: Database write and logging via delegated persistence call.
|
||||||
|
# @DATA_CONTRACT: Input(None; self.config: AppConfig) -> Output(None)
|
||||||
def save(self):
|
def save(self):
|
||||||
with belief_scope("save"):
|
with belief_scope("save"):
|
||||||
self._save_config_to_db(self.config)
|
self._save_config_to_db(self.config)
|
||||||
# [/DEF:save:Function]
|
# [/DEF:save:Function]
|
||||||
|
|
||||||
# [DEF:get_config:Function]
|
# [DEF:get_config:Function]
|
||||||
# @PURPOSE: Returns the current configuration.
|
# @PURPOSE: Return current in-memory configuration snapshot.
|
||||||
# @RETURN: AppConfig - The current configuration.
|
# @PRE: self.config is initialized.
|
||||||
|
# @POST: Returns AppConfig reference stored in manager.
|
||||||
|
# @SIDE_EFFECT: None.
|
||||||
|
# @DATA_CONTRACT: Input(None) -> Output(AppConfig)
|
||||||
def get_config(self) -> AppConfig:
|
def get_config(self) -> AppConfig:
|
||||||
with belief_scope("get_config"):
|
with belief_scope("get_config"):
|
||||||
return self.config
|
return self.config
|
||||||
# [/DEF:get_config:Function]
|
# [/DEF:get_config:Function]
|
||||||
|
|
||||||
# [DEF:update_global_settings:Function]
|
# [DEF:update_global_settings:Function]
|
||||||
# @PURPOSE: Updates the global settings and persists the change.
|
# @PURPOSE: Replace global settings and persist the resulting configuration.
|
||||||
# @PRE: isinstance(settings, GlobalSettings)
|
# @PRE: settings is GlobalSettings.
|
||||||
# @POST: self.config.settings updated and saved.
|
# @POST: self.config.settings equals provided settings and DB state is updated.
|
||||||
# @PARAM: settings (GlobalSettings) - The new global settings.
|
# @SIDE_EFFECT: Mutates self.config, DB write, logger reconfiguration, logging.
|
||||||
|
# @DATA_CONTRACT: Input(GlobalSettings) -> Output(None)
|
||||||
def update_global_settings(self, settings: GlobalSettings):
|
def update_global_settings(self, settings: GlobalSettings):
|
||||||
with belief_scope("update_global_settings"):
|
with belief_scope("update_global_settings"):
|
||||||
logger.info("[update_global_settings][Entry] Updating settings")
|
logger.info("[update_global_settings][Entry] Updating settings")
|
||||||
@@ -178,9 +194,11 @@ class ConfigManager:
|
|||||||
# [/DEF:update_global_settings:Function]
|
# [/DEF:update_global_settings:Function]
|
||||||
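A compact sketch of the settings update path, assuming a save() delegate and an optional logger reconfiguration hook; both names here are illustrative, since the real reconfiguration API is not shown.
def update_global_settings_sketch(manager, settings, reconfigure_logging=None):
    # Replace settings on the in-memory config, optionally re-apply logging config,
    # then persist through the manager's save() path (names are illustrative).
    manager.config.settings = settings
    if reconfigure_logging is not None:
        reconfigure_logging(settings)  # assumed hook; not part of the shown code
    manager.save()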
|
|
||||||
# [DEF:validate_path:Function]
|
# [DEF:validate_path:Function]
|
||||||
# @PURPOSE: Validates if a path exists and is writable.
|
# @PURPOSE: Validate that path exists and is writable, creating it when absent.
|
||||||
# @PARAM: path (str) - The path to validate.
|
# @PRE: path is a string path candidate.
|
||||||
# @RETURN: tuple (bool, str) - (is_valid, message)
|
# @POST: Returns (True, msg) for writable path, else (False, reason).
|
||||||
|
# @SIDE_EFFECT: Filesystem directory creation attempt and OS permission checks.
|
||||||
|
# @DATA_CONTRACT: Input(str path) -> Output(tuple[bool, str])
|
||||||
def validate_path(self, path: str) -> tuple[bool, str]:
|
def validate_path(self, path: str) -> tuple[bool, str]:
|
||||||
with belief_scope("validate_path"):
|
with belief_scope("validate_path"):
|
||||||
p = os.path.abspath(path)
|
p = os.path.abspath(path)
|
||||||
@@ -197,25 +215,33 @@ class ConfigManager:
|
|||||||
# [/DEF:validate_path:Function]
|
# [/DEF:validate_path:Function]
|
||||||
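A minimal sketch of a (bool, str) path validation matching the contract above, using only standard os calls; the exact messages and directory-creation policy are assumptions.
import os

def validate_path_sketch(path: str) -> tuple[bool, str]:
    # Normalize, create the directory if missing, then check writability.
    p = os.path.abspath(path)
    try:
        os.makedirs(p, exist_ok=True)  # assumed: absent directories are created
    except OSError as exc:
        return False, f"Cannot create directory: {exc}"
    if not os.access(p, os.W_OK):
        return False, "Path is not writable"
    return True, "Path is valid and writable"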
|
|
||||||
# [DEF:get_environments:Function]
|
# [DEF:get_environments:Function]
|
||||||
# @PURPOSE: Returns the list of configured environments.
|
# @PURPOSE: Return all configured environments.
|
||||||
# @RETURN: List[Environment] - List of environments.
|
# @PRE: self.config is initialized.
|
||||||
|
# @POST: Returns list of Environment models from current configuration.
|
||||||
|
# @SIDE_EFFECT: None.
|
||||||
|
# @DATA_CONTRACT: Input(None) -> Output(List[Environment])
|
||||||
def get_environments(self) -> List[Environment]:
|
def get_environments(self) -> List[Environment]:
|
||||||
with belief_scope("get_environments"):
|
with belief_scope("get_environments"):
|
||||||
return self.config.environments
|
return self.config.environments
|
||||||
# [/DEF:get_environments:Function]
|
# [/DEF:get_environments:Function]
|
||||||
|
|
||||||
# [DEF:has_environments:Function]
|
# [DEF:has_environments:Function]
|
||||||
# @PURPOSE: Checks if at least one environment is configured.
|
# @PURPOSE: Check whether at least one environment exists in configuration.
|
||||||
# @RETURN: bool - True if at least one environment exists.
|
# @PRE: self.config is initialized.
|
||||||
|
# @POST: Returns True iff environment list length is greater than zero.
|
||||||
|
# @SIDE_EFFECT: None.
|
||||||
|
# @DATA_CONTRACT: Input(None) -> Output(bool)
|
||||||
def has_environments(self) -> bool:
|
def has_environments(self) -> bool:
|
||||||
with belief_scope("has_environments"):
|
with belief_scope("has_environments"):
|
||||||
return len(self.config.environments) > 0
|
return len(self.config.environments) > 0
|
||||||
# [/DEF:has_environments:Function]
|
# [/DEF:has_environments:Function]
|
||||||
|
|
||||||
# [DEF:get_environment:Function]
|
# [DEF:get_environment:Function]
|
||||||
# @PURPOSE: Returns a single environment by ID.
|
# @PURPOSE: Resolve a configured environment by identifier.
|
||||||
# @PARAM: env_id (str) - The ID of the environment to retrieve.
|
# @PRE: env_id is string identifier.
|
||||||
# @RETURN: Optional[Environment] - The environment with the given ID, or None.
|
# @POST: Returns matching Environment when found; otherwise None.
|
||||||
|
# @SIDE_EFFECT: None.
|
||||||
|
# @DATA_CONTRACT: Input(str env_id) -> Output(Optional[Environment])
|
||||||
def get_environment(self, env_id: str) -> Optional[Environment]:
|
def get_environment(self, env_id: str) -> Optional[Environment]:
|
||||||
with belief_scope("get_environment"):
|
with belief_scope("get_environment"):
|
||||||
for env in self.config.environments:
|
for env in self.config.environments:
|
||||||
@@ -225,8 +251,11 @@ class ConfigManager:
|
|||||||
# [/DEF:get_environment:Function]
|
# [/DEF:get_environment:Function]
|
||||||
|
|
||||||
# [DEF:add_environment:Function]
|
# [DEF:add_environment:Function]
|
||||||
# @PURPOSE: Adds a new environment to the configuration.
|
# @PURPOSE: Upsert environment by id into configuration and persist.
|
||||||
# @PARAM: env (Environment) - The environment to add.
|
# @PRE: env is Environment.
|
||||||
|
# @POST: Configuration contains the provided env id and the new payload is persisted.
|
||||||
|
# @SIDE_EFFECT: Mutates environment list, DB write, logging.
|
||||||
|
# @DATA_CONTRACT: Input(Environment) -> Output(None)
|
||||||
def add_environment(self, env: Environment):
|
def add_environment(self, env: Environment):
|
||||||
with belief_scope("add_environment"):
|
with belief_scope("add_environment"):
|
||||||
logger.info(f"[add_environment][Entry] Adding environment {env.id}")
|
logger.info(f"[add_environment][Entry] Adding environment {env.id}")
|
||||||
@@ -239,10 +268,11 @@ class ConfigManager:
|
|||||||
# [/DEF:add_environment:Function]
|
# [/DEF:add_environment:Function]
|
||||||
|
|
||||||
# [DEF:update_environment:Function]
|
# [DEF:update_environment:Function]
|
||||||
# @PURPOSE: Updates an existing environment.
|
# @PURPOSE: Update existing environment by id and preserve masked password placeholder behavior.
|
||||||
# @PARAM: env_id (str) - The ID of the environment to update.
|
# @PRE: env_id is non-empty string and updated_env is Environment.
|
||||||
# @PARAM: updated_env (Environment) - The updated environment data.
|
# @POST: Returns True and persists update when target exists; else returns False.
|
||||||
# @RETURN: bool - True if updated, False otherwise.
|
# @SIDE_EFFECT: May mutate environment list, DB write, logging.
|
||||||
|
# @DATA_CONTRACT: Input(str env_id, Environment updated_env) -> Output(bool)
|
||||||
def update_environment(self, env_id: str, updated_env: Environment) -> bool:
|
def update_environment(self, env_id: str, updated_env: Environment) -> bool:
|
||||||
with belief_scope("update_environment"):
|
with belief_scope("update_environment"):
|
||||||
logger.info(f"[update_environment][Entry] Updating {env_id}")
|
logger.info(f"[update_environment][Entry] Updating {env_id}")
|
||||||
@@ -264,8 +294,11 @@ class ConfigManager:
|
|||||||
# [/DEF:update_environment:Function]
|
# [/DEF:update_environment:Function]
|
||||||
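Sketch of the masked-password preservation noted above; the "********" placeholder value and the field names are assumptions, only the update-by-id and True/False contract comes from the annotations.
def update_environment_sketch(environments: list, env_id: str, updated_env) -> bool:
    # Replace the environment with the matching id; keep the stored password when the
    # incoming payload still carries the masked placeholder (placeholder value assumed).
    for i, existing in enumerate(environments):
        if existing.id == env_id:
            if getattr(updated_env, "password", None) == "********":
                updated_env.password = existing.password
            environments[i] = updated_env
            return True
    return False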
|
|
||||||
# [DEF:delete_environment:Function]
|
# [DEF:delete_environment:Function]
|
||||||
# @PURPOSE: Deletes an environment by ID.
|
# @PURPOSE: Delete environment by id and persist when deletion occurs.
|
||||||
# @PARAM: env_id (str) - The ID of the environment to delete.
|
# @PRE: env_id is non-empty string.
|
||||||
|
# @POST: Environment is removed when present; otherwise configuration is unchanged.
|
||||||
|
# @SIDE_EFFECT: May mutate environment list, conditional DB write, logging.
|
||||||
|
# @DATA_CONTRACT: Input(str env_id) -> Output(None)
|
||||||
def delete_environment(self, env_id: str):
|
def delete_environment(self, env_id: str):
|
||||||
with belief_scope("delete_environment"):
|
with belief_scope("delete_environment"):
|
||||||
logger.info(f"[delete_environment][Entry] Deleting {env_id}")
|
logger.info(f"[delete_environment][Entry] Deleting {env_id}")
|
||||||
|
|||||||
@@ -225,7 +225,7 @@ def test_enable_belief_state_flag(caplog):
|
|||||||
assert not any("[DisabledFunction][Exit]" in msg for msg in log_messages), "Exit should not be logged when disabled"
|
assert not any("[DisabledFunction][Exit]" in msg for msg in log_messages), "Exit should not be logged when disabled"
|
||||||
# Coherence:OK should still be logged (internal tracking)
|
# Coherence:OK should still be logged (internal tracking)
|
||||||
assert any("[DisabledFunction][COHERENCE:OK]" in msg for msg in log_messages), "Coherence should still be logged"
|
assert any("[DisabledFunction][COHERENCE:OK]" in msg for msg in log_messages), "Coherence should still be logged"
|
||||||
|
# [/DEF:test_enable_belief_state_flag:Function]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:test_belief_scope_missing_anchor:Function]
|
# [DEF:test_belief_scope_missing_anchor:Function]
|
||||||
|
|||||||
@@ -1,118 +1,170 @@
|
|||||||
# [DEF:backend.src.core.migration.risk_assessor:Module]
|
# [DEF:backend.src.core.migration.risk_assessor:Module]
|
||||||
# @TIER: STANDARD
|
# @TIER: CRITICAL
|
||||||
# @SEMANTICS: migration, dry_run, risk, scoring
|
# @SEMANTICS: migration, dry_run, risk, scoring, preflight
|
||||||
# @PURPOSE: Risk evaluation helpers for migration pre-flight reporting.
|
# @PURPOSE: Compute deterministic migration risk items and aggregate score for dry-run reporting.
|
||||||
# @LAYER: Core
|
# @LAYER: Domain
|
||||||
# @RELATION: USED_BY -> backend.src.core.migration.dry_run_orchestrator
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.superset_client.SupersetClient]
|
||||||
|
# @RELATION: [DISPATCHES] ->[backend.src.core.migration.dry_run_orchestrator.MigrationDryRunService.run]
|
||||||
|
# @INVARIANT: Risk scoring must remain bounded to [0,100] and preserve severity-to-weight mapping.
|
||||||
|
# @TEST_CONTRACT: [source_objects,target_objects,diff,target_client] -> [List[RiskItem]]
|
||||||
|
# @TEST_SCENARIO: [overwrite_update_objects] -> [medium overwrite_existing risk is emitted for each update diff item]
|
||||||
|
# @TEST_SCENARIO: [missing_datasource_dataset] -> [high missing_datasource risk is emitted]
|
||||||
|
# @TEST_SCENARIO: [owner_mismatch_dashboard] -> [low owner_mismatch risk is emitted]
|
||||||
|
# @TEST_EDGE: [missing_field] -> [object without uuid is ignored by indexer]
|
||||||
|
# @TEST_EDGE: [invalid_type] -> [non-list owners input normalizes to empty identifiers]
|
||||||
|
# @TEST_EDGE: [external_fail] -> [target_client get_databases exception propagates to caller]
|
||||||
|
# @TEST_INVARIANT: [score_upper_bound_100] -> VERIFIED_BY: [severity_weight_aggregation]
|
||||||
|
# @UX_STATE: [Idle] -> [N/A backend domain module]
|
||||||
|
# @UX_FEEDBACK: [N/A] -> [No direct UI side effects in this module]
|
||||||
|
# @UX_RECOVERY: [N/A] -> [Caller-level retry/recovery]
|
||||||
|
# @UX_REACTIVITY: [N/A] -> [Backend synchronous function contracts]
|
||||||
|
|
||||||
from typing import Any, Dict, List
|
from typing import Any, Dict, List
|
||||||
|
|
||||||
|
from ..logger import logger, belief_scope
|
||||||
from ..superset_client import SupersetClient
|
from ..superset_client import SupersetClient
|
||||||
|
|
||||||
|
|
||||||
# [DEF:index_by_uuid:Function]
|
# [DEF:index_by_uuid:Function]
|
||||||
# @PURPOSE: Build UUID-index from normalized objects.
|
# @PURPOSE: Build UUID-index from normalized objects.
|
||||||
|
# @PRE: Input list items are dict-like payloads potentially containing "uuid".
|
||||||
|
# @POST: Returns mapping keyed by string uuid; only truthy uuid values are included.
|
||||||
|
# @SIDE_EFFECT: Emits reasoning/reflective logs only.
|
||||||
|
# @DATA_CONTRACT: List[Dict[str, Any]] -> Dict[str, Dict[str, Any]]
|
||||||
def index_by_uuid(objects: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
|
def index_by_uuid(objects: List[Dict[str, Any]]) -> Dict[str, Dict[str, Any]]:
|
||||||
indexed: Dict[str, Dict[str, Any]] = {}
|
with belief_scope("risk_assessor.index_by_uuid"):
|
||||||
for obj in objects:
|
logger.reason("Building UUID index", extra={"objects_count": len(objects)})
|
||||||
uuid = obj.get("uuid")
|
indexed: Dict[str, Dict[str, Any]] = {}
|
||||||
if uuid:
|
for obj in objects:
|
||||||
indexed[str(uuid)] = obj
|
uuid = obj.get("uuid")
|
||||||
return indexed
|
if uuid:
|
||||||
|
indexed[str(uuid)] = obj
|
||||||
|
logger.reflect("UUID index built", extra={"indexed_count": len(indexed)})
|
||||||
|
return indexed
|
||||||
# [/DEF:index_by_uuid:Function]
|
# [/DEF:index_by_uuid:Function]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:extract_owner_identifiers:Function]
|
# [DEF:extract_owner_identifiers:Function]
|
||||||
# @PURPOSE: Normalize owner payloads for stable comparison.
|
# @PURPOSE: Normalize owner payloads for stable comparison.
|
||||||
|
# @PRE: Owners may be list payload, scalar values, or None.
|
||||||
|
# @POST: Returns sorted unique owner identifiers as strings.
|
||||||
|
# @SIDE_EFFECT: Emits reasoning/reflective logs only.
|
||||||
|
# @DATA_CONTRACT: Any -> List[str]
|
||||||
def extract_owner_identifiers(owners: Any) -> List[str]:
|
def extract_owner_identifiers(owners: Any) -> List[str]:
|
||||||
if not isinstance(owners, list):
|
with belief_scope("risk_assessor.extract_owner_identifiers"):
|
||||||
return []
|
logger.reason("Normalizing owner identifiers")
|
||||||
ids: List[str] = []
|
if not isinstance(owners, list):
|
||||||
for owner in owners:
|
logger.reflect("Owners payload is not list; returning empty identifiers")
|
||||||
if isinstance(owner, dict):
|
return []
|
||||||
if owner.get("username"):
|
ids: List[str] = []
|
||||||
ids.append(str(owner["username"]))
|
for owner in owners:
|
||||||
elif owner.get("id") is not None:
|
if isinstance(owner, dict):
|
||||||
ids.append(str(owner["id"]))
|
if owner.get("username"):
|
||||||
elif owner is not None:
|
ids.append(str(owner["username"]))
|
||||||
ids.append(str(owner))
|
elif owner.get("id") is not None:
|
||||||
return sorted(set(ids))
|
ids.append(str(owner["id"]))
|
||||||
|
elif owner is not None:
|
||||||
|
ids.append(str(owner))
|
||||||
|
normalized_ids = sorted(set(ids))
|
||||||
|
logger.reflect("Owner identifiers normalized", extra={"owner_count": len(normalized_ids)})
|
||||||
|
return normalized_ids
|
||||||
# [/DEF:extract_owner_identifiers:Function]
|
# [/DEF:extract_owner_identifiers:Function]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:build_risks:Function]
|
# [DEF:build_risks:Function]
|
||||||
# @PURPOSE: Build risk list from computed diffs and target catalog state.
|
# @PURPOSE: Build risk list from computed diffs and target catalog state.
|
||||||
|
# @PRE: source_objects/target_objects/diff contain dashboards/charts/datasets keys with expected list structures.
|
||||||
|
# @PRE: target_client is authenticated/usable for database list retrieval.
|
||||||
|
# @POST: Returns list of deterministic risk items derived from overwrite, missing datasource, reference, and owner mismatch checks.
|
||||||
|
# @SIDE_EFFECT: Calls target Superset API for databases metadata and emits logs.
|
||||||
|
# @DATA_CONTRACT: (
|
||||||
|
# @DATA_CONTRACT: Dict[str, List[Dict[str, Any]]],
|
||||||
|
# @DATA_CONTRACT: Dict[str, List[Dict[str, Any]]],
|
||||||
|
# @DATA_CONTRACT: Dict[str, Dict[str, List[Dict[str, Any]]]],
|
||||||
|
# @DATA_CONTRACT: SupersetClient
|
||||||
|
# @DATA_CONTRACT: ) -> List[Dict[str, Any]]
|
||||||
def build_risks(
|
def build_risks(
|
||||||
source_objects: Dict[str, List[Dict[str, Any]]],
|
source_objects: Dict[str, List[Dict[str, Any]]],
|
||||||
target_objects: Dict[str, List[Dict[str, Any]]],
|
target_objects: Dict[str, List[Dict[str, Any]]],
|
||||||
diff: Dict[str, Dict[str, List[Dict[str, Any]]]],
|
diff: Dict[str, Dict[str, List[Dict[str, Any]]]],
|
||||||
target_client: SupersetClient,
|
target_client: SupersetClient,
|
||||||
) -> List[Dict[str, Any]]:
|
) -> List[Dict[str, Any]]:
|
||||||
risks: List[Dict[str, Any]] = []
|
with belief_scope("risk_assessor.build_risks"):
|
||||||
for object_type in ("dashboards", "charts", "datasets"):
|
logger.reason("Building migration risks from diff payload")
|
||||||
for item in diff[object_type]["update"]:
|
risks: List[Dict[str, Any]] = []
|
||||||
risks.append({
|
for object_type in ("dashboards", "charts", "datasets"):
|
||||||
"code": "overwrite_existing",
|
for item in diff[object_type]["update"]:
|
||||||
"severity": "medium",
|
risks.append({
|
||||||
"object_type": object_type[:-1],
|
"code": "overwrite_existing",
|
||||||
"object_uuid": item["uuid"],
|
"severity": "medium",
|
||||||
"message": f"Object will be updated in target: {item.get('title') or item['uuid']}",
|
"object_type": object_type[:-1],
|
||||||
})
|
"object_uuid": item["uuid"],
|
||||||
|
"message": f"Object will be updated in target: {item.get('title') or item['uuid']}",
|
||||||
|
})
|
||||||
|
|
||||||
target_dataset_uuids = set(index_by_uuid(target_objects["datasets"]).keys())
|
target_dataset_uuids = set(index_by_uuid(target_objects["datasets"]).keys())
|
||||||
_, target_databases = target_client.get_databases(query={"columns": ["uuid"]})
|
_, target_databases = target_client.get_databases(query={"columns": ["uuid"]})
|
||||||
target_database_uuids = {str(item.get("uuid")) for item in target_databases if item.get("uuid")}
|
target_database_uuids = {str(item.get("uuid")) for item in target_databases if item.get("uuid")}
|
||||||
|
|
||||||
for dataset in source_objects["datasets"]:
|
for dataset in source_objects["datasets"]:
|
||||||
db_uuid = dataset.get("database_uuid")
|
db_uuid = dataset.get("database_uuid")
|
||||||
if db_uuid and str(db_uuid) not in target_database_uuids:
|
if db_uuid and str(db_uuid) not in target_database_uuids:
|
||||||
risks.append({
|
risks.append({
|
||||||
"code": "missing_datasource",
|
"code": "missing_datasource",
|
||||||
"severity": "high",
|
"severity": "high",
|
||||||
"object_type": "dataset",
|
"object_type": "dataset",
|
||||||
"object_uuid": dataset.get("uuid"),
|
"object_uuid": dataset.get("uuid"),
|
||||||
"message": f"Target datasource is missing for dataset {dataset.get('title') or dataset.get('uuid')}",
|
"message": f"Target datasource is missing for dataset {dataset.get('title') or dataset.get('uuid')}",
|
||||||
})
|
})
|
||||||
|
|
||||||
for chart in source_objects["charts"]:
|
for chart in source_objects["charts"]:
|
||||||
ds_uuid = chart.get("dataset_uuid")
|
ds_uuid = chart.get("dataset_uuid")
|
||||||
if ds_uuid and str(ds_uuid) not in target_dataset_uuids:
|
if ds_uuid and str(ds_uuid) not in target_dataset_uuids:
|
||||||
risks.append({
|
risks.append({
|
||||||
"code": "breaking_reference",
|
"code": "breaking_reference",
|
||||||
"severity": "high",
|
"severity": "high",
|
||||||
"object_type": "chart",
|
"object_type": "chart",
|
||||||
"object_uuid": chart.get("uuid"),
|
"object_uuid": chart.get("uuid"),
|
||||||
"message": f"Chart references dataset not found on target: {ds_uuid}",
|
"message": f"Chart references dataset not found on target: {ds_uuid}",
|
||||||
})
|
})
|
||||||
|
|
||||||
source_dash = index_by_uuid(source_objects["dashboards"])
|
source_dash = index_by_uuid(source_objects["dashboards"])
|
||||||
target_dash = index_by_uuid(target_objects["dashboards"])
|
target_dash = index_by_uuid(target_objects["dashboards"])
|
||||||
for item in diff["dashboards"]["update"]:
|
for item in diff["dashboards"]["update"]:
|
||||||
source_obj = source_dash.get(item["uuid"])
|
source_obj = source_dash.get(item["uuid"])
|
||||||
target_obj = target_dash.get(item["uuid"])
|
target_obj = target_dash.get(item["uuid"])
|
||||||
if not source_obj or not target_obj:
|
if not source_obj or not target_obj:
|
||||||
continue
|
continue
|
||||||
source_owners = extract_owner_identifiers(source_obj.get("owners"))
|
source_owners = extract_owner_identifiers(source_obj.get("owners"))
|
||||||
target_owners = extract_owner_identifiers(target_obj.get("owners"))
|
target_owners = extract_owner_identifiers(target_obj.get("owners"))
|
||||||
if source_owners and target_owners and source_owners != target_owners:
|
if source_owners and target_owners and source_owners != target_owners:
|
||||||
risks.append({
|
risks.append({
|
||||||
"code": "owner_mismatch",
|
"code": "owner_mismatch",
|
||||||
"severity": "low",
|
"severity": "low",
|
||||||
"object_type": "dashboard",
|
"object_type": "dashboard",
|
||||||
"object_uuid": item["uuid"],
|
"object_uuid": item["uuid"],
|
||||||
"message": f"Owner mismatch for dashboard {item.get('title') or item['uuid']}",
|
"message": f"Owner mismatch for dashboard {item.get('title') or item['uuid']}",
|
||||||
})
|
})
|
||||||
return risks
|
logger.reflect("Risk list assembled", extra={"risk_count": len(risks)})
|
||||||
|
return risks
|
||||||
# [/DEF:build_risks:Function]
|
# [/DEF:build_risks:Function]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:score_risks:Function]
|
# [DEF:score_risks:Function]
|
||||||
# @PURPOSE: Aggregate risk list into score and level.
|
# @PURPOSE: Aggregate risk list into score and level.
|
||||||
|
# @PRE: risk_items contains optional severity fields expected in {high,medium,low} or defaults to low weight.
|
||||||
|
# @POST: Returns dict with score in [0,100], derived level, and original items.
|
||||||
|
# @SIDE_EFFECT: Emits reasoning/reflective logs only.
|
||||||
|
# @DATA_CONTRACT: List[Dict[str, Any]] -> Dict[str, Any]
|
||||||
def score_risks(risk_items: List[Dict[str, Any]]) -> Dict[str, Any]:
|
def score_risks(risk_items: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||||
weights = {"high": 25, "medium": 10, "low": 5}
|
with belief_scope("risk_assessor.score_risks"):
|
||||||
score = min(100, sum(weights.get(item.get("severity", "low"), 5) for item in risk_items))
|
logger.reason("Scoring risk items", extra={"risk_items_count": len(risk_items)})
|
||||||
level = "low" if score < 25 else "medium" if score < 60 else "high"
|
weights = {"high": 25, "medium": 10, "low": 5}
|
||||||
return {"score": score, "level": level, "items": risk_items}
|
score = min(100, sum(weights.get(item.get("severity", "low"), 5) for item in risk_items))
|
||||||
|
level = "low" if score < 25 else "medium" if score < 60 else "high"
|
||||||
|
result = {"score": score, "level": level, "items": risk_items}
|
||||||
|
logger.reflect("Risk score computed", extra={"score": score, "level": level})
|
||||||
|
return result
|
||||||
# [/DEF:score_risks:Function]
|
# [/DEF:score_risks:Function]
|
||||||
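Worked example using score_risks as shown above (weights 25/10/5, level thresholds at 25 and 60):
# One high (25) plus one low (5) risk -> score 30, which falls in the 25..59 "medium" band.
example = score_risks([
    {"severity": "high", "code": "missing_datasource"},
    {"severity": "low", "code": "owner_mismatch"},
])
assert example["score"] == 30 and example["level"] == "medium"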
|
|
||||||
|
|
||||||
|
|||||||
@@ -1,11 +1,15 @@
|
|||||||
# [DEF:backend.src.core.migration_engine:Module]
|
# [DEF:backend.src.core.migration_engine:Module]
|
||||||
#
|
#
|
||||||
# @SEMANTICS: migration, engine, zip, yaml, transformation
|
# @TIER: CRITICAL
|
||||||
# @PURPOSE: Handles the interception and transformation of Superset asset ZIP archives.
|
# @SEMANTICS: migration, engine, zip, yaml, transformation, cross-filter, id-mapping
|
||||||
# @LAYER: Core
|
# @PURPOSE: Transforms Superset export ZIP archives while preserving archive integrity and patching mapped identifiers.
|
||||||
# @RELATION: DEPENDS_ON -> PyYAML
|
# @LAYER: Domain
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[src.core.logger]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[src.core.mapping_service.IdMappingService]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[src.models.mapping.ResourceType]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[yaml]
|
||||||
#
|
#
|
||||||
# @INVARIANT: ZIP structure must be preserved after transformation.
|
# @INVARIANT: ZIP structure and non-targeted metadata must remain valid after transformation.
|
||||||
|
|
||||||
# [SECTION: IMPORTS]
|
# [SECTION: IMPORTS]
|
||||||
import zipfile
|
import zipfile
|
||||||
@@ -26,10 +30,15 @@ from src.models.mapping import ResourceType
|
|||||||
class MigrationEngine:
|
class MigrationEngine:
|
||||||
|
|
||||||
# [DEF:__init__:Function]
|
# [DEF:__init__:Function]
|
||||||
# @PURPOSE: Initializes the migration engine with optional ID mapping service.
|
# @PURPOSE: Initializes migration orchestration dependencies for ZIP/YAML metadata transformations.
|
||||||
|
# @PRE: mapping_service is None or implements batch remote ID lookup for ResourceType.CHART.
|
||||||
|
# @POST: self.mapping_service is assigned and available for optional cross-filter patching flows.
|
||||||
|
# @SIDE_EFFECT: Mutates in-memory engine state by storing dependency reference.
|
||||||
|
# @DATA_CONTRACT: Input[Optional[IdMappingService]] -> Output[MigrationEngine]
|
||||||
# @PARAM: mapping_service (Optional[IdMappingService]) - Used for resolving target environment integer IDs.
|
# @PARAM: mapping_service (Optional[IdMappingService]) - Used for resolving target environment integer IDs.
|
||||||
def __init__(self, mapping_service: Optional[IdMappingService] = None):
|
def __init__(self, mapping_service: Optional[IdMappingService] = None):
|
||||||
self.mapping_service = mapping_service
|
with belief_scope("MigrationEngine.__init__"):
|
||||||
|
self.mapping_service = mapping_service
|
||||||
# [/DEF:__init__:Function]
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
# [DEF:transform_zip:Function]
|
# [DEF:transform_zip:Function]
|
||||||
@@ -40,9 +49,11 @@ class MigrationEngine:
|
|||||||
# @PARAM: strip_databases (bool) - Whether to remove the databases directory from the archive.
|
# @PARAM: strip_databases (bool) - Whether to remove the databases directory from the archive.
|
||||||
# @PARAM: target_env_id (Optional[str]) - Used if fix_cross_filters is True to know which environment map to use.
|
# @PARAM: target_env_id (Optional[str]) - Used if fix_cross_filters is True to know which environment map to use.
|
||||||
# @PARAM: fix_cross_filters (bool) - Whether to patch dashboard json_metadata.
|
# @PARAM: fix_cross_filters (bool) - Whether to patch dashboard json_metadata.
|
||||||
# @PRE: zip_path must point to a valid Superset export archive.
|
# @PRE: zip_path points to a readable ZIP; output_path parent is writable; db_mapping keys/values are UUID strings.
|
||||||
# @POST: Transformed archive is saved to output_path.
|
# @POST: Returns True only when extraction, transformation, and packaging complete without exception.
|
||||||
# @RETURN: bool - True if successful.
|
# @SIDE_EFFECT: Reads/writes filesystem archives, creates temporary directory, emits structured logs.
|
||||||
|
# @DATA_CONTRACT: Input[(str zip_path, str output_path, Dict[str,str] db_mapping, bool strip_databases, Optional[str] target_env_id, bool fix_cross_filters)] -> Output[bool]
|
||||||
|
# @RETURN: bool - True if successful.
|
||||||
def transform_zip(self, zip_path: str, output_path: str, db_mapping: Dict[str, str], strip_databases: bool = True, target_env_id: Optional[str] = None, fix_cross_filters: bool = False) -> bool:
|
def transform_zip(self, zip_path: str, output_path: str, db_mapping: Dict[str, str], strip_databases: bool = True, target_env_id: Optional[str] = None, fix_cross_filters: bool = False) -> bool:
|
||||||
"""
|
"""
|
||||||
Transform a Superset export ZIP by replacing database UUIDs and optionally fixing cross-filters.
|
Transform a Superset export ZIP by replacing database UUIDs and optionally fixing cross-filters.
|
||||||
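A hedged outline of the transform flow the contract above implies (extract, rewrite dataset YAML UUIDs, optionally patch dashboards, repack); the helper callables stand in for methods of this class, and the directory globs are an assumption about the export layout.
import tempfile, zipfile
from pathlib import Path

def transform_zip_sketch(zip_path: str, output_path: str, db_mapping: dict,
                         transform_yaml, patch_dashboard=None) -> bool:
    # Extract, rewrite each dataset YAML through the supplied transform, optionally
    # patch dashboard metadata, then repack while preserving the archive layout.
    try:
        with tempfile.TemporaryDirectory() as tmp:
            root = Path(tmp)
            with zipfile.ZipFile(zip_path) as zf:
                zf.extractall(root)
            for yaml_file in root.glob("**/datasets/**/*.yaml"):
                transform_yaml(yaml_file, db_mapping)
            if patch_dashboard is not None:
                for dash_file in root.glob("**/dashboards/*.yaml"):
                    patch_dashboard(dash_file)
            with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as out:
                for file in root.rglob("*"):
                    if file.is_file():
                        out.write(file, file.relative_to(root))
        return True
    except Exception:
        return False  # mirrors the @POST above: True only when every step completes without exception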
@@ -105,48 +116,60 @@ class MigrationEngine:
|
|||||||
# @PURPOSE: Replaces database_uuid in a single YAML file.
|
# @PURPOSE: Replaces database_uuid in a single YAML file.
|
||||||
# @PARAM: file_path (Path) - Path to the YAML file.
|
# @PARAM: file_path (Path) - Path to the YAML file.
|
||||||
# @PARAM: db_mapping (Dict[str, str]) - UUID mapping dictionary.
|
# @PARAM: db_mapping (Dict[str, str]) - UUID mapping dictionary.
|
||||||
# @PRE: file_path must exist and be readable.
|
# @PRE: file_path exists, is readable YAML, and db_mapping contains source->target UUID pairs.
|
||||||
# @POST: File is modified in-place if source UUID matches mapping.
|
# @POST: database_uuid is replaced in-place only when source UUID is present in db_mapping.
|
||||||
|
# @SIDE_EFFECT: Reads and conditionally rewrites YAML file on disk.
|
||||||
|
# @DATA_CONTRACT: Input[(Path file_path, Dict[str,str] db_mapping)] -> Output[None]
|
||||||
def _transform_yaml(self, file_path: Path, db_mapping: Dict[str, str]):
|
def _transform_yaml(self, file_path: Path, db_mapping: Dict[str, str]):
|
||||||
with open(file_path, 'r') as f:
|
with belief_scope("MigrationEngine._transform_yaml"):
|
||||||
data = yaml.safe_load(f)
|
with open(file_path, 'r') as f:
|
||||||
|
data = yaml.safe_load(f)
|
||||||
|
|
||||||
if not data:
|
if not data:
|
||||||
return
|
return
|
||||||
|
|
||||||
# Superset dataset YAML structure:
|
# Superset dataset YAML structure:
|
||||||
# database_uuid: ...
|
# database_uuid: ...
|
||||||
source_uuid = data.get('database_uuid')
|
source_uuid = data.get('database_uuid')
|
||||||
if source_uuid in db_mapping:
|
if source_uuid in db_mapping:
|
||||||
data['database_uuid'] = db_mapping[source_uuid]
|
data['database_uuid'] = db_mapping[source_uuid]
|
||||||
with open(file_path, 'w') as f:
|
with open(file_path, 'w') as f:
|
||||||
yaml.dump(data, f)
|
yaml.dump(data, f)
|
||||||
# [/DEF:_transform_yaml:Function]
|
# [/DEF:_transform_yaml:Function]
|
||||||
|
|
||||||
# [DEF:_extract_chart_uuids_from_archive:Function]
|
# [DEF:_extract_chart_uuids_from_archive:Function]
|
||||||
# @PURPOSE: Scans the unpacked ZIP to map local exported integer IDs back to their UUIDs.
|
# @PURPOSE: Scans extracted chart YAML files and builds a source chart ID to UUID lookup map.
|
||||||
# @PARAM: temp_dir (Path) - Root dir of unpacked archive
|
# @PRE: temp_dir exists and points to extracted archive root with optional chart YAML resources.
|
||||||
|
# @POST: Returns a best-effort Dict[int, str] containing only parseable chart id/uuid pairs.
|
||||||
|
# @SIDE_EFFECT: Reads chart YAML files from filesystem; suppresses per-file parsing failures.
|
||||||
|
# @DATA_CONTRACT: Input[Path] -> Output[Dict[int,str]]
|
||||||
|
# @PARAM: temp_dir (Path) - Root dir of unpacked archive.
|
||||||
# @RETURN: Dict[int, str] - Mapping of source Integer ID to UUID.
|
# @RETURN: Dict[int, str] - Mapping of source Integer ID to UUID.
|
||||||
def _extract_chart_uuids_from_archive(self, temp_dir: Path) -> Dict[int, str]:
|
def _extract_chart_uuids_from_archive(self, temp_dir: Path) -> Dict[int, str]:
|
||||||
# Implementation Note: This is a placeholder for the logic that extracts
|
with belief_scope("MigrationEngine._extract_chart_uuids_from_archive"):
|
||||||
# actual Source IDs. In a real scenario, this involves parsing chart YAMLs
|
# Implementation Note: This is a placeholder for the logic that extracts
|
||||||
# or manifesting the export metadata structure where source IDs are stored.
|
# actual Source IDs. In a real scenario, this involves parsing chart YAMLs
|
||||||
# For simplicity in US1 MVP, we assume it's read from chart files if present.
|
# or manifesting the export metadata structure where source IDs are stored.
|
||||||
mapping = {}
|
# For simplicity in US1 MVP, we assume it's read from chart files if present.
|
||||||
chart_files = list(temp_dir.glob("**/charts/**/*.yaml")) + list(temp_dir.glob("**/charts/*.yaml"))
|
mapping = {}
|
||||||
for cf in set(chart_files):
|
chart_files = list(temp_dir.glob("**/charts/**/*.yaml")) + list(temp_dir.glob("**/charts/*.yaml"))
|
||||||
try:
|
for cf in set(chart_files):
|
||||||
with open(cf, 'r') as f:
|
try:
|
||||||
cdata = yaml.safe_load(f)
|
with open(cf, 'r') as f:
|
||||||
if cdata and 'id' in cdata and 'uuid' in cdata:
|
cdata = yaml.safe_load(f)
|
||||||
mapping[cdata['id']] = cdata['uuid']
|
if cdata and 'id' in cdata and 'uuid' in cdata:
|
||||||
except Exception:
|
mapping[cdata['id']] = cdata['uuid']
|
||||||
pass
|
except Exception:
|
||||||
return mapping
|
pass
|
||||||
|
return mapping
|
||||||
# [/DEF:_extract_chart_uuids_from_archive:Function]
|
# [/DEF:_extract_chart_uuids_from_archive:Function]
|
||||||
|
|
||||||
# [DEF:_patch_dashboard_metadata:Function]
|
# [DEF:_patch_dashboard_metadata:Function]
|
||||||
# @PURPOSE: Replaces integer IDs in json_metadata.
|
# @PURPOSE: Rewrites dashboard json_metadata chart/dataset integer identifiers using target environment mappings.
|
||||||
|
# @PRE: file_path points to dashboard YAML with json_metadata; target_env_id is non-empty; source_map contains source id->uuid.
|
||||||
|
# @POST: json_metadata is re-serialized with mapped integer IDs when remote mappings are available; otherwise file remains unchanged.
|
||||||
|
# @SIDE_EFFECT: Reads/writes YAML file, performs mapping lookup via mapping_service, emits logs for recoverable/terminal failures.
|
||||||
|
# @DATA_CONTRACT: Input[(Path file_path, str target_env_id, Dict[int,str] source_map)] -> Output[None]
|
||||||
# @PARAM: file_path (Path)
|
# @PARAM: file_path (Path)
|
||||||
# @PARAM: target_env_id (str)
|
# @PARAM: target_env_id (str)
|
||||||
# @PARAM: source_map (Dict[int, str])
|
# @PARAM: source_map (Dict[int, str])
|
||||||
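Sketch of the json_metadata patch step described above; the chart_configuration key and the id-remapping shape are assumptions drawn from typical Superset dashboard exports, and the mapping-service lookup is omitted.
import json

def patch_json_metadata_sketch(json_metadata: str, id_map: dict) -> str:
    # Rewrite integer chart ids inside json_metadata using a source->target id map.
    meta = json.loads(json_metadata) if json_metadata else {}
    chart_config = meta.get("chart_configuration", {})
    remapped = {}
    for key, value in chart_config.items():
        try:
            remapped[str(id_map.get(int(key), key))] = value
        except (TypeError, ValueError):
            remapped[key] = value  # leave non-numeric keys untouched
    meta["chart_configuration"] = remapped
    return json.dumps(meta)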
|
|||||||
@@ -1,10 +1,12 @@
|
|||||||
# [DEF:backend.src.models.config:Module]
|
# [DEF:backend.src.models.config:Module]
|
||||||
#
|
#
|
||||||
# @TIER: STANDARD
|
# @TIER: CRITICAL
|
||||||
# @SEMANTICS: database, config, settings, sqlalchemy
|
# @SEMANTICS: database, config, settings, sqlalchemy, notification
|
||||||
# @PURPOSE: Defines database schema for persisted application configuration.
|
# @PURPOSE: Defines SQLAlchemy persistence models for application and notification configuration records.
|
||||||
# @LAYER: Domain
|
# @LAYER: Domain
|
||||||
# @RELATION: DEPENDS_ON -> sqlalchemy
|
# @RELATION: [DEPENDS_ON] ->[sqlalchemy]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.mapping:Base]
|
||||||
|
# @INVARIANT: Configuration payload and notification credentials must remain persisted as non-null JSON documents.
|
||||||
|
|
||||||
from sqlalchemy import Column, String, DateTime, JSON, Boolean
|
from sqlalchemy import Column, String, DateTime, JSON, Boolean
|
||||||
from sqlalchemy.sql import func
|
from sqlalchemy.sql import func
|
||||||
@@ -13,7 +15,11 @@ from .mapping import Base
|
|||||||
|
|
||||||
|
|
||||||
# [DEF:AppConfigRecord:Class]
|
# [DEF:AppConfigRecord:Class]
|
||||||
# @PURPOSE: Stores the single source of truth for application configuration.
|
# @PURPOSE: Stores persisted application configuration as a single authoritative record model.
|
||||||
|
# @PRE: SQLAlchemy declarative Base is initialized and table metadata registration is active.
|
||||||
|
# @POST: ORM table 'app_configurations' exposes id, payload, and updated_at fields with declared nullability/default semantics.
|
||||||
|
# @SIDE_EFFECT: Registers ORM mapping metadata during module import.
|
||||||
|
# @DATA_CONTRACT: Input -> persistence row {id:str, payload:json, updated_at:datetime}; Output -> AppConfigRecord ORM entity.
|
||||||
class AppConfigRecord(Base):
|
class AppConfigRecord(Base):
|
||||||
__tablename__ = "app_configurations"
|
__tablename__ = "app_configurations"
|
||||||
|
|
||||||
@@ -25,7 +31,11 @@ class AppConfigRecord(Base):
|
|||||||
# [/DEF:AppConfigRecord:Class]
|
# [/DEF:AppConfigRecord:Class]
|
||||||
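For reference, a minimal declarative sketch matching the column set named in the @POST above; exact types, defaults, and nullability in the real model may differ, and the table name is suffixed to mark it as hypothetical.
from sqlalchemy import Column, String, DateTime, JSON
from sqlalchemy.orm import declarative_base
from sqlalchemy.sql import func

BaseSketch = declarative_base()

class AppConfigRecordSketch(BaseSketch):
    # Assumed shape: single authoritative row keyed by a fixed id (shown elsewhere as "global").
    __tablename__ = "app_configurations_sketch"
    id = Column(String, primary_key=True, default="global")
    payload = Column(JSON, nullable=False)
    updated_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())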
|
|
||||||
# [DEF:NotificationConfig:Class]
|
# [DEF:NotificationConfig:Class]
|
||||||
# @PURPOSE: Global settings for external notification providers.
|
# @PURPOSE: Stores persisted provider-level notification configuration and encrypted credentials metadata.
|
||||||
|
# @PRE: SQLAlchemy declarative Base is initialized and uuid generation is available at instance creation time.
|
||||||
|
# @POST: ORM table 'notification_configs' exposes id, type, name, credentials, is_active, created_at, updated_at fields with declared constraints/defaults.
|
||||||
|
# @SIDE_EFFECT: Registers ORM mapping metadata during module import; may generate UUID values for new entity instances.
|
||||||
|
# @DATA_CONTRACT: Input -> persistence row {id:str, type:str, name:str, credentials:json, is_active:bool, created_at:datetime, updated_at:datetime}; Output -> NotificationConfig ORM entity.
|
||||||
class NotificationConfig(Base):
|
class NotificationConfig(Base):
|
||||||
__tablename__ = "notification_configs"
|
__tablename__ = "notification_configs"
|
||||||
|
|
||||||
|
|||||||
@@ -80,6 +80,8 @@ class MigrationJob(Base):
|
|||||||
status = Column(SQLEnum(MigrationStatus), default=MigrationStatus.PENDING)
|
status = Column(SQLEnum(MigrationStatus), default=MigrationStatus.PENDING)
|
||||||
replace_db = Column(Boolean, default=False)
|
replace_db = Column(Boolean, default=False)
|
||||||
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
created_at = Column(DateTime(timezone=True), server_default=func.now())
|
||||||
|
# [/DEF:MigrationJob:Class]
|
||||||
|
|
||||||
# [DEF:ResourceMapping:Class]
|
# [DEF:ResourceMapping:Class]
|
||||||
# @TIER: STANDARD
|
# @TIER: STANDARD
|
||||||
# @PURPOSE: Maps a universal UUID for a resource to its actual ID on a specific environment.
|
# @PURPOSE: Maps a universal UUID for a resource to its actual ID on a specific environment.
|
||||||
|
|||||||
@@ -462,6 +462,7 @@ class CleanReleaseTUI:
|
|||||||
self.status = CheckFinalStatus.FAILED
|
self.status = CheckFinalStatus.FAILED
|
||||||
self.refresh_overview()
|
self.refresh_overview()
|
||||||
self.refresh_screen()
|
self.refresh_screen()
|
||||||
|
# [/DEF:run_checks:Function]
|
||||||
|
|
||||||
def build_manifest(self):
|
def build_manifest(self):
|
||||||
try:
|
try:
|
||||||
|
|||||||
@@ -291,6 +291,9 @@ def main() -> None:
|
|||||||
logger.info(f"[COHERENCE:OK] Result summary: {json.dumps(result, ensure_ascii=True)}")
|
logger.info(f"[COHERENCE:OK] Result summary: {json.dumps(result, ensure_ascii=True)}")
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:main:Function]
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
main()
|
main()
|
||||||
|
|
||||||
|
|||||||
@@ -82,4 +82,6 @@ async def test_get_health_summary_empty():
|
|||||||
summary = await service.get_health_summary(environment_id="env_none")
|
summary = await service.get_health_summary(environment_id="env_none")
|
||||||
|
|
||||||
assert summary.pass_count == 0
|
assert summary.pass_count == 0
|
||||||
assert len(summary.items) == 0
|
assert len(summary.items) == 0
|
||||||
|
|
||||||
|
# [/DEF:test_health_service:Module]
|
||||||
@@ -1,13 +1,16 @@
|
|||||||
# [DEF:backend.src.services.auth_service:Module]
|
# [DEF:backend.src.services.auth_service:Module]
|
||||||
#
|
#
|
||||||
# @SEMANTICS: auth, service, business-logic, login, jwt
|
# @TIER: CRITICAL
|
||||||
# @PURPOSE: Orchestrates authentication business logic.
|
# @SEMANTICS: auth, service, business-logic, login, jwt, adfs, jit-provisioning
|
||||||
# @LAYER: Service
|
# @PURPOSE: Orchestrates credential authentication and ADFS JIT user provisioning.
|
||||||
# @RELATION: USES -> backend.src.core.auth.repository.AuthRepository
|
# @LAYER: Domain
|
||||||
# @RELATION: USES -> backend.src.core.auth.security
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.auth.repository.AuthRepository]
|
||||||
# @RELATION: USES -> backend.src.core.auth.jwt
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.auth.security.verify_password]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.core.auth.jwt.create_access_token]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.auth.User]
|
||||||
|
# @RELATION: [DEPENDS_ON] ->[backend.src.models.auth.Role]
|
||||||
#
|
#
|
||||||
# @INVARIANT: Authentication must verify both credentials and account status.
|
# @INVARIANT: Authentication succeeds only for active users with valid credentials; issued sessions encode subject and scopes from assigned roles.
|
||||||
|
|
||||||
# [SECTION: IMPORTS]
|
# [SECTION: IMPORTS]
|
||||||
from typing import Dict, Any
|
from typing import Dict, Any
|
||||||
@@ -23,20 +26,25 @@ from ..core.logger import belief_scope
|
|||||||
# @PURPOSE: Provides high-level authentication services.
|
# @PURPOSE: Provides high-level authentication services.
|
||||||
class AuthService:
|
class AuthService:
|
||||||
# [DEF:__init__:Function]
|
# [DEF:__init__:Function]
|
||||||
# @PURPOSE: Initializes the service with a database session.
|
# @PURPOSE: Initializes the authentication service with repository access over an active DB session.
|
||||||
# @PARAM: db (Session) - SQLAlchemy session.
|
# @PRE: db is a valid SQLAlchemy Session instance bound to the auth persistence context.
|
||||||
|
# @POST: self.repo is initialized and ready for auth user/role CRUD operations.
|
||||||
|
# @SIDE_EFFECT: Allocates AuthRepository and binds it to the provided Session.
|
||||||
|
# @DATA_CONTRACT: Input(Session) -> Output(AuthService with bound AuthRepository)
|
||||||
|
# @PARAM: db (Session) - SQLAlchemy session.
|
||||||
def __init__(self, db: Session):
|
def __init__(self, db: Session):
|
||||||
self.repo = AuthRepository(db)
|
self.repo = AuthRepository(db)
|
||||||
# [/DEF:__init__:Function]
|
# [/DEF:__init__:Function]
|
||||||
|
|
||||||
# [DEF:authenticate_user:Function]
|
# [DEF:authenticate_user:Function]
|
||||||
# @PURPOSE: Authenticates a user with username and password.
|
# @PURPOSE: Validates credentials and account state for local username/password authentication.
|
||||||
# @PRE: username and password are provided.
|
# @PRE: username and password are non-empty credential inputs.
|
||||||
# @POST: Returns User object if authentication succeeds, else None.
|
# @POST: Returns User only when user exists, is active, and password hash verification succeeds; otherwise returns None.
|
||||||
# @SIDE_EFFECT: Updates last_login timestamp on success.
|
# @SIDE_EFFECT: Persists last_login update for successful authentications via repository.
|
||||||
# @PARAM: username (str) - The username.
|
# @DATA_CONTRACT: Input(str username, str password) -> Output(User | None)
|
||||||
# @PARAM: password (str) - The plain password.
|
# @PARAM: username (str) - The username.
|
||||||
# @RETURN: Optional[User] - The authenticated user or None.
|
# @PARAM: password (str) - The plain password.
|
||||||
|
# @RETURN: Optional[User] - The authenticated user or None.
|
||||||
def authenticate_user(self, username: str, password: str):
|
def authenticate_user(self, username: str, password: str):
|
||||||
with belief_scope("AuthService.authenticate_user"):
|
with belief_scope("AuthService.authenticate_user"):
|
||||||
user = self.repo.get_user_by_username(username)
|
user = self.repo.get_user_by_username(username)
|
||||||
@@ -54,11 +62,13 @@ class AuthService:
|
|||||||
# [/DEF:authenticate_user:Function]
|
# [/DEF:authenticate_user:Function]
|
||||||
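A hedged sketch of the credential check laid out in the contract above; the repository and helper names mirror this module's imports, but the field names and call sequence are assumptions.
def authenticate_user_sketch(repo, verify_password, username: str, password: str):
    # Return the user only when it exists, is active, and the password hash verifies;
    # record last_login on success (repository helper and hashed_password field assumed).
    user = repo.get_user_by_username(username)
    if user is None or not getattr(user, "is_active", False):
        return None
    if not verify_password(password, user.hashed_password):
        return None
    repo.update_last_login(user)  # assumed repository helper
    return user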
|
|
||||||
# [DEF:create_session:Function]
|
# [DEF:create_session:Function]
|
||||||
# @PURPOSE: Creates a JWT session for an authenticated user.
|
# @PURPOSE: Issues an access token payload for an already authenticated user.
|
||||||
# @PRE: user is a valid User object.
|
# @PRE: user is a valid User entity containing username and iterable roles with role.name values.
|
||||||
# @POST: Returns a dictionary with access_token and token_type.
|
# @POST: Returns session dict with non-empty access_token and token_type='bearer'.
|
||||||
# @PARAM: user (User) - The authenticated user.
|
# @SIDE_EFFECT: Generates signed JWT via auth JWT provider.
|
||||||
# @RETURN: Dict[str, str] - Session data.
|
# @DATA_CONTRACT: Input(User) -> Output(Dict[str, str]{access_token, token_type})
|
||||||
|
# @PARAM: user (User) - The authenticated user.
|
||||||
|
# @RETURN: Dict[str, str] - Session data.
|
||||||
def create_session(self, user) -> Dict[str, str]:
|
def create_session(self, user) -> Dict[str, str]:
|
||||||
with belief_scope("AuthService.create_session"):
|
with belief_scope("AuthService.create_session"):
|
||||||
# Collect role names for scopes
|
# Collect role names for scopes
|
||||||
@@ -77,11 +87,13 @@ class AuthService:
|
|||||||
# [/DEF:create_session:Function]
|
# [/DEF:create_session:Function]
|
||||||
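Sketch of session issuance per the @POST above (bearer token with role-derived scopes); create_access_token's exact signature is an assumption.
def create_session_sketch(create_access_token, user) -> dict:
    # Collect role names as scopes and wrap the signed token in a bearer payload.
    scopes = [role.name for role in getattr(user, "roles", [])]
    token = create_access_token(subject=user.username, scopes=scopes)  # signature assumed
    return {"access_token": token, "token_type": "bearer"}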
|
|
||||||
# [DEF:provision_adfs_user:Function]
|
# [DEF:provision_adfs_user:Function]
|
||||||
# @PURPOSE: Just-In-Time (JIT) provisioning for ADFS users based on group mappings.
|
# @PURPOSE: Performs ADFS Just-In-Time provisioning and role synchronization from AD group mappings.
|
||||||
# @PRE: user_info contains 'upn' (username), 'email', and 'groups'.
|
# @PRE: user_info contains identity claims where at least one of 'upn' or 'email' is present; 'groups' may be absent.
|
||||||
# @POST: User is created/updated and assigned roles based on groups.
|
# @POST: Returns persisted user entity with roles synchronized to mapped AD groups and refreshed state.
|
||||||
# @PARAM: user_info (Dict[str, Any]) - Claims from ADFS token.
|
# @SIDE_EFFECT: May insert new User, mutate user.roles, commit transaction, and refresh ORM state.
|
||||||
# @RETURN: User - The provisioned user.
|
# @DATA_CONTRACT: Input(Dict[str, Any]{upn|email, email, groups[]}) -> Output(User persisted)
|
||||||
|
# @PARAM: user_info (Dict[str, Any]) - Claims from ADFS token.
|
||||||
|
# @RETURN: User - The provisioned user.
|
||||||
def provision_adfs_user(self, user_info: Dict[str, Any]) -> User:
|
def provision_adfs_user(self, user_info: Dict[str, Any]) -> User:
|
||||||
with belief_scope("AuthService.provision_adfs_user"):
|
with belief_scope("AuthService.provision_adfs_user"):
|
||||||
username = user_info.get("upn") or user_info.get("email")
|
username = user_info.get("upn") or user_info.get("email")
|
||||||
|
|||||||
@@ -22,3 +22,6 @@ def test_audit_check_run(mock_logger):
|
|||||||
def test_audit_report(mock_logger):
|
def test_audit_report(mock_logger):
|
||||||
audit_report("rep-1", "cand-1")
|
audit_report("rep-1", "cand-1")
|
||||||
mock_logger.info.assert_called_with("[EXPLORE] clean-release report_id=rep-1 candidate=cand-1")
|
mock_logger.info.assert_called_with("[EXPLORE] clean-release report_id=rep-1 candidate=cand-1")
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_audit_service:Module]
|
||||||
|
|||||||
@@ -3,7 +3,7 @@
|
|||||||
# @SEMANTICS: tests, clean-release, preparation, flow
|
# @SEMANTICS: tests, clean-release, preparation, flow
|
||||||
# @PURPOSE: Validate release candidate preparation flow, including policy evaluation and manifest persisting.
|
# @PURPOSE: Validate release candidate preparation flow, including policy evaluation and manifest persisting.
|
||||||
# @LAYER: Domain
|
# @LAYER: Domain
|
||||||
# @RELATION: TESTS -> backend.src.services.clean_release.preparation_service
|
# @RELATION: [DEPENDS_ON] ->[backend.src.services.clean_release.preparation_service:Module]
|
||||||
# @INVARIANT: Candidate preparation always persists manifest and candidate status deterministically.
|
# @INVARIANT: Candidate preparation always persists manifest and candidate status deterministically.
|
||||||
|
|
||||||
import pytest
|
import pytest
|
||||||
@@ -21,6 +21,8 @@ from src.models.clean_release import (
|
|||||||
)
|
)
|
||||||
from src.services.clean_release.preparation_service import prepare_candidate
|
from src.services.clean_release.preparation_service import prepare_candidate
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service._mock_policy:Function]
|
||||||
|
# @PURPOSE: Build a valid clean profile policy fixture for preparation tests.
|
||||||
def _mock_policy() -> CleanProfilePolicy:
|
def _mock_policy() -> CleanProfilePolicy:
|
||||||
return CleanProfilePolicy(
|
return CleanProfilePolicy(
|
||||||
policy_id="pol-1",
|
policy_id="pol-1",
|
||||||
@@ -33,7 +35,10 @@ def _mock_policy() -> CleanProfilePolicy:
|
|||||||
effective_from=datetime.now(timezone.utc),
|
effective_from=datetime.now(timezone.utc),
|
||||||
profile=ProfileType.ENTERPRISE_CLEAN,
|
profile=ProfileType.ENTERPRISE_CLEAN,
|
||||||
)
|
)
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service._mock_policy:Function]
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service._mock_registry:Function]
|
||||||
|
# @PURPOSE: Build an internal-only source registry fixture for preparation tests.
|
||||||
def _mock_registry() -> ResourceSourceRegistry:
|
def _mock_registry() -> ResourceSourceRegistry:
|
||||||
return ResourceSourceRegistry(
|
return ResourceSourceRegistry(
|
||||||
registry_id="reg-1",
|
registry_id="reg-1",
|
||||||
@@ -42,7 +47,10 @@ def _mock_registry() -> ResourceSourceRegistry:
|
|||||||
updated_at=datetime.now(timezone.utc),
|
updated_at=datetime.now(timezone.utc),
|
||||||
updated_by="tester"
|
updated_by="tester"
|
||||||
)
|
)
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service._mock_registry:Function]
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service._mock_candidate:Function]
|
||||||
|
# @PURPOSE: Build a draft release candidate fixture with provided identifier.
|
||||||
def _mock_candidate(candidate_id: str) -> ReleaseCandidate:
|
def _mock_candidate(candidate_id: str) -> ReleaseCandidate:
|
||||||
return ReleaseCandidate(
|
return ReleaseCandidate(
|
||||||
candidate_id=candidate_id,
|
candidate_id=candidate_id,
|
||||||
@@ -53,7 +61,15 @@ def _mock_candidate(candidate_id: str) -> ReleaseCandidate:
|
|||||||
created_by="tester",
|
created_by="tester",
|
||||||
source_snapshot_ref="v1.0.0-snapshot"
|
source_snapshot_ref="v1.0.0-snapshot"
|
||||||
)
|
)
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service._mock_candidate:Function]
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_success:Function]
|
||||||
|
# @PURPOSE: Verify candidate transitions to PREPARED when evaluation returns no violations.
|
||||||
|
# @TEST_CONTRACT: [valid_candidate + active_policy + internal_sources + no_violations] -> [status=PREPARED, manifest_persisted, candidate_saved]
|
||||||
|
# @TEST_SCENARIO: [prepare_success] -> [prepared status and persistence side effects are produced]
|
||||||
|
# @TEST_FIXTURE: [INLINE_MOCKS] -> INLINE_JSON
|
||||||
|
# @TEST_EDGE: [external_fail] -> [none; dependency interactions mocked and successful]
|
||||||
|
# @TEST_INVARIANT: [prepared_flow_persists_state] -> VERIFIED_BY: [prepare_success]
|
||||||
def test_prepare_candidate_success():
|
def test_prepare_candidate_success():
|
||||||
# Setup
|
# Setup
|
||||||
repository = MagicMock()
|
repository = MagicMock()
|
||||||
@@ -82,7 +98,15 @@ def test_prepare_candidate_success():
|
|||||||
assert candidate.status == ReleaseCandidateStatus.PREPARED
|
assert candidate.status == ReleaseCandidateStatus.PREPARED
|
||||||
repository.save_manifest.assert_called_once()
|
repository.save_manifest.assert_called_once()
|
||||||
repository.save_candidate.assert_called_with(candidate)
|
repository.save_candidate.assert_called_with(candidate)
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_success:Function]
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_with_violations:Function]
|
||||||
|
# @PURPOSE: Verify candidate transitions to BLOCKED when evaluation returns blocking violations.
|
||||||
|
# @TEST_CONTRACT: [valid_candidate + active_policy + evaluation_with_violations] -> [status=BLOCKED, violations_exposed]
|
||||||
|
# @TEST_SCENARIO: [prepare_blocked_due_to_policy] -> [blocked status and violation list are produced]
|
||||||
|
# @TEST_FIXTURE: [INLINE_MOCKS] -> INLINE_JSON
|
||||||
|
# @TEST_EDGE: [external_fail] -> [none; dependency interactions mocked and successful]
|
||||||
|
# @TEST_INVARIANT: [blocked_flow_reports_violations] -> VERIFIED_BY: [prepare_blocked_due_to_policy]
|
||||||
def test_prepare_candidate_with_violations():
|
def test_prepare_candidate_with_violations():
|
||||||
# Setup
|
# Setup
|
||||||
repository = MagicMock()
|
repository = MagicMock()
|
||||||
@@ -110,14 +134,30 @@ def test_prepare_candidate_with_violations():
|
|||||||
assert result["status"] == ReleaseCandidateStatus.BLOCKED.value
|
assert result["status"] == ReleaseCandidateStatus.BLOCKED.value
|
||||||
assert candidate.status == ReleaseCandidateStatus.BLOCKED
|
assert candidate.status == ReleaseCandidateStatus.BLOCKED
|
||||||
assert len(result["violations"]) == 1
|
assert len(result["violations"]) == 1
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_with_violations:Function]
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_not_found:Function]
|
||||||
|
# @PURPOSE: Verify preparation raises ValueError when candidate does not exist.
|
||||||
|
# @TEST_CONTRACT: [missing_candidate] -> [ValueError('Candidate not found')]
|
||||||
|
# @TEST_SCENARIO: [prepare_missing_candidate] -> [raises candidate not found error]
|
||||||
|
# @TEST_FIXTURE: [INLINE_MOCKS] -> INLINE_JSON
|
||||||
|
# @TEST_EDGE: [missing_field] -> [candidate lookup returns None]
|
||||||
|
# @TEST_INVARIANT: [missing_candidate_is_rejected] -> VERIFIED_BY: [prepare_missing_candidate]
|
||||||
def test_prepare_candidate_not_found():
|
def test_prepare_candidate_not_found():
|
||||||
repository = MagicMock()
|
repository = MagicMock()
|
||||||
repository.get_candidate.return_value = None
|
repository.get_candidate.return_value = None
|
||||||
|
|
||||||
with pytest.raises(ValueError, match="Candidate not found"):
|
with pytest.raises(ValueError, match="Candidate not found"):
|
||||||
prepare_candidate(repository, "non-existent", [], [], "op")
|
prepare_candidate(repository, "non-existent", [], [], "op")
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_not_found:Function]
|
||||||
|
|
||||||
|
# [DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_no_active_policy:Function]
|
||||||
|
# @PURPOSE: Verify preparation raises ValueError when no active policy is available.
|
||||||
|
# @TEST_CONTRACT: [candidate_present + missing_active_policy] -> [ValueError('Active clean policy not found')]
|
||||||
|
# @TEST_SCENARIO: [prepare_missing_policy] -> [raises active policy missing error]
|
||||||
|
# @TEST_FIXTURE: [INLINE_MOCKS] -> INLINE_JSON
|
||||||
|
# @TEST_EDGE: [invalid_type] -> [policy dependency resolves to None]
|
||||||
|
# @TEST_INVARIANT: [active_policy_required] -> VERIFIED_BY: [prepare_missing_policy]
|
||||||
def test_prepare_candidate_no_active_policy():
|
def test_prepare_candidate_no_active_policy():
|
||||||
repository = MagicMock()
|
repository = MagicMock()
|
||||||
repository.get_candidate.return_value = _mock_candidate("cand-1")
|
repository.get_candidate.return_value = _mock_candidate("cand-1")
|
||||||
@@ -125,3 +165,7 @@ def test_prepare_candidate_no_active_policy():
|
|||||||
|
|
||||||
with pytest.raises(ValueError, match="Active clean policy not found"):
|
with pytest.raises(ValueError, match="Active clean policy not found"):
|
||||||
prepare_candidate(repository, "cand-1", [], [], "op")
|
prepare_candidate(repository, "cand-1", [], [], "op")
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service.test_prepare_candidate_no_active_policy:Function]
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_preparation_service:Module]
|
||||||
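Taken together, the three scenarios above pin down prepare_candidate's failure modes: an unknown candidate id and a missing active policy are both rejected with a ValueError, while a policy violation returns a BLOCKED status plus a non-empty violation list. As a compact reference, a hedged sketch of the two rejection paths as a caller sees them — prepare_candidate, _mock_candidate, the argument order, and the error messages are copied from the tests above; the helper name sketch_rejections is purely illustrative:

# Illustrative sketch only; prepare_candidate, _mock_candidate and the error
# messages are taken from the tests above, nothing else is asserted.
import pytest
from unittest.mock import MagicMock

def sketch_rejections(prepare_candidate, _mock_candidate):
    repository = MagicMock()

    # 1. Unknown candidate id -> ValueError("Candidate not found")
    repository.get_candidate.return_value = None
    with pytest.raises(ValueError, match="Candidate not found"):
        prepare_candidate(repository, "non-existent", [], [], "op")

    # 2. Candidate exists but no active clean policy can be resolved
    repository.get_candidate.return_value = _mock_candidate("cand-1")
    with pytest.raises(ValueError, match="Active clean policy not found"):
        prepare_candidate(repository, "cand-1", [], [], "op")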
|
|||||||
@@ -55,4 +55,6 @@ def test_validate_internal_sources_external_blocked():
|
|||||||
assert result["ok"] is False
|
assert result["ok"] is False
|
||||||
assert len(result["violations"]) == 1
|
assert len(result["violations"]) == 1
|
||||||
assert result["violations"][0]["category"] == "external-source"
|
assert result["violations"][0]["category"] == "external-source"
|
||||||
assert result["violations"][0]["blocked_release"] is True
|
assert result["violations"][0]["blocked_release"] is True
|
||||||
|
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_source_isolation:Module]
|
||||||
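The assertions above fix the validation payload produced when an external source blocks a release: ok is False and each violation carries at least a category and a blocked_release flag. A minimal sketch of that shape — only the keys asserted above are confirmed, any further fields would be implementation-specific:

# Shape confirmed by the assertions above; additional keys (if any) are unknown.
blocked_validation_result = {
    "ok": False,
    "violations": [
        {"category": "external-source", "blocked_release": True},
    ],
}
assert blocked_validation_result["ok"] is False
assert blocked_validation_result["violations"][0]["blocked_release"] is True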
@@ -25,3 +25,6 @@ def test_derive_final_status_failed_skipped():
|
|||||||
results = [CheckStageResult(stage=s, status=CheckStageStatus.PASS, details="ok") for s in MANDATORY_STAGE_ORDER]
|
results = [CheckStageResult(stage=s, status=CheckStageStatus.PASS, details="ok") for s in MANDATORY_STAGE_ORDER]
|
||||||
results[2].status = CheckStageStatus.SKIPPED
|
results[2].status = CheckStageStatus.SKIPPED
|
||||||
assert derive_final_status(results) == CheckFinalStatus.FAILED
|
assert derive_final_status(results) == CheckFinalStatus.FAILED
|
||||||
|
|
||||||
|
|
||||||
|
# [/DEF:backend.tests.services.clean_release.test_stages:Module]
|
||||||
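test_derive_final_status_failed_skipped encodes the rule that a SKIPPED mandatory stage is as fatal as a failure: the run only passes when every stage in MANDATORY_STAGE_ORDER reports PASS. The derive_final_status implementation itself is not part of this diff; one way to satisfy that contract, as a hedged sketch:

# Hedged sketch, not the shipped implementation: any stage result that is not
# PASS forces the run to FAILED, matching the test above. The name of the
# passing enum member (PASSED) is assumed; only FAILED is asserted in the test.
def derive_final_status_sketch(results, CheckStageStatus, CheckFinalStatus):
    if all(r.status == CheckStageStatus.PASS for r in results):
        return CheckFinalStatus.PASSED
    return CheckFinalStatus.FAILED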
|
|||||||
@@ -35,89 +35,117 @@ from ...models.clean_release import (
|
|||||||
from .policy_engine import CleanPolicyEngine
|
from .policy_engine import CleanPolicyEngine
|
||||||
from .repository import CleanReleaseRepository
|
from .repository import CleanReleaseRepository
|
||||||
from .stages import derive_final_status
|
from .stages import derive_final_status
|
||||||
|
from ...core.logger import belief_scope
|
||||||
|
|
||||||
|
|
||||||
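The new import above pulls belief_scope from backend.src.core.logger, and every public orchestrator method below is wrapped in `with belief_scope(...)`. The helper itself is not shown in this diff; conceptually it is a context manager that brackets a semantic scope with enter/exit log lines, mirroring the [ID][REASON]/[ID][REFLECT] console prefixes used by the frontend wrapper later in this commit. A minimal sketch of that idea, assuming nothing about the real implementation beyond its usage as a context manager:

# Hedged sketch of a belief_scope-like helper; the real implementation in
# backend.src.core.logger may differ (structured logging, belief states, etc.).
import logging
from contextlib import contextmanager

logger = logging.getLogger(__name__)

@contextmanager
def belief_scope_sketch(scope_id: str):
    logger.info("[%s][REASON] scope.enter", scope_id)
    try:
        yield
    finally:
        logger.info("[%s][REFLECT] scope.exit", scope_id)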
# [DEF:CleanComplianceOrchestrator:Class]
|
# [DEF:CleanComplianceOrchestrator:Class]
|
||||||
# @PURPOSE: Coordinate clean-release compliance verification stages.
|
# @PURPOSE: Coordinate clean-release compliance verification stages.
|
||||||
class CleanComplianceOrchestrator:
|
class CleanComplianceOrchestrator:
|
||||||
|
# [DEF:CleanComplianceOrchestrator.__init__:Function]
|
||||||
|
# @PURPOSE: Bind repository dependency used for orchestrator persistence and lookups.
|
||||||
|
# @PRE: repository is a valid CleanReleaseRepository instance with required methods.
|
||||||
|
# @POST: self.repository is assigned and used by all orchestration steps.
|
||||||
|
# @SIDE_EFFECT: Stores repository reference on orchestrator instance.
|
||||||
|
# @DATA_CONTRACT: Input -> CleanReleaseRepository, Output -> None
|
||||||
def __init__(self, repository: CleanReleaseRepository):
|
def __init__(self, repository: CleanReleaseRepository):
|
||||||
self.repository = repository
|
with belief_scope("CleanComplianceOrchestrator.__init__"):
|
||||||
|
self.repository = repository
|
||||||
|
# [/DEF:CleanComplianceOrchestrator.__init__:Function]
|
||||||
|
|
||||||
# [DEF:start_check_run:Function]
|
# [DEF:start_check_run:Function]
|
||||||
# @PURPOSE: Initiate a new compliance run session.
|
# @PURPOSE: Initiate a new compliance run session.
|
||||||
# @PRE: candidate_id and policy_id must exist in repository.
|
# @PRE: candidate_id/policy_id/manifest_id identify existing records in repository.
|
||||||
# @POST: Returns initialized ComplianceRun in RUNNING state.
|
# @POST: Returns initialized ComplianceRun in RUNNING state persisted in repository.
|
||||||
|
# @SIDE_EFFECT: Reads manifest/policy and writes new ComplianceRun via repository.save_check_run.
|
||||||
|
# @DATA_CONTRACT: Input -> (candidate_id:str, policy_id:str, requested_by:str, manifest_id:str), Output -> ComplianceRun
|
||||||
def start_check_run(self, candidate_id: str, policy_id: str, requested_by: str, manifest_id: str) -> ComplianceRun:
|
def start_check_run(self, candidate_id: str, policy_id: str, requested_by: str, manifest_id: str) -> ComplianceRun:
|
||||||
manifest = self.repository.get_manifest(manifest_id)
|
with belief_scope("start_check_run"):
|
||||||
policy = self.repository.get_policy(policy_id)
|
manifest = self.repository.get_manifest(manifest_id)
|
||||||
if not manifest or not policy:
|
policy = self.repository.get_policy(policy_id)
|
||||||
raise ValueError("Manifest or Policy not found")
|
if not manifest or not policy:
|
||||||
|
raise ValueError("Manifest or Policy not found")
|
||||||
|
|
||||||
check_run = ComplianceRun(
|
check_run = ComplianceRun(
|
||||||
id=f"check-{uuid4()}",
|
id=f"check-{uuid4()}",
|
||||||
candidate_id=candidate_id,
|
candidate_id=candidate_id,
|
||||||
manifest_id=manifest_id,
|
manifest_id=manifest_id,
|
||||||
manifest_digest=manifest.manifest_digest,
|
manifest_digest=manifest.manifest_digest,
|
||||||
policy_snapshot_id=policy_id,
|
policy_snapshot_id=policy_id,
|
||||||
registry_snapshot_id=policy.registry_snapshot_id,
|
registry_snapshot_id=policy.registry_snapshot_id,
|
||||||
requested_by=requested_by,
|
requested_by=requested_by,
|
||||||
requested_at=datetime.now(timezone.utc),
|
requested_at=datetime.now(timezone.utc),
|
||||||
status=RunStatus.RUNNING,
|
status=RunStatus.RUNNING,
|
||||||
)
|
)
|
||||||
return self.repository.save_check_run(check_run)
|
return self.repository.save_check_run(check_run)
|
||||||
|
# [/DEF:start_check_run:Function]
|
||||||
|
|
||||||
|
# [DEF:execute_stages:Function]
|
||||||
|
# @PURPOSE: Execute or accept compliance stage outcomes and set intermediate/final check-run status fields.
|
||||||
|
# @PRE: check_run exists and references candidate/policy/registry/manifest identifiers resolvable by repository.
|
||||||
|
# @POST: Returns persisted ComplianceRun with status FAILED on missing dependencies, otherwise SUCCEEDED with final_status set.
|
||||||
|
# @SIDE_EFFECT: Reads candidate/policy/registry/manifest and persists updated check_run.
|
||||||
|
# @DATA_CONTRACT: Input -> (check_run:ComplianceRun, forced_results:Optional[List[ComplianceStageRun]]), Output -> ComplianceRun
|
||||||
def execute_stages(self, check_run: ComplianceRun, forced_results: Optional[List[ComplianceStageRun]] = None) -> ComplianceRun:
|
def execute_stages(self, check_run: ComplianceRun, forced_results: Optional[List[ComplianceStageRun]] = None) -> ComplianceRun:
|
||||||
if forced_results is not None:
|
with belief_scope("execute_stages"):
|
||||||
# In a real scenario, we'd persist these stages.
|
if forced_results is not None:
|
||||||
|
# In a real scenario, we'd persist these stages.
|
||||||
|
return self.repository.save_check_run(check_run)
|
||||||
|
|
||||||
|
# Real Logic Integration
|
||||||
|
candidate = self.repository.get_candidate(check_run.candidate_id)
|
||||||
|
policy = self.repository.get_policy(check_run.policy_snapshot_id)
|
||||||
|
if not candidate or not policy:
|
||||||
|
check_run.status = RunStatus.FAILED
|
||||||
|
return self.repository.save_check_run(check_run)
|
||||||
|
|
||||||
|
registry = self.repository.get_registry(check_run.registry_snapshot_id)
|
||||||
|
manifest = self.repository.get_manifest(check_run.manifest_id)
|
||||||
|
|
||||||
|
if not registry or not manifest:
|
||||||
|
check_run.status = RunStatus.FAILED
|
||||||
|
return self.repository.save_check_run(check_run)
|
||||||
|
|
||||||
|
# Simulate stage execution and violation detection
|
||||||
|
# 1. DATA_PURITY
|
||||||
|
summary = manifest.content_json.get("summary", {})
|
||||||
|
purity_ok = summary.get("prohibited_detected_count", 0) == 0
|
||||||
|
|
||||||
|
if not purity_ok:
|
||||||
|
check_run.final_status = ComplianceDecision.BLOCKED
|
||||||
|
else:
|
||||||
|
check_run.final_status = ComplianceDecision.PASSED
|
||||||
|
|
||||||
|
check_run.status = RunStatus.SUCCEEDED
|
||||||
|
check_run.finished_at = datetime.now(timezone.utc)
|
||||||
|
|
||||||
return self.repository.save_check_run(check_run)
|
return self.repository.save_check_run(check_run)
|
||||||
|
# [/DEF:execute_stages:Function]
|
||||||
# Real Logic Integration
|
|
||||||
candidate = self.repository.get_candidate(check_run.candidate_id)
|
|
||||||
policy = self.repository.get_policy(check_run.policy_snapshot_id)
|
|
||||||
if not candidate or not policy:
|
|
||||||
check_run.status = RunStatus.FAILED
|
|
||||||
return self.repository.save_check_run(check_run)
|
|
||||||
|
|
||||||
registry = self.repository.get_registry(check_run.registry_snapshot_id)
|
|
||||||
manifest = self.repository.get_manifest(check_run.manifest_id)
|
|
||||||
|
|
||||||
if not registry or not manifest:
|
|
||||||
check_run.status = RunStatus.FAILED
|
|
||||||
return self.repository.save_check_run(check_run)
|
|
||||||
|
|
||||||
# Simulate stage execution and violation detection
|
|
||||||
# 1. DATA_PURITY
|
|
||||||
summary = manifest.content_json.get("summary", {})
|
|
||||||
purity_ok = summary.get("prohibited_detected_count", 0) == 0
|
|
||||||
|
|
||||||
if not purity_ok:
|
|
||||||
check_run.final_status = ComplianceDecision.BLOCKED
|
|
||||||
else:
|
|
||||||
check_run.final_status = ComplianceDecision.PASSED
|
|
||||||
|
|
||||||
check_run.status = RunStatus.SUCCEEDED
|
|
||||||
check_run.finished_at = datetime.now(timezone.utc)
|
|
||||||
|
|
||||||
return self.repository.save_check_run(check_run)
|
|
||||||
|
|
||||||
# [DEF:finalize_run:Function]
|
# [DEF:finalize_run:Function]
|
||||||
# @PURPOSE: Finalize run status based on cumulative stage results.
|
# @PURPOSE: Finalize run status based on cumulative stage results.
|
||||||
# @POST: Status derivation follows strict MANDATORY_STAGE_ORDER.
|
# @PRE: check_run was started and may already contain a derived final_status from stage execution.
|
||||||
|
# @POST: Returns persisted ComplianceRun in SUCCEEDED status with final_status guaranteed non-empty.
|
||||||
|
# @SIDE_EFFECT: Mutates check_run terminal fields and persists via repository.save_check_run.
|
||||||
|
# @DATA_CONTRACT: Input -> ComplianceRun, Output -> ComplianceRun
|
||||||
def finalize_run(self, check_run: ComplianceRun) -> ComplianceRun:
|
def finalize_run(self, check_run: ComplianceRun) -> ComplianceRun:
|
||||||
# If not already set by execute_stages
|
with belief_scope("finalize_run"):
|
||||||
if not check_run.final_status:
|
# If not already set by execute_stages
|
||||||
check_run.final_status = ComplianceDecision.PASSED
|
if not check_run.final_status:
|
||||||
|
check_run.final_status = ComplianceDecision.PASSED
|
||||||
check_run.status = RunStatus.SUCCEEDED
|
|
||||||
check_run.finished_at = datetime.now(timezone.utc)
|
check_run.status = RunStatus.SUCCEEDED
|
||||||
return self.repository.save_check_run(check_run)
|
check_run.finished_at = datetime.now(timezone.utc)
|
||||||
|
return self.repository.save_check_run(check_run)
|
||||||
|
# [/DEF:finalize_run:Function]
|
||||||
# [/DEF:CleanComplianceOrchestrator:Class]
|
# [/DEF:CleanComplianceOrchestrator:Class]
|
||||||
|
|
||||||
|
|
||||||
# [DEF:run_check_legacy:Function]
|
# [DEF:run_check_legacy:Function]
|
||||||
# @PURPOSE: Legacy wrapper for compatibility with previous orchestrator call style.
|
# @PURPOSE: Legacy wrapper for compatibility with previous orchestrator call style.
|
||||||
# @PRE: Candidate/policy/manifest identifiers are valid for repository.
|
# @PRE: repository and identifiers are valid and resolvable by orchestrator dependencies.
|
||||||
# @POST: Returns finalized ComplianceRun produced by orchestrator.
|
# @POST: Returns finalized ComplianceRun produced by orchestrator start->execute->finalize sequence.
|
||||||
|
# @SIDE_EFFECT: Reads/writes compliance entities through repository during orchestrator calls.
|
||||||
|
# @DATA_CONTRACT: Input -> (repository:CleanReleaseRepository, candidate_id:str, policy_id:str, requested_by:str, manifest_id:str), Output -> ComplianceRun
|
||||||
def run_check_legacy(
|
def run_check_legacy(
|
||||||
repository: CleanReleaseRepository,
|
repository: CleanReleaseRepository,
|
||||||
candidate_id: str,
|
candidate_id: str,
|
||||||
@@ -125,14 +153,15 @@ def run_check_legacy(
|
|||||||
requested_by: str,
|
requested_by: str,
|
||||||
manifest_id: str,
|
manifest_id: str,
|
||||||
) -> ComplianceRun:
|
) -> ComplianceRun:
|
||||||
orchestrator = CleanComplianceOrchestrator(repository)
|
with belief_scope("run_check_legacy"):
|
||||||
run = orchestrator.start_check_run(
|
orchestrator = CleanComplianceOrchestrator(repository)
|
||||||
candidate_id=candidate_id,
|
run = orchestrator.start_check_run(
|
||||||
policy_id=policy_id,
|
candidate_id=candidate_id,
|
||||||
requested_by=requested_by,
|
policy_id=policy_id,
|
||||||
manifest_id=manifest_id,
|
requested_by=requested_by,
|
||||||
)
|
manifest_id=manifest_id,
|
||||||
run = orchestrator.execute_stages(run)
|
)
|
||||||
return orchestrator.finalize_run(run)
|
run = orchestrator.execute_stages(run)
|
||||||
|
return orchestrator.finalize_run(run)
|
||||||
# [/DEF:run_check_legacy:Function]
|
# [/DEF:run_check_legacy:Function]
|
||||||
# [/DEF:backend.src.services.clean_release.compliance_orchestrator:Module]
|
# [/DEF:backend.src.services.clean_release.compliance_orchestrator:Module]
|
||||||
@@ -1,17 +1,40 @@
|
|||||||
<!-- [DEF:ProtectedRoute:Component] -->
|
<!--[DEF:ProtectedRoute.svelte:Module] -->
|
||||||
<!--
|
<!--
|
||||||
@TIER: STANDARD
|
@TIER: CRITICAL
|
||||||
@SEMANTICS: auth, guard, route, protection, permission
|
@SEMANTICS: auth, route-guard, permission, redirect, session-validation
|
||||||
@PURPOSE: Wraps content to ensure only authenticated and authorized users can access it.
|
@PURPOSE: Enforces authenticated and authorized access before protected route content is rendered.
|
||||||
@LAYER: Component
|
@LAYER: UI
|
||||||
@RELATION: USES -> authStore
|
@RELATION: [BINDS_TO] ->[frontend.src.lib.auth.store.auth]
|
||||||
@RELATION: CALLS -> goto
|
@RELATION: [CALLS] ->[$app/navigation.goto]
|
||||||
@RELATION: DEPENDS_ON -> frontend.src.lib.auth.permissions.hasPermission
|
@RELATION: [DEPENDS_ON] ->[$lib/auth/permissions.hasPermission]
|
||||||
|
@RELATION: [CALLS] ->[frontend.src.lib.api.api.fetchApi]
|
||||||
|
@INVARIANT: Unauthenticated users are redirected to /login, unauthorized users are redirected to fallbackPath, and protected slot renders only when access is verified.
|
||||||
|
@UX_STATE: Idle -> Component mounted, verification not yet started.
|
||||||
|
@UX_STATE: Loading -> Spinner is rendered while auth/session/permission validation is in progress.
|
||||||
|
@UX_STATE: Error -> Session validation failure triggers logout and /login redirect.
|
||||||
|
@UX_STATE: Success -> Protected slot content is rendered for authenticated users with valid access.
|
||||||
|
@UX_FEEDBACK: Spinner feedback during Loading and navigation redirect feedback on Error/Unauthorized outcomes.
|
||||||
|
@UX_RECOVERY: Re-authenticate via /login after logout; retry occurs automatically on next protected navigation.
|
||||||
|
@UX_REACTIVITY: Props are bound via $props; local mutable UI flags use $state; auth store is consumed through Svelte store subscription ($auth).
|
||||||
|
@TEST_CONTRACT: [token:user:requiredPermission] -> [redirect:/login | redirect:fallbackPath | render:slot]
|
||||||
|
@TEST_SCENARIO: MissingTokenRedirect -> Navigates to /login and suppresses slot render.
|
||||||
|
@TEST_SCENARIO: PermissionDeniedRedirect -> Navigates to fallbackPath and suppresses slot render.
|
||||||
|
@TEST_SCENARIO: AuthorizedRender -> Renders slot when authenticated and permission passes.
|
||||||
|
@TEST_FIXTURE: AuthGuardStateMatrix -> INLINE_JSON
|
||||||
|
@TEST_EDGE: missing_field -> user payload missing in store triggers /auth/me fetch, then logout+redirect on failure.
|
||||||
|
@TEST_EDGE: invalid_type -> requiredPermission malformed (non-string/null) resolves to denied path and fallback redirect.
|
||||||
|
@TEST_EDGE: external_fail -> /auth/me network/API failure triggers logout and /login redirect.
|
||||||
|
@TEST_INVARIANT: GuardRedirectPolicy -> VERIFIED_BY: [MissingTokenRedirect, PermissionDeniedRedirect]
|
||||||
|
@TEST_INVARIANT: ProtectedRenderGate -> VERIFIED_BY: [AuthorizedRender]
|
||||||
|
-->
|
||||||
|
|
||||||
@INVARIANT: Redirects to /login if user is not authenticated and to fallback route when permission is denied.
|
<!--[DEF:ProtectedRoute:Component] -->
|
||||||
@UX_STATE: Loading -> Shows spinner while session/permission check is in progress.
|
<!--
|
||||||
@UX_STATE: Authorized -> Renders protected slot content.
|
@PURPOSE: Wraps protected slot content with session and permission verification guards.
|
||||||
@UX_RECOVERY: Invalid token triggers logout and redirect to /login.
|
@PRE: auth store and navigation API are available in runtime; component is mounted in a browser context.
|
||||||
|
@POST: Slot renders only when $auth.isAuthenticated and hasRouteAccess are both true.
|
||||||
|
@SIDE_EFFECT: Performs /auth/me request, mutates auth store state, emits console instrumentation logs, and executes navigation redirects.
|
||||||
|
@DATA_CONTRACT: Input[$props.requiredPermission?: string|null, $props.fallbackPath?: string] -> Output[UIState{isCheckingAccess:boolean, hasRouteAccess:boolean}]
|
||||||
-->
|
-->
|
||||||
|
|
||||||
<script lang="ts">
|
<script lang="ts">
|
||||||
@@ -21,66 +44,91 @@
|
|||||||
import { goto } from "$app/navigation";
|
import { goto } from "$app/navigation";
|
||||||
import { hasPermission } from "$lib/auth/permissions.js";
|
import { hasPermission } from "$lib/auth/permissions.js";
|
||||||
|
|
||||||
export let requiredPermission: string | null = null;
|
const { requiredPermission = null, fallbackPath = "/profile" } = $props<{
|
||||||
export let fallbackPath: string = "/profile";
|
requiredPermission?: string | null;
|
||||||
|
fallbackPath?: string;
|
||||||
|
}>();
|
||||||
|
|
||||||
let hasRouteAccess = false;
|
let hasRouteAccess = $state(false);
|
||||||
let isCheckingAccess = true;
|
let isCheckingAccess = $state(true);
|
||||||
|
|
||||||
// [DEF:verifySessionAndAccess:Function]
|
const belief_scope = async <T>(scopeId: string, run: () => Promise<T>): Promise<T> => {
|
||||||
|
console.info(`[${scopeId}][REASON] belief_scope.enter`);
|
||||||
|
try {
|
||||||
|
return await run();
|
||||||
|
} finally {
|
||||||
|
console.info(`[${scopeId}][REFLECT] belief_scope.exit`);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
|
||||||
|
<!-- [DEF:verifySessionAndAccess:Function] -->
|
||||||
/**
|
/**
|
||||||
* @purpose Validates active session and optional route permission before rendering protected slot.
|
* @PURPOSE: Validates session and optional permission gate before allowing protected content render.
|
||||||
* @pre Auth store is initialized.
|
* @PRE: auth store is initialized and can provide token/user state; navigation is available.
|
||||||
* @post hasRouteAccess is true only when session and permission checks pass.
|
* @POST: hasRouteAccess=true only when user identity is valid and permission check (if provided) passes.
|
||||||
* @side_effect May update auth store, perform redirect, and fetch current user.
|
* @SIDE_EFFECT: Mutates auth loading/user state, performs API I/O to /auth/me, and may redirect.
|
||||||
|
* @DATA_CONTRACT: Input[AuthState, requiredPermission, fallbackPath] -> Output[RouteDecision{login_redirect|fallback_redirect|grant}]
|
||||||
*/
|
*/
|
||||||
async function verifySessionAndAccess(): Promise<void> {
|
async function verifySessionAndAccess(): Promise<void> {
|
||||||
isCheckingAccess = true;
|
return belief_scope("ProtectedRoute.verifySessionAndAccess", async () => {
|
||||||
try {
|
console.info("[ProtectedRoute.verifySessionAndAccess][REASON] Starting route access verification");
|
||||||
if (!$auth.token) {
|
isCheckingAccess = true;
|
||||||
auth.setLoading(false);
|
|
||||||
await goto("/login");
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
let currentUser = $auth.user;
|
try {
|
||||||
if (!currentUser) {
|
if (!$auth.token) {
|
||||||
auth.setLoading(true);
|
auth.setLoading(false);
|
||||||
try {
|
console.info("[ProtectedRoute.verifySessionAndAccess][REFLECT] Missing token, redirecting to /login");
|
||||||
const user = await api.fetchApi("/auth/me");
|
|
||||||
auth.setUser(user);
|
|
||||||
currentUser = user;
|
|
||||||
} catch (error) {
|
|
||||||
console.error("[ProtectedRoute][COHERENCE:FAILED] Failed to verify session:", error);
|
|
||||||
auth.logout();
|
|
||||||
await goto("/login");
|
await goto("/login");
|
||||||
return;
|
return;
|
||||||
} finally {
|
|
||||||
auth.setLoading(false);
|
|
||||||
}
|
}
|
||||||
}
|
|
||||||
|
|
||||||
if (!currentUser) {
|
let currentUser = $auth.user;
|
||||||
auth.logout();
|
if (!currentUser) {
|
||||||
await goto("/login");
|
auth.setLoading(true);
|
||||||
return;
|
try {
|
||||||
}
|
const user = await api.fetchApi("/auth/me");
|
||||||
|
auth.setUser(user);
|
||||||
|
currentUser = user;
|
||||||
|
console.info("[ProtectedRoute.verifySessionAndAccess][REASON] Session user hydrated from /auth/me");
|
||||||
|
} catch (error) {
|
||||||
|
console.warn("[ProtectedRoute.verifySessionAndAccess][EXPLORE] Session validation failed", { error });
|
||||||
|
auth.logout();
|
||||||
|
await goto("/login");
|
||||||
|
return;
|
||||||
|
} finally {
|
||||||
|
auth.setLoading(false);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
if (requiredPermission && !hasPermission(currentUser, requiredPermission, "READ")) {
|
if (!currentUser) {
|
||||||
console.warn(
|
auth.logout();
|
||||||
`[ProtectedRoute][REFLECT] Permission denied for ${requiredPermission}, redirecting to ${fallbackPath}`,
|
console.info("[ProtectedRoute.verifySessionAndAccess][REFLECT] User unresolved, redirecting to /login");
|
||||||
);
|
await goto("/login");
|
||||||
hasRouteAccess = false;
|
return;
|
||||||
await goto(fallbackPath);
|
}
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
hasRouteAccess = true;
|
if (requiredPermission && !hasPermission(currentUser, requiredPermission, "READ")) {
|
||||||
} finally {
|
console.info("[ProtectedRoute.verifySessionAndAccess][REFLECT] Permission denied, redirecting to fallback", {
|
||||||
isCheckingAccess = false;
|
requiredPermission,
|
||||||
}
|
fallbackPath,
|
||||||
|
});
|
||||||
|
hasRouteAccess = false;
|
||||||
|
await goto(fallbackPath);
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
hasRouteAccess = true;
|
||||||
|
console.info("[ProtectedRoute.verifySessionAndAccess][REFLECT] Access granted");
|
||||||
|
} finally {
|
||||||
|
isCheckingAccess = false;
|
||||||
|
console.info("[ProtectedRoute.verifySessionAndAccess][REFLECT] Verification cycle completed", {
|
||||||
|
isCheckingAccess,
|
||||||
|
hasRouteAccess,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:verifySessionAndAccess:Function]
|
<!-- [/DEF:verifySessionAndAccess:Function] -->
|
||||||
|
|
||||||
onMount(() => {
|
onMount(() => {
|
||||||
void verifySessionAndAccess();
|
void verifySessionAndAccess();
|
||||||
@@ -95,4 +143,5 @@
|
|||||||
<slot />
|
<slot />
|
||||||
{/if}
|
{/if}
|
||||||
|
|
||||||
<!-- [/DEF:ProtectedRoute:Component] -->
|
<!-- [/DEF:ProtectedRoute:Component] -->
|
||||||
|
<!-- [/DEF:ProtectedRoute.svelte:Module] -->
|
||||||
@@ -1,11 +1,45 @@
|
|||||||
|
<!-- [DEF:frontend/src/routes/migration/+page.svelte:Module] -->
|
||||||
|
<!--
|
||||||
|
@TIER: CRITICAL
|
||||||
|
@SEMANTICS: migration, dashboard, environment, selection, database-replacement, dry-run, task-resume
|
||||||
|
@PURPOSE: Main migration dashboard page for environment selection, dry-run validation, and migration execution.
|
||||||
|
@LAYER: UI
|
||||||
|
@RELATION: [DEPENDS_ON] ->[frontend/src/lib/api.js]
|
||||||
|
@RELATION: [DEPENDS_ON] ->[frontend/src/lib/stores.js]
|
||||||
|
@RELATION: [DEPENDS_ON] ->[frontend/src/services/taskService.js]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/EnvSelector.svelte]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/DashboardGrid.svelte]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/MappingTable.svelte]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/TaskRunner.svelte]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/TaskHistory.svelte]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/TaskLogViewer.svelte]
|
||||||
|
@RELATION: [BINDS_TO] ->[frontend/src/components/PasswordPrompt.svelte]
|
||||||
|
@INVARIANT: Migration start is blocked unless source and target environments are selected, distinct, and at least one dashboard is selected.
|
||||||
|
@UX_STATE: Idle -> User configures source/target environments, dashboard selection, and migration options.
|
||||||
|
@UX_STATE: Loading -> Environment/database/dry-run fetch operations disable relevant actions and show progress text.
|
||||||
|
@UX_STATE: Error -> Error banner/prompt message is shown while keeping user input intact for correction.
|
||||||
|
@UX_STATE: Success -> Dry-run summary or active task view is rendered after successful API operations.
|
||||||
|
@UX_FEEDBACK: Inline error banner, disabled CTA states, loading labels, dry-run summary cards, modal dialogs.
|
||||||
|
@UX_RECOVERY: User can adjust selection, refresh databases, retry dry-run/migration, resume task with passwords, or cancel modal flow.
|
||||||
|
@UX_REACTIVITY: State transitions rely on Svelte reactive bindings and store subscription to selectedTask.
|
||||||
|
@TEST_CONTRACT: [DashboardSelection + Environment IDs] -> [DryRunResult | TaskStartResult | ValidationError]
|
||||||
|
@TEST_SCENARIO: start_migration_valid_selection -> Starts backend task and switches to task view.
|
||||||
|
@TEST_SCENARIO: start_dry_run_valid_selection -> Renders pre-flight diff summary and risk panel.
|
||||||
|
@TEST_SCENARIO: awaiting_input_task_selected -> Opens password prompt with requested databases.
|
||||||
|
@TEST_FIXTURE: migration_dry_run_fixture -> file:backend/tests/fixtures/migration_dry_run_fixture.json
|
||||||
|
@TEST_EDGE: missing_field -> Empty source/target/selection blocks action and surfaces validation message.
|
||||||
|
@TEST_EDGE: invalid_type -> Malformed API payload is surfaced through error state.
|
||||||
|
@TEST_EDGE: external_fail -> API failures set error state and preserve recoverable UI controls.
|
||||||
|
@TEST_INVARIANT: migration_guardrails_enforced -> VERIFIED_BY: [start_migration_valid_selection, start_dry_run_valid_selection, missing_field]
|
||||||
|
-->
|
||||||
|
|
||||||
<!-- [DEF:MigrationDashboard:Component] -->
|
<!-- [DEF:MigrationDashboard:Component] -->
|
||||||
<!--
|
<!--
|
||||||
@SEMANTICS: migration, dashboard, environment, selection, database-replacement
|
@PURPOSE: Orchestrate migration UI workflow and route user actions to backend APIs and task store.
|
||||||
@PURPOSE: Main dashboard for configuring and starting migrations.
|
@PRE: API client and component dependencies are available; i18n store is initialized.
|
||||||
@LAYER: Page
|
@POST: User can progress through selection, dry-run, migration start, and task resume flows.
|
||||||
@RELATION: USES -> EnvSelector
|
@SIDE_EFFECT: Performs HTTP requests, mutates local UI state, updates selectedTask store.
|
||||||
|
@DATA_CONTRACT: DashboardSelection -> MigrationDryRunResult | Task DTO (from /tasks endpoints)
|
||||||
@INVARIANT: Migration cannot start without source and target environments.
|
|
||||||
-->
|
-->
|
||||||
|
|
||||||
<script lang="ts">
|
<script lang="ts">
|
||||||
@@ -58,6 +92,8 @@
|
|||||||
let passwordPromptErrorMessage = "";
|
let passwordPromptErrorMessage = "";
|
||||||
// [/SECTION]
|
// [/SECTION]
|
||||||
|
|
||||||
|
const belief_scope = <T>(_id: string, fn: () => T): T => fn();
|
||||||
|
|
||||||
// [DEF:fetchEnvironments:Function]
|
// [DEF:fetchEnvironments:Function]
|
||||||
/**
|
/**
|
||||||
* @purpose Fetches the list of environments from the API.
|
* @purpose Fetches the list of environments from the API.
|
||||||
@@ -65,13 +101,15 @@
|
|||||||
* @post environments state is updated.
|
* @post environments state is updated.
|
||||||
*/
|
*/
|
||||||
async function fetchEnvironments() {
|
async function fetchEnvironments() {
|
||||||
try {
|
return belief_scope("fetchEnvironments", async () => {
|
||||||
environments = await api.getEnvironmentsList();
|
try {
|
||||||
} catch (e) {
|
environments = await api.getEnvironmentsList();
|
||||||
error = e.message;
|
} catch (e) {
|
||||||
} finally {
|
error = e.message;
|
||||||
loading = false;
|
} finally {
|
||||||
}
|
loading = false;
|
||||||
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:fetchEnvironments:Function]
|
// [/DEF:fetchEnvironments:Function]
|
||||||
|
|
||||||
@@ -83,13 +121,15 @@
|
|||||||
* @post dashboards state is updated.
|
* @post dashboards state is updated.
|
||||||
*/
|
*/
|
||||||
async function fetchDashboards(envId: string) {
|
async function fetchDashboards(envId: string) {
|
||||||
try {
|
return belief_scope("fetchDashboards", async () => {
|
||||||
dashboards = await api.requestApi(`/environments/${envId}/dashboards`);
|
try {
|
||||||
selectedDashboardIds = []; // Reset selection when env changes
|
dashboards = await api.requestApi(`/environments/${envId}/dashboards`);
|
||||||
} catch (e) {
|
selectedDashboardIds = []; // Reset selection when env changes
|
||||||
error = e.message;
|
} catch (e) {
|
||||||
dashboards = [];
|
error = e.message;
|
||||||
}
|
dashboards = [];
|
||||||
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:fetchDashboards:Function]
|
// [/DEF:fetchDashboards:Function]
|
||||||
|
|
||||||
@@ -105,32 +145,34 @@
|
|||||||
* @post sourceDatabases, targetDatabases, mappings, and suggestions are updated.
|
* @post sourceDatabases, targetDatabases, mappings, and suggestions are updated.
|
||||||
*/
|
*/
|
||||||
async function fetchDatabases() {
|
async function fetchDatabases() {
|
||||||
if (!sourceEnvId || !targetEnvId) return;
|
return belief_scope("fetchDatabases", async () => {
|
||||||
fetchingDbs = true;
|
if (!sourceEnvId || !targetEnvId) return;
|
||||||
error = "";
|
fetchingDbs = true;
|
||||||
|
error = "";
|
||||||
|
|
||||||
try {
|
try {
|
||||||
const [src, tgt, maps, sugs] = await Promise.all([
|
const [src, tgt, maps, sugs] = await Promise.all([
|
||||||
api.requestApi(`/environments/${sourceEnvId}/databases`),
|
api.requestApi(`/environments/${sourceEnvId}/databases`),
|
||||||
api.requestApi(`/environments/${targetEnvId}/databases`),
|
api.requestApi(`/environments/${targetEnvId}/databases`),
|
||||||
api.requestApi(
|
api.requestApi(
|
||||||
`/mappings?source_env_id=${sourceEnvId}&target_env_id=${targetEnvId}`,
|
`/mappings?source_env_id=${sourceEnvId}&target_env_id=${targetEnvId}`,
|
||||||
),
|
),
|
||||||
api.postApi(`/mappings/suggest`, {
|
api.postApi(`/mappings/suggest`, {
|
||||||
source_env_id: sourceEnvId,
|
source_env_id: sourceEnvId,
|
||||||
target_env_id: targetEnvId,
|
target_env_id: targetEnvId,
|
||||||
}),
|
}),
|
||||||
]);
|
]);
|
||||||
|
|
||||||
sourceDatabases = src;
|
sourceDatabases = src;
|
||||||
targetDatabases = tgt;
|
targetDatabases = tgt;
|
||||||
mappings = maps;
|
mappings = maps;
|
||||||
suggestions = sugs;
|
suggestions = sugs;
|
||||||
} catch (e) {
|
} catch (e) {
|
||||||
error = e.message;
|
error = e.message;
|
||||||
} finally {
|
} finally {
|
||||||
fetchingDbs = false;
|
fetchingDbs = false;
|
||||||
}
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:fetchDatabases:Function]
|
// [/DEF:fetchDatabases:Function]
|
||||||
|
|
||||||
@@ -141,29 +183,31 @@
|
|||||||
* @post Mapping is saved and local mappings list is updated.
|
* @post Mapping is saved and local mappings list is updated.
|
||||||
*/
|
*/
|
||||||
async function handleMappingUpdate(event: CustomEvent) {
|
async function handleMappingUpdate(event: CustomEvent) {
|
||||||
const { sourceUuid, targetUuid } = event.detail;
|
return belief_scope("handleMappingUpdate", async () => {
|
||||||
const sDb = sourceDatabases.find((d) => d.uuid === sourceUuid);
|
const { sourceUuid, targetUuid } = event.detail;
|
||||||
const tDb = targetDatabases.find((d) => d.uuid === targetUuid);
|
const sDb = sourceDatabases.find((d) => d.uuid === sourceUuid);
|
||||||
|
const tDb = targetDatabases.find((d) => d.uuid === targetUuid);
|
||||||
|
|
||||||
if (!sDb || !tDb) return;
|
if (!sDb || !tDb) return;
|
||||||
|
|
||||||
try {
|
try {
|
||||||
const savedMapping = await api.postApi("/mappings", {
|
const savedMapping = await api.postApi("/mappings", {
|
||||||
source_env_id: sourceEnvId,
|
source_env_id: sourceEnvId,
|
||||||
target_env_id: targetEnvId,
|
target_env_id: targetEnvId,
|
||||||
source_db_uuid: sourceUuid,
|
source_db_uuid: sourceUuid,
|
||||||
target_db_uuid: targetUuid,
|
target_db_uuid: targetUuid,
|
||||||
source_db_name: sDb.database_name,
|
source_db_name: sDb.database_name,
|
||||||
target_db_name: tDb.database_name,
|
target_db_name: tDb.database_name,
|
||||||
});
|
});
|
||||||
|
|
||||||
mappings = [
|
mappings = [
|
||||||
...mappings.filter((m) => m.source_db_uuid !== sourceUuid),
|
...mappings.filter((m) => m.source_db_uuid !== sourceUuid),
|
||||||
savedMapping,
|
savedMapping,
|
||||||
];
|
];
|
||||||
} catch (e) {
|
} catch (e) {
|
||||||
error = e.message;
|
error = e.message;
|
||||||
}
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:handleMappingUpdate:Function]
|
// [/DEF:handleMappingUpdate:Function]
|
||||||
|
|
||||||
@@ -172,10 +216,12 @@
|
|||||||
// @PRE: event.detail contains task object.
|
// @PRE: event.detail contains task object.
|
||||||
// @POST: logViewer state updated and showLogViewer set to true.
|
// @POST: logViewer state updated and showLogViewer set to true.
|
||||||
function handleViewLogs(event: CustomEvent) {
|
function handleViewLogs(event: CustomEvent) {
|
||||||
const task = event.detail;
|
return belief_scope("handleViewLogs", () => {
|
||||||
logViewerTaskId = task.id;
|
const task = event.detail;
|
||||||
logViewerTaskStatus = task.status;
|
logViewerTaskId = task.id;
|
||||||
showLogViewer = true;
|
logViewerTaskStatus = task.status;
|
||||||
|
showLogViewer = true;
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:handleViewLogs:Function]
|
// [/DEF:handleViewLogs:Function]
|
||||||
|
|
||||||
@@ -212,19 +258,21 @@
|
|||||||
// @PRE: event.detail contains passwords.
|
// @PRE: event.detail contains passwords.
|
||||||
// @POST: resumeTask is called and showPasswordPrompt is hidden on success.
|
// @POST: resumeTask is called and showPasswordPrompt is hidden on success.
|
||||||
async function handleResumeMigration(event: CustomEvent) {
|
async function handleResumeMigration(event: CustomEvent) {
|
||||||
if (!$selectedTask) return;
|
return belief_scope("handleResumeMigration", async () => {
|
||||||
|
if (!$selectedTask) return;
|
||||||
|
|
||||||
const { passwords } = event.detail;
|
const { passwords } = event.detail;
|
||||||
try {
|
try {
|
||||||
await resumeTask($selectedTask.id, passwords);
|
await resumeTask($selectedTask.id, passwords);
|
||||||
showPasswordPrompt = false;
|
showPasswordPrompt = false;
|
||||||
// Task status update will be handled by store/websocket
|
// Task status update will be handled by store/websocket
|
||||||
} catch (e) {
|
} catch (e) {
|
||||||
console.error("Failed to resume task:", e);
|
console.error("Failed to resume task:", e);
|
||||||
passwordPromptErrorMessage =
|
passwordPromptErrorMessage =
|
||||||
e.message || $t.migration?.resume_failed ;
|
e.message || $t.migration?.resume_failed;
|
||||||
// Keep prompt open
|
// Keep prompt open
|
||||||
}
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:handleResumeMigration:Function]
|
// [/DEF:handleResumeMigration:Function]
|
||||||
|
|
||||||
@@ -235,69 +283,71 @@
|
|||||||
* @post Migration task is started and selectedTask is updated.
|
* @post Migration task is started and selectedTask is updated.
|
||||||
*/
|
*/
|
||||||
async function startMigration() {
|
async function startMigration() {
|
||||||
if (!sourceEnvId || !targetEnvId) {
|
return belief_scope("startMigration", async () => {
|
||||||
error =
|
if (!sourceEnvId || !targetEnvId) {
|
||||||
$t.migration?.select_both_envs ||
|
error =
|
||||||
"Please select both source and target environments.";
|
$t.migration?.select_both_envs ||
|
||||||
return;
|
"Please select both source and target environments.";
|
||||||
}
|
return;
|
||||||
if (sourceEnvId === targetEnvId) {
|
|
||||||
error =
|
|
||||||
$t.migration?.different_envs ||
|
|
||||||
"Source and target environments must be different.";
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
if (selectedDashboardIds.length === 0) {
|
|
||||||
error =
|
|
||||||
$t.migration?.select_dashboards ||
|
|
||||||
"Please select at least one dashboard to migrate.";
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
error = "";
|
|
||||||
try {
|
|
||||||
dryRunResult = null;
|
|
||||||
const selection: DashboardSelection = {
|
|
||||||
selected_ids: selectedDashboardIds,
|
|
||||||
source_env_id: sourceEnvId,
|
|
||||||
target_env_id: targetEnvId,
|
|
||||||
replace_db_config: replaceDb,
|
|
||||||
fix_cross_filters: fixCrossFilters,
|
|
||||||
};
|
|
||||||
console.log(
|
|
||||||
`[MigrationDashboard][Action] Starting migration with selection:`,
|
|
||||||
selection,
|
|
||||||
);
|
|
||||||
const result = await api.postApi("/migration/execute", selection);
|
|
||||||
console.log(
|
|
||||||
`[MigrationDashboard][Action] Migration started: ${result.task_id} - ${result.message}`,
|
|
||||||
);
|
|
||||||
|
|
||||||
// Wait a brief moment for the backend to ensure the task is retrievable
|
|
||||||
await new Promise((r) => setTimeout(r, 500));
|
|
||||||
|
|
||||||
// Fetch full task details and switch to TaskRunner view
|
|
||||||
try {
|
|
||||||
const task = await api.getTask(result.task_id);
|
|
||||||
selectedTask.set(task);
|
|
||||||
} catch (fetchErr) {
|
|
||||||
// Fallback: create a temporary task object to switch view immediately
|
|
||||||
console.warn(
|
|
||||||
$t.migration?.task_placeholder_warn ||
|
|
||||||
"Could not fetch task details immediately, using placeholder.",
|
|
||||||
);
|
|
||||||
selectedTask.set({
|
|
||||||
id: result.task_id,
|
|
||||||
plugin_id: "superset-migration",
|
|
||||||
status: "RUNNING",
|
|
||||||
logs: [],
|
|
||||||
params: {},
|
|
||||||
});
|
|
||||||
}
|
}
|
||||||
} catch (e) {
|
if (sourceEnvId === targetEnvId) {
|
||||||
console.error(`[MigrationDashboard][Failure] Migration failed:`, e);
|
error =
|
||||||
error = e.message;
|
$t.migration?.different_envs ||
|
||||||
}
|
"Source and target environments must be different.";
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
if (selectedDashboardIds.length === 0) {
|
||||||
|
error =
|
||||||
|
$t.migration?.select_dashboards ||
|
||||||
|
"Please select at least one dashboard to migrate.";
|
||||||
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
|
error = "";
|
||||||
|
try {
|
||||||
|
dryRunResult = null;
|
||||||
|
const selection: DashboardSelection = {
|
||||||
|
selected_ids: selectedDashboardIds,
|
||||||
|
source_env_id: sourceEnvId,
|
||||||
|
target_env_id: targetEnvId,
|
||||||
|
replace_db_config: replaceDb,
|
||||||
|
fix_cross_filters: fixCrossFilters,
|
||||||
|
};
|
||||||
|
console.log(
|
||||||
|
`[MigrationDashboard][Action] Starting migration with selection:`,
|
||||||
|
selection,
|
||||||
|
);
|
||||||
|
const result = await api.postApi("/migration/execute", selection);
|
||||||
|
console.log(
|
||||||
|
`[MigrationDashboard][Action] Migration started: ${result.task_id} - ${result.message}`,
|
||||||
|
);
|
||||||
|
|
||||||
|
// Wait a brief moment for the backend to ensure the task is retrievable
|
||||||
|
await new Promise((r) => setTimeout(r, 500));
|
||||||
|
|
||||||
|
// Fetch full task details and switch to TaskRunner view
|
||||||
|
try {
|
||||||
|
const task = await api.getTask(result.task_id);
|
||||||
|
selectedTask.set(task);
|
||||||
|
} catch (fetchErr) {
|
||||||
|
// Fallback: create a temporary task object to switch view immediately
|
||||||
|
console.warn(
|
||||||
|
$t.migration?.task_placeholder_warn ||
|
||||||
|
"Could not fetch task details immediately, using placeholder.",
|
||||||
|
);
|
||||||
|
selectedTask.set({
|
||||||
|
id: result.task_id,
|
||||||
|
plugin_id: "superset-migration",
|
||||||
|
status: "RUNNING",
|
||||||
|
logs: [],
|
||||||
|
params: {},
|
||||||
|
});
|
||||||
|
}
|
||||||
|
} catch (e) {
|
||||||
|
console.error(`[MigrationDashboard][Failure] Migration failed:`, e);
|
||||||
|
error = e.message;
|
||||||
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:startMigration:Function]
|
// [/DEF:startMigration:Function]
|
||||||
|
|
||||||
@@ -313,46 +363,59 @@
|
|||||||
* @UX_RECOVERY: User can adjust selection and press Dry Run again.
|
* @UX_RECOVERY: User can adjust selection and press Dry Run again.
|
||||||
*/
|
*/
|
||||||
async function startDryRun() {
|
async function startDryRun() {
|
||||||
if (!sourceEnvId || !targetEnvId) {
|
return belief_scope("startDryRun", async () => {
|
||||||
error =
|
if (!sourceEnvId || !targetEnvId) {
|
||||||
$t.migration?.select_both_envs ||
|
error =
|
||||||
"Please select both source and target environments.";
|
$t.migration?.select_both_envs ||
|
||||||
return;
|
"Please select both source and target environments.";
|
||||||
}
|
return;
|
||||||
if (sourceEnvId === targetEnvId) {
|
}
|
||||||
error =
|
if (sourceEnvId === targetEnvId) {
|
||||||
$t.migration?.different_envs ||
|
error =
|
||||||
"Source and target environments must be different.";
|
$t.migration?.different_envs ||
|
||||||
return;
|
"Source and target environments must be different.";
|
||||||
}
|
return;
|
||||||
if (selectedDashboardIds.length === 0) {
|
}
|
||||||
error =
|
if (selectedDashboardIds.length === 0) {
|
||||||
$t.migration?.select_dashboards ||
|
error =
|
||||||
"Please select at least one dashboard to migrate.";
|
$t.migration?.select_dashboards ||
|
||||||
return;
|
"Please select at least one dashboard to migrate.";
|
||||||
}
|
return;
|
||||||
|
}
|
||||||
|
|
||||||
error = "";
|
error = "";
|
||||||
dryRunLoading = true;
|
dryRunLoading = true;
|
||||||
try {
|
try {
|
||||||
const selection: DashboardSelection = {
|
const selection: DashboardSelection = {
|
||||||
selected_ids: selectedDashboardIds,
|
selected_ids: selectedDashboardIds,
|
||||||
source_env_id: sourceEnvId,
|
source_env_id: sourceEnvId,
|
||||||
target_env_id: targetEnvId,
|
target_env_id: targetEnvId,
|
||||||
replace_db_config: replaceDb,
|
replace_db_config: replaceDb,
|
||||||
fix_cross_filters: fixCrossFilters,
|
fix_cross_filters: fixCrossFilters,
|
||||||
};
|
};
|
||||||
dryRunResult = await api.postApi("/migration/dry-run", selection);
|
dryRunResult = await api.postApi("/migration/dry-run", selection);
|
||||||
} catch (e) {
|
} catch (e) {
|
||||||
error = e.message;
|
error = e.message;
|
||||||
dryRunResult = null;
|
dryRunResult = null;
|
||||||
} finally {
|
} finally {
|
||||||
dryRunLoading = false;
|
dryRunLoading = false;
|
||||||
}
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:startDryRun:Function]
|
// [/DEF:startDryRun:Function]
|
||||||
|
// [/DEF:MigrationDashboard:Component]
|
||||||
</script>
|
</script>
|
||||||
|
|
||||||
|
<!-- [DEF:MigrationDashboardView:Block] -->
|
||||||
|
<!--
|
||||||
|
@PURPOSE: Render migration configuration controls, action CTAs, dry-run results, and modal entry points.
|
||||||
|
@UX_STATE: Idle -> Configuration form is interactive.
|
||||||
|
@UX_STATE: Loading -> Loading messages and disabled buttons prevent duplicate actions.
|
||||||
|
@UX_STATE: Error -> Error banner is displayed without discarding current selection.
|
||||||
|
@UX_STATE: Success -> Dry-run details or TaskRunner content is presented.
|
||||||
|
@UX_FEEDBACK: Buttons, banners, cards, and dialogs provide immediate operation feedback.
|
||||||
|
@UX_RECOVERY: User can retry operations from the same page state.
|
||||||
|
-->
|
||||||
<!-- [SECTION: TEMPLATE] -->
|
<!-- [SECTION: TEMPLATE] -->
|
||||||
<div class="max-w-4xl mx-auto p-6">
|
<div class="max-w-4xl mx-auto p-6">
|
||||||
<PageHeader title={$t.nav.migration} />
|
<PageHeader title={$t.nav.migration} />
|
||||||
@@ -556,5 +619,5 @@
|
|||||||
/>
|
/>
|
||||||
|
|
||||||
<!-- [/SECTION] -->
|
<!-- [/SECTION] -->
|
||||||
|
<!-- [/DEF:MigrationDashboardView:Block] -->
|
||||||
<!-- [/DEF:MigrationDashboard:Component] -->
|
<!-- [/DEF:frontend/src/routes/migration/+page.svelte:Module] -->
|
||||||
|
|||||||
@@ -1,25 +1,55 @@
|
|||||||
<!-- [DEF:MappingManagement:Component] -->
|
<!-- [DEF:frontend/src/routes/migration/mappings/+page.svelte:Module] -->
|
||||||
<!--
|
<!--
|
||||||
@SEMANTICS: mapping, management, database, fuzzy-matching
|
@TIER: CRITICAL
|
||||||
@PURPOSE: Page for managing database mappings between environments.
|
@SEMANTICS: migration-mappings, environment-selection, fuzzy-matching, persistence-contract, ux-state-machine
|
||||||
@LAYER: Page
|
@PURPOSE: Render and orchestrate mapping management UI for source/target environments with backend persistence.
|
||||||
@RELATION: USES -> EnvSelector
|
@LAYER: UI
|
||||||
@RELATION: USES -> MappingTable
|
@RELATION: [DEPENDS_ON] ->[api.client]
|
||||||
|
@RELATION: [DEPENDS_ON] ->[EnvSelector.svelte]
|
||||||
@INVARIANT: Mappings are saved to the backend for persistence.
|
@RELATION: [DEPENDS_ON] ->[MappingTable.svelte]
|
||||||
|
@RELATION: [DEPENDS_ON] ->[i18n.t]
|
||||||
|
@RELATION: [BINDS_TO] ->[migration.mappings.route]
|
||||||
|
@INVARIANT: Persisted mapping state in backend remains the source of truth for rendered mapping pairs.
|
||||||
|
@PRE: Translation store and API client are available; route is mounted in authenticated UI shell.
|
||||||
|
@POST: UI exposes deterministic Idle/Loading/Error/Success states for environment loading, database fetch, and mapping save.
|
||||||
|
@SIDE_EFFECT: Performs network I/O to environment/database/mapping endpoints and mutates local UI state.
|
||||||
|
@DATA_CONTRACT: Input(Event: update{sourceUuid,targetUuid}) -> Model(MappingPayload); Output(UIState{environments,databases,mappings,suggestions,status})
|
||||||
|
@UX_STATE: Idle -> Await environment selection and user-triggered fetch.
|
||||||
|
@UX_STATE: Loading -> Show loading text/spinner while environments or databases are fetched.
|
||||||
|
@UX_STATE: Error -> Render red alert panel with backend error message.
|
||||||
|
@UX_STATE: Success -> Render green confirmation panel after mapping save.
|
||||||
|
@UX_FEEDBACK: Error panel for failed API calls; success panel for persisted mapping confirmation.
|
||||||
|
@UX_RECOVERY: Retry via "fetch databases" action; reselection of environments clears stale arrays.
|
||||||
|
@UX_REACTIVITY: Svelte bind/on directives and reactive template branches coordinate state transitions (legacy route; no semantic logic mutation in this task).
|
||||||
|
@TEST_CONTRACT: [Valid source/target env IDs + fetch click] -> [Databases, mappings, suggestions rendered]
|
||||||
|
@TEST_SCENARIO: [save_mapping_success] -> [success banner appears and mapping list replaces same-source item]
|
||||||
|
@TEST_SCENARIO: [fetch_env_fail] -> [error banner appears and loading state exits]
|
||||||
|
@TEST_FIXTURE: [migration_mapping_pair] -> file:backend/tests/fixtures/migration_dry_run_fixture.json
|
||||||
|
@TEST_EDGE: [missing_field] ->[event.detail lacks sourceUuid/targetUuid => no save]
|
||||||
|
@TEST_EDGE: [invalid_type] ->[non-string env IDs => fetch disabled/guarded]
|
||||||
|
@TEST_EDGE: [external_fail] ->[API request rejection => error state]
|
||||||
|
@TEST_INVARIANT: [backend_source_of_truth] -> VERIFIED_BY: [save_mapping_success, fetch_env_fail]
|
||||||
|
@CRITICAL_TRACE: Frontend scope; Python belief_scope/logger are not applicable in Svelte runtime. Reflective tracing, when added, must use console prefix [ID][REFLECT].
|
||||||
-->
|
-->
|
||||||
|
|
||||||
<script lang="ts">
|
<script lang="ts">
|
||||||
// [SECTION: IMPORTS]
|
// [DEF:MappingsPageScript:Block]
|
||||||
|
// @PURPOSE: Define imports, state, and handlers that drive migration mappings page FSM.
|
||||||
|
// @RELATION: [CALLS] ->[fetchEnvironments]
|
||||||
|
// @RELATION: [CALLS] ->[fetchDatabases]
|
||||||
|
// @RELATION: [CALLS] ->[handleUpdate]
|
||||||
|
|
||||||
|
// [DEF:Imports:Block]
|
||||||
import { onMount } from 'svelte';
|
import { onMount } from 'svelte';
|
||||||
import { api } from '../../../lib/api.js';
|
import { api } from '../../../lib/api.js';
|
||||||
import EnvSelector from '../../../components/EnvSelector.svelte';
|
import EnvSelector from '../../../components/EnvSelector.svelte';
|
||||||
import MappingTable from '../../../components/MappingTable.svelte';
|
import MappingTable from '../../../components/MappingTable.svelte';
|
||||||
import { t } from '$lib/i18n';
|
import { t } from '$lib/i18n';
|
||||||
import { Button, PageHeader } from '$lib/ui';
|
import { Button, PageHeader } from '$lib/ui';
|
||||||
// [/SECTION]
|
// [/DEF:Imports:Block]
|
||||||
|
|
||||||
// [SECTION: STATE]
|
// [DEF:UiState:Store]
|
||||||
|
// @PURPOSE: Maintain local page state for environments, fetched databases, mappings, suggestions, and UX messages.
|
||||||
let environments = [];
|
let environments = [];
|
||||||
let sourceEnvId = "";
|
let sourceEnvId = "";
|
||||||
let targetEnvId = "";
|
let targetEnvId = "";
|
||||||
@@ -31,90 +61,115 @@
|
|||||||
let fetchingDbs = false;
|
let fetchingDbs = false;
|
||||||
let error = "";
|
let error = "";
|
||||||
let success = "";
|
let success = "";
|
||||||
// [/SECTION]
|
// [/DEF:UiState:Store]
|
||||||
|
|
||||||
|
// [DEF:belief_scope:Function]
|
||||||
|
// @PURPOSE: Frontend semantic scope wrapper for CRITICAL trace boundaries without changing business behavior.
|
||||||
|
// @PRE: scopeId is non-empty and run is callable.
|
||||||
|
// @POST: Executes run exactly once and returns/rejects with the same outcome.
|
||||||
|
// @SIDE_EFFECT: Emits trace logs for semantic scope entrance/exit.
|
||||||
|
// @DATA_CONTRACT: Input(scopeId:string, run:() => Promise<T>) -> Output(Promise<T>)
|
||||||
|
async function belief_scope<T>(scopeId: string, run: () => Promise<T>): Promise<T> {
|
||||||
|
console.info(`[${scopeId}][REASON] belief_scope.enter`);
|
||||||
|
try {
|
||||||
|
return await run();
|
||||||
|
} finally {
|
||||||
|
console.info(`[${scopeId}][REFLECT] belief_scope.exit`);
|
||||||
|
}
|
||||||
|
}
|
||||||
|
// [/DEF:belief_scope:Function]
|
||||||
|
|
||||||
// [DEF:fetchEnvironments:Function]
|
// [DEF:fetchEnvironments:Function]
|
||||||
// @PURPOSE: Fetches the list of environments.
|
// @PURPOSE: Load environment options for source/target selectors on initial mount.
|
||||||
// @PRE: None.
|
// @PRE: API client is initialized and route has mounted.
|
||||||
// @POST: environments array is populated.
|
// @POST: loading=false and environments populated on success or error message set on failure.
|
||||||
|
// @SIDE_EFFECT: Network I/O to environments endpoint; mutates environments/error/loading.
|
||||||
|
// @DATA_CONTRACT: Input(void) -> Output(EnvironmentSummary[])
|
||||||
async function fetchEnvironments() {
|
async function fetchEnvironments() {
|
||||||
try {
|
return belief_scope('migration.mappings.fetchEnvironments', async () => {
|
||||||
environments = await api.getEnvironmentsList();
|
try {
|
||||||
} catch (e) {
|
environments = await api.getEnvironmentsList();
|
||||||
error = e.message;
|
} catch (e) {
|
||||||
} finally {
|
error = e.message;
|
||||||
loading = false;
|
} finally {
|
||||||
}
|
loading = false;
|
||||||
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:fetchEnvironments:Function]
|
// [/DEF:fetchEnvironments:Function]
|
||||||
|
|
||||||
onMount(fetchEnvironments);
|
onMount(fetchEnvironments);
|
||||||
|
|
||||||
// [DEF:fetchDatabases:Function]
|
// [DEF:fetchDatabases:Function]
|
||||||
/**
|
// @PURPOSE: Fetch both environment database catalogs, existing mappings, and suggested matches.
|
||||||
* @purpose Fetches databases from both environments and gets suggestions.
|
// @PRE: sourceEnvId and targetEnvId are both selected and non-empty.
|
||||||
* @pre sourceEnvId and targetEnvId must be set.
|
// @POST: fetchingDbs=false and sourceDatabases/targetDatabases/mappings/suggestions updated or error set.
|
||||||
* @post sourceDatabases, targetDatabases, mappings, and suggestions are updated.
|
// @SIDE_EFFECT: Concurrent network I/O to environments, mappings, and suggestion endpoints; clears transient messages.
|
||||||
*/
|
// @DATA_CONTRACT: Input({sourceEnvId,targetEnvId}) -> Output({sourceDatabases,targetDatabases,mappings,suggestions})
|
||||||
async function fetchDatabases() {
|
async function fetchDatabases() {
|
||||||
if (!sourceEnvId || !targetEnvId) return;
|
return belief_scope('migration.mappings.fetchDatabases', async () => {
|
||||||
fetchingDbs = true;
|
if (!sourceEnvId || !targetEnvId) return;
|
||||||
error = "";
|
fetchingDbs = true;
|
||||||
success = "";
|
error = "";
|
||||||
|
success = "";
|
||||||
try {
|
|
||||||
const [src, tgt, maps, sugs] = await Promise.all([
|
try {
|
||||||
api.requestApi(`/environments/${sourceEnvId}/databases`),
|
const [src, tgt, maps, sugs] = await Promise.all([
|
||||||
api.requestApi(`/environments/${targetEnvId}/databases`),
|
api.requestApi(`/environments/${sourceEnvId}/databases`),
|
||||||
api.requestApi(`/mappings?source_env_id=${sourceEnvId}&target_env_id=${targetEnvId}`),
|
api.requestApi(`/environments/${targetEnvId}/databases`),
|
||||||
api.postApi(`/mappings/suggest`, { source_env_id: sourceEnvId, target_env_id: targetEnvId })
|
api.requestApi(`/mappings?source_env_id=${sourceEnvId}&target_env_id=${targetEnvId}`),
|
||||||
]);
|
api.postApi(`/mappings/suggest`, { source_env_id: sourceEnvId, target_env_id: targetEnvId })
|
||||||
|
]);
|
||||||
|
|
||||||
sourceDatabases = src;
|
sourceDatabases = src;
|
||||||
targetDatabases = tgt;
|
targetDatabases = tgt;
|
||||||
mappings = maps;
|
mappings = maps;
|
||||||
suggestions = sugs;
|
suggestions = sugs;
|
||||||
} catch (e) {
|
} catch (e) {
|
||||||
error = e.message;
|
error = e.message;
|
||||||
} finally {
|
} finally {
|
||||||
fetchingDbs = false;
|
fetchingDbs = false;
|
||||||
}
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:fetchDatabases:Function]
|
// [/DEF:fetchDatabases:Function]
|
||||||
|
|
||||||
// [DEF:handleUpdate:Function]
|
// [DEF:handleUpdate:Function]
|
||||||
/**
|
// @PURPOSE: Persist a selected mapping pair and reconcile local mapping list by source database UUID.
|
||||||
* @purpose Saves a mapping to the backend.
|
// @PRE: event.detail includes sourceUuid/targetUuid and matching source/target database records exist.
|
||||||
* @pre event.detail contains sourceUuid and targetUuid.
|
// @POST: mapping persisted; local mappings replaced for same source UUID; success or error feedback shown.
|
||||||
* @post Mapping is saved and local mappings list is updated.
|
// @SIDE_EFFECT: POST /mappings network I/O; mutates mappings/success/error.
|
||||||
*/
|
// @DATA_CONTRACT: Input(CustomEvent<{sourceUuid:string,targetUuid:string}>) -> Output(MappingRecord persisted + UI feedback)
|
||||||
async function handleUpdate(event: CustomEvent) {
|
async function handleUpdate(event: CustomEvent) {
|
||||||
const { sourceUuid, targetUuid } = event.detail;
|
return belief_scope('migration.mappings.handleUpdate', async () => {
|
||||||
const sDb = sourceDatabases.find(d => d.uuid === sourceUuid);
|
const { sourceUuid, targetUuid } = event.detail;
|
||||||
const tDb = targetDatabases.find(d => d.uuid === targetUuid);
|
const sDb = sourceDatabases.find(d => d.uuid === sourceUuid);
|
||||||
|
const tDb = targetDatabases.find(d => d.uuid === targetUuid);
|
||||||
if (!sDb || !tDb) return;
|
|
||||||
|
if (!sDb || !tDb) return;
|
||||||
|
|
||||||
try {
|
try {
|
||||||
const savedMapping = await api.postApi('/mappings', {
|
const savedMapping = await api.postApi('/mappings', {
|
||||||
source_env_id: sourceEnvId,
|
source_env_id: sourceEnvId,
|
||||||
target_env_id: targetEnvId,
|
target_env_id: targetEnvId,
|
||||||
source_db_uuid: sourceUuid,
|
source_db_uuid: sourceUuid,
|
||||||
target_db_uuid: targetUuid,
|
target_db_uuid: targetUuid,
|
||||||
source_db_name: sDb.database_name,
|
source_db_name: sDb.database_name,
|
||||||
target_db_name: tDb.database_name
|
target_db_name: tDb.database_name
|
||||||
});
|
});
|
||||||
|
|
||||||
mappings = [...mappings.filter(m => m.source_db_uuid !== sourceUuid), savedMapping];
|
mappings = [...mappings.filter(m => m.source_db_uuid !== sourceUuid), savedMapping];
|
||||||
success = $t.migration?.mapping_saved ;
|
success = $t.migration?.mapping_saved ;
|
||||||
} catch (e) {
|
} catch (e) {
|
||||||
error = e.message;
|
error = e.message;
|
||||||
}
|
}
|
||||||
|
});
|
||||||
}
|
}
|
||||||
// [/DEF:handleUpdate:Function]
|
// [/DEF:handleUpdate:Function]
|
||||||
|
// [/DEF:MappingsPageScript:Block]
|
||||||
</script>
|
</script>
|
||||||
|
|
||||||
<!-- [SECTION: TEMPLATE] -->
|
<!-- [DEF:MappingsPageTemplate:Block] -->
|
||||||
<div class="max-w-6xl mx-auto p-6">
|
<div class="max-w-6xl mx-auto p-6">
|
||||||
<PageHeader title={$t.migration?.mapping_management } />
|
<PageHeader title={$t.migration?.mapping_management } />
|
||||||
|
|
||||||
@@ -171,7 +226,6 @@
|
|||||||
{/if}
|
{/if}
|
||||||
{/if}
|
{/if}
|
||||||
</div>
|
</div>
|
||||||
<!-- [/SECTION] -->
|
<!-- [/DEF:MappingsPageTemplate:Block] -->
|
||||||
|
|
||||||
|
<!-- [/DEF:frontend/src/routes/migration/mappings/+page.svelte:Module] -->
|
||||||
<!-- [/DEF:MappingManagement:Component] -->
|
|
||||||
|
|||||||
@@ -167,8 +167,6 @@
|
|||||||
};
|
};
|
||||||
}
|
}
|
||||||
// [/DEF:resetForm:Function]
|
// [/DEF:resetForm:Function]
|
||||||
// [/DEF:handleSave:Function]
|
|
||||||
|
|
||||||
// [DEF:handleDelete:Function]
|
// [DEF:handleDelete:Function]
|
||||||
/**
|
/**
|
||||||
* @purpose Deletes a git configuration by ID.
|
* @purpose Deletes a git configuration by ID.
|
||||||
|
|||||||
File diff suppressed because it is too large