semantic update
@@ -2,12 +2,12 @@

> High-level module structure for AI Context. Generated automatically.

**Generated:** 2026-03-10T18:26:33.375187
**Generated:** 2026-03-10T20:52:01.801581

## Summary

- **Total Modules:** 103
- **Total Entities:** 3077
- **Total Entities:** 3088

## Module Hierarchy

@@ -1718,9 +1718,9 @@

### 📁 `migration/`

- 📊 **Tiers:** CRITICAL: 12
- 📊 **Tiers:** CRITICAL: 21
- 📄 **Files:** 1
- 📦 **Entities:** 12
- 📦 **Entities:** 21

**Key Entities:**

@@ -1972,9 +1972,9 @@
### 📁 `root/`

- 🏗️ **Layers:** DevOps/Tooling, Unknown
- 📊 **Tiers:** CRITICAL: 11, STANDARD: 17, TRIVIAL: 9
- 📊 **Tiers:** CRITICAL: 11, STANDARD: 18, TRIVIAL: 10
- 📄 **Files:** 2
- 📦 **Entities:** 37
- 📦 **Entities:** 39

**Key Entities:**


@@ -68,6 +68,8 @@
- 📝 Calculate score and determine module's max tier for weighted global score
- ƒ **_generate_artifacts** (`Function`) `[CRITICAL]`
- 📝 Writes output files with tier-based compliance data.
- ƒ **_print_agent_report** (`Function`)
- 📝 Prints a JSON report optimized for AI agent orchestration and control.
- ƒ **_generate_report** (`Function`) `[CRITICAL]`
- 📝 Generates the Markdown compliance report with severity levels.
- ƒ **_collect_issues** (`Function`)
@@ -84,6 +86,8 @@
- 📝 Flattens entity tree for easier grouping.
- ƒ **to_dict** (`Function`) `[TRIVIAL]`
- 📝 Auto-detected function (orphan)
- ƒ **collect_recursive** (`Function`) `[TRIVIAL]`
- 📝 Auto-detected function (orphan)
- 📦 **DashboardTypes** (`Module`) `[TRIVIAL]`
- 📝 TypeScript interfaces for Dashboard entities
- 🏗️ Layer: Domain
@@ -1205,6 +1209,8 @@
- 📝 Fetches the list of environments from the API.
- ƒ **fetchDashboards** (`Function`) `[CRITICAL]`
- 📝 Fetches dashboards for the selected source environment.
- ▦ **ReactiveDashboardFetch** (`Block`) `[CRITICAL]`
- 📝 Automatically fetch dashboards when the source environment is changed.
- ƒ **fetchDatabases** (`Function`) `[CRITICAL]`
- 📝 Fetches databases from both environments and gets suggestions.
- ƒ **handleMappingUpdate** (`Function`) `[CRITICAL]`
@@ -1213,15 +1219,25 @@
- 📝 Opens the log viewer for a specific task.
- ƒ **handlePasswordPrompt** (`Function`) `[CRITICAL]`
- 📝 Reactive logic to show password prompt when a task is awaiting input.
- ▦ **ReactivePasswordPrompt** (`Block`) `[CRITICAL]`
- 📝 Monitor selected task for input requests and trigger password prompt.
- ƒ **handleResumeMigration** (`Function`) `[CRITICAL]`
- 📝 Resumes a migration task with provided passwords.
- ƒ **startMigration** (`Function`) `[CRITICAL]`
- 📝 Starts the migration process.
- 📝 Initiates the migration process by sending the selection to the backend.
- ƒ **startDryRun** (`Function`) `[CRITICAL]`
- 📝 Builds pre-flight diff and risk summary without applying migration.
- 📝 Performs a dry-run migration to identify potential risks and changes.
- ▦ **MigrationDashboardView** (`Block`) `[CRITICAL]`
- 📝 Render migration configuration controls, action CTAs, dry-run results, and modal entry points.
- ▦ **MigrationHeader** (`Block`) `[CRITICAL]`
- ▦ **TaskHistorySection** (`Block`) `[CRITICAL]`
- ▦ **ActiveTaskSection** (`Block`) `[CRITICAL]`
- ▦ **EnvironmentSelectionSection** (`Block`) `[CRITICAL]`
- 🧩 **DashboardSelectionSection** (`Component`) `[CRITICAL]`
- ▦ **MigrationOptionsSection** (`Block`) `[CRITICAL]`
- ▦ **DryRunResultsSection** (`Block`) `[CRITICAL]`
- ▦ **MigrationModals** (`Block`) `[CRITICAL]`
- 📝 Render overlay components for log viewing and password entry.
- ▦ **MappingsPageScript** (`Block`) `[CRITICAL]`
- 📝 Define imports, state, and handlers that drive migration mappings page FSM.
- 🔗 CALLS -> `fetchEnvironments`

@@ -1,5 +1,5 @@
---
description: Maintain semantic integrity via multi-agent delegation. Analyzes codebase, delegates markup tasks to Semantic Engineer, verifies via Reviewer Agent, and reports status.
description: Maintain semantic integrity by generating maps and auditing compliance reports.
---

## User Input
@@ -12,62 +12,63 @@ You **MUST** consider the user input before proceeding (if not empty).

## Goal

Ensure the codebase 100% adheres to the semantic standards defined in `.ai/standards/semantics.md` (GRACE-Poly Protocol). You are the **Manager/Supervisor**. You do not write code. You manage the queue, delegate files to the Semantic Engineer, audit their work via the Reviewer Agent, and commit successful changes.
Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.

## Operating Constraints

1. **ROLE: Orchestrator**: High-level coordination ONLY. Do not output raw code diffs yourself.
2. **DELEGATION PATTERN**: Strict `Orchestrator -> Engineer -> Reviewer -> Orchestrator` loop.
3. **FAIL-FAST METRICS**: If the Reviewer Agent rejects a file 3 times in a row, drop the file from the current queue and mark it as `[HUMAN_INTERVENTION_REQUIRED]`.
4. **TIER AWARENESS**: CRITICAL files MUST be processed first. A failure in a CRITICAL file blocks the entire pipeline.
1. **ROLE: Orchestrator**: You are responsible for the high-level coordination of semantic maintenance.
2. **STRICT ADHERENCE**: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
3. **NON-DESTRUCTIVE**: Do not remove existing code logic; only add or update semantic annotations.
4. **TIER AWARENESS**: Prioritize CRITICAL and STANDARD modules for compliance fixes.
5. **NO PSEUDO-CONTRACTS (CRITICAL)**: You are STRICTLY FORBIDDEN from using automated scripts (e.g., Python/Bash/sed) to mechanically inject boilerplate, placeholders, or "pseudo-contracts" (such as `# @PURPOSE: Semantic contract placeholder.` or `# @PRE: Inputs satisfy function contract.`) merely to artificially inflate the compliance score. Every semantic tag, anchor, and contract you add MUST reflect a genuine, deep understanding of the specific code's actual logic and business requirements. Automated "stubbing" of semantics is classified as codebase corruption.

## Execution Steps

### 1. Generate Semantic State (Analyze)
Run the generator script to map the current reality:
### 1. Generate Semantic Map

Run the generator script from the repository root with the agent report option:

```bash
python3 generate_semantic_map.py
python3 generate_semantic_map.py --agent-report
```
Parse the output (Global Score, Critical Parsing Errors, Files with Score < 0.7).

### 2. Formulate Task Queue
Create an execution queue based on the report. Priority:
- **Priority 1 (Blockers)**: Files with "Critical Parsing Errors" (unclosed `[/DEF]` anchors).
- **Priority 2 (Tier 1)**: `CRITICAL` tier modules missing mandatory tags (`@PRE`, `@POST`, `belief_scope`).
- **Priority 3 (Tier 2)**: `STANDARD` modules with missing graph relations (`@RELATION`).
### 2. Analyze Compliance Status

### 3. The Delegation Loop (For each file in the queue)
For every target file, execute this exact sequence:
**Parse the JSON output to identify**:
- `global_score`: The overall compliance percentage.
- `critical_parsing_errors_count`: Number of Priority 1 blockers.
- `priority_2_tier1_critical_missing_mandatory_tags_files`: Number of CRITICAL files needing metadata.
- `targets`: Status of key architectural files.
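As a minimal sketch, the fields above can be extracted from the report like this (the payload below is a hypothetical example; the real key names and shapes come from `generate_semantic_map.py` and may differ):

```python
import json

# Hypothetical --agent-report payload; the keys mirror the list above
# but are illustrative assumptions, not the verified schema.
raw = """
{
  "global_score": 0.82,
  "critical_parsing_errors_count": 2,
  "priority_2_tier1_critical_missing_mandatory_tags_files": 5,
  "targets": {"backend/core/config_manager.py": "OK"}
}
"""

report = json.loads(raw)
blockers = report["critical_parsing_errors_count"]

# Any Priority 1 blocker fails the run, regardless of the global score.
status = "FAIL" if blockers > 0 else "PASS"
print(f"{status} (score={report['global_score']:.0%})")
```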

* **Step 3A (Delegate to Worker):** Send the file path and the specific violation from the report to the **Semantic Markup Agent (Engineer)**.
  *Prompt*: `"Fix semantic violations in [FILE]. Current issues: [ISSUES]. Apply GRACE-Poly standards without changing business logic."`
* **Step 3B (Delegate to Auditor):** Once the Engineer returns the modified file, send it to the **Reviewer Agent (Auditor)**.
  *Prompt*: `"Verify GRACE-Poly compliance for [FILE]. Check for paired [DEF] anchors, complete contracts, and belief_scope usage. Return PASS or FAIL with specific line errors."`
* **Step 3C (Evaluate):**
  * If Auditor returns `PASS`: Apply the diff to the codebase. Move to the next file.
  * If Auditor returns `FAIL`: Send the Auditor's error report back to the Engineer (Step 3A). Repeat max 3 times.
### 3. Audit Critical Issues

### 4. Verification
Once the queue is empty, re-run `python3 generate_semantic_map.py` to prove the metrics have improved.
Read the latest report and extract:
- **Critical Parsing Errors**: Unclosed anchors or mismatched tags.
- **Low-Score Files**: Files with score < 0.7 or marked with 🔴.
- **Missing Mandatory Tags**: Specifically for CRITICAL tier modules.

## Output Format
### 4. Formulate Remediation Plan

Return a structured summary of the operation:
Create a list of files requiring immediate attention:
1. **Priority 1**: Fix all "Critical Parsing Errors" (unclosed anchors).
2. **Priority 2**: Add missing mandatory tags for CRITICAL modules.
3. **Priority 3**: Improve coverage for STANDARD modules.
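Sketched in Python, the three-level ordering above might look like this (the per-file fields are illustrative assumptions, not the actual report structure):

```python
# Illustrative per-file records; field names are hypothetical.
files = [
    {"path": "utils.py",  "tier": "STANDARD", "parse_error": False, "missing_tags": True},
    {"path": "engine.py", "tier": "CRITICAL", "parse_error": True,  "missing_tags": False},
    {"path": "routes.py", "tier": "CRITICAL", "parse_error": False, "missing_tags": True},
]

def priority(f: dict) -> int:
    """Map a file record to its remediation priority (lower sorts first)."""
    if f["parse_error"]:
        return 1  # Priority 1: critical parsing errors (unclosed anchors)
    if f["tier"] == "CRITICAL" and f["missing_tags"]:
        return 2  # Priority 2: CRITICAL modules missing mandatory tags
    return 3      # Priority 3: coverage improvements for STANDARD modules

queue = sorted(files, key=priority)
print([f["path"] for f in queue])  # ['engine.py', 'routes.py', 'utils.py']
```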

```text
=== GRACE SEMANTIC ORCHESTRATION REPORT ===
Initial Global Score: [X]%
Final Global Score: [Y]%
Status: [PASS / BLOCKED]
### 5. Execute Fixes (Optional/Handoff)

Files Processed:
1. [file_path] - [PASS (1 attempt) | PASS (2 attempts) | FAILED]
2. ...
If $ARGUMENTS contains "fix" or "apply":
- For each target file, use `read_file` to get context.
- Apply semantic fixes using `apply_diff`, preserving all code logic.
- Re-run `python3 generate_semantic_map.py --agent-report` to verify the fix.

## Output

Provide a summary of the semantic state:
- **Global Score**: [X]%
- **Status**: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
- **Top Issues**: List top 3-5 files needing attention.
- **Action Taken**: Summary of maps generated or fixes applied.

Escalations (Human Intervention Required):
- [file_path]: Failed auditor review 3 times. Reason: [Last Auditor Note].
```
## Context

$ARGUMENTS
```

@@ -26,7 +26,7 @@ from ...dependencies import get_config_manager, get_task_manager, has_permission
from ...core.database import get_db
from ...models.dashboard import DashboardMetadata, DashboardSelection
from ...core.superset_client import SupersetClient
from ...core.logger import belief_scope
from ...core.logger import logger, belief_scope
from ...core.migration.dry_run_orchestrator import MigrationDryRunService
from ...core.mapping_service import IdMappingService
from ...models.mapping import ResourceMapping
@@ -46,13 +46,17 @@ async def get_dashboards(
    _ = Depends(has_permission("plugin:migration", "EXECUTE"))
):
    with belief_scope("get_dashboards", f"env_id={env_id}"):
        logger.reason(f"Fetching dashboards for environment: {env_id}")
        environments = config_manager.get_environments()
        env = next((e for e in environments if e.id == env_id), None)

        if not env:
            logger.explore(f"Environment {env_id} not found in configuration")
            raise HTTPException(status_code=404, detail="Environment not found")

        client = SupersetClient(env)
        dashboards = client.get_dashboards_summary()
        logger.reflect(f"Retrieved {len(dashboards)} dashboards from {env_id}")
        return dashboards
# [/DEF:get_dashboards:Function]

@@ -70,30 +74,29 @@ async def execute_migration(
    _ = Depends(has_permission("plugin:migration", "EXECUTE"))
):
    with belief_scope("execute_migration"):
        logger.reason(f"Initiating migration from {selection.source_env_id} to {selection.target_env_id}")

        # Validate environments exist
        environments = config_manager.get_environments()
        env_ids = {e.id for e in environments}
        if selection.source_env_id not in env_ids or selection.target_env_id not in env_ids:
            raise HTTPException(status_code=400, detail="Invalid source or target environment")

        # Create migration task with debug logging
        from ...core.logger import logger
        if selection.source_env_id not in env_ids or selection.target_env_id not in env_ids:
            logger.explore("Invalid environment selection", extra={"source": selection.source_env_id, "target": selection.target_env_id})
            raise HTTPException(status_code=400, detail="Invalid source or target environment")

        # Include replace_db_config and fix_cross_filters in the task parameters
        task_params = selection.dict()
        task_params['replace_db_config'] = selection.replace_db_config
        task_params['fix_cross_filters'] = selection.fix_cross_filters

        logger.info(f"Creating migration task with params: {task_params}")
        logger.info(f"Available environments: {env_ids}")
        logger.info(f"Source env: {selection.source_env_id}, Target env: {selection.target_env_id}")
        logger.reason(f"Creating migration task with {len(selection.selected_ids)} dashboards")

        try:
            task = await task_manager.create_task("superset-migration", task_params)
            logger.info(f"Task created successfully: {task.id}")
            logger.reflect(f"Migration task created: {task.id}")
            return {"task_id": task.id, "message": "Migration initiated"}
        except Exception as e:
            logger.error(f"Task creation failed: {e}")
            logger.explore(f"Task creation failed: {e}")
            raise HTTPException(status_code=500, detail=f"Failed to create migration task: {str(e)}")
# [/DEF:execute_migration:Function]

@@ -112,28 +115,40 @@ async def dry_run_migration(
    _ = Depends(has_permission("plugin:migration", "EXECUTE"))
):
    with belief_scope("dry_run_migration"):
        logger.reason(f"Starting dry run: {selection.source_env_id} -> {selection.target_env_id}")

        environments = config_manager.get_environments()
        env_map = {env.id: env for env in environments}
        source_env = env_map.get(selection.source_env_id)
        target_env = env_map.get(selection.target_env_id)

        if not source_env or not target_env:
            logger.explore("Invalid environment selection for dry run")
            raise HTTPException(status_code=400, detail="Invalid source or target environment")

        if selection.source_env_id == selection.target_env_id:
            logger.explore("Source and target environments are identical")
            raise HTTPException(status_code=400, detail="Source and target environments must be different")

        if not selection.selected_ids:
            logger.explore("No dashboards selected for dry run")
            raise HTTPException(status_code=400, detail="No dashboards selected for dry run")

        service = MigrationDryRunService()
        source_client = SupersetClient(source_env)
        target_client = SupersetClient(target_env)

        try:
            return service.run(
            result = service.run(
                selection=selection,
                source_client=source_client,
                target_client=target_client,
                db=db,
            )
            logger.reflect("Dry run analysis complete")
            return result
        except ValueError as exc:
            logger.explore(f"Dry run orchestrator failed: {exc}")
            raise HTTPException(status_code=500, detail=str(exc)) from exc
# [/DEF:dry_run_migration:Function]


@@ -32,7 +32,13 @@ class AuthRepository:
    # @DATA_CONTRACT: Input[Session] -> Output[None]
    def __init__(self, db: Session):
        with belief_scope("AuthRepository.__init__"):
            if not isinstance(db, Session):
                logger.explore("Invalid session provided to AuthRepository", extra={"type": type(db)})
                raise TypeError("db must be an instance of sqlalchemy.orm.Session")

            logger.reason("Binding AuthRepository to database session")
            self.db = db
            logger.reflect("AuthRepository initialized")
    # [/DEF:__init__:Function]

    # [DEF:get_user_by_username:Function]
@@ -43,7 +49,17 @@ class AuthRepository:
    # @DATA_CONTRACT: Input[str] -> Output[Optional[User]]
    def get_user_by_username(self, username: str) -> Optional[User]:
        with belief_scope("AuthRepository.get_user_by_username"):
            return self.db.query(User).filter(User.username == username).first()
            if not username or not isinstance(username, str):
                raise ValueError("username must be a non-empty string")

            logger.reason(f"Querying user by username: {username}")
            user = self.db.query(User).filter(User.username == username).first()

            if user:
                logger.reflect(f"User found: {username}")
            else:
                logger.explore(f"User not found: {username}")
            return user
    # [/DEF:get_user_by_username:Function]

    # [DEF:get_user_by_id:Function]
@@ -54,7 +70,17 @@ class AuthRepository:
    # @DATA_CONTRACT: Input[str] -> Output[Optional[User]]
    def get_user_by_id(self, user_id: str) -> Optional[User]:
        with belief_scope("AuthRepository.get_user_by_id"):
            return self.db.query(User).filter(User.id == user_id).first()
            if not user_id or not isinstance(user_id, str):
                raise ValueError("user_id must be a non-empty string")

            logger.reason(f"Querying user by ID: {user_id}")
            user = self.db.query(User).filter(User.id == user_id).first()

            if user:
                logger.reflect(f"User found by ID: {user_id}")
            else:
                logger.explore(f"User not found by ID: {user_id}")
            return user
    # [/DEF:get_user_by_id:Function]

    # [DEF:get_role_by_name:Function]
@@ -76,10 +102,15 @@ class AuthRepository:
    # @DATA_CONTRACT: Input[User] -> Output[None]
    def update_last_login(self, user: User):
        with belief_scope("AuthRepository.update_last_login"):
            if not isinstance(user, User):
                raise TypeError("user must be an instance of User")

            from datetime import datetime
            logger.reason(f"Updating last login for user: {user.username}")
            user.last_login = datetime.utcnow()
            self.db.add(user)
            self.db.commit()
            logger.reflect(f"Last login updated and committed for user: {user.username}")
    # [/DEF:update_last_login:Function]

    # [DEF:get_role_by_id:Function]
@@ -144,9 +175,14 @@ class AuthRepository:
        preference: UserDashboardPreference,
    ) -> UserDashboardPreference:
        with belief_scope("AuthRepository.save_user_dashboard_preference"):
            if not isinstance(preference, UserDashboardPreference):
                raise TypeError("preference must be an instance of UserDashboardPreference")

            logger.reason(f"Saving dashboard preference for user: {preference.user_id}")
            self.db.add(preference)
            self.db.commit()
            self.db.refresh(preference)
            logger.reflect(f"Dashboard preference saved and refreshed for user: {preference.user_id}")
            return preference
    # [/DEF:save_user_dashboard_preference:Function]


@@ -36,18 +36,23 @@ class ConfigManager:
    # @SIDE_EFFECT: Reads config sources and updates logging configuration.
    # @DATA_CONTRACT: Input(str config_path) -> Output(None; self.config: AppConfig)
    def __init__(self, config_path: str = "config.json"):
        with belief_scope("__init__"):
            assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"
        with belief_scope("ConfigManager.__init__"):
            if not isinstance(config_path, str) or not config_path:
                logger.explore("Invalid config_path provided", extra={"path": config_path})
                raise ValueError("config_path must be a non-empty string")

            logger.info(f"[ConfigManager][Entry] Initializing with legacy path {config_path}")
            logger.reason(f"Initializing ConfigManager with legacy path: {config_path}")

            self.config_path = Path(config_path)
            self.config: AppConfig = self._load_config()

            configure_logger(self.config.settings.logging)
            assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"

            logger.info("[ConfigManager][Exit] Initialized")
            if not isinstance(self.config, AppConfig):
                logger.explore("Config loading resulted in invalid type", extra={"type": type(self.config)})
                raise TypeError("self.config must be an instance of AppConfig")

            logger.reflect("ConfigManager initialization complete")
    # [/DEF:__init__:Function]

    # [DEF:_default_config:Function]
@@ -104,20 +109,23 @@ class ConfigManager:
    # @SIDE_EFFECT: Database read/write, possible migration write, logging.
    # @DATA_CONTRACT: Input(None) -> Output(AppConfig)
    def _load_config(self) -> AppConfig:
        with belief_scope("_load_config"):
        with belief_scope("ConfigManager._load_config"):
            session: Session = SessionLocal()
            try:
                record = self._get_record(session)
                if record and record.payload:
                    logger.info("[_load_config][Coherence:OK] Configuration loaded from database")
                    return AppConfig(**record.payload)
                    logger.reason("Configuration found in database")
                    config = AppConfig(**record.payload)
                    logger.reflect("Database configuration validated")
                    return config

                logger.info("[_load_config][Action] No database config found, migrating legacy config")
                logger.reason("No database config found, initiating legacy migration")
                config = self._load_from_legacy_file()
                self._save_config_to_db(config, session=session)
                logger.reflect("Legacy configuration migrated to database")
                return config
            except Exception as e:
                logger.error(f"[_load_config][Coherence:Failed] Error loading config from DB: {e}")
                logger.explore(f"Error loading config from DB: {e}")
                return self._default_config()
            finally:
                session.close()
@@ -130,8 +138,9 @@ class ConfigManager:
    # @SIDE_EFFECT: Database insert/update, commit/rollback, logging.
    # @DATA_CONTRACT: Input(AppConfig, Optional[Session]) -> Output(None)
    def _save_config_to_db(self, config: AppConfig, session: Optional[Session] = None):
        with belief_scope("_save_config_to_db"):
            assert isinstance(config, AppConfig), "config must be an instance of AppConfig"
        with belief_scope("ConfigManager._save_config_to_db"):
            if not isinstance(config, AppConfig):
                raise TypeError("config must be an instance of AppConfig")

            owns_session = session is None
            db = session or SessionLocal()
@@ -139,15 +148,17 @@ class ConfigManager:
                record = self._get_record(db)
                payload = config.model_dump()
                if record is None:
                    logger.reason("Creating new global configuration record")
                    record = AppConfigRecord(id="global", payload=payload)
                    db.add(record)
                else:
                    logger.reason("Updating existing global configuration record")
                    record.payload = payload
                db.commit()
                logger.info("[_save_config_to_db][Action] Configuration saved to database")
                logger.reflect("Configuration successfully committed to database")
            except Exception as e:
                db.rollback()
                logger.error(f"[_save_config_to_db][Coherence:Failed] Failed to save: {e}")
                logger.explore(f"Failed to save configuration: {e}")
                raise
            finally:
                if owns_session:
@@ -183,14 +194,15 @@ class ConfigManager:
    # @SIDE_EFFECT: Mutates self.config, DB write, logger reconfiguration, logging.
    # @DATA_CONTRACT: Input(GlobalSettings) -> Output(None)
    def update_global_settings(self, settings: GlobalSettings):
        with belief_scope("update_global_settings"):
            logger.info("[update_global_settings][Entry] Updating settings")
        with belief_scope("ConfigManager.update_global_settings"):
            if not isinstance(settings, GlobalSettings):
                raise TypeError("settings must be an instance of GlobalSettings")

            assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
            logger.reason("Updating global settings and persisting")
            self.config.settings = settings
            self.save()
            configure_logger(settings.logging)
            logger.info("[update_global_settings][Exit] Settings updated")
            logger.reflect("Global settings updated and logger reconfigured")
    # [/DEF:update_global_settings:Function]

    # [DEF:validate_path:Function]
@@ -257,14 +269,15 @@ class ConfigManager:
    # @SIDE_EFFECT: Mutates environment list, DB write, logging.
    # @DATA_CONTRACT: Input(Environment) -> Output(None)
    def add_environment(self, env: Environment):
        with belief_scope("add_environment"):
            logger.info(f"[add_environment][Entry] Adding environment {env.id}")
            assert isinstance(env, Environment), "env must be an instance of Environment"
        with belief_scope("ConfigManager.add_environment"):
            if not isinstance(env, Environment):
                raise TypeError("env must be an instance of Environment")

            logger.reason(f"Adding/Updating environment: {env.id}")
            self.config.environments = [e for e in self.config.environments if e.id != env.id]
            self.config.environments.append(env)
            self.save()
            logger.info("[add_environment][Exit] Environment added")
            logger.reflect(f"Environment {env.id} persisted")
    # [/DEF:add_environment:Function]

    # [DEF:update_environment:Function]
@@ -274,22 +287,25 @@ class ConfigManager:
    # @SIDE_EFFECT: May mutate environment list, DB write, logging.
    # @DATA_CONTRACT: Input(str env_id, Environment updated_env) -> Output(bool)
    def update_environment(self, env_id: str, updated_env: Environment) -> bool:
        with belief_scope("update_environment"):
            logger.info(f"[update_environment][Entry] Updating {env_id}")
            assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
            assert isinstance(updated_env, Environment), "updated_env must be an instance of Environment"
        with belief_scope("ConfigManager.update_environment"):
            if not env_id or not isinstance(env_id, str):
                raise ValueError("env_id must be a non-empty string")
            if not isinstance(updated_env, Environment):
                raise TypeError("updated_env must be an instance of Environment")

            logger.reason(f"Attempting to update environment: {env_id}")
            for i, env in enumerate(self.config.environments):
                if env.id == env_id:
                    if updated_env.password == "********":
                        logger.reason("Preserving existing password for masked update")
                        updated_env.password = env.password

                    self.config.environments[i] = updated_env
                    self.save()
                    logger.info(f"[update_environment][Coherence:OK] Updated {env_id}")
                    logger.reflect(f"Environment {env_id} updated and saved")
                    return True

            logger.warning(f"[update_environment][Coherence:Failed] Environment {env_id} not found")
            logger.explore(f"Environment {env_id} not found for update")
            return False
    # [/DEF:update_environment:Function]

@@ -300,18 +316,19 @@ class ConfigManager:
    # @SIDE_EFFECT: May mutate environment list, conditional DB write, logging.
    # @DATA_CONTRACT: Input(str env_id) -> Output(None)
    def delete_environment(self, env_id: str):
        with belief_scope("delete_environment"):
            logger.info(f"[delete_environment][Entry] Deleting {env_id}")
            assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
        with belief_scope("ConfigManager.delete_environment"):
            if not env_id or not isinstance(env_id, str):
                raise ValueError("env_id must be a non-empty string")

            logger.reason(f"Attempting to delete environment: {env_id}")
            original_count = len(self.config.environments)
            self.config.environments = [e for e in self.config.environments if e.id != env_id]

            if len(self.config.environments) < original_count:
                self.save()
                logger.info(f"[delete_environment][Action] Deleted {env_id}")
                logger.reflect(f"Environment {env_id} deleted and configuration saved")
            else:
                logger.warning(f"[delete_environment][Coherence:Failed] Environment {env_id} not found")
                logger.explore(f"Environment {env_id} not found for deletion")
    # [/DEF:delete_environment:Function]



@@ -38,7 +38,9 @@ class MigrationEngine:
|
||||
# @PARAM: mapping_service (Optional[IdMappingService]) - Used for resolving target environment integer IDs.
|
||||
def __init__(self, mapping_service: Optional[IdMappingService] = None):
|
||||
with belief_scope("MigrationEngine.__init__"):
|
||||
logger.reason("Initializing MigrationEngine")
|
||||
self.mapping_service = mapping_service
|
||||
logger.reflect("MigrationEngine initialized")
|
||||
# [/DEF:__init__:Function]
|
||||
|
||||
# [DEF:transform_zip:Function]
|
||||
@@ -59,12 +61,14 @@ class MigrationEngine:
|
||||
Transform a Superset export ZIP by replacing database UUIDs and optionally fixing cross-filters.
|
||||
"""
|
||||
with belief_scope("MigrationEngine.transform_zip"):
|
||||
logger.reason(f"Starting ZIP transformation: {zip_path} -> {output_path}")
|
||||
|
||||
with tempfile.TemporaryDirectory() as temp_dir_str:
|
||||
temp_dir = Path(temp_dir_str)
|
||||
|
||||
try:
|
||||
# 1. Extract
|
||||
logger.info(f"[MigrationEngine.transform_zip][Action] Extracting ZIP: {zip_path}")
|
||||
logger.reason(f"Extracting source archive to {temp_dir}")
|
||||
with zipfile.ZipFile(zip_path, 'r') as zf:
|
||||
zf.extractall(temp_dir)
|
||||
|
||||
@@ -72,33 +76,33 @@ class MigrationEngine:
                    dataset_files = list(temp_dir.glob("**/datasets/**/*.yaml")) + list(temp_dir.glob("**/datasets/*.yaml"))
                    dataset_files = list(set(dataset_files))

                    logger.info(f"[MigrationEngine.transform_zip][State] Found {len(dataset_files)} dataset files.")
                    logger.reason(f"Transforming {len(dataset_files)} dataset YAML files")
                    for ds_file in dataset_files:
                        logger.info(f"[MigrationEngine.transform_zip][Action] Transforming dataset: {ds_file}")
                        self._transform_yaml(ds_file, db_mapping)

                    # 2.5 Patch Cross-Filters (Dashboards)
                    if fix_cross_filters and self.mapping_service and target_env_id:
                    if fix_cross_filters:
                        if self.mapping_service and target_env_id:
                            dash_files = list(temp_dir.glob("**/dashboards/**/*.yaml")) + list(temp_dir.glob("**/dashboards/*.yaml"))
                            dash_files = list(set(dash_files))

                            logger.info(f"[MigrationEngine.transform_zip][State] Found {len(dash_files)} dashboard files for patching.")
                            logger.reason(f"Patching cross-filters for {len(dash_files)} dashboards")

                            # Gather all source UUID-to-ID mappings from the archive first
                            source_id_to_uuid_map = self._extract_chart_uuids_from_archive(temp_dir)

                            for dash_file in dash_files:
                                logger.info(f"[MigrationEngine.transform_zip][Action] Patching dashboard: {dash_file}")
                                self._patch_dashboard_metadata(dash_file, target_env_id, source_id_to_uuid_map)
                        else:
                            logger.explore("Cross-filter patching requested but mapping service or target_env_id is missing")

                    # 3. Re-package
                    logger.info(f"[MigrationEngine.transform_zip][Action] Re-packaging ZIP to: {output_path} (strip_databases={strip_databases})")
                    logger.reason(f"Re-packaging transformed archive (strip_databases={strip_databases})")
                    with zipfile.ZipFile(output_path, 'w', zipfile.ZIP_DEFLATED) as zf:
                        for root, dirs, files in os.walk(temp_dir):
                            rel_root = Path(root).relative_to(temp_dir)

                            if strip_databases and "databases" in rel_root.parts:
                                logger.info(f"[MigrationEngine.transform_zip][Action] Skipping file in databases directory: {rel_root}")
                                continue

                            for file in files:
@@ -106,9 +110,10 @@ class MigrationEngine:
                                arcname = file_path.relative_to(temp_dir)
                                zf.write(file_path, arcname)

                    logger.reflect("ZIP transformation completed successfully")
                    return True
                except Exception as e:
                    logger.error(f"[MigrationEngine.transform_zip][Coherence:Failed] Error transforming ZIP: {e}")
                    logger.explore(f"Error transforming ZIP: {e}")
                    return False
    # [/DEF:transform_zip:Function]
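The re-packaging step above (walk the extracted tree, compute each file's archive-relative name, and skip anything under `databases/`) can be sketched as a standalone helper. This is an illustrative sketch, not the engine's API; the function name and defaults are assumptions:

```python
import os
import zipfile
from pathlib import Path

def repackage(src_dir: Path, output_zip: Path, strip_databases: bool = True) -> list:
    """Re-zip a directory tree, optionally dropping any 'databases' subtree."""
    written = []
    with zipfile.ZipFile(output_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            rel_root = Path(root).relative_to(src_dir)
            # Same guard the engine uses: skip everything under databases/
            if strip_databases and "databases" in rel_root.parts:
                continue
            for name in files:
                file_path = Path(root) / name
                arcname = file_path.relative_to(src_dir)
                zf.write(file_path, str(arcname))
                written.append(str(arcname))
    return written
```

Checking `rel_root.parts` (rather than substring-matching the path) avoids accidentally skipping files whose names merely contain "databases".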
@@ -122,19 +127,23 @@ class MigrationEngine:
    # @DATA_CONTRACT: Input[(Path file_path, Dict[str,str] db_mapping)] -> Output[None]
    def _transform_yaml(self, file_path: Path, db_mapping: Dict[str, str]):
        with belief_scope("MigrationEngine._transform_yaml"):
            if not file_path.exists():
                logger.explore(f"YAML file not found: {file_path}")
                return

            with open(file_path, 'r') as f:
                data = yaml.safe_load(f)

            if not data:
                return

            # Superset dataset YAML structure:
            # database_uuid: ...
            source_uuid = data.get('database_uuid')
            if source_uuid in db_mapping:
                logger.reason(f"Replacing database UUID in {file_path.name}")
                data['database_uuid'] = db_mapping[source_uuid]
                with open(file_path, 'w') as f:
                    yaml.dump(data, f)
                logger.reflect(f"Database UUID patched in {file_path.name}")
    # [/DEF:_transform_yaml:Function]
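Setting the YAML round-trip aside, the heart of `_transform_yaml` is a dict-level UUID swap. A minimal sketch under assumed names (`remap_database_uuid` is illustrative, not part of the engine):

```python
def remap_database_uuid(data: dict, db_mapping: dict) -> bool:
    """Swap the dataset's database_uuid if a mapping exists; report whether anything changed."""
    source_uuid = data.get("database_uuid")
    if source_uuid in db_mapping:
        data["database_uuid"] = db_mapping[source_uuid]
        return True
    return False
```

Returning a changed-flag lets the caller skip the write-back when the document is untouched, which the engine approximates by only re-serializing inside the `if source_uuid in db_mapping` branch.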

    # [DEF:_extract_chart_uuids_from_archive:Function]
@@ -176,6 +185,9 @@ class MigrationEngine:
    def _patch_dashboard_metadata(self, file_path: Path, target_env_id: str, source_map: Dict[int, str]):
        with belief_scope("MigrationEngine._patch_dashboard_metadata"):
            try:
                if not file_path.exists():
                    return

                with open(file_path, 'r') as f:
                    data = yaml.safe_load(f)

@@ -186,18 +198,13 @@ class MigrationEngine:
                if not metadata_str:
                    return

                metadata = json.loads(metadata_str)
                modified = False

                # We need to deeply traverse and replace. For MVP, string replacement over the raw JSON is an option,
                # but careful dict traversal is safer.

                # Fetch target UUIDs for everything we know:
                uuids_needed = list(source_map.values())
                logger.reason(f"Resolving {len(uuids_needed)} remote IDs for dashboard metadata patching")
                target_ids = self.mapping_service.get_remote_ids_batch(target_env_id, ResourceType.CHART, uuids_needed)

                if not target_ids:
                    logger.info("[MigrationEngine._patch_dashboard_metadata][Reflect] No remote target IDs found in mapping database.")
                    logger.reflect("No remote target IDs found in mapping database for this dashboard.")
                    return

                # Map Source Int -> Target Int
@@ -210,21 +217,16 @@ class MigrationEngine:
                        missing_targets.append(s_id)

                if missing_targets:
                    logger.warning(f"[MigrationEngine._patch_dashboard_metadata][Coherence:Recoverable] Missing target IDs for source IDs: {missing_targets}. Cross-filters for these IDs might break.")
                    logger.explore(f"Missing target IDs for source IDs: {missing_targets}. Cross-filters might break.")

                if not source_to_target:
                    logger.info("[MigrationEngine._patch_dashboard_metadata][Reflect] No source IDs matched remotely. Skipping patch.")
                    logger.reflect("No source IDs matched remotely. Skipping patch.")
                    return

                # Complex metadata traversal would go here (e.g. for native_filter_configuration)
                # We use regex replacement over the string for safety over unknown nested dicts.

                logger.reason(f"Patching {len(source_to_target)} ID references in json_metadata")
                new_metadata_str = metadata_str

                # Replace chartId and datasetId assignments explicitly.
                # Pattern: "datasetId": 42 or "chartId": 42
                for s_id, t_id in source_to_target.items():
                    # Replace in native_filter_configuration targets
                    new_metadata_str = re.sub(r'("datasetId"\s*:\s*)' + str(s_id) + r'(\b)', r'\g<1>' + str(t_id) + r'\g<2>', new_metadata_str)
                    new_metadata_str = re.sub(r'("chartId"\s*:\s*)' + str(s_id) + r'(\b)', r'\g<1>' + str(t_id) + r'\g<2>', new_metadata_str)

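The two `re.sub` calls above can be factored into one helper: `\b` keeps 42 from matching inside 420, and `\g<1>` re-emits the captured key prefix unchanged. A sketch (the helper name is illustrative):

```python
import re

def remap_ids(metadata_str: str, source_to_target: dict) -> str:
    """Rewrite "chartId"/"datasetId" integer values inside raw JSON metadata."""
    out = metadata_str
    for s_id, t_id in source_to_target.items():
        for key in ("datasetId", "chartId"):
            # \b stops 42 from matching the prefix of 420; \g<1> keeps the key intact.
            pattern = r'("' + key + r'"\s*:\s*)' + str(s_id) + r'\b'
            out = re.sub(pattern, r'\g<1>' + str(t_id), out)
    return out
```

Note that replacements apply sequentially, so overlapping ID spaces (mapping 1 to 2 while 2 maps to 3) can chain through an already-rewritten value; the engine's in-place loop has the same property.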
@@ -233,10 +235,10 @@ class MigrationEngine:

                with open(file_path, 'w') as f:
                    yaml.dump(data, f)
                logger.info(f"[MigrationEngine._patch_dashboard_metadata][Reason] Re-serialized modified JSON metadata for dashboard.")
                logger.reflect(f"Dashboard metadata patched and saved: {file_path.name}")

            except Exception as e:
                logger.error(f"[MigrationEngine._patch_dashboard_metadata][Coherence:Failed] Metadata patch failed: {e}")
                logger.explore(f"Metadata patch failed for {file_path.name}: {e}")

    # [/DEF:_patch_dashboard_metadata:Function]


@@ -15,10 +15,10 @@
@RELATION: [BINDS_TO] ->[frontend/src/components/TaskLogViewer.svelte]
@RELATION: [BINDS_TO] ->[frontend/src/components/PasswordPrompt.svelte]
@INVARIANT: Migration start is blocked unless source and target environments are selected, distinct, and at least one dashboard is selected.
@UX_STATE: Idle -> User configures source/target environments, dashboard selection, and migration options.
@UX_STATE: Loading -> Environment/database/dry-run fetch operations disable relevant actions and show progress text.
@UX_STATE: Error -> Error banner/prompt message is shown while keeping user input intact for correction.
@UX_STATE: Success -> Dry-run summary or active task view is rendered after successful API operations.
@UX_STATE: [Idle] -> User configures source/target environments, dashboard selection, and migration options.
@UX_STATE: [Loading] -> Environment/database/dry-run fetch operations disable relevant actions and show progress text.
@UX_STATE: [Error] -> Error banner/prompt message is shown while keeping user input intact for correction.
@UX_STATE: [Success] -> Dry-run summary or active task view is rendered after successful API operations.
@UX_FEEDBACK: Inline error banner, disabled CTA states, loading labels, dry-run summary cards, modal dialogs.
@UX_RECOVERY: User can adjust selection, refresh databases, retry dry-run/migration, resume task with passwords, or cancel modal flow.
@UX_REACTIVITY: State transitions rely on Svelte reactive bindings and store subscription to selectedTask.
@@ -102,9 +102,12 @@
  */
  async function fetchEnvironments() {
    return belief_scope("fetchEnvironments", async () => {
      console.info("[fetchEnvironments][REASON] Initializing environment list for selection");
      try {
        environments = await api.getEnvironmentsList();
        console.info("[fetchEnvironments][REFLECT] Environments loaded", { count: environments.length });
      } catch (e) {
        console.error("[fetchEnvironments][EXPLORE] Failed to fetch environments", e);
        error = e.message;
      } finally {
        loading = false;
@@ -122,10 +125,13 @@
  */
  async function fetchDashboards(envId: string) {
    return belief_scope("fetchDashboards", async () => {
      console.info("[fetchDashboards][REASON] Fetching dashboards for environment", { envId });
      try {
        dashboards = await api.requestApi(`/environments/${envId}/dashboards`);
        selectedDashboardIds = []; // Reset selection when env changes
        console.info("[fetchDashboards][REFLECT] Dashboards loaded", { count: dashboards.length });
      } catch (e) {
        console.error("[fetchDashboards][EXPLORE] Failed to fetch dashboards", e);
        error = e.message;
        dashboards = [];
      }
@@ -135,8 +141,18 @@

  onMount(fetchEnvironments);

  // Reactive: fetch dashboards when source env changes
  $: if (sourceEnvId) fetchDashboards(sourceEnvId);
  // [DEF:ReactiveDashboardFetch:Block]
  /**
   * @PURPOSE: Automatically fetch dashboards when the source environment is changed.
   * @PRE: sourceEnvId is not empty.
   * @POST: fetchDashboards is called with the new sourceEnvId.
   * @UX_STATE: [Loading] -> Triggered when sourceEnvId changes.
   */
  $: if (sourceEnvId) {
    console.info("[ReactiveDashboardFetch][REASON] Source environment changed, fetching dashboards", { sourceEnvId });
    fetchDashboards(sourceEnvId);
  }
  // [/DEF:ReactiveDashboardFetch:Block]

  // [DEF:fetchDatabases:Function]
  /**
@@ -146,7 +162,11 @@
   */
  async function fetchDatabases() {
    return belief_scope("fetchDatabases", async () => {
      if (!sourceEnvId || !targetEnvId) return;
      if (!sourceEnvId || !targetEnvId) {
        console.warn("[fetchDatabases][EXPLORE] Missing environment IDs for database fetch");
        return;
      }
      console.info("[fetchDatabases][REASON] Fetching databases and suggestions for mapping", { sourceEnvId, targetEnvId });
      fetchingDbs = true;
      error = "";

@@ -167,7 +187,13 @@
        targetDatabases = tgt;
        mappings = maps;
        suggestions = sugs;
        console.info("[fetchDatabases][REFLECT] Databases and mappings loaded", {
          sourceCount: src.length,
          targetCount: tgt.length,
          mappingCount: maps.length
        });
      } catch (e) {
        console.error("[fetchDatabases][EXPLORE] Failed to fetch databases", e);
        error = e.message;
      } finally {
        fetchingDbs = false;
@@ -188,8 +214,12 @@
      const sDb = sourceDatabases.find((d) => d.uuid === sourceUuid);
      const tDb = targetDatabases.find((d) => d.uuid === targetUuid);

      if (!sDb || !tDb) return;
      if (!sDb || !tDb) {
        console.warn("[handleMappingUpdate][EXPLORE] Database not found for mapping", { sourceUuid, targetUuid });
        return;
      }

      console.info("[handleMappingUpdate][REASON] Updating database mapping", { sourceUuid, targetUuid });
      try {
        const savedMapping = await api.postApi("/mappings", {
          source_env_id: sourceEnvId,
@@ -204,7 +234,9 @@
          ...mappings.filter((m) => m.source_db_uuid !== sourceUuid),
          savedMapping,
        ];
        console.info("[handleMappingUpdate][REFLECT] Mapping saved successfully");
      } catch (e) {
        console.error("[handleMappingUpdate][EXPLORE] Failed to save mapping", e);
        error = e.message;
      }
    });
@@ -234,6 +266,13 @@
  // Ideally, TaskHistory or TaskRunner emits an event when input is needed.
  // Or we watch selectedTask.

  // [DEF:ReactivePasswordPrompt:Block]
  /**
   * @PURPOSE: Monitor selected task for input requests and trigger password prompt.
   * @PRE: $selectedTask is not null and status is AWAITING_INPUT.
   * @POST: showPasswordPrompt is set to true if input_request is database_password.
   * @UX_STATE: [AwaitingInput] -> Password prompt modal is displayed.
   */
  $: if (
    $selectedTask &&
    $selectedTask.status === "AWAITING_INPUT" &&
@@ -241,6 +280,7 @@
  ) {
    const req = $selectedTask.input_request;
    if (req.type === "database_password") {
      console.info("[ReactivePasswordPrompt][REASON] Task awaiting database passwords", { taskId: $selectedTask.id });
      passwordPromptDatabases = req.databases || [];
      passwordPromptErrorMessage = req.error_message || "";
      showPasswordPrompt = true;
@@ -251,6 +291,7 @@
    // showPasswordPrompt = false;
    // Actually, don't auto-close, let the user or success handler close it.
  }
  // [/DEF:ReactivePasswordPrompt:Block]
  // [/DEF:handlePasswordPrompt:Function]

  // [DEF:handleResumeMigration:Function]
@@ -278,31 +319,41 @@

  // [DEF:startMigration:Function]
  /**
   * @purpose Starts the migration process.
   * @pre sourceEnvId and targetEnvId must be set and different.
   * @post Migration task is started and selectedTask is updated.
   * @PURPOSE: Initiates the migration process by sending the selection to the backend.
   * @PRE: sourceEnvId and targetEnvId are set and different; at least one dashboard is selected.
   * @POST: A migration task is created and selectedTask store is updated.
   * @SIDE_EFFECT: Resets dryRunResult; updates error state on failure.
   * @UX_STATE: [Loading] -> [Success] or [Error]
   */
  async function startMigration() {
    return belief_scope("startMigration", async () => {
      if (!sourceEnvId || !targetEnvId) {
        console.warn("[startMigration][EXPLORE] Missing environment selection");
        error =
          $t.migration?.select_both_envs ||
          "Please select both source and target environments.";
        return;
      }
      if (sourceEnvId === targetEnvId) {
        console.warn("[startMigration][EXPLORE] Source and target environments are identical");
        error =
          $t.migration?.different_envs ||
          "Source and target environments must be different.";
        return;
      }
      if (selectedDashboardIds.length === 0) {
        console.warn("[startMigration][EXPLORE] No dashboards selected");
        error =
          $t.migration?.select_dashboards ||
          "Please select at least one dashboard to migrate.";
        return;
      }

      console.info("[startMigration][REASON] Initiating migration execution", {
        sourceEnvId,
        targetEnvId,
        dashboardCount: selectedDashboardIds.length
      });
      error = "";
      try {
        dryRunResult = null;
@@ -313,14 +364,9 @@
          replace_db_config: replaceDb,
          fix_cross_filters: fixCrossFilters,
        };
        console.log(
          `[MigrationDashboard][Action] Starting migration with selection:`,
          selection,
        );

        const result = await api.postApi("/migration/execute", selection);
        console.log(
          `[MigrationDashboard][Action] Migration started: ${result.task_id} - ${result.message}`,
        );
        console.info("[startMigration][REFLECT] Migration task created", { taskId: result.task_id });

        // Wait a brief moment for the backend to ensure the task is retrievable
        await new Promise((r) => setTimeout(r, 500));
@@ -329,12 +375,9 @@
        try {
          const task = await api.getTask(result.task_id);
          selectedTask.set(task);
          console.info("[startMigration][REFLECT] Task details fetched and store updated");
        } catch (fetchErr) {
          // Fallback: create a temporary task object to switch view immediately
          console.warn(
            $t.migration?.task_placeholder_warn ||
              "Could not fetch task details immediately, using placeholder.",
          );
          console.warn("[startMigration][EXPLORE] Could not fetch task details immediately, using placeholder", fetchErr);
          selectedTask.set({
            id: result.task_id,
            plugin_id: "superset-migration",
@@ -344,7 +387,7 @@
          });
        }
      } catch (e) {
        console.error(`[MigrationDashboard][Failure] Migration failed:`, e);
        console.error("[startMigration][EXPLORE] Migration initiation failed", e);
        error = e.message;
      }
    });
@@ -353,36 +396,38 @@

  // [DEF:startDryRun:Function]
  /**
   * @purpose Builds pre-flight diff and risk summary without applying migration.
   * @pre source/target environments and selected dashboards are valid.
   * @post dryRunResult is populated with backend response.
   * @UX_STATE: Idle -> Dry Run button is enabled when selection is valid.
   * @UX_STATE: Loading -> Dry Run button shows "Dry Run..." and stays disabled.
   * @UX_STATE: Error -> error banner is displayed and dryRunResult resets to null.
   * @PURPOSE: Performs a dry-run migration to identify potential risks and changes.
   * @PRE: source/target environments and selected dashboards are valid.
   * @POST: dryRunResult is populated with the pre-flight analysis.
   * @UX_STATE: [Loading] -> [Success] or [Error]
   * @UX_FEEDBACK: User sees summary cards + risk block + JSON details after success.
   * @UX_RECOVERY: User can adjust selection and press Dry Run again.
   */
  async function startDryRun() {
    return belief_scope("startDryRun", async () => {
      if (!sourceEnvId || !targetEnvId) {
        console.warn("[startDryRun][EXPLORE] Missing environment selection");
        error =
          $t.migration?.select_both_envs ||
          "Please select both source and target environments.";
        return;
      }
      if (sourceEnvId === targetEnvId) {
        console.warn("[startDryRun][EXPLORE] Source and target environments are identical");
        error =
          $t.migration?.different_envs ||
          "Source and target environments must be different.";
        return;
      }
      if (selectedDashboardIds.length === 0) {
        console.warn("[startDryRun][EXPLORE] No dashboards selected");
        error =
          $t.migration?.select_dashboards ||
          "Please select at least one dashboard to migrate.";
        return;
      }

      console.info("[startDryRun][REASON] Initiating dry-run analysis", { sourceEnvId, targetEnvId });
      error = "";
      dryRunLoading = true;
      try {
@@ -394,7 +439,9 @@
          fix_cross_filters: fixCrossFilters,
        };
        dryRunResult = await api.postApi("/migration/dry-run", selection);
        console.info("[startDryRun][REFLECT] Dry-run analysis completed", { riskScore: dryRunResult.risk.score });
      } catch (e) {
        console.error("[startDryRun][EXPLORE] Dry-run analysis failed", e);
        error = e.message;
        dryRunResult = null;
      } finally {
@@ -418,20 +465,29 @@

-->
<!-- [SECTION: TEMPLATE] -->
<div class="max-w-4xl mx-auto p-6">
  <!-- [DEF:MigrationHeader:Block] -->
  <PageHeader title={$t.nav.migration} />
  <!-- [/DEF:MigrationHeader:Block] -->

  <!-- [DEF:TaskHistorySection:Block] -->
  <TaskHistory on:viewLogs={handleViewLogs} />
  <!-- [/DEF:TaskHistorySection:Block] -->

  <!-- [DEF:ActiveTaskSection:Block] -->
  {#if $selectedTask}
    <div class="mt-6">
      <TaskRunner />
      <div class="mt-4">
        <Button variant="secondary" on:click={() => selectedTask.set(null)}>
        <Button variant="secondary" on:click={() => {
          console.info("[ActiveTaskSection][REASON] User cancelled active task view");
          selectedTask.set(null);
        }}>
          {$t.common.cancel}
        </Button>
      </div>
    </div>
  {:else}
    <!-- [/DEF:ActiveTaskSection:Block] -->
    {#if loading}
      <p>{$t.migration?.loading_envs }</p>
    {:else if error}
@@ -442,6 +498,7 @@
      </div>
    {/if}

    <!-- [DEF:EnvironmentSelectionSection:Block] -->
    <div class="grid grid-cols-1 md:grid-cols-2 gap-6 mb-8">
      <EnvSelector
        label={$t.migration?.source_env }
@@ -454,6 +511,7 @@
        {environments}
      />
    </div>
    <!-- [/DEF:EnvironmentSelectionSection:Block] -->

    <!-- [DEF:DashboardSelectionSection:Component] -->
    <div class="mb-8">
@@ -476,6 +534,7 @@
    </div>
    <!-- [/DEF:DashboardSelectionSection:Component] -->

    <!-- [DEF:MigrationOptionsSection:Block] -->
    <div class="mb-4">
      <div class="flex items-center mb-2">
        <input
@@ -496,6 +555,7 @@
          type="checkbox"
          bind:checked={replaceDb}
          on:change={() => {
            console.info("[MigrationOptionsSection][REASON] Database replacement toggled", { replaceDb });
            if (replaceDb && sourceDatabases.length === 0) fetchDatabases();
          }}
          class="h-4 w-4 text-indigo-600 focus:ring-indigo-500 border-gray-300 rounded"
@@ -505,6 +565,7 @@
        </label>
      </div>
    </div>
    <!-- [/DEF:MigrationOptionsSection:Block] -->

    {#if replaceDb}
      <div class="mb-8 p-4 border rounded-md bg-gray-50">
@@ -559,6 +620,7 @@
      </Button>
    </div>

    <!-- [DEF:DryRunResultsSection:Block] -->
    {#if dryRunResult}
      <div class="mt-6 rounded-md border border-slate-200 bg-slate-50 p-4 space-y-3">
        <h3 class="text-base font-semibold">Pre-flight Diff</h3>
@@ -599,15 +661,24 @@
        </details>
      </div>
    {/if}
    <!-- [/DEF:DryRunResultsSection:Block] -->
  {/if}
</div>

<!-- Modals -->
<!-- [DEF:MigrationModals:Block] -->
<!--
@PURPOSE: Render overlay components for log viewing and password entry.
@UX_STATE: [LogViewing] -> TaskLogViewer is visible.
@UX_STATE: [AwaitingInput] -> PasswordPrompt is visible.
-->
<TaskLogViewer
  bind:show={showLogViewer}
  taskId={logViewerTaskId}
  taskStatus={logViewerTaskStatus}
  on:close={() => (showLogViewer = false)}
  on:close={() => {
    console.info("[MigrationModals][REASON] Closing log viewer");
    showLogViewer = false;
  }}
/>

<PasswordPrompt
@@ -615,8 +686,12 @@
  databases={passwordPromptDatabases}
  errorMessage={passwordPromptErrorMessage}
  on:resume={handleResumeMigration}
  on:cancel={() => (showPasswordPrompt = false)}
  on:cancel={() => {
    console.info("[MigrationModals][REASON] User cancelled password prompt");
    showPasswordPrompt = false;
  }}
/>
<!-- [/DEF:MigrationModals:Block] -->

<!-- [/SECTION] -->
<!-- [/DEF:MigrationDashboardView:Block] -->

@@ -18,6 +18,7 @@ import re
import json
import datetime
import fnmatch
import argparse
from enum import Enum
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Any, Pattern, Tuple, Set
@@ -965,6 +966,106 @@ class SemanticMapGenerator:
        self._generate_module_map()
    # [/DEF:_generate_artifacts:Function]

    # [DEF:_print_agent_report:Function]
    # @TIER: STANDARD
    # @PURPOSE: Prints a JSON report optimized for AI agent orchestration and control.
    # @PRE: Validation and artifact generation are complete.
    # @POST: JSON report printed to stdout.
    def _print_agent_report(self):
        with belief_scope("_print_agent_report"):
            # Calculate global score (re-using logic from _generate_report)
            total_weighted_score = 0
            total_weight = 0
            for file_path, data in self.file_scores.items():
                tier = data["tier"]
                score = data["score"]
                weight = 3 if tier == Tier.CRITICAL else (2 if tier == Tier.STANDARD else 1)
                total_weighted_score += score * weight
                total_weight += weight
            gs = total_weighted_score / total_weight if total_weight > 0 else 0

            # Flatten entities to get per-file issues
            file_data = {}
            def collect_recursive(entities):
                for e in entities:
                    path = e.file_path
                    if path not in file_data:
                        file_data[path] = {"issues": [], "tier": e.get_tier().value, "score": self.file_scores.get(path, {}).get("score", 0)}
                    file_data[path]["issues"].extend([i.to_dict() for i in e.compliance_issues])
                    collect_recursive(e.children)
            collect_recursive(self.entities)

            # Critical parsing errors
            cpe = []
            for path, data in file_data.items():
                for i in data["issues"]:
                    msg = i.get("message", "").lower()
                    sev = i.get("severity", "").lower()
                    if "parsing" in msg and (sev == "error" or "critical" in msg):
                        cpe.append({"file": path, "severity": i.get("severity"), "message": i.get("message")})

            # <0.7 by tier
            lt = {"CRITICAL": 0, "STANDARD": 0, "TRIVIAL": 0, "UNKNOWN": 0}
            for path, data in file_data.items():
                if data["score"] < 0.7:
                    tier = data["tier"]
                    lt[tier if tier in lt else "UNKNOWN"] += 1

            # Priority counts
            p2 = 0
            p3 = 0
            for path, data in file_data.items():
                tier = data["tier"]
                issues = data["issues"]
                if tier == "CRITICAL" and any("Missing Mandatory Tag" in i.get("message", "") for i in issues):
                    p2 += 1
                if tier == "STANDARD" and any("@RELATION" in i.get("message", "") and "Missing Mandatory Tag" in i.get("message", "") for i in issues):
                    p3 += 1

            # Target files status
            targets = [
                'frontend/src/routes/migration/+page.svelte',
                'frontend/src/routes/migration/mappings/+page.svelte',
                'frontend/src/components/auth/ProtectedRoute.svelte',
                'backend/src/core/auth/repository.py',
                'backend/src/core/migration/risk_assessor.py',
                'backend/src/api/routes/migration.py',
                'backend/src/models/config.py',
                'backend/src/services/auth_service.py',
                'backend/src/core/config_manager.py',
                'backend/src/core/migration_engine.py'
            ]
            status = []
            for t in targets:
                f = file_data.get(t)
                if not f:
                    status.append({"path": t, "found": False})
                    continue
                sc = f["score"]
                status.append({
                    "path": t,
                    "found": True,
                    "score": sc,
                    "tier": f["tier"],
                    "under_0_7": sc < 0.7,
                    "violations": len(f["issues"]) > 0,
                    "issues_count": len(f["issues"])
                })

            out = {
                "global_score": gs,
                "critical_parsing_errors_count": len(cpe),
                "critical_parsing_errors": cpe[:50],
                "lt_0_7_by_tier": lt,
                "priority_1_blockers": len(cpe),
                "priority_2_tier1_critical_missing_mandatory_tags_files": p2,
                "priority_3_tier2_standard_missing_relation_files": p3,
                "targets": status,
                "total_files": len(file_data)
            }
            print(json.dumps(out, ensure_ascii=False))
    # [/DEF:_print_agent_report:Function]
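The tier-weighted global score above reduces to a weighted mean. A standalone sketch using plain tier strings in place of the script's `Tier` enum (function name is illustrative):

```python
def weighted_global_score(file_scores: dict) -> float:
    """Weighted mean of per-file scores: CRITICAL weighs 3, STANDARD 2, everything else 1."""
    weights = {"CRITICAL": 3, "STANDARD": 2}
    total = 0.0
    weight_sum = 0
    for data in file_scores.values():
        w = weights.get(data["tier"], 1)
        total += data["score"] * w
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```

The guard on an empty weight sum mirrors the report's `if total_weight > 0 else 0` fallback, so a project with no scored files yields 0 rather than a division error.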

    # [DEF:_generate_report:Function]
    # @TIER: CRITICAL
    # @PURPOSE: Generates the Markdown compliance report with severity levels.
@@ -1306,7 +1407,14 @@ class SemanticMapGenerator:


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Generate Semantic Map and Compliance Reports")
    parser.add_argument("--agent-report", action="store_true", help="Output JSON report for AI agents")
    args = parser.parse_args()

    generator = SemanticMapGenerator(PROJECT_ROOT)
    generator.run()

    if args.agent_report:
        generator._print_agent_report()

# [/DEF:generate_semantic_map:Module]
