semantic update

2026-03-10 21:33:09 +03:00
parent 542835e0ff
commit b77fa45e4e
10 changed files with 2084 additions and 1499 deletions

View File

@@ -2,12 +2,12 @@
 > High-level module structure for AI Context. Generated automatically.
-**Generated:** 2026-03-10T18:26:33.375187
+**Generated:** 2026-03-10T20:52:01.801581
 ## Summary
 - **Total Modules:** 103
-- **Total Entities:** 3077
+- **Total Entities:** 3088
 ## Module Hierarchy
@@ -1718,9 +1718,9 @@
 ### 📁 `migration/`
-- 📊 **Tiers:** CRITICAL: 12
+- 📊 **Tiers:** CRITICAL: 21
 - 📄 **Files:** 1
-- 📦 **Entities:** 12
+- 📦 **Entities:** 21
 **Key Entities:**
@@ -1972,9 +1972,9 @@
 ### 📁 `root/`
 - 🏗️ **Layers:** DevOps/Tooling, Unknown
-- 📊 **Tiers:** CRITICAL: 11, STANDARD: 17, TRIVIAL: 9
+- 📊 **Tiers:** CRITICAL: 11, STANDARD: 18, TRIVIAL: 10
 - 📄 **Files:** 2
-- 📦 **Entities:** 37
+- 📦 **Entities:** 39
 **Key Entities:**

View File

@@ -68,6 +68,8 @@
 - 📝 Calculate score and determine module's max tier for weighted global score
 - ƒ **_generate_artifacts** (`Function`) `[CRITICAL]`
 - 📝 Writes output files with tier-based compliance data.
+- ƒ **_print_agent_report** (`Function`)
+- 📝 Prints a JSON report optimized for AI agent orchestration and control.
 - ƒ **_generate_report** (`Function`) `[CRITICAL]`
 - 📝 Generates the Markdown compliance report with severity levels.
 - ƒ **_collect_issues** (`Function`)
@@ -84,6 +86,8 @@
 - 📝 Flattens entity tree for easier grouping.
 - ƒ **to_dict** (`Function`) `[TRIVIAL]`
 - 📝 Auto-detected function (orphan)
+- ƒ **collect_recursive** (`Function`) `[TRIVIAL]`
+- 📝 Auto-detected function (orphan)
 - 📦 **DashboardTypes** (`Module`) `[TRIVIAL]`
 - 📝 TypeScript interfaces for Dashboard entities
 - 🏗️ Layer: Domain
@@ -1205,6 +1209,8 @@
 - 📝 Fetches the list of environments from the API.
 - ƒ **fetchDashboards** (`Function`) `[CRITICAL]`
 - 📝 Fetches dashboards for the selected source environment.
+- **ReactiveDashboardFetch** (`Block`) `[CRITICAL]`
+- 📝 Automatically fetch dashboards when the source environment is changed.
 - ƒ **fetchDatabases** (`Function`) `[CRITICAL]`
 - 📝 Fetches databases from both environments and gets suggestions.
 - ƒ **handleMappingUpdate** (`Function`) `[CRITICAL]`
@@ -1213,15 +1219,25 @@
 - 📝 Opens the log viewer for a specific task.
 - ƒ **handlePasswordPrompt** (`Function`) `[CRITICAL]`
 - 📝 Reactive logic to show password prompt when a task is awaiting input.
+- **ReactivePasswordPrompt** (`Block`) `[CRITICAL]`
+- 📝 Monitor selected task for input requests and trigger password prompt.
 - ƒ **handleResumeMigration** (`Function`) `[CRITICAL]`
 - 📝 Resumes a migration task with provided passwords.
 - ƒ **startMigration** (`Function`) `[CRITICAL]`
-- 📝 Starts the migration process.
+- 📝 Initiates the migration process by sending the selection to the backend.
 - ƒ **startDryRun** (`Function`) `[CRITICAL]`
-- 📝 Builds pre-flight diff and risk summary without applying migration.
+- 📝 Performs a dry-run migration to identify potential risks and changes.
 - **MigrationDashboardView** (`Block`) `[CRITICAL]`
 - 📝 Render migration configuration controls, action CTAs, dry-run results, and modal entry points.
+- **MigrationHeader** (`Block`) `[CRITICAL]`
+- **TaskHistorySection** (`Block`) `[CRITICAL]`
+- **ActiveTaskSection** (`Block`) `[CRITICAL]`
+- **EnvironmentSelectionSection** (`Block`) `[CRITICAL]`
 - 🧩 **DashboardSelectionSection** (`Component`) `[CRITICAL]`
+- **MigrationOptionsSection** (`Block`) `[CRITICAL]`
+- **DryRunResultsSection** (`Block`) `[CRITICAL]`
+- **MigrationModals** (`Block`) `[CRITICAL]`
+- 📝 Render overlay components for log viewing and password entry.
 - **MappingsPageScript** (`Block`) `[CRITICAL]`
 - 📝 Define imports, state, and handlers that drive migration mappings page FSM.
 - 🔗 CALLS -> `fetchEnvironments`

View File

@@ -1,5 +1,5 @@
 ---
-description: Maintain semantic integrity via multi-agent delegation. Analyzes codebase, delegates markup tasks to Semantic Engineer, verifies via Reviewer Agent, and reports status.
+description: Maintain semantic integrity by generating maps and auditing compliance reports.
 ---
 ## User Input
@@ -12,62 +12,63 @@ You **MUST** consider the user input before proceeding (if not empty).
 ## Goal
-Ensure the codebase 100% adheres to the semantic standards defined in `.ai/standards/semantics.md` (GRACE-Poly Protocol). You are the **Manager/Supervisor**. You do not write code. You manage the queue, delegate files to the Semantic Engineer, audit their work via the Reviewer Agent, and commit successful changes.
+Ensure the codebase adheres to the semantic standards defined in `.ai/standards/semantics.md`. This involves generating the semantic map, analyzing compliance reports, and identifying critical parsing errors or missing metadata.
 ## Operating Constraints
-1. **ROLE: Orchestrator**: High-level coordination ONLY. Do not output raw code diffs yourself.
-2. **DELEGATION PATTERN**: Strict `Orchestrator -> Engineer -> Reviewer -> Orchestrator` loop.
-3. **FAIL-FAST METRICS**: If the Reviewer Agent rejects a file 3 times in a row, drop the file from the current queue and mark it as `[HUMAN_INTERVENTION_REQUIRED]`.
-4. **TIER AWARENESS**: CRITICAL files MUST be processed first. A failure in a CRITICAL file blocks the entire pipeline.
+1. **ROLE: Orchestrator**: You are responsible for the high-level coordination of semantic maintenance.
+2. **STRICT ADHERENCE**: Follow `.ai/standards/semantics.md` for all anchor and tag syntax.
+3. **NON-DESTRUCTIVE**: Do not remove existing code logic; only add or update semantic annotations.
+4. **TIER AWARENESS**: Prioritize CRITICAL and STANDARD modules for compliance fixes.
+5. **NO PSEUDO-CONTRACTS (CRITICAL)**: You are STRICTLY FORBIDDEN from using automated scripts (e.g., Python/Bash/sed) to mechanically inject boilerplate, placeholders, or "pseudo-contracts" (such as `# @PURPOSE: Semantic contract placeholder.` or `# @PRE: Inputs satisfy function contract.`) merely to artificially inflate the compliance score. Every semantic tag, anchor, and contract you add MUST reflect a genuine, deep understanding of the specific code's actual logic and business requirements. Automated "stubbing" of semantics is classified as codebase corruption.
 ## Execution Steps
-### 1. Generate Semantic State (Analyze)
-Run the generator script to map the current reality:
+### 1. Generate Semantic Map
+Run the generator script from the repository root with the agent report option:
 ```bash
-python3 generate_semantic_map.py
+python3 generate_semantic_map.py --agent-report
 ```
-Parse the output (Global Score, Critical Parsing Errors, Files with Score < 0.7).
-### 2. Formulate Task Queue
-Create an execution queue based on the report. Priority:
-- **Priority 1 (Blockers)**: Files with "Critical Parsing Errors" (unclosed `[/DEF]` anchors).
-- **Priority 2 (Tier 1)**: `CRITICAL` tier modules missing mandatory tags (`@PRE`, `@POST`, `belief_scope`).
-- **Priority 3 (Tier 2)**: `STANDARD` modules with missing graph relations (`@RELATION`).
-### 3. The Delegation Loop (For each file in the queue)
-For every target file, execute this exact sequence:
+### 2. Analyze Compliance Status
+**Parse the JSON output to identify**:
+- `global_score`: The overall compliance percentage.
+- `critical_parsing_errors_count`: Number of Priority 1 blockers.
+- `priority_2_tier1_critical_missing_mandatory_tags_files`: Number of CRITICAL files needing metadata.
+- `targets`: Status of key architectural files.
-* **Step 3A (Delegate to Worker):** Send the file path and the specific violation from the report to the **Semantic Markup Agent (Engineer)**.
-*Prompt*: `"Fix semantic violations in [FILE]. Current issues: [ISSUES]. Apply GRACE-Poly standards without changing business logic."`
-* **Step 3B (Delegate to Auditor):** Once the Engineer returns the modified file, send it to the **Reviewer Agent (Auditor)**.
-*Prompt*: `"Verify GRACE-Poly compliance for [FILE]. Check for paired [DEF] anchors, complete contracts, and belief_scope usage. Return PASS or FAIL with specific line errors."`
-* **Step 3C (Evaluate):**
-* If Auditor returns `PASS`: Apply the diff to the codebase. Move to the next file.
-* If Auditor returns `FAIL`: Send the Auditor's error report back to the Engineer (Step 3A). Repeat max 3 times.
-### 4. Verification
-Once the queue is empty, re-run `python3 generate_semantic_map.py` to prove the metrics have improved.
+### 3. Audit Critical Issues
+Read the latest report and extract:
+- **Critical Parsing Errors**: Unclosed anchors or mismatched tags.
+- **Low-Score Files**: Files with score < 0.7 or marked with 🔴.
+- **Missing Mandatory Tags**: Specifically for CRITICAL tier modules.
-## Output Format
-Return a structured summary of the operation:
+### 4. Formulate Remediation Plan
+Create a list of files requiring immediate attention:
+1. **Priority 1**: Fix all "Critical Parsing Errors" (unclosed anchors).
+2. **Priority 2**: Add missing mandatory tags for CRITICAL modules.
+3. **Priority 3**: Improve coverage for STANDARD modules.
-```text
-=== GRACE SEMANTIC ORCHESTRATION REPORT ===
-Initial Global Score: [X]%
-Final Global Score: [Y]%
-Status: [PASS / BLOCKED]
-Files Processed:
-1. [file_path] - [PASS (1 attempt) | PASS (2 attempts) | FAILED]
-2. ...
-Escalations (Human Intervention Required):
-- [file_path]: Failed auditor review 3 times. Reason: [Last Auditor Note].
-```
+### 5. Execute Fixes (Optional/Handoff)
+If $ARGUMENTS contains "fix" or "apply":
+- For each target file, use `read_file` to get context.
+- Apply semantic fixes using `apply_diff`, preserving all code logic.
+- Re-run `python3 generate_semantic_map.py --agent-report` to verify the fix.
+## Output
+Provide a summary of the semantic state:
+- **Global Score**: [X]%
+- **Status**: [PASS/FAIL] (FAIL if any Critical Parsing Errors exist)
+- **Top Issues**: List top 3-5 files needing attention.
+- **Action Taken**: Summary of maps generated or fixes applied.
 ## Context
 $ARGUMENTS
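The report-driven triage in steps 2-4 above amounts to a small routine over the generator's JSON output. A minimal sketch, assuming the field names listed in step 2 (the full `--agent-report` schema is not shown in this commit, so treat the payload shape as an assumption):

```python
import json

def triage(report_json: str) -> dict:
    """Bucket the agent report into the priority levels from step 4.

    Field names follow step 2 of the prompt; everything else is hypothetical.
    """
    report = json.loads(report_json)
    blockers = report.get("critical_parsing_errors_count", 0)
    tier1_gaps = report.get(
        "priority_2_tier1_critical_missing_mandatory_tags_files", 0
    )
    return {
        # FAIL whenever any Priority 1 blocker (unclosed anchor) exists
        "status": "FAIL" if blockers else "PASS",
        "global_score": report.get("global_score", 0.0),
        "priority_1_blockers": blockers,
        "priority_2_tier1_gaps": tier1_gaps,
    }

# Hypothetical report payload for illustration:
sample = json.dumps({
    "global_score": 87.5,
    "critical_parsing_errors_count": 2,
    "priority_2_tier1_critical_missing_mandatory_tags_files": 5,
})
result = triage(sample)
```

This mirrors the prompt's rule that any critical parsing error forces a FAIL status regardless of the global score.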

View File

@@ -26,7 +26,7 @@ from ...dependencies import get_config_manager, get_task_manager, has_permission
 from ...core.database import get_db
 from ...models.dashboard import DashboardMetadata, DashboardSelection
 from ...core.superset_client import SupersetClient
-from ...core.logger import belief_scope
+from ...core.logger import logger, belief_scope
 from ...core.migration.dry_run_orchestrator import MigrationDryRunService
 from ...core.mapping_service import IdMappingService
 from ...models.mapping import ResourceMapping
@@ -46,13 +46,17 @@ async def get_dashboards(
     _ = Depends(has_permission("plugin:migration", "EXECUTE"))
 ):
     with belief_scope("get_dashboards", f"env_id={env_id}"):
+        logger.reason(f"Fetching dashboards for environment: {env_id}")
         environments = config_manager.get_environments()
         env = next((e for e in environments if e.id == env_id), None)
         if not env:
+            logger.explore(f"Environment {env_id} not found in configuration")
             raise HTTPException(status_code=404, detail="Environment not found")
         client = SupersetClient(env)
         dashboards = client.get_dashboards_summary()
+        logger.reflect(f"Retrieved {len(dashboards)} dashboards from {env_id}")
         return dashboards
 # [/DEF:get_dashboards:Function]
@@ -70,30 +74,29 @@ async def execute_migration(
     _ = Depends(has_permission("plugin:migration", "EXECUTE"))
 ):
     with belief_scope("execute_migration"):
+        logger.reason(f"Initiating migration from {selection.source_env_id} to {selection.target_env_id}")
         # Validate environments exist
         environments = config_manager.get_environments()
         env_ids = {e.id for e in environments}
-        if selection.source_env_id not in env_ids or selection.target_env_id not in env_ids:
-            raise HTTPException(status_code=400, detail="Invalid source or target environment")
-        # Create migration task with debug logging
-        from ...core.logger import logger
+        if selection.source_env_id not in env_ids or selection.target_env_id not in env_ids:
+            logger.explore("Invalid environment selection", extra={"source": selection.source_env_id, "target": selection.target_env_id})
+            raise HTTPException(status_code=400, detail="Invalid source or target environment")
         # Include replace_db_config and fix_cross_filters in the task parameters
         task_params = selection.dict()
         task_params['replace_db_config'] = selection.replace_db_config
         task_params['fix_cross_filters'] = selection.fix_cross_filters
-        logger.info(f"Creating migration task with params: {task_params}")
-        logger.info(f"Available environments: {env_ids}")
-        logger.info(f"Source env: {selection.source_env_id}, Target env: {selection.target_env_id}")
+        logger.reason(f"Creating migration task with {len(selection.selected_ids)} dashboards")
         try:
             task = await task_manager.create_task("superset-migration", task_params)
-            logger.info(f"Task created successfully: {task.id}")
+            logger.reflect(f"Migration task created: {task.id}")
             return {"task_id": task.id, "message": "Migration initiated"}
         except Exception as e:
-            logger.error(f"Task creation failed: {e}")
+            logger.explore(f"Task creation failed: {e}")
             raise HTTPException(status_code=500, detail=f"Failed to create migration task: {str(e)}")
 # [/DEF:execute_migration:Function]
@@ -112,28 +115,40 @@ async def dry_run_migration(
     _ = Depends(has_permission("plugin:migration", "EXECUTE"))
 ):
     with belief_scope("dry_run_migration"):
+        logger.reason(f"Starting dry run: {selection.source_env_id} -> {selection.target_env_id}")
         environments = config_manager.get_environments()
         env_map = {env.id: env for env in environments}
         source_env = env_map.get(selection.source_env_id)
         target_env = env_map.get(selection.target_env_id)
         if not source_env or not target_env:
+            logger.explore("Invalid environment selection for dry run")
             raise HTTPException(status_code=400, detail="Invalid source or target environment")
         if selection.source_env_id == selection.target_env_id:
+            logger.explore("Source and target environments are identical")
             raise HTTPException(status_code=400, detail="Source and target environments must be different")
         if not selection.selected_ids:
+            logger.explore("No dashboards selected for dry run")
             raise HTTPException(status_code=400, detail="No dashboards selected for dry run")
         service = MigrationDryRunService()
         source_client = SupersetClient(source_env)
         target_client = SupersetClient(target_env)
         try:
-            return service.run(
+            result = service.run(
                 selection=selection,
                 source_client=source_client,
                 target_client=target_client,
                 db=db,
             )
+            logger.reflect("Dry run analysis complete")
+            return result
         except ValueError as exc:
+            logger.explore(f"Dry run orchestrator failed: {exc}")
             raise HTTPException(status_code=500, detail=str(exc)) from exc
 # [/DEF:dry_run_migration:Function]
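The edits in this file consistently replace ad-hoc `logger.info`/`logger.error` calls with a triad inside `belief_scope`: `reason` on entry (intent), `explore` on unexpected branches, `reflect` on the happy-path outcome. A minimal toy stand-in for that pattern, assuming the real implementations live in `...core.logger` (the names below are illustrative, not the project's actual code):

```python
from contextlib import contextmanager

class BeliefLogger:
    """Toy stand-in for the project's semantic logger (assumption)."""
    def __init__(self):
        self.lines = []
    def reason(self, msg):
        # Intent: why we are about to act
        self.lines.append(f"[reason] {msg}")
    def explore(self, msg, extra=None):
        # Unexpected branch / failure path
        self.lines.append(f"[explore] {msg}")
    def reflect(self, msg):
        # Outcome confirmation on the happy path
        self.lines.append(f"[reflect] {msg}")

logger = BeliefLogger()

@contextmanager
def belief_scope(name, detail=""):
    # Frames every log line emitted inside the named semantic scope.
    logger.lines.append(f"[scope:{name}] {detail}".rstrip())
    yield

def get_thing(env_id, envs):
    with belief_scope("get_thing", f"env_id={env_id}"):
        logger.reason(f"Fetching thing for environment: {env_id}")
        if env_id not in envs:
            logger.explore(f"Environment {env_id} not found")
            raise KeyError(env_id)
        logger.reflect(f"Retrieved thing from {env_id}")
        return envs[env_id]
```

The shape mirrors `get_dashboards` above: one `reason` at entry, `explore` only on the error branch, `reflect` just before returning.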

View File

@@ -32,7 +32,13 @@ class AuthRepository:
     # @DATA_CONTRACT: Input[Session] -> Output[None]
     def __init__(self, db: Session):
         with belief_scope("AuthRepository.__init__"):
+            if not isinstance(db, Session):
+                logger.explore("Invalid session provided to AuthRepository", extra={"type": type(db)})
+                raise TypeError("db must be an instance of sqlalchemy.orm.Session")
+            logger.reason("Binding AuthRepository to database session")
             self.db = db
+            logger.reflect("AuthRepository initialized")
     # [/DEF:__init__:Function]
     # [DEF:get_user_by_username:Function]
@@ -43,7 +49,17 @@ class AuthRepository:
     # @DATA_CONTRACT: Input[str] -> Output[Optional[User]]
     def get_user_by_username(self, username: str) -> Optional[User]:
         with belief_scope("AuthRepository.get_user_by_username"):
-            return self.db.query(User).filter(User.username == username).first()
+            if not username or not isinstance(username, str):
+                raise ValueError("username must be a non-empty string")
+            logger.reason(f"Querying user by username: {username}")
+            user = self.db.query(User).filter(User.username == username).first()
+            if user:
+                logger.reflect(f"User found: {username}")
+            else:
+                logger.explore(f"User not found: {username}")
+            return user
     # [/DEF:get_user_by_username:Function]
     # [DEF:get_user_by_id:Function]
@@ -54,7 +70,17 @@ class AuthRepository:
     # @DATA_CONTRACT: Input[str] -> Output[Optional[User]]
     def get_user_by_id(self, user_id: str) -> Optional[User]:
         with belief_scope("AuthRepository.get_user_by_id"):
-            return self.db.query(User).filter(User.id == user_id).first()
+            if not user_id or not isinstance(user_id, str):
+                raise ValueError("user_id must be a non-empty string")
+            logger.reason(f"Querying user by ID: {user_id}")
+            user = self.db.query(User).filter(User.id == user_id).first()
+            if user:
+                logger.reflect(f"User found by ID: {user_id}")
+            else:
+                logger.explore(f"User not found by ID: {user_id}")
+            return user
     # [/DEF:get_user_by_id:Function]
     # [DEF:get_role_by_name:Function]
@@ -76,10 +102,15 @@ class AuthRepository:
     # @DATA_CONTRACT: Input[User] -> Output[None]
     def update_last_login(self, user: User):
         with belief_scope("AuthRepository.update_last_login"):
+            if not isinstance(user, User):
+                raise TypeError("user must be an instance of User")
             from datetime import datetime
+            logger.reason(f"Updating last login for user: {user.username}")
             user.last_login = datetime.utcnow()
             self.db.add(user)
             self.db.commit()
+            logger.reflect(f"Last login updated and committed for user: {user.username}")
     # [/DEF:update_last_login:Function]
     # [DEF:get_role_by_id:Function]
@@ -144,9 +175,14 @@ class AuthRepository:
         preference: UserDashboardPreference,
     ) -> UserDashboardPreference:
         with belief_scope("AuthRepository.save_user_dashboard_preference"):
+            if not isinstance(preference, UserDashboardPreference):
+                raise TypeError("preference must be an instance of UserDashboardPreference")
+            logger.reason(f"Saving dashboard preference for user: {preference.user_id}")
             self.db.add(preference)
             self.db.commit()
             self.db.refresh(preference)
+            logger.reflect(f"Dashboard preference saved and refreshed for user: {preference.user_id}")
             return preference
     # [/DEF:save_user_dashboard_preference:Function]
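Throughout this file the commit swaps `assert` statements for explicit `TypeError`/`ValueError` guards; asserts are stripped under `python -O`, so they cannot be relied on for input validation. A minimal sketch of the guard pattern, using an in-memory stand-in for the SQLAlchemy session (all names below are hypothetical, not the repository's real classes):

```python
from typing import Optional

class InMemorySession:
    """Hypothetical stand-in for sqlalchemy.orm.Session."""
    def __init__(self, users):
        self._users = users
    def find(self, username):
        return self._users.get(username)

class AuthRepo:
    def __init__(self, db):
        if not isinstance(db, InMemorySession):
            # Explicit guard survives `python -O`, unlike `assert isinstance(...)`
            raise TypeError("db must be an InMemorySession")
        self.db = db

    def get_user_by_username(self, username: str) -> Optional[dict]:
        # Validate before touching the session, mirroring the diff above
        if not username or not isinstance(username, str):
            raise ValueError("username must be a non-empty string")
        return self.db.find(username)
```

The same validate-then-query shape applies to every repository method in the hunks above.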

View File

@@ -36,18 +36,23 @@ class ConfigManager:
     # @SIDE_EFFECT: Reads config sources and updates logging configuration.
     # @DATA_CONTRACT: Input(str config_path) -> Output(None; self.config: AppConfig)
     def __init__(self, config_path: str = "config.json"):
-        with belief_scope("__init__"):
-            assert isinstance(config_path, str) and config_path, "config_path must be a non-empty string"
-            logger.info(f"[ConfigManager][Entry] Initializing with legacy path {config_path}")
+        with belief_scope("ConfigManager.__init__"):
+            if not isinstance(config_path, str) or not config_path:
+                logger.explore("Invalid config_path provided", extra={"path": config_path})
+                raise ValueError("config_path must be a non-empty string")
+            logger.reason(f"Initializing ConfigManager with legacy path: {config_path}")
             self.config_path = Path(config_path)
             self.config: AppConfig = self._load_config()
             configure_logger(self.config.settings.logging)
-            assert isinstance(self.config, AppConfig), "self.config must be an instance of AppConfig"
-            logger.info("[ConfigManager][Exit] Initialized")
+            if not isinstance(self.config, AppConfig):
+                logger.explore("Config loading resulted in invalid type", extra={"type": type(self.config)})
+                raise TypeError("self.config must be an instance of AppConfig")
+            logger.reflect("ConfigManager initialization complete")
     # [/DEF:__init__:Function]
     # [DEF:_default_config:Function]
@@ -104,20 +109,23 @@ class ConfigManager:
     # @SIDE_EFFECT: Database read/write, possible migration write, logging.
     # @DATA_CONTRACT: Input(None) -> Output(AppConfig)
     def _load_config(self) -> AppConfig:
-        with belief_scope("_load_config"):
+        with belief_scope("ConfigManager._load_config"):
             session: Session = SessionLocal()
             try:
                 record = self._get_record(session)
                 if record and record.payload:
-                    logger.info("[_load_config][Coherence:OK] Configuration loaded from database")
-                    return AppConfig(**record.payload)
-                logger.info("[_load_config][Action] No database config found, migrating legacy config")
+                    logger.reason("Configuration found in database")
+                    config = AppConfig(**record.payload)
+                    logger.reflect("Database configuration validated")
+                    return config
+                logger.reason("No database config found, initiating legacy migration")
                 config = self._load_from_legacy_file()
                 self._save_config_to_db(config, session=session)
+                logger.reflect("Legacy configuration migrated to database")
                 return config
             except Exception as e:
-                logger.error(f"[_load_config][Coherence:Failed] Error loading config from DB: {e}")
+                logger.explore(f"Error loading config from DB: {e}")
                 return self._default_config()
             finally:
                 session.close()
@@ -130,8 +138,9 @@ class ConfigManager:
     # @SIDE_EFFECT: Database insert/update, commit/rollback, logging.
     # @DATA_CONTRACT: Input(AppConfig, Optional[Session]) -> Output(None)
     def _save_config_to_db(self, config: AppConfig, session: Optional[Session] = None):
-        with belief_scope("_save_config_to_db"):
-            assert isinstance(config, AppConfig), "config must be an instance of AppConfig"
+        with belief_scope("ConfigManager._save_config_to_db"):
+            if not isinstance(config, AppConfig):
+                raise TypeError("config must be an instance of AppConfig")
             owns_session = session is None
             db = session or SessionLocal()
@@ -139,15 +148,17 @@ class ConfigManager:
                 record = self._get_record(db)
                 payload = config.model_dump()
                 if record is None:
+                    logger.reason("Creating new global configuration record")
                     record = AppConfigRecord(id="global", payload=payload)
                     db.add(record)
                 else:
+                    logger.reason("Updating existing global configuration record")
                     record.payload = payload
                 db.commit()
-                logger.info("[_save_config_to_db][Action] Configuration saved to database")
+                logger.reflect("Configuration successfully committed to database")
             except Exception as e:
                 db.rollback()
-                logger.error(f"[_save_config_to_db][Coherence:Failed] Failed to save: {e}")
+                logger.explore(f"Failed to save configuration: {e}")
                 raise
             finally:
                 if owns_session:
@@ -183,14 +194,15 @@ class ConfigManager:
     # @SIDE_EFFECT: Mutates self.config, DB write, logger reconfiguration, logging.
     # @DATA_CONTRACT: Input(GlobalSettings) -> Output(None)
     def update_global_settings(self, settings: GlobalSettings):
-        with belief_scope("update_global_settings"):
-            logger.info("[update_global_settings][Entry] Updating settings")
-            assert isinstance(settings, GlobalSettings), "settings must be an instance of GlobalSettings"
+        with belief_scope("ConfigManager.update_global_settings"):
+            if not isinstance(settings, GlobalSettings):
+                raise TypeError("settings must be an instance of GlobalSettings")
+            logger.reason("Updating global settings and persisting")
             self.config.settings = settings
             self.save()
             configure_logger(settings.logging)
-            logger.info("[update_global_settings][Exit] Settings updated")
+            logger.reflect("Global settings updated and logger reconfigured")
     # [/DEF:update_global_settings:Function]
     # [DEF:validate_path:Function]
@@ -257,14 +269,15 @@ class ConfigManager:
     # @SIDE_EFFECT: Mutates environment list, DB write, logging.
     # @DATA_CONTRACT: Input(Environment) -> Output(None)
     def add_environment(self, env: Environment):
-        with belief_scope("add_environment"):
-            logger.info(f"[add_environment][Entry] Adding environment {env.id}")
-            assert isinstance(env, Environment), "env must be an instance of Environment"
+        with belief_scope("ConfigManager.add_environment"):
+            if not isinstance(env, Environment):
+                raise TypeError("env must be an instance of Environment")
+            logger.reason(f"Adding/Updating environment: {env.id}")
             self.config.environments = [e for e in self.config.environments if e.id != env.id]
             self.config.environments.append(env)
             self.save()
-            logger.info("[add_environment][Exit] Environment added")
+            logger.reflect(f"Environment {env.id} persisted")
     # [/DEF:add_environment:Function]
     # [DEF:update_environment:Function]
@@ -274,22 +287,25 @@ class ConfigManager:
     # @SIDE_EFFECT: May mutate environment list, DB write, logging.
     # @DATA_CONTRACT: Input(str env_id, Environment updated_env) -> Output(bool)
     def update_environment(self, env_id: str, updated_env: Environment) -> bool:
-        with belief_scope("update_environment"):
-            logger.info(f"[update_environment][Entry] Updating {env_id}")
-            assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
-            assert isinstance(updated_env, Environment), "updated_env must be an instance of Environment"
+        with belief_scope("ConfigManager.update_environment"):
+            if not env_id or not isinstance(env_id, str):
+                raise ValueError("env_id must be a non-empty string")
+            if not isinstance(updated_env, Environment):
+                raise TypeError("updated_env must be an instance of Environment")
+            logger.reason(f"Attempting to update environment: {env_id}")
             for i, env in enumerate(self.config.environments):
                 if env.id == env_id:
                     if updated_env.password == "********":
+                        logger.reason("Preserving existing password for masked update")
                         updated_env.password = env.password
                     self.config.environments[i] = updated_env
                     self.save()
-                    logger.info(f"[update_environment][Coherence:OK] Updated {env_id}")
+                    logger.reflect(f"Environment {env_id} updated and saved")
                     return True
-            logger.warning(f"[update_environment][Coherence:Failed] Environment {env_id} not found")
+            logger.explore(f"Environment {env_id} not found for update")
             return False
     # [/DEF:update_environment:Function]
@@ -300,18 +316,19 @@ class ConfigManager:
     # @SIDE_EFFECT: May mutate environment list, conditional DB write, logging.
     # @DATA_CONTRACT: Input(str env_id) -> Output(None)
    def delete_environment(self, env_id: str):
-        with belief_scope("delete_environment"):
-            logger.info(f"[delete_environment][Entry] Deleting {env_id}")
-            assert env_id and isinstance(env_id, str), "env_id must be a non-empty string"
+        with belief_scope("ConfigManager.delete_environment"):
+            if not env_id or not isinstance(env_id, str):
+                raise ValueError("env_id must be a non-empty string")
+            logger.reason(f"Attempting to delete environment: {env_id}")
             original_count = len(self.config.environments)
             self.config.environments = [e for e in self.config.environments if e.id != env_id]
             if len(self.config.environments) < original_count:
                 self.save()
-                logger.info(f"[delete_environment][Action] Deleted {env_id}")
+                logger.reflect(f"Environment {env_id} deleted and configuration saved")
             else:
-                logger.warning(f"[delete_environment][Coherence:Failed] Environment {env_id} not found")
+                logger.explore(f"Environment {env_id} not found for deletion")
     # [/DEF:delete_environment:Function]

View File

@@ -38,7 +38,9 @@ class MigrationEngine:
     # @PARAM: mapping_service (Optional[IdMappingService]) - Used for resolving target environment integer IDs.
     def __init__(self, mapping_service: Optional[IdMappingService] = None):
         with belief_scope("MigrationEngine.__init__"):
+            logger.reason("Initializing MigrationEngine")
             self.mapping_service = mapping_service
+            logger.reflect("MigrationEngine initialized")
     # [/DEF:__init__:Function]
     # [DEF:transform_zip:Function]
# [DEF:transform_zip:Function] # [DEF:transform_zip:Function]
@@ -59,12 +61,14 @@ class MigrationEngine:
         Transform a Superset export ZIP by replacing database UUIDs and optionally fixing cross-filters.
         """
         with belief_scope("MigrationEngine.transform_zip"):
+            logger.reason(f"Starting ZIP transformation: {zip_path} -> {output_path}")
             with tempfile.TemporaryDirectory() as temp_dir_str:
                 temp_dir = Path(temp_dir_str)
                 try:
                     # 1. Extract
-                    logger.info(f"[MigrationEngine.transform_zip][Action] Extracting ZIP: {zip_path}")
+                    logger.reason(f"Extracting source archive to {temp_dir}")
                     with zipfile.ZipFile(zip_path, 'r') as zf:
                         zf.extractall(temp_dir)
@@ -72,33 +76,33 @@ class MigrationEngine:
                     dataset_files = list(temp_dir.glob("**/datasets/**/*.yaml")) + list(temp_dir.glob("**/datasets/*.yaml"))
                     dataset_files = list(set(dataset_files))
-                    logger.info(f"[MigrationEngine.transform_zip][State] Found {len(dataset_files)} dataset files.")
+                    logger.reason(f"Transforming {len(dataset_files)} dataset YAML files")
                     for ds_file in dataset_files:
-                        logger.info(f"[MigrationEngine.transform_zip][Action] Transforming dataset: {ds_file}")
                         self._transform_yaml(ds_file, db_mapping)
                     # 2.5 Patch Cross-Filters (Dashboards)
-                    if fix_cross_filters and self.mapping_service and target_env_id:
-                        dash_files = list(temp_dir.glob("**/dashboards/**/*.yaml")) + list(temp_dir.glob("**/dashboards/*.yaml"))
-                        dash_files = list(set(dash_files))
-                        logger.info(f"[MigrationEngine.transform_zip][State] Found {len(dash_files)} dashboard files for patching.")
-                        # Gather all source UUID-to-ID mappings from the archive first
-                        source_id_to_uuid_map = self._extract_chart_uuids_from_archive(temp_dir)
-                        for dash_file in dash_files:
-                            logger.info(f"[MigrationEngine.transform_zip][Action] Patching dashboard: {dash_file}")
-                            self._patch_dashboard_metadata(dash_file, target_env_id, source_id_to_uuid_map)
+                    if fix_cross_filters:
+                        if self.mapping_service and target_env_id:
+                            dash_files = list(temp_dir.glob("**/dashboards/**/*.yaml")) + list(temp_dir.glob("**/dashboards/*.yaml"))
+                            dash_files = list(set(dash_files))
+                            logger.reason(f"Patching cross-filters for {len(dash_files)} dashboards")
+                            # Gather all source UUID-to-ID mappings from the archive first
+                            source_id_to_uuid_map = self._extract_chart_uuids_from_archive(temp_dir)
+                            for dash_file in dash_files:
+                                self._patch_dashboard_metadata(dash_file, target_env_id, source_id_to_uuid_map)
+                        else:
+                            logger.explore("Cross-filter patching requested but mapping service or target_env_id is missing")
                     # 3. Re-package
-                    logger.info(f"[MigrationEngine.transform_zip][Action] Re-packaging ZIP to: {output_path} (strip_databases={strip_databases})")
+                    logger.reason(f"Re-packaging transformed archive (strip_databases={strip_databases})")
                     with zipfile.ZipFile(output_path, 'w', zipfile.ZIP_DEFLATED) as zf:
                         for root, dirs, files in os.walk(temp_dir):
                             rel_root = Path(root).relative_to(temp_dir)
                             if strip_databases and "databases" in rel_root.parts:
-                                logger.info(f"[MigrationEngine.transform_zip][Action] Skipping file in databases directory: {rel_root}")
                                 continue
                             for file in files:
@@ -106,9 +110,10 @@ class MigrationEngine:
                                 arcname = file_path.relative_to(temp_dir)
                                 zf.write(file_path, arcname)
+                    logger.reflect("ZIP transformation completed successfully")
                     return True
                 except Exception as e:
-                    logger.error(f"[MigrationEngine.transform_zip][Coherence:Failed] Error transforming ZIP: {e}")
+                    logger.explore(f"Error transforming ZIP: {e}")
                     return False
     # [/DEF:transform_zip:Function]
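The extract/transform/re-pack loop in `transform_zip`, including the `strip_databases` skip, can be sketched on its own with only the standard library. This is a minimal illustration of the repackaging technique, not the engine itself; the `skip_dir` parameter is a stand-in for the `strip_databases` flag:

```python
import os
import zipfile
import tempfile
from pathlib import Path


def repack_without_dir(zip_path: str, output_path: str, skip_dir: str = "databases") -> None:
    """Extract an archive, then re-pack it while skipping one subtree."""
    with tempfile.TemporaryDirectory() as temp_dir_str:
        temp_dir = Path(temp_dir_str)
        with zipfile.ZipFile(zip_path, "r") as zf:
            zf.extractall(temp_dir)
        with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as zf:
            for root, _dirs, files in os.walk(temp_dir):
                rel_root = Path(root).relative_to(temp_dir)
                if skip_dir in rel_root.parts:
                    continue  # drop the unwanted subtree, mirroring strip_databases
                for name in files:
                    file_path = Path(root) / name
                    # arcname keeps paths relative to the archive root
                    zf.write(file_path, file_path.relative_to(temp_dir))
```

Checking `rel_root.parts` rather than the raw path string avoids accidentally skipping files whose names merely contain the substring.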
@@ -122,19 +127,23 @@ class MigrationEngine:
     # @DATA_CONTRACT: Input[(Path file_path, Dict[str,str] db_mapping)] -> Output[None]
     def _transform_yaml(self, file_path: Path, db_mapping: Dict[str, str]):
         with belief_scope("MigrationEngine._transform_yaml"):
+            if not file_path.exists():
+                logger.explore(f"YAML file not found: {file_path}")
+                return
             with open(file_path, 'r') as f:
                 data = yaml.safe_load(f)
             if not data:
                 return
-            # Superset dataset YAML structure:
-            # database_uuid: ...
             source_uuid = data.get('database_uuid')
             if source_uuid in db_mapping:
+                logger.reason(f"Replacing database UUID in {file_path.name}")
                 data['database_uuid'] = db_mapping[source_uuid]
                 with open(file_path, 'w') as f:
                     yaml.dump(data, f)
+                logger.reflect(f"Database UUID patched in {file_path.name}")
     # [/DEF:_transform_yaml:Function]
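The core of `_transform_yaml` is a conditional key rewrite on the parsed document; the YAML I/O around it is incidental. A pure-Python sketch of that rewrite (the function name and return-flag convention are illustrative, not part of the engine's API):

```python
from typing import Dict


def remap_database_uuid(dataset_doc: Dict[str, object], db_mapping: Dict[str, str]) -> bool:
    """Replace `database_uuid` in a parsed dataset document if a mapping exists.

    Returns True when the document was modified, so the caller knows
    whether it needs to write the file back to disk.
    """
    source_uuid = dataset_doc.get("database_uuid")
    if isinstance(source_uuid, str) and source_uuid in db_mapping:
        dataset_doc["database_uuid"] = db_mapping[source_uuid]
        return True
    return False
```

Returning a modified flag lets the caller skip the re-serialization step for untouched files, which the in-place version above achieves by nesting the write inside the `if`.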
     # [DEF:_extract_chart_uuids_from_archive:Function]
@@ -176,6 +185,9 @@ class MigrationEngine:
     def _patch_dashboard_metadata(self, file_path: Path, target_env_id: str, source_map: Dict[int, str]):
         with belief_scope("MigrationEngine._patch_dashboard_metadata"):
             try:
+                if not file_path.exists():
+                    return
                 with open(file_path, 'r') as f:
                     data = yaml.safe_load(f)
@@ -186,18 +198,13 @@ class MigrationEngine:
                 if not metadata_str:
                     return
-                metadata = json.loads(metadata_str)
-                modified = False
-                # We need to deeply traverse and replace. For MVP, string replacement over the raw JSON is an option,
-                # but careful dict traversal is safer.
                 # Fetch target UUIDs for everything we know:
                 uuids_needed = list(source_map.values())
+                logger.reason(f"Resolving {len(uuids_needed)} remote IDs for dashboard metadata patching")
                 target_ids = self.mapping_service.get_remote_ids_batch(target_env_id, ResourceType.CHART, uuids_needed)
                 if not target_ids:
-                    logger.info("[MigrationEngine._patch_dashboard_metadata][Reflect] No remote target IDs found in mapping database.")
+                    logger.reflect("No remote target IDs found in mapping database for this dashboard.")
                     return
                 # Map Source Int -> Target Int
@@ -210,21 +217,16 @@ class MigrationEngine:
                         missing_targets.append(s_id)
                 if missing_targets:
-                    logger.warning(f"[MigrationEngine._patch_dashboard_metadata][Coherence:Recoverable] Missing target IDs for source IDs: {missing_targets}. Cross-filters for these IDs might break.")
+                    logger.explore(f"Missing target IDs for source IDs: {missing_targets}. Cross-filters might break.")
                 if not source_to_target:
-                    logger.info("[MigrationEngine._patch_dashboard_metadata][Reflect] No source IDs matched remotely. Skipping patch.")
+                    logger.reflect("No source IDs matched remotely. Skipping patch.")
                     return
-                # Complex metadata traversal would go here (e.g. for native_filter_configuration)
-                # We use regex replacement over the string for safety over unknown nested dicts.
+                logger.reason(f"Patching {len(source_to_target)} ID references in json_metadata")
                 new_metadata_str = metadata_str
-                # Replace chartId and datasetId assignments explicitly.
-                # Pattern: "datasetId": 42 or "chartId": 42
                 for s_id, t_id in source_to_target.items():
-                    # Replace in native_filter_configuration targets
                     new_metadata_str = re.sub(r'("datasetId"\s*:\s*)' + str(s_id) + r'(\b)', r'\g<1>' + str(t_id) + r'\g<2>', new_metadata_str)
                     new_metadata_str = re.sub(r'("chartId"\s*:\s*)' + str(s_id) + r'(\b)', r'\g<1>' + str(t_id) + r'\g<2>', new_metadata_str)
@@ -233,10 +235,10 @@ class MigrationEngine:
                 with open(file_path, 'w') as f:
                     yaml.dump(data, f)
-                logger.info(f"[MigrationEngine._patch_dashboard_metadata][Reason] Re-serialized modified JSON metadata for dashboard.")
+                logger.reflect(f"Dashboard metadata patched and saved: {file_path.name}")
             except Exception as e:
-                logger.error(f"[MigrationEngine._patch_dashboard_metadata][Coherence:Failed] Metadata patch failed: {e}")
+                logger.explore(f"Metadata patch failed for {file_path.name}: {e}")
     # [/DEF:_patch_dashboard_metadata:Function]
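The regex replacement above rewrites `"datasetId": <n>` and `"chartId": <n>` directly in the raw JSON string rather than traversing unknown nested dicts; the trailing `\b` boundary keeps a source ID like `4` from matching inside `42`. An isolated sketch of that technique (the function name is illustrative):

```python
import re
from typing import Dict


def patch_id_references(metadata_str: str, source_to_target: Dict[int, int]) -> str:
    """Rewrite "datasetId": <n> and "chartId": <n> assignments in raw JSON text.

    String-level regex replacement avoids having to know the nesting
    shape of json_metadata; \b prevents partial-number matches.
    """
    for s_id, t_id in source_to_target.items():
        for key in ("datasetId", "chartId"):
            pattern = r'("' + key + r'"\s*:\s*)' + str(s_id) + r'(\b)'
            metadata_str = re.sub(pattern, r'\g<1>' + str(t_id) + r'\g<2>', metadata_str)
    return metadata_str
```

One caveat of the string-level approach: it also rewrites matching text inside JSON string values, so it trades structural safety for simplicity, as the removed comments in the hunk acknowledged.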

View File

@@ -15,10 +15,10 @@
 @RELATION: [BINDS_TO] ->[frontend/src/components/TaskLogViewer.svelte]
 @RELATION: [BINDS_TO] ->[frontend/src/components/PasswordPrompt.svelte]
 @INVARIANT: Migration start is blocked unless source and target environments are selected, distinct, and at least one dashboard is selected.
-@UX_STATE: Idle -> User configures source/target environments, dashboard selection, and migration options.
-@UX_STATE: Loading -> Environment/database/dry-run fetch operations disable relevant actions and show progress text.
-@UX_STATE: Error -> Error banner/prompt message is shown while keeping user input intact for correction.
-@UX_STATE: Success -> Dry-run summary or active task view is rendered after successful API operations.
+@UX_STATE: [Idle] -> User configures source/target environments, dashboard selection, and migration options.
+@UX_STATE: [Loading] -> Environment/database/dry-run fetch operations disable relevant actions and show progress text.
+@UX_STATE: [Error] -> Error banner/prompt message is shown while keeping user input intact for correction.
+@UX_STATE: [Success] -> Dry-run summary or active task view is rendered after successful API operations.
 @UX_FEEDBACK: Inline error banner, disabled CTA states, loading labels, dry-run summary cards, modal dialogs.
 @UX_RECOVERY: User can adjust selection, refresh databases, retry dry-run/migration, resume task with passwords, or cancel modal flow.
 @UX_REACTIVITY: State transitions rely on Svelte reactive bindings and store subscription to selectedTask.
@@ -102,9 +102,12 @@
   */
   async function fetchEnvironments() {
     return belief_scope("fetchEnvironments", async () => {
+      console.info("[fetchEnvironments][REASON] Initializing environment list for selection");
       try {
         environments = await api.getEnvironmentsList();
+        console.info("[fetchEnvironments][REFLECT] Environments loaded", { count: environments.length });
       } catch (e) {
+        console.error("[fetchEnvironments][EXPLORE] Failed to fetch environments", e);
         error = e.message;
       } finally {
         loading = false;
@@ -122,10 +125,13 @@
   */
   async function fetchDashboards(envId: string) {
     return belief_scope("fetchDashboards", async () => {
+      console.info("[fetchDashboards][REASON] Fetching dashboards for environment", { envId });
       try {
         dashboards = await api.requestApi(`/environments/${envId}/dashboards`);
         selectedDashboardIds = []; // Reset selection when env changes
+        console.info("[fetchDashboards][REFLECT] Dashboards loaded", { count: dashboards.length });
       } catch (e) {
+        console.error("[fetchDashboards][EXPLORE] Failed to fetch dashboards", e);
         error = e.message;
         dashboards = [];
       }
@@ -135,8 +141,18 @@
   onMount(fetchEnvironments);
-  // Reactive: fetch dashboards when source env changes
-  $: if (sourceEnvId) fetchDashboards(sourceEnvId);
+  // [DEF:ReactiveDashboardFetch:Block]
+  /**
+   * @PURPOSE: Automatically fetch dashboards when the source environment is changed.
+   * @PRE: sourceEnvId is not empty.
+   * @POST: fetchDashboards is called with the new sourceEnvId.
+   * @UX_STATE: [Loading] -> Triggered when sourceEnvId changes.
+   */
+  $: if (sourceEnvId) {
+    console.info("[ReactiveDashboardFetch][REASON] Source environment changed, fetching dashboards", { sourceEnvId });
+    fetchDashboards(sourceEnvId);
+  }
+  // [/DEF:ReactiveDashboardFetch:Block]
   // [DEF:fetchDatabases:Function]
   /**
@@ -146,7 +162,11 @@
   */
   async function fetchDatabases() {
     return belief_scope("fetchDatabases", async () => {
-      if (!sourceEnvId || !targetEnvId) return;
+      if (!sourceEnvId || !targetEnvId) {
+        console.warn("[fetchDatabases][EXPLORE] Missing environment IDs for database fetch");
+        return;
+      }
+      console.info("[fetchDatabases][REASON] Fetching databases and suggestions for mapping", { sourceEnvId, targetEnvId });
       fetchingDbs = true;
       error = "";
@@ -167,7 +187,13 @@
       targetDatabases = tgt;
       mappings = maps;
       suggestions = sugs;
+      console.info("[fetchDatabases][REFLECT] Databases and mappings loaded", {
+        sourceCount: src.length,
+        targetCount: tgt.length,
+        mappingCount: maps.length
+      });
     } catch (e) {
+      console.error("[fetchDatabases][EXPLORE] Failed to fetch databases", e);
       error = e.message;
     } finally {
       fetchingDbs = false;
@@ -188,8 +214,12 @@
       const sDb = sourceDatabases.find((d) => d.uuid === sourceUuid);
       const tDb = targetDatabases.find((d) => d.uuid === targetUuid);
-      if (!sDb || !tDb) return;
+      if (!sDb || !tDb) {
+        console.warn("[handleMappingUpdate][EXPLORE] Database not found for mapping", { sourceUuid, targetUuid });
+        return;
+      }
+      console.info("[handleMappingUpdate][REASON] Updating database mapping", { sourceUuid, targetUuid });
       try {
         const savedMapping = await api.postApi("/mappings", {
           source_env_id: sourceEnvId,
@@ -204,7 +234,9 @@
         ...mappings.filter((m) => m.source_db_uuid !== sourceUuid),
         savedMapping,
       ];
+      console.info("[handleMappingUpdate][REFLECT] Mapping saved successfully");
     } catch (e) {
+      console.error("[handleMappingUpdate][EXPLORE] Failed to save mapping", e);
       error = e.message;
     }
   });
@@ -234,6 +266,13 @@
   // Ideally, TaskHistory or TaskRunner emits an event when input is needed.
   // Or we watch selectedTask.
+  // [DEF:ReactivePasswordPrompt:Block]
+  /**
+   * @PURPOSE: Monitor selected task for input requests and trigger password prompt.
+   * @PRE: $selectedTask is not null and status is AWAITING_INPUT.
+   * @POST: showPasswordPrompt is set to true if input_request is database_password.
+   * @UX_STATE: [AwaitingInput] -> Password prompt modal is displayed.
+   */
   $: if (
     $selectedTask &&
     $selectedTask.status === "AWAITING_INPUT" &&
@@ -241,6 +280,7 @@
   ) {
     const req = $selectedTask.input_request;
     if (req.type === "database_password") {
+      console.info("[ReactivePasswordPrompt][REASON] Task awaiting database passwords", { taskId: $selectedTask.id });
      passwordPromptDatabases = req.databases || [];
      passwordPromptErrorMessage = req.error_message || "";
      showPasswordPrompt = true;
@@ -251,6 +291,7 @@
     // showPasswordPrompt = false;
     // Actually, don't auto-close, let the user or success handler close it.
   }
+  // [/DEF:ReactivePasswordPrompt:Block]
   // [/DEF:handlePasswordPrompt:Function]
   // [DEF:handleResumeMigration:Function]
@@ -278,31 +319,41 @@
   // [DEF:startMigration:Function]
   /**
-   * @purpose Starts the migration process.
-   * @pre sourceEnvId and targetEnvId must be set and different.
-   * @post Migration task is started and selectedTask is updated.
+   * @PURPOSE: Initiates the migration process by sending the selection to the backend.
+   * @PRE: sourceEnvId and targetEnvId are set and different; at least one dashboard is selected.
+   * @POST: A migration task is created and selectedTask store is updated.
+   * @SIDE_EFFECT: Resets dryRunResult; updates error state on failure.
+   * @UX_STATE: [Loading] -> [Success] or [Error]
   */
   async function startMigration() {
     return belief_scope("startMigration", async () => {
       if (!sourceEnvId || !targetEnvId) {
+        console.warn("[startMigration][EXPLORE] Missing environment selection");
         error =
           $t.migration?.select_both_envs ||
           "Please select both source and target environments.";
         return;
       }
       if (sourceEnvId === targetEnvId) {
+        console.warn("[startMigration][EXPLORE] Source and target environments are identical");
         error =
           $t.migration?.different_envs ||
           "Source and target environments must be different.";
         return;
       }
       if (selectedDashboardIds.length === 0) {
+        console.warn("[startMigration][EXPLORE] No dashboards selected");
         error =
           $t.migration?.select_dashboards ||
           "Please select at least one dashboard to migrate.";
         return;
       }
+      console.info("[startMigration][REASON] Initiating migration execution", {
+        sourceEnvId,
+        targetEnvId,
+        dashboardCount: selectedDashboardIds.length
+      });
       error = "";
       try {
         dryRunResult = null;
@@ -313,14 +364,9 @@
         replace_db_config: replaceDb,
         fix_cross_filters: fixCrossFilters,
       };
-      console.log(
-        `[MigrationDashboard][Action] Starting migration with selection:`,
-        selection,
-      );
       const result = await api.postApi("/migration/execute", selection);
-      console.log(
-        `[MigrationDashboard][Action] Migration started: ${result.task_id} - ${result.message}`,
-      );
+      console.info("[startMigration][REFLECT] Migration task created", { taskId: result.task_id });
       // Wait a brief moment for the backend to ensure the task is retrievable
       await new Promise((r) => setTimeout(r, 500));
@@ -329,12 +375,9 @@
       try {
         const task = await api.getTask(result.task_id);
         selectedTask.set(task);
+        console.info("[startMigration][REFLECT] Task details fetched and store updated");
       } catch (fetchErr) {
-        // Fallback: create a temporary task object to switch view immediately
-        console.warn(
-          $t.migration?.task_placeholder_warn ||
-            "Could not fetch task details immediately, using placeholder.",
-        );
+        console.warn("[startMigration][EXPLORE] Could not fetch task details immediately, using placeholder", fetchErr);
         selectedTask.set({
           id: result.task_id,
           plugin_id: "superset-migration",
@@ -344,7 +387,7 @@
         });
       }
     } catch (e) {
-      console.error(`[MigrationDashboard][Failure] Migration failed:`, e);
+      console.error("[startMigration][EXPLORE] Migration initiation failed", e);
       error = e.message;
     }
   });
@@ -353,36 +396,38 @@
   // [DEF:startDryRun:Function]
   /**
-   * @purpose Builds pre-flight diff and risk summary without applying migration.
-   * @pre source/target environments and selected dashboards are valid.
-   * @post dryRunResult is populated with backend response.
-   * @UX_STATE: Idle -> Dry Run button is enabled when selection is valid.
-   * @UX_STATE: Loading -> Dry Run button shows "Dry Run..." and stays disabled.
-   * @UX_STATE: Error -> error banner is displayed and dryRunResult resets to null.
+   * @PURPOSE: Performs a dry-run migration to identify potential risks and changes.
+   * @PRE: source/target environments and selected dashboards are valid.
+   * @POST: dryRunResult is populated with the pre-flight analysis.
+   * @UX_STATE: [Loading] -> [Success] or [Error]
    * @UX_FEEDBACK: User sees summary cards + risk block + JSON details after success.
    * @UX_RECOVERY: User can adjust selection and press Dry Run again.
   */
   async function startDryRun() {
     return belief_scope("startDryRun", async () => {
       if (!sourceEnvId || !targetEnvId) {
+        console.warn("[startDryRun][EXPLORE] Missing environment selection");
         error =
           $t.migration?.select_both_envs ||
           "Please select both source and target environments.";
         return;
       }
       if (sourceEnvId === targetEnvId) {
+        console.warn("[startDryRun][EXPLORE] Source and target environments are identical");
         error =
           $t.migration?.different_envs ||
           "Source and target environments must be different.";
         return;
       }
       if (selectedDashboardIds.length === 0) {
+        console.warn("[startDryRun][EXPLORE] No dashboards selected");
         error =
           $t.migration?.select_dashboards ||
           "Please select at least one dashboard to migrate.";
         return;
       }
+      console.info("[startDryRun][REASON] Initiating dry-run analysis", { sourceEnvId, targetEnvId });
       error = "";
       dryRunLoading = true;
       try {
@@ -394,7 +439,9 @@
         fix_cross_filters: fixCrossFilters,
       };
       dryRunResult = await api.postApi("/migration/dry-run", selection);
+      console.info("[startDryRun][REFLECT] Dry-run analysis completed", { riskScore: dryRunResult.risk.score });
     } catch (e) {
+      console.error("[startDryRun][EXPLORE] Dry-run analysis failed", e);
       error = e.message;
       dryRunResult = null;
     } finally {
@@ -418,20 +465,29 @@
 -->
 <!-- [SECTION: TEMPLATE] -->
 <div class="max-w-4xl mx-auto p-6">
+  <!-- [DEF:MigrationHeader:Block] -->
   <PageHeader title={$t.nav.migration} />
+  <!-- [/DEF:MigrationHeader:Block] -->
+  <!-- [DEF:TaskHistorySection:Block] -->
   <TaskHistory on:viewLogs={handleViewLogs} />
+  <!-- [/DEF:TaskHistorySection:Block] -->
+  <!-- [DEF:ActiveTaskSection:Block] -->
   {#if $selectedTask}
     <div class="mt-6">
       <TaskRunner />
       <div class="mt-4">
-        <Button variant="secondary" on:click={() => selectedTask.set(null)}>
+        <Button variant="secondary" on:click={() => {
+          console.info("[ActiveTaskSection][REASON] User cancelled active task view");
+          selectedTask.set(null);
+        }}>
           {$t.common.cancel}
         </Button>
       </div>
     </div>
   {:else}
+    <!-- [/DEF:ActiveTaskSection:Block] -->
     {#if loading}
       <p>{$t.migration?.loading_envs }</p>
     {:else if error}
@@ -442,6 +498,7 @@
     </div>
   {/if}
+  <!-- [DEF:EnvironmentSelectionSection:Block] -->
   <div class="grid grid-cols-1 md:grid-cols-2 gap-6 mb-8">
     <EnvSelector
       label={$t.migration?.source_env }
@@ -454,6 +511,7 @@
       {environments}
     />
   </div>
+  <!-- [/DEF:EnvironmentSelectionSection:Block] -->
   <!-- [DEF:DashboardSelectionSection:Component] -->
   <div class="mb-8">
@@ -476,6 +534,7 @@
   </div>
   <!-- [/DEF:DashboardSelectionSection:Component] -->
+  <!-- [DEF:MigrationOptionsSection:Block] -->
   <div class="mb-4">
     <div class="flex items-center mb-2">
       <input
@@ -496,6 +555,7 @@
         type="checkbox"
         bind:checked={replaceDb}
         on:change={() => {
+          console.info("[MigrationOptionsSection][REASON] Database replacement toggled", { replaceDb });
          if (replaceDb && sourceDatabases.length === 0) fetchDatabases();
        }}
        class="h-4 w-4 text-indigo-600 focus:ring-indigo-500 border-gray-300 rounded"
@@ -505,6 +565,7 @@
       </label>
     </div>
   </div>
+  <!-- [/DEF:MigrationOptionsSection:Block] -->
   {#if replaceDb}
     <div class="mb-8 p-4 border rounded-md bg-gray-50">
@@ -559,6 +620,7 @@
       </Button>
     </div>
+    <!-- [DEF:DryRunResultsSection:Block] -->
     {#if dryRunResult}
       <div class="mt-6 rounded-md border border-slate-200 bg-slate-50 p-4 space-y-3">
         <h3 class="text-base font-semibold">Pre-flight Diff</h3>
@@ -599,15 +661,24 @@
       </details>
     </div>
   {/if}
+  <!-- [/DEF:DryRunResultsSection:Block] -->
 {/if}
 </div>
-<!-- Modals -->
+<!-- [DEF:MigrationModals:Block] -->
+<!--
+@PURPOSE: Render overlay components for log viewing and password entry.
+@UX_STATE: [LogViewing] -> TaskLogViewer is visible.
+@UX_STATE: [AwaitingInput] -> PasswordPrompt is visible.
+-->
 <TaskLogViewer
   bind:show={showLogViewer}
   taskId={logViewerTaskId}
   taskStatus={logViewerTaskStatus}
-  on:close={() => (showLogViewer = false)}
+  on:close={() => {
+    console.info("[MigrationModals][REASON] Closing log viewer");
+    showLogViewer = false;
+  }}
 />
 <PasswordPrompt
@@ -615,8 +686,12 @@
   databases={passwordPromptDatabases}
   errorMessage={passwordPromptErrorMessage}
   on:resume={handleResumeMigration}
-  on:cancel={() => (showPasswordPrompt = false)}
+  on:cancel={() => {
+    console.info("[MigrationModals][REASON] User cancelled password prompt");
+    showPasswordPrompt = false;
+  }}
 />
+<!-- [/DEF:MigrationModals:Block] -->
 <!-- [/SECTION] -->
 <!-- [/DEF:MigrationDashboardView:Block] -->


@@ -18,6 +18,7 @@ import re
 import json
 import datetime
 import fnmatch
+import argparse
 from enum import Enum
 from dataclasses import dataclass, field
 from typing import Dict, List, Optional, Any, Pattern, Tuple, Set
@@ -965,6 +966,106 @@ class SemanticMapGenerator:
         self._generate_module_map()
     # [/DEF:_generate_artifacts:Function]
+    # [DEF:_print_agent_report:Function]
+    # @TIER: STANDARD
+    # @PURPOSE: Prints a JSON report optimized for AI agent orchestration and control.
+    # @PRE: Validation and artifact generation are complete.
+    # @POST: JSON report printed to stdout.
+    def _print_agent_report(self):
+        with belief_scope("_print_agent_report"):
+            # Calculate global score (re-using logic from _generate_report)
+            total_weighted_score = 0
+            total_weight = 0
+            for file_path, data in self.file_scores.items():
+                tier = data["tier"]
+                score = data["score"]
+                weight = 3 if tier == Tier.CRITICAL else (2 if tier == Tier.STANDARD else 1)
+                total_weighted_score += score * weight
+                total_weight += weight
+            gs = total_weighted_score / total_weight if total_weight > 0 else 0
+            # Flatten the entity tree to collect per-file issues
+            file_data = {}
+            def collect_recursive(entities):
+                for e in entities:
+                    path = e.file_path
+                    if path not in file_data:
+                        file_data[path] = {"issues": [], "tier": e.get_tier().value, "score": self.file_scores.get(path, {}).get("score", 0)}
+                    file_data[path]["issues"].extend([i.to_dict() for i in e.compliance_issues])
+                    collect_recursive(e.children)
+            collect_recursive(self.entities)
+            # Critical parsing errors
+            cpe = []
+            for path, data in file_data.items():
+                for i in data["issues"]:
+                    msg = i.get("message", "").lower()
+                    sev = i.get("severity", "").lower()
+                    if "parsing" in msg and (sev == "error" or "critical" in msg):
+                        cpe.append({"file": path, "severity": i.get("severity"), "message": i.get("message")})
+            # Count files scoring below 0.7, grouped by tier
+            lt = {"CRITICAL": 0, "STANDARD": 0, "TRIVIAL": 0, "UNKNOWN": 0}
+            for path, data in file_data.items():
+                if data["score"] < 0.7:
+                    tier = data["tier"]
+                    lt[tier if tier in lt else "UNKNOWN"] += 1
+            # Priority counts
+            p2 = 0
+            p3 = 0
+            for path, data in file_data.items():
+                tier = data["tier"]
+                issues = data["issues"]
+                if tier == "CRITICAL" and any("Missing Mandatory Tag" in i.get("message", "") for i in issues):
+                    p2 += 1
+                if tier == "STANDARD" and any("@RELATION" in i.get("message", "") and "Missing Mandatory Tag" in i.get("message", "") for i in issues):
+                    p3 += 1
+            # Target files status
+            targets = [
+                'frontend/src/routes/migration/+page.svelte',
+                'frontend/src/routes/migration/mappings/+page.svelte',
+                'frontend/src/components/auth/ProtectedRoute.svelte',
+                'backend/src/core/auth/repository.py',
+                'backend/src/core/migration/risk_assessor.py',
+                'backend/src/api/routes/migration.py',
+                'backend/src/models/config.py',
+                'backend/src/services/auth_service.py',
+                'backend/src/core/config_manager.py',
+                'backend/src/core/migration_engine.py'
+            ]
+            status = []
+            for t in targets:
+                f = file_data.get(t)
+                if not f:
+                    status.append({"path": t, "found": False})
+                    continue
+                sc = f["score"]
+                status.append({
+                    "path": t,
+                    "found": True,
+                    "score": sc,
+                    "tier": f["tier"],
+                    "under_0_7": sc < 0.7,
+                    "violations": len(f["issues"]) > 0,
+                    "issues_count": len(f["issues"])
+                })
+            out = {
+                "global_score": gs,
+                "critical_parsing_errors_count": len(cpe),
+                "critical_parsing_errors": cpe[:50],
+                "lt_0_7_by_tier": lt,
+                "priority_1_blockers": len(cpe),
+                "priority_2_tier1_critical_missing_mandatory_tags_files": p2,
+                "priority_3_tier2_standard_missing_relation_files": p3,
+                "targets": status,
+                "total_files": len(file_data)
+            }
+            print(json.dumps(out, ensure_ascii=False))
+    # [/DEF:_print_agent_report:Function]
     # [DEF:_generate_report:Function]
     # @TIER: CRITICAL
     # @PURPOSE: Generates the Markdown compliance report with severity levels.
@@ -1306,7 +1407,14 @@ class SemanticMapGenerator:
 if __name__ == "__main__":
+    parser = argparse.ArgumentParser(description="Generate Semantic Map and Compliance Reports")
+    parser.add_argument("--agent-report", action="store_true", help="Output JSON report for AI agents")
+    args = parser.parse_args()
     generator = SemanticMapGenerator(PROJECT_ROOT)
     generator.run()
+    if args.agent_report:
+        generator._print_agent_report()
 # [/DEF:generate_semantic_map:Module]
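With the new `--agent-report` flag, the generator prints a single JSON object to stdout. A minimal sketch of how an orchestrating agent might consume that report — the payload below is fabricated for illustration (its key names mirror the `out` dict in the diff), and the invocation command is an assumption, not taken from this commit:

```python
import json

# Illustrative payload only, shaped like _print_agent_report's output.
# In practice `raw` would come from running the generator, e.g.
# `python generate_semantic_map.py --agent-report` (script name assumed).
raw = json.dumps({
    "global_score": 0.82,
    "critical_parsing_errors_count": 0,
    "critical_parsing_errors": [],
    "lt_0_7_by_tier": {"CRITICAL": 2, "STANDARD": 5, "TRIVIAL": 1, "UNKNOWN": 0},
    "priority_1_blockers": 0,
    "priority_2_tier1_critical_missing_mandatory_tags_files": 3,
    "priority_3_tier2_standard_missing_relation_files": 4,
    "targets": [
        {"path": "backend/src/models/config.py", "found": True, "score": 0.65,
         "tier": "CRITICAL", "under_0_7": True, "violations": True, "issues_count": 2},
    ],
    "total_files": 120,
})

report = json.loads(raw)

# Gate the agent loop on blockers first, then walk the tiered backlog.
if report["priority_1_blockers"] > 0:
    next_step = "fix_parsing_errors"
elif report["priority_2_tier1_critical_missing_mandatory_tags_files"] > 0:
    next_step = "add_mandatory_tags"
else:
    next_step = "review_targets"

# Target files that fell below the 0.7 compliance threshold.
under_threshold = [t["path"] for t in report["targets"] if t.get("under_0_7")]
print(next_step, under_threshold)
```

Because the report is a single line of JSON on stdout, it composes cleanly with shell pipelines (`... --agent-report | jq .global_score`) or a subprocess call from a controller process.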
