Compare commits


91 Commits

Author SHA1 Message Date
11e8c8e132 Finalize task-020 reports navigation and stability fixes 2026-02-23 13:28:30 +03:00
40c2e2414d semantic update 2026-02-23 13:15:48 +03:00
066ef5eab5 tasks ready 2026-02-23 10:18:56 +03:00
2946ee9b42 Fix task API stability and Playwright runtime in Docker 2026-02-21 23:43:46 +03:00
5f70a239a7 feat: restore legacy data and add typed task result views 2026-02-21 23:17:56 +03:00
d67d24e7e6 db + docker 2026-02-20 20:47:39 +03:00
01efc9dae1 semantic update 2026-02-20 10:41:15 +03:00
43814511ee few shots update 2026-02-20 10:26:01 +03:00
db47e4ce55 css refactor 2026-02-19 18:24:36 +03:00
d5a5c3b902 +Svelte specific 2026-02-19 17:47:24 +03:00
066c37087d ai base 2026-02-19 17:43:45 +03:00
b40649b9ed fix task log 2026-02-19 16:05:59 +03:00
197647d97a tests ready 2026-02-19 13:33:20 +03:00
e9e529e322 Coder + fix workflow 2026-02-19 13:33:10 +03:00
bc3ff29d2f Test logic update 2026-02-19 12:44:31 +03:00
eb8ed5da59 task panel 2026-02-19 09:43:01 +03:00
b6ae41d576 docs: amend constitution to v2.3.0 (tailwind css first principle) 2026-02-18 18:29:52 +03:00
cf42de3060 refactor 2026-02-18 17:29:46 +03:00
6062712a92 fix 2026-02-15 11:11:30 +03:00
7790a2dc51 updated specs and tasks 2026-02-10 15:53:38 +03:00
a58bef5c73 updated tasks 2026-02-10 15:04:43 +03:00
232dd947d8 linter + new tasks 2026-02-10 12:53:01 +03:00
33966548d7 Tasks ready 2026-02-09 12:35:27 +03:00
cad6e97464 semantic update 2026-02-08 22:53:54 +03:00
47a3213fb9 tasks ready 2026-02-07 12:42:32 +03:00
303d7272f8 Seems to work 2026-02-07 11:26:06 +03:00
0711ded532 feat(llm-plugin): switch to environment API for log retrieval
- Replace local backend.log reading with Superset API /log/ fetch
- Update DashboardValidationPlugin to use SupersetClient
- Filter logs by dashboard_id and last 24 hours
- Update spec FR-006 to reflect API usage
2026-02-06 17:57:25 +03:00
495857bbee Semantic protocol update - add UX 2026-01-30 18:53:52 +03:00
df7582a8db tasks ux-reference 2026-01-30 13:35:03 +03:00
3802b0af8c feat(speckit): integrate ux reference into workflows
Introduce a UX reference stage to ensure technical plans align with
user experience goals. Adds a new template, a generation step in the
specification workflow, and mandatory validation checks during
planning to prevent technical compromises from degrading the defined
user experience.
2026-01-30 12:31:19 +03:00
1702f3a5e9 Seems to work 2026-01-30 11:10:16 +03:00
83c24d4b85 tasks and workflow updated 2026-01-29 10:06:28 +03:00
dd596698e5 docs: amend constitution to v2.0.0 (delegate semantics to protocol + add async/testability principles) 2026-01-28 18:48:43 +03:00
0fee26a846 tasks ready 2026-01-28 18:30:23 +03:00
35096b5e23 semantic update 2026-01-28 16:57:19 +03:00
0299728d72 semantic protocol condense + script update 2026-01-28 15:49:39 +03:00
de6ff0d41b tested 2026-01-27 23:49:19 +03:00
260a90aac5 Handing off for testing 2026-01-27 16:32:08 +03:00
56a1508b38 tasks ready 2026-01-27 13:26:06 +03:00
7c0a601499 Updated gitignore - removed logs 2026-01-26 22:15:17 +03:00
a5b1bba226 Finished the redesign, updated the backup interface 2026-01-26 22:12:35 +03:00
8f13ed3031 Done, handed off for testing 2026-01-26 21:17:05 +03:00
305b07bf8b tasks ready 2026-01-26 20:58:38 +03:00
4e1992f489 semantic update 2026-01-26 11:57:36 +03:00
ac7a6cfadc File storage ready 2026-01-26 11:08:18 +03:00
29daebd628 Handing off for testing 2026-01-25 18:33:00 +03:00
71873b7bb3 tasks ready 2026-01-24 16:21:43 +03:00
68b25c90a8 Update .gitignore 2026-01-24 11:26:19 +03:00
e9b8794f1a Update backup scheduler task status 2026-01-24 11:26:05 +03:00
6d94d26e40 semantic cleanup 2026-01-23 21:58:32 +03:00
598dd50d1d Multi-language support + CSS cleanup 2026-01-23 17:53:46 +03:00
eacb88a0e3 tasks ready 2026-01-23 14:56:05 +03:00
10676b7029 Commit creation and transfer to a new environment now working 2026-01-23 13:57:44 +03:00
2023f6c211 tasks ready 2026-01-22 23:59:16 +03:00
2111c12d0a +gitignore 2026-01-22 23:25:29 +03:00
b46133e4c1 fix error 2026-01-22 23:18:48 +03:00
6cc2fb4c9b refactor complete 2026-01-22 17:37:17 +03:00
c406f71988 fix 2026-01-21 14:00:48 +03:00
55bdd981b1 fix(backend): standardize superset client init and auth
- Update plugins (debug, mapper, search) to explicitly map environment config to SupersetConfig
- Add authenticate method to SupersetClient for explicit session management
- Add get_environment method to ConfigManager
- Fix navbar dropdown hover stability in frontend with invisible bridge
2026-01-20 19:31:17 +03:00
15843a4607 TaskLog fix 2026-01-19 17:10:43 +03:00
8b81bb9f1f bug fixes 2026-01-19 00:07:06 +03:00
7f244a8252 bug fixes 2026-01-18 23:21:00 +03:00
c0505b4d4f semantic markup update 2026-01-18 21:29:54 +03:00
1b863bea1b semantic checker script update 2026-01-13 17:33:57 +03:00
7c6c959774 constitution update 2026-01-13 15:29:42 +03:00
554e1128b8 semantics update 2026-01-13 09:11:27 +03:00
55ca476972 tasks.md status 2026-01-12 12:35:45 +03:00
4b4d23e671 1st iter 2026-01-12 12:33:51 +03:00
e80369c8b5 tasks ready 2026-01-07 18:59:49 +03:00
ffe942c9dd docs: amend constitution to v1.6.0 (add 'Everything is a Plugin' principle) and refactor 010 plan 2026-01-07 18:36:38 +03:00
19744796e4 Product Manager role 2026-01-07 11:39:44 +03:00
a6bebe295c project map script | semantic parser 2026-01-01 16:58:21 +03:00
e2ce346b7b backup worked 2025-12-30 22:02:51 +03:00
789e5a90e3 docs ready 2025-12-30 21:30:37 +03:00
163d03e6f5 +api rework 2025-12-30 20:08:48 +03:00
169237b31b cleaned 2025-12-30 18:20:40 +03:00
45bb8c5429 Password prompt 2025-12-30 17:21:12 +03:00
17c28433bd TaskManager refactor 2025-12-29 10:13:37 +03:00
077daa0245 mappings+migrate 2025-12-27 10:16:41 +03:00
d38cda09dd tech_lead / coder 2roles 2025-12-27 08:02:59 +03:00
1a893c0bc0 semantic add 2025-12-27 07:14:08 +03:00
40ed375aa4 new loggers logic in constitution 2025-12-27 06:51:28 +03:00
5fdc92fcdf tasks ready 2025-12-27 06:37:03 +03:00
e83328b4ff Merge branch '001-migration-ui-redesign' into master 2025-12-27 05:58:35 +03:00
687f4ce565 superset_tool logger rework 2025-12-27 05:53:30 +03:00
dc9e9e0588 feat(logging): implement configurable belief state logging
- Add LoggingConfig model and logging field to GlobalSettings
- Implement belief_scope context manager for structured logging
- Add configure_logger for dynamic level and file rotation settings
- Add logging configuration UI to Settings page
- Update ConfigManager to apply logging settings on initialization and updates
2025-12-27 05:39:33 +03:00
2de3e53ab2 006 plan ready 2025-12-26 19:36:49 +03:00
40ea0580d9 001-migration-ui-redesign (#3)
Reviewed-on: #3
2025-12-26 18:17:58 +03:00
8da906738b Merge branch 'migration' into 001-migration-ui-redesign 2025-12-26 18:16:24 +03:00
d5a1c0e091 spec rules 2025-12-25 22:28:42 +03:00
ef7a0fcf92 feat(migration): implement interactive mapping resolution workflow
- Add SQLite database integration for environments and mappings
- Update TaskManager to support pausing tasks (AWAITING_MAPPING)
- Modify MigrationPlugin to detect missing mappings and wait for resolution
- Add frontend UI for handling missing mappings interactively
- Create dedicated migration routes and API endpoints
- Update .gitignore and project documentation
2025-12-25 22:27:29 +03:00
170 changed files with 61910 additions and 113479 deletions


@@ -1,35 +0,0 @@
# ss-tools Development Guidelines
Auto-generated from all feature plans. Last updated: 2026-02-25
## Knowledge Graph (GRACE)
**CRITICAL**: This project uses a GRACE Knowledge Graph for context. Always load the root map first:
- **Root Map**: `.ai/ROOT.md` -> `[DEF:Project_Knowledge_Map:Root]`
- **Project Map**: `.ai/PROJECT_MAP.md` -> `[DEF:Project_Map]`
- **Standards**: Read `.ai/standards/` for architecture and style rules.
## Active Technologies
- (022-sync-id-cross-filters)
## Project Structure
```text
src/
tests/
```
## Commands
# Add commands for
## Code Style
: Follow standard conventions
## Recent Changes
- 022-sync-id-cross-filters: Added
<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->


@@ -1,51 +0,0 @@
---
description: Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.
---
**ROLE:** Elite Quality Assurance Architect and Red Teamer.
**OBJECTIVE:** Audit AI-generated unit tests. Your goal is to aggressively search for "Test Tautologies", "Logic Echoing", and "Contract Negligence". You are the final gatekeeper. If a test is meaningless, you MUST reject it.
**INPUT:**
1. SOURCE CODE (with GRACE-Poly `[DEF]` Contract: `@PRE`, `@POST`, `@TEST_DATA`).
2. GENERATED TEST CODE.
### I. CRITICAL ANTI-PATTERNS (REJECT IMMEDIATELY IF FOUND):
1. **The Tautology (Self-Fulfilling Prophecy):**
- *Definition:* The test asserts hardcoded values against hardcoded values without executing the core business logic, or mocks the actual function being tested.
- *Example of Failure:* `assert 2 + 2 == 4` or mocking the class under test so that it returns exactly what the test asserts.
2. **The Logic Mirror (Echoing):**
- *Definition:* The test re-implements the exact same algorithmic logic found in the source code to calculate the `expected_result`. If the original logic is flawed, the test will falsely pass.
- *Rule:* Tests must assert against **static, predefined outcomes** (from `@TEST_DATA` or explicit constants), NOT dynamically calculated outcomes using the same logic as the source (see the sketch after this list).
3. **The "Happy Path" Illusion:**
- *Definition:* The test suite only checks successful executions but ignores the `@PRE` conditions (Negative Testing).
- *Rule:* Every `@PRE` tag in the source contract MUST have a corresponding test that deliberately violates it and asserts the correct Exception/Error state.
4. **Missing Post-Condition Verification:**
- *Definition:* The test calls the function but only checks the return value, ignoring `@SIDE_EFFECT` or `@POST` state changes (e.g., failing to verify that a DB call was made or a Store was updated).
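To make the first two anti-patterns concrete, here is a minimal Python sketch; `apply_discount`, its contract, and the test data are invented for illustration, not drawn from any audited codebase:
```python
import pytest

# Hypothetical function under test, carrying a GRACE-Poly contract:
# @PRE: 0 <= rate <= 1
# @POST: result == price * (1 - rate)
# @TEST_DATA: (100.0, 0.5) -> 50.0
def apply_discount(price: float, rate: float) -> float:
    if not 0 <= rate <= 1:
        raise ValueError("rate must be within [0, 1]")
    return price * (1 - rate)

# REJECT (Tautology): asserts constants against constants and never
# invokes apply_discount at all.
def test_discount_tautology():
    assert 100.0 * 0.5 == 50.0

# REJECT (Logic Mirror): recomputes the expected value with the same
# formula as the source, so a flawed formula would still pass.
def test_discount_mirror():
    assert apply_discount(100.0, 0.5) == 100.0 * (1 - 0.5)

# APPROVE: invokes the target, asserts the static @TEST_DATA outcome,
# and negatively tests the @PRE condition.
def test_discount_contract():
    assert apply_discount(100.0, 0.5) == 50.0
    with pytest.raises(ValueError):
        apply_discount(100.0, 1.5)
```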
### II. AUDIT CHECKLIST
Evaluate the test code against these criteria:
1. **Target Invocation:** Does the test actually import and call the function/component declared in the `@RELATION: VERIFIES` tag?
2. **Contract Alignment:** Does the test suite cover 100% of the `@PRE` (negative tests) and `@POST` (assertions) conditions from the source contract?
3. **Data Usage:** Does the test use the exact scenarios defined in `@TEST_DATA`?
4. **Mocking Sanity:** Are external dependencies mocked correctly WITHOUT mocking the system under test itself?
### III. OUTPUT FORMAT
You MUST respond strictly in the following JSON format. Do not add markdown blocks outside the JSON.
{
"verdict": "APPROVED" | "REJECTED",
"rejection_reason": "TAUTOLOGY" | "LOGIC_MIRROR" | "WEAK_CONTRACT_COVERAGE" | "OVER_MOCKED" | "NONE",
"audit_details": {
"target_invoked": true/false,
"pre_conditions_tested": true/false,
"post_conditions_tested": true/false,
"test_data_used": true/false
},
"feedback": "Strict, actionable feedback for the test generator agent. Explain exactly which anti-pattern was detected and how to fix it."
}


@@ -1,4 +0,0 @@
---
description: USE SEMANTIC
---
Read .specify/memory/semantics.md (or .ai/standards/semantics.md if it is not found). You MUST use it during development.


@@ -1,10 +0,0 @@
---
description: semantic
---
You are Semantic Agent responsible for maintaining the semantic integrity of the codebase. Your primary goal is to ensure that all code entities (Modules, Classes, Functions, Components) are properly annotated with semantic anchors and tags as defined in `.ai/standards/semantics.md`.
Your core responsibilities are:
1. **Semantic Mapping**: You run and maintain the `generate_semantic_map.py` script to generate up-to-date semantic maps (`semantics/semantic_map.json`, `.ai/PROJECT_MAP.md`) and compliance reports (`semantics/reports/*.md`).
2. **Compliance Auditing**: You analyze the generated compliance reports to identify files with low semantic coverage or parsing errors.
3. **Semantic Enrichment**: You actively edit code files to add missing semantic anchors (`[DEF:...]`, `[/DEF:...]`) and mandatory tags (`@PURPOSE`, `@LAYER`, etc.) to improve the global compliance score.
4. **Protocol Enforcement**: You strictly adhere to the syntax and rules defined in `.ai/standards/semantics.md` when modifying code.
You have access to the full codebase and tools to read, write, and execute scripts. You should prioritize fixing "Critical Parsing Errors" (unclosed anchors) before addressing missing metadata.
whenToUse: Use this mode when you need to update the project's semantic map, fix semantic compliance issues (missing anchors/tags/DbC), or analyze the codebase structure. This mode is specialized for maintaining the `.ai/standards/semantics.md` standards.
description: Codebase semantic mapping and compliance expert
customInstructions: Always check `semantics/reports/` for the latest compliance status before starting work. When fixing a file, try to fix all semantic issues in that file at once. After making a batch of fixes, run `python3 generate_semantic_map.py` to verify improvements.


@@ -1,184 +0,0 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.
## Operating Constraints
**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).
**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.
## Execution Steps
### 1. Initialize Analysis Context
Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md
Abort with an error message if any required file is missing (instruct the user to run missing prerequisite command).
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
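A minimal sketch of this step in Python, assuming only that the script prints a JSON object containing a `FEATURE_DIR` field as described above:
```python
import json
import subprocess
import sys
from pathlib import Path

# Run the prerequisite check once from repo root and parse its JSON output.
result = subprocess.run(
    [".specify/scripts/bash/check-prerequisites.sh",
     "--json", "--require-tasks", "--include-tasks"],
    capture_output=True, text=True, check=True,
)
ctx = json.loads(result.stdout)

feature_dir = Path(ctx["FEATURE_DIR"]).resolve()
artifacts = {name: feature_dir / f"{name}.md" for name in ("spec", "plan", "tasks")}

# Abort with an instructive message if any required file is missing.
for name, path in artifacts.items():
    if not path.is_file():
        sys.exit(f"{path} is missing; run the command that produces {name}.md first.")
```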
### 2. Load Artifacts (Progressive Disclosure)
Load only the minimal necessary context from each artifact:
**From spec.md:**
- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)
**From plan.md:**
- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints
**From tasks.md:**
- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths
**From constitution:**
- Load `.specify/memory/constitution.md` for principle validation
### 3. Build Semantic Models
Create internal representations (do not include raw artifacts in output):
- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
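For instance, the requirement key derivation might be sketched like this; the exact normalization rules are an assumption:
```python
import re

def requirement_key(phrase: str) -> str:
    """Derive a stable slug from an imperative requirement phrase."""
    slug = re.sub(r"[^a-z0-9]+", "-", phrase.lower())
    return slug.strip("-")

assert requirement_key("User can upload file") == "user-can-upload-file"
```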
### 4. Detection Passes (Token-Efficient Analysis)
Focus on high-signal findings. Limit to 50 findings total; aggregate remainder in overflow summary.
#### A. Duplication Detection
- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation
#### B. Ambiguity Detection
- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)
#### C. Underspecification
- Requirements with verbs but missing object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in spec/plan
#### D. Constitution Alignment
- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from constitution
#### E. Coverage Gaps
- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)
#### F. Inconsistency
- Terminology drift (same concept named differently across files)
- Data entities referenced in plan but absent in spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note)
- Conflicting requirements (e.g., one requires Next.js while other specifies Vue)
### 5. Severity Assignment
Use this heuristic to prioritize findings:
- **CRITICAL**: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order
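Sketched in Python; the finding attributes (`violates_constitution_must`, `category`, and so on) are illustrative assumptions rather than a fixed schema:
```python
def severity(finding: dict) -> str:
    """Map a finding to a severity tier per the heuristic above."""
    if finding.get("violates_constitution_must") or finding.get("blocks_baseline_coverage"):
        return "CRITICAL"
    if finding.get("category") in {"duplication", "conflict"} or finding.get("untestable_criterion"):
        return "HIGH"
    if finding.get("category") in {"terminology-drift", "nfr-gap", "underspecified-edge-case"}:
        return "MEDIUM"
    return "LOW"
```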
### 6. Produce Compact Analysis Report
Output a Markdown report (no file writes) with the following structure:
## Specification Analysis Report
| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
**Coverage Summary Table:**
| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|
**Constitution Alignment Issues:** (if any)
**Unmapped Tasks:** (if any)
**Metrics:**
- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
### 7. Provide Next Actions
At end of report, output a concise Next Actions block:
- If CRITICAL issues exist: Recommend resolving before `/speckit.implement`
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"
### 8. Offer Remediation
Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)
## Operating Principles
### Context Efficiency
- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into analysis
- **Token-efficient output**: Limit findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts
### Analysis Guidelines
- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit success report with coverage statistics)
## Context
$ARGUMENTS


@@ -1,294 +0,0 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---
## Checklist Purpose: "Unit Tests for English"
**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.
**NOT for verification/testing**:
- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec
**FOR requirements quality validation**:
- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when logo image fails to load?" (edge cases)
**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Execution Steps
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS list.
- All file paths must be absolute.
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
- Be generated from the user's phrasing + extracted signals from spec/plan/tasks
- Only ask about information that materially changes checklist content
- Be skipped individually if already unambiguous in `$ARGUMENTS`
- Prefer precision over breadth
Generation algorithm:
1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
3. Identify probable audience & timing (author, reviewer, QA, release) if not explicit.
4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
5. Formulate questions chosen from these archetypes:
- Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
- Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
- Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
- Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
- Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
- Scenario class gap (e.g., "No recovery flows detected—are rollback / partial failure paths in scope?")
Question formatting rules:
- If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
- Limit to AE options maximum; omit table if a free-form answer is clearer
- Never ask the user to restate what they already said
- Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."
Defaults when interaction impossible:
- Depth: Standard
- Audience: Reviewer (PR) if code-related; Author otherwise
- Focus: Top 2 relevance clusters
Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted followups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if user explicitly declines more.
3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
- Derive checklist theme (e.g., security, review, deploy, ux)
- Consolidate explicit must-have items mentioned by user
- Map focus selections to category scaffolding
- Infer any missing context from spec/plan/tasks (do NOT hallucinate)
4. **Load feature context**: Read from FEATURE_DIR:
- spec.md: Feature requirements and scope
- plan.md (if exists): Technical details, dependencies
- tasks.md (if exists): Implementation tasks
**Context Loading Strategy**:
- Load only necessary portions relevant to active focus areas (avoid full-file dumping)
- Prefer summarizing long sections into concise scenario/requirement bullets
- Use progressive disclosure: add follow-on retrieval only if gaps detected
- If source docs are large, generate interim summary items instead of embedding raw text
5. **Generate checklist** - Create "Unit Tests for Requirements":
- Create `FEATURE_DIR/checklists/` directory if it doesn't exist
- Generate unique checklist filename:
- Use short, descriptive name based on domain (e.g., `ux.md`, `api.md`, `security.md`)
- Format: `[domain].md`
- If file exists, append to existing file
- Number items sequentially starting from CHK001 (see the ID sketch after this list)
- Each `/speckit.checklist` run creates a NEW file (never overwrites existing checklists)
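One way to keep IDs globally incrementing when appending, sketched under the file-layout assumptions above:
```python
import re
from pathlib import Path

def next_chk_id(checklist: Path) -> str:
    """Continue numbering from the highest CHK### already in the file."""
    existing = re.findall(r"CHK(\d{3})", checklist.read_text()) if checklist.exists() else []
    return f"CHK{max(map(int, existing), default=0) + 1:03d}"

# next_chk_id(Path("checklists/ux.md")) -> "CHK001" for a new file,
# or "CHK013" when CHK012 is the highest existing item.
```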
**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:
Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:
- **Completeness**: Are all necessary requirements present?
- **Clarity**: Are requirements unambiguous and specific?
- **Consistency**: Do requirements align with each other?
- **Measurability**: Can requirements be objectively verified?
- **Coverage**: Are all scenarios/edge cases addressed?
**Category Structure** - Group items by requirement quality dimensions:
- **Requirement Completeness** (Are all necessary requirements documented?)
- **Requirement Clarity** (Are requirements specific and unambiguous?)
- **Requirement Consistency** (Do requirements align without conflicts?)
- **Acceptance Criteria Quality** (Are success criteria measurable?)
- **Scenario Coverage** (Are all flows/cases addressed?)
- **Edge Case Coverage** (Are boundary conditions defined?)
- **Non-Functional Requirements** (Performance, Security, Accessibility, etc. - are they specified?)
- **Dependencies & Assumptions** (Are they documented and validated?)
- **Ambiguities & Conflicts** (What needs clarification?)
**HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:
**WRONG** (Testing implementation):
- "Verify landing page displays 3 episode cards"
- "Test hover states work on desktop"
- "Confirm logo click navigates home"
**CORRECT** (Testing requirements quality):
- "Are the exact number and layout of featured episodes specified?" [Completeness]
- "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
- "Are hover state requirements consistent across all interactive elements?" [Consistency]
- "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
- "Is the fallback behavior specified when logo image fails to load?" [Edge Cases]
- "Are loading states defined for asynchronous episode data?" [Completeness]
- "Does the spec define visual hierarchy for competing UI elements?" [Clarity]
**ITEM STRUCTURE**:
Each item should follow this pattern:
- Question format asking about requirement quality
- Focus on what's WRITTEN (or not written) in the spec/plan
- Include quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
- Reference spec section `[Spec §X.Y]` when checking existing requirements
- Use `[Gap]` marker when checking for missing requirements
**EXAMPLES BY QUALITY DIMENSION**:
Completeness:
- "Are error handling requirements defined for all API failure modes? [Gap]"
- "Are accessibility requirements specified for all interactive elements? [Completeness]"
- "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"
Clarity:
- "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
- "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
- "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"
Consistency:
- "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
- "Are card component requirements consistent between landing and detail pages? [Consistency]"
Coverage:
- "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
- "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
- "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"
Measurability:
- "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
- "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"
**Scenario Classification & Coverage** (Requirements Quality Focus):
- Check if requirements exist for: Primary, Alternate, Exception/Error, Recovery, Non-Functional scenarios
- For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
- If scenario class missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
- Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"
**Traceability Requirements**:
- MINIMUM: ≥80% of items MUST include at least one traceability reference
- Each item should reference: spec section `[Spec §X.Y]`, or use markers: `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
- If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
**Surface & Resolve Issues** (Requirements Quality Problems):
Ask questions about the requirements themselves:
- Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
- Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
- Assumptions: "Is the assumption of 'always available podcast API' validated? [Assumption]"
- Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
- Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"
**Content Consolidation**:
- Soft cap: If raw candidate items > 40, prioritize by risk/impact
- Merge near-duplicates checking the same requirement aspect
- If >5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"
**🚫 ABSOLUTELY PROHIBITED** - These make it an implementation test, not a requirements test:
- ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
- ❌ References to code execution, user actions, system behavior
- ❌ "Displays correctly", "works properly", "functions as expected"
- ❌ "Click", "navigate", "render", "load", "execute"
- ❌ Test cases, test plans, QA procedures
- ❌ Implementation details (frameworks, APIs, algorithms)
**✅ REQUIRED PATTERNS** - These test requirements quality:
- ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
- ✅ "Is [vague term] quantified/clarified with specific criteria?"
- ✅ "Are requirements consistent between [section A] and [section B]?"
- ✅ "Can [requirement] be objectively measured/verified?"
- ✅ "Are [edge cases/scenarios] addressed in requirements?"
- ✅ "Does the spec define [missing aspect]?"
6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for title, meta section, category headings, and ID formatting. If template is unavailable, use: H1 title, purpose/created meta lines, `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.
7. **Report**: Output full path to created checklist, item count, and remind user that each run creates a new file. Summarize:
- Focus areas selected
- Depth level
- Actor/timing
- Any explicit user-specified must-have items incorporated
**Important**: Each `/speckit.checklist` command invocation creates a checklist file using short, descriptive names unless file already exists. This allows:
- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder
To avoid clutter, use descriptive types and clean up obsolete checklists when done.
## Example Checklist Types & Sample Items
**UX Requirements Quality:** `ux.md`
Sample items (testing the requirements, NOT the implementation):
- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"
**API Requirements Quality:** `api.md`
Sample items:
- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is versioning strategy documented in requirements? [Gap]"
**Performance Requirements Quality:** `performance.md`
Sample items:
- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"
**Security Requirements Quality:** `security.md`
Sample items:
- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"
## Anti-Examples: What NOT To Do
**❌ WRONG - These test implementation, not requirements:**
```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```
**✅ CORRECT - These test requirements quality:**
```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Is the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```
**Key Differences:**
- Wrong: Tests if the system works correctly
- Correct: Tests if the requirements are written correctly
- Wrong: Verification of behavior
- Correct: Validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"


@@ -1,181 +0,0 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
handoffs:
- label: Build Technical Plan
agent: speckit.plan
prompt: Create a plan for the spec. I am building with...
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.
Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.
Execution steps:
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse minimal JSON payload fields:
- `FEATURE_DIR`
- `FEATURE_SPEC`
- (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
- If JSON parsing fails, abort and instruct user to re-run `/speckit.specify` or verify feature branch environment.
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).
Functional Scope & Behavior:
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation
Domain & Data Model:
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions
Interaction & UX Flow:
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization notes
Non-Functional Quality Attributes:
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery expectations)
- Observability (logging, metrics, tracing signals)
- Security & privacy (authN/Z, data protection, threat assumptions)
- Compliance / regulatory constraints (if any)
Integration & External Dependencies:
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions
Edge Cases & Failure Handling:
- Negative scenarios
- Rate limiting / throttling
- Conflict resolution (e.g., concurrent edits)
Constraints & Tradeoffs:
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives
Terminology & Consistency:
- Canonical glossary terms
- Avoided synonyms / deprecated terms
Completion Signals:
- Acceptance criteria testability
- Measurable Definition of Done style indicators
Misc / Placeholders:
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive") lacking quantification
For each category with Partial or Missing status, add a candidate question opportunity unless:
- Clarification would not materially change implementation or validation strategy
- Information is better deferred to planning phase (note internally)
3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 5 total questions across the whole session (retries of a single question do not count as new questions).
- Each question must be answerable with EITHER:
- A short multiple-choice selection (2-5 distinct, mutually exclusive options), OR
- A one-word / short-phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
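A sketch of that selection step; the 1-5 scoring scale for impact and uncertainty is an assumption:
```python
def top_questions(candidates: list[dict], limit: int = 5) -> list[dict]:
    """Pick the highest-value clarifications by Impact * Uncertainty."""
    return sorted(
        candidates,
        key=lambda q: q["impact"] * q["uncertainty"],
        reverse=True,
    )[:limit]
```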
4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
- For multiple-choice questions:
- **Analyze all options** and determine the **most suitable option** based on:
- Best practices for the project type
- Common patterns in similar implementations
- Risk reduction (security, performance, maintainability)
- Alignment with any explicit project goals or constraints visible in the spec
- Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
- Format as: `**Recommended:** Option [X] - <reasoning>`
- Then render all options as a Markdown table:
| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed up to 5) |
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
- After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
- For short-answer style (no meaningful discrete options):
- Provide your **suggested answer** based on best practices and context.
- Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
- Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
- After the user answers:
- If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
- Otherwise, validate the answer maps to one option or fits the <=5 word constraint.
- If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
- Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
- All critical ambiguities resolved early (remaining queued items become unnecessary), OR
- User signals completion ("done", "good", "no more"), OR
- You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
- Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
- Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>` (see the sketch after this list).
- Then immediately apply the clarification to the most appropriate section(s):
- Functional ambiguity → Update or add a bullet in Functional Requirements.
- User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
- Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
- Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
- Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
- Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
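As a sketch, after one accepted answer the spec might gain a section like the following; the question and answer are invented for illustration:
```markdown
## Clarifications

### Session 2026-02-25

- Q: Should migration pause when a mapping is missing? → A: Yes, enter AWAITING_MAPPING
```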
6. Validation (performed after EACH write plus final pass):
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
- Total asked (accepted) questions ≤ 5.
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
- Terminology consistency: same canonical term used across all updated sections.
7. Write the updated spec back to `FEATURE_SPEC`.
8. Report completion (after questioning loop ends or early termination):
- Number of questions asked & answered.
- Path to updated spec.
- Sections touched (list names).
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- If any Outstanding or Deferred remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
- Suggested next command.
Behavior rules:
- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.
Context for prioritization: $ARGUMENTS


@@ -1,84 +0,0 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
- label: Build Specification
agent: speckit.specify
prompt: Implement the feature specification based on the updated constitution. I want to build...
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.
**Note**: If `.specify/memory/constitution.md` does not exist yet, it should have been initialized from `.specify/templates/constitution-template.md` during project setup. If it's missing, copy the template first.
Follow this execution flow:
1. Load the existing constitution at `.specify/memory/constitution.md`.
- Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.
**IMPORTANT**: The user might require less or more principles than the ones used in the template. If a number is specified, respect that - follow the general template. You will update the doc accordingly.
2. Collect/derive values for placeholders:
- If user input (conversation) supplies a value, use it.
- Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
- For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown ask or mark TODO), `LAST_AMENDED_DATE` is today if changes are made, otherwise keep previous.
- `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
- MAJOR: Backward incompatible governance/principle removals or redefinitions.
- MINOR: New principle/section added or materially expanded guidance.
- PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
- If version bump type ambiguous, propose reasoning before finalizing.
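A sketch of the bump decision; the boolean change-classification flags are assumptions about how the analysis is recorded:
```python
def bump(version: str, *, removed_or_redefined: bool, added_or_expanded: bool) -> str:
    """Apply the semantic-versioning rules above to a constitution version."""
    major, minor, patch = map(int, version.split("."))
    if removed_or_redefined:   # backward-incompatible governance change
        return f"{major + 1}.0.0"
    if added_or_expanded:      # new principle or materially expanded guidance
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # clarifications, wording, typos

assert bump("2.2.0", removed_or_redefined=False, added_or_expanded=True) == "2.3.0"
```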
3. Draft the updated constitution content:
- Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet—explicitly justify any left).
- Preserve heading hierarchy and comments can be removed once replaced unless they still add clarifying guidance.
- Ensure each Principle section: succinct name line, paragraph (or bullet list) capturing nonnegotiable rules, explicit rationale if not obvious.
- Ensure Governance section lists amendment procedure, versioning policy, and compliance review expectations.
4. Consistency propagation checklist (convert prior checklist into active validations):
- Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
- Read `.specify/templates/spec-template.md` for scope/requirements alignment—update if constitution adds/removes mandatory sections or constraints.
- Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
- Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
- Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to principles changed.
5. Produce a Sync Impact Report (prepend as an HTML comment at top of the constitution file after update):
- Version change: old → new
- List of modified principles (old title → new title if renamed)
- Added sections
- Removed sections
- Templates requiring updates (✅ updated / ⚠ pending) with file paths
- Follow-up TODOs if any placeholders intentionally deferred.
6. Validation before final output:
- No remaining unexplained bracket tokens.
- Version line matches report.
- Dates ISO format YYYY-MM-DD.
- Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD rationale where appropriate).
7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).
8. Output a final summary to the user with:
- New version and bump rationale.
- Any files flagged for manual follow-up.
- Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).
Formatting & Style Requirements:
- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.
If the user supplies partial updates (e.g., only one principle revision), still perform validation and version decision steps.
If critical info missing (e.g., ratification date truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include in the Sync Impact Report under deferred items.
Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.


@@ -1,199 +0,0 @@
---
description: Fix failing tests and implementation issues based on test reports
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Analyze test failure reports, identify root causes, and fix implementation issues while preserving semantic protocol compliance.
## Operating Constraints
1. **USE CODER MODE**: Always switch to `coder` mode for code fixes
2. **SEMANTIC PROTOCOL**: Never remove semantic annotations ([DEF], @TAGS). Only update code logic.
3. **TEST DATA**: If tests use @TEST_DATA fixtures, preserve them when fixing
4. **NO DELETION**: Never delete existing tests or semantic annotations
5. **REPORT FIRST**: Always write a fix report before making changes
## Execution Steps
### 1. Load Test Report
**Required**: Test report file path (e.g., `specs/<feature>/tests/reports/2026-02-19-report.md`)
**Parse the report for**:
- Failed test cases
- Error messages
- Stack traces
- Expected vs actual behavior
- Affected modules/files
### 2. Analyze Root Causes
For each failed test:
1. **Read the test file** to understand what it's testing
2. **Read the implementation file** to find the bug
3. **Check semantic protocol compliance**:
- Does the implementation have correct [DEF] anchors?
- Are @TAGS (@PRE, @POST, @UX_STATE, etc.) present?
- Does the code match the TIER requirements?
4. **Identify the fix**:
- Logic error in implementation
- Missing error handling
- Incorrect API usage
- State management issue
### 3. Write Fix Report
Create a structured fix report:
```markdown
# Fix Report: [FEATURE]
**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent
## Summary
- Total Failed Tests: [X]
- Total Fixed: [X]
- Total Skipped: [X]
## Failed Tests Analysis
### Test: [Test Name]
**File**: `path/to/test.py`
**Error**: [Error message]
**Root Cause**: [Explanation of why test failed]
**Fix Required**: [Description of fix]
**Status**: [Pending/In Progress/Completed]
## Fixes Applied
### Fix 1: [Description]
**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**:
```diff
<<<<<<< SEARCH
[Original Code]
=======
[Fixed Code]
>>>>>>> REPLACE
```
**Verification**: [How to verify fix works]
**Semantic Integrity**: [Confirmed annotations preserved]
## Next Steps
- [ ] Run tests to verify fix: `cd backend && .venv/bin/python3 -m pytest`
- [ ] Check for related failing tests
- [ ] Update test documentation if needed
```
### 4. Apply Fixes (in Coder Mode)
Switch to `coder` mode and apply fixes:
1. **Read the implementation file** to get exact content
2. **Apply the fix** using apply_diff
3. **Preserve all semantic annotations**:
- Keep [DEF:...] and [/DEF:...] anchors
- Keep all @TAGS (@PURPOSE, @LAYER, @TIER, @RELATION, @PRE, @POST, @UX_STATE, @UX_FEEDBACK, @UX_RECOVERY)
4. **Only update code logic** to fix the bug
5. **Run tests** to verify the fix
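To illustrate what "only update code logic" means in practice, a before/after sketch; the function and its annotations are hypothetical, following the tag syntax shown in this document:
```python
from datetime import timedelta

# [DEF:backup.scheduler.next_run:Function]
# @PURPOSE: Compute the next scheduled backup run time
# @LAYER: backend
# @PRE: interval_minutes > 0
# @POST: result > now
def next_run(now, interval_minutes):
    # The fix touches only this line: the buggy version added seconds
    # instead of minutes. Every annotation above is left untouched.
    return now + timedelta(minutes=interval_minutes)
# [/DEF:backup.scheduler.next_run]
```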
### 5. Verification
After applying fixes:
1. **Run tests**:
```bash
cd backend && .venv/bin/python3 -m pytest -v
```
or
```bash
cd frontend && npm run test
```
2. **Check test results**:
- Failed tests should now pass
- No new tests should fail
- Coverage should not decrease
3. **Update fix report** with results:
- Mark fixes as completed
- Add verification steps
- Note any remaining issues
## Output
Generate final fix report:
```markdown
# Fix Report: [FEATURE] - COMPLETED
**Date**: [YYYY-MM-DD]
**Report**: [Test Report Path]
**Fixer**: Coder Agent
## Summary
- Total Failed Tests: [X]
- Total Fixed: [X] ✅
- Total Skipped: [X]
## Fixes Applied
### Fix 1: [Description] ✅
**Affected File**: `path/to/file.py`
**Test Affected**: `[Test Name]`
**Changes**: [Summary of changes]
**Verification**: All tests pass ✅
**Semantic Integrity**: Preserved ✅
## Test Results
```
[Full test output showing all passing tests]
```
## Recommendations
- [ ] Monitor for similar issues
- [ ] Update documentation if needed
- [ ] Consider adding more tests for edge cases
## Related Files
- Test Report: [path]
- Implementation: [path]
- Test File: [path]
```
## Context for Fixing
$ARGUMENTS


@@ -1,135 +0,0 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
- Scan all checklist files in the checklists/ directory
- For each checklist, count:
- Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
- Completed items: Lines matching `- [X]` or `- [x]`
- Incomplete items: Lines matching `- [ ]`
- Create a status table:
```text
| Checklist | Total | Completed | Incomplete | Status |
|-----------|-------|-----------|------------|--------|
| ux.md | 12 | 12 | 0 | ✓ PASS |
| test.md | 8 | 5 | 3 | ✗ FAIL |
| security.md | 6 | 6 | 0 | ✓ PASS |
```
- Calculate overall status:
- **PASS**: All checklists have 0 incomplete items
- **FAIL**: One or more checklists have incomplete items
- **If any checklist is incomplete**:
- Display the table with incomplete item counts
- **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
- Wait for user response before continuing
- If user says "no" or "wait" or "stop", halt execution
- If user says "yes" or "proceed" or "continue", proceed to step 3
- **If all checklists are complete**:
- Display the table showing all checklists passed
- Automatically proceed to step 3
3. Load and analyze the implementation context:
- **REQUIRED**: Read tasks.md for the complete task list and execution plan
- **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
- **IF EXISTS**: Read data-model.md for entities and relationships
- **IF EXISTS**: Read contracts/ for API specifications and test requirements
- **IF EXISTS**: Read research.md for technical decisions and constraints
- **IF EXISTS**: Read quickstart.md for integration scenarios
4. **Project Setup Verification**:
- **REQUIRED**: Create/verify ignore files based on actual project setup:
**Detection & Creation Logic**:
- Check if the following command succeeds to determine if the repository is a git repo (create/verify .gitignore if so):
```sh
git rev-parse --git-dir 2>/dev/null
```
- Check if Dockerfile* exists or Docker in plan.md → create/verify .dockerignore
- Check if .eslintrc* exists → create/verify .eslintignore
- Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
- Check if .prettierrc* exists → create/verify .prettierignore
- Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
- Check if terraform files (*.tf) exist → create/verify .terraformignore
- Check if .helmignore needed (helm charts present) → create/verify .helmignore
**If ignore file already exists**: Verify it contains essential patterns, append missing critical patterns only
**If ignore file missing**: Create with full pattern set for detected technology
**Common Patterns by Technology** (from plan.md tech stack):
- **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
- **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
- **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
- **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
- **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
- **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
- **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
- **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
- **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
- **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
- **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `Makefile`, `config.log`, `.idea/`, `*.log`, `.env*`
- **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
- **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
- **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`
**Tool-Specific Patterns**:
- **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
- **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
- **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
- **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
- **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
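A sketch of the append-missing-patterns-only rule above (helper name and pattern list are illustrative):
```python
from pathlib import Path

PYTHON_PATTERNS = ["__pycache__/", "*.pyc", ".venv/", "dist/", "*.egg-info/"]

def ensure_ignore_patterns(ignore_file: Path, patterns: list[str]) -> None:
    """Create the ignore file, or append only the patterns it is missing."""
    existing = set()
    if ignore_file.exists():
        existing = {line.strip() for line in ignore_file.read_text().splitlines()}
    missing = [p for p in patterns if p not in existing]
    if missing:
        # Note: assumes an existing file ends with a newline.
        with ignore_file.open("a") as f:
            f.write("\n".join(missing) + "\n")

ensure_ignore_patterns(Path(".gitignore"), PYTHON_PATTERNS)
```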
5. Parse tasks.md structure and extract:
- **Task phases**: Setup, Tests, Core, Integration, Polish
- **Task dependencies**: Sequential vs parallel execution rules
- **Task details**: ID, description, file paths, parallel markers [P]
- **Execution flow**: Order and dependency requirements
6. Execute implementation following the task plan:
- **Phase-by-phase execution**: Complete each phase before moving to the next
- **Respect dependencies**: Run sequential tasks in order, parallel tasks [P] can run together
- **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
- **File-based coordination**: Tasks affecting the same files must run sequentially
- **Validation checkpoints**: Verify each phase completion before proceeding
7. Implementation execution rules:
- **Setup first**: Initialize project structure, dependencies, configuration
- **Tests before code**: If tests are requested, write them for contracts, entities, and integration scenarios before the corresponding implementation
- **Core development**: Implement models, services, CLI commands, endpoints
- **Integration work**: Database connections, middleware, logging, external services
- **Polish and validation**: Unit tests, performance optimization, documentation
8. Progress tracking and error handling:
- Report progress after each completed task
- Halt execution if any non-parallel task fails
- For parallel tasks [P], continue with successful tasks, report failed ones
- Provide clear error messages with context for debugging
- Suggest next steps if implementation cannot proceed
- **IMPORTANT** For completed tasks, make sure to mark the task off as [X] in the tasks file.
9. Completion validation:
- Verify all required tasks are completed
- Check that implemented features match the original specification
- Validate that tests pass and coverage meets requirements
- Confirm the implementation follows the technical plan
- Report final status with summary of completed work
Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.

View File

@@ -1,90 +0,0 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
- label: Create Tasks
agent: speckit.tasks
prompt: Break the plan into tasks
send: true
- label: Create Checklist
agent: speckit.checklist
prompt: Create a checklist for the following domain...
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load IMPL_PLAN template (already copied).
3. **Execute plan workflow**: Follow the structure in IMPL_PLAN template to:
- Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
- Fill Constitution Check section from constitution
- Evaluate gates (ERROR if violations unjustified)
- Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
- Phase 1: Generate data-model.md, contracts/, quickstart.md
- Phase 1: Update agent context by running the agent script
- Re-evaluate Constitution Check post-design
4. **Stop and report**: Command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.
## Phases
### Phase 0: Outline & Research
1. **Extract unknowns from Technical Context** above:
- For each NEEDS CLARIFICATION → research task
- For each dependency → best practices task
- For each integration → patterns task
2. **Generate and dispatch research agents**:
```text
For each unknown in Technical Context:
Task: "Research {unknown} for {feature context}"
For each technology choice:
Task: "Find best practices for {tech} in {domain}"
```
3. **Consolidate findings** in `research.md` using format:
- Decision: [what was chosen]
- Rationale: [why chosen]
- Alternatives considered: [what else evaluated]
**Output**: research.md with all NEEDS CLARIFICATION resolved
### Phase 1: Design & Contracts
**Prerequisites:** `research.md` complete
1. **Extract entities from feature spec** → `data-model.md`:
- Entity name, fields, relationships
- Validation rules from requirements
- State transitions if applicable
2. **Define interface contracts** (if project has external interfaces) → `/contracts/`:
- Identify what interfaces the project exposes to users or other systems
- Document the contract format appropriate for the project type
- Examples: public APIs for libraries, command schemas for CLI tools, endpoints for web services, grammars for parsers, UI contracts for applications
- Skip if project is purely internal (build scripts, one-off tools, etc.)
3. **Agent context update**:
- Run `.specify/scripts/bash/update-agent-context.sh agy`
- This script detects which AI agent is in use
- Update the appropriate agent-specific context file
- Add only new technology from current plan
- Preserve manual additions between markers
**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file
## Key rules
- Use absolute paths
- ERROR on gate failures or unresolved clarifications

View File

@@ -1,258 +0,0 @@
---
description: Create or update the feature specification from a natural language feature description.
handoffs:
- label: Build Technical Plan
agent: speckit.plan
prompt: Create a plan for the spec. I am building with...
- label: Clarify Spec Requirements
agent: speckit.clarify
prompt: Clarify specification requirements
send: true
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
Given that feature description, do this:
1. **Generate a concise short name** (2-4 words) for the branch:
- Analyze the feature description and extract the most meaningful keywords
- Create a 2-4 word short name that captures the essence of the feature
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
- Keep it concise but descriptive enough to understand the feature at a glance
- Examples:
- "I want to add user authentication" → "user-auth"
- "Implement OAuth2 integration for the API" → "oauth2-api-integration"
- "Create a dashboard for analytics" → "analytics-dashboard"
- "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Check for existing branches before creating new one**:
a. First, fetch all remote branches to ensure we have the latest information:
```bash
git fetch --all --prune
```
b. Find the highest feature number across all sources for the short-name:
- Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
- Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
- Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`
c. Determine the next available number:
- Extract all numbers from all three sources
- Find the highest number N
- Use N+1 for the new branch number
d. Run the script `.specify/scripts/bash/create-new-feature.sh --json "$ARGUMENTS"` with the calculated number and short-name:
- Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
- Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
- PowerShell example: `.specify/scripts/bash/create-new-feature.sh -Json -Number 5 -ShortName "user-auth" "Add user authentication"`
**IMPORTANT**:
- Check all three sources (remote branches, local branches, specs directories) to find the highest number
- Only match branches/directories with the exact short-name pattern
- If no existing branches/directories found with this short-name, start with number 1
- You must only ever run this script once per feature
- The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
- The JSON output will contain BRANCH_NAME and SPEC_FILE paths
- For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
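A compact sketch of the numbering rule from step 2b-c (the input list is illustrative):
```python
import re

def next_feature_number(candidates: list[str], short_name: str) -> int:
    """Highest N across branches/dirs matching '<N>-<short-name>', plus one."""
    pattern = re.compile(rf"(?:^|/)(\d+)-{re.escape(short_name)}$")
    numbers = [int(m.group(1)) for c in candidates if (m := pattern.search(c))]
    return max(numbers, default=0) + 1  # no matches -> start with 1

sources = ["refs/heads/3-user-auth", "specs/5-user-auth", "2-user-auth"]
print(next_feature_number(sources, "user-auth"))  # 6
```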
3. Load `.specify/templates/spec-template.md` to understand required sections.
4. Follow this execution flow:
1. Parse user description from Input
If empty: ERROR "No feature description provided"
2. Extract key concepts from description
Identify: actors, actions, data, constraints
3. For unclear aspects:
- Make informed guesses based on context and industry standards
- Only mark with [NEEDS CLARIFICATION: specific question] if:
- The choice significantly impacts feature scope or user experience
- Multiple reasonable interpretations exist with different implications
- No reasonable default exists
- **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
- Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
4. Fill User Scenarios & Testing section
If no clear user flow: ERROR "Cannot determine user scenarios"
5. Generate Functional Requirements
Each requirement must be testable
Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
6. Define Success Criteria
Create measurable, technology-agnostic outcomes
Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
Each criterion must be verifiable without implementation details
7. Identify Key Entities (if data involved)
8. Return: SUCCESS (spec ready for planning)
5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:
a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:
```markdown
# Specification Quality Checklist: [FEATURE NAME]
**Purpose**: Validate specification completeness and quality before proceeding to planning
**Created**: [DATE]
**Feature**: [Link to spec.md]
## Content Quality
- [ ] No implementation details (languages, frameworks, APIs)
- [ ] Focused on user value and business needs
- [ ] Written for non-technical stakeholders
- [ ] All mandatory sections completed
## Requirement Completeness
- [ ] No [NEEDS CLARIFICATION] markers remain
- [ ] Requirements are testable and unambiguous
- [ ] Success criteria are measurable
- [ ] Success criteria are technology-agnostic (no implementation details)
- [ ] All acceptance scenarios are defined
- [ ] Edge cases are identified
- [ ] Scope is clearly bounded
- [ ] Dependencies and assumptions identified
## Feature Readiness
- [ ] All functional requirements have clear acceptance criteria
- [ ] User scenarios cover primary flows
- [ ] Feature meets measurable outcomes defined in Success Criteria
- [ ] No implementation details leak into specification
## Notes
- Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
```
b. **Run Validation Check**: Review the spec against each checklist item:
- For each item, determine if it passes or fails
- Document specific issues found (quote relevant spec sections)
c. **Handle Validation Results**:
- **If all items pass**: Mark checklist complete and proceed to step 7
- **If items fail (excluding [NEEDS CLARIFICATION])**:
1. List the failing items and specific issues
2. Update the spec to address each issue
3. Re-run validation until all items pass (max 3 iterations)
4. If still failing after 3 iterations, document remaining issues in checklist notes and warn user
- **If [NEEDS CLARIFICATION] markers remain**:
1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
3. For each clarification needed (max 3), present options to user in this format:
```markdown
## Question [N]: [Topic]
**Context**: [Quote relevant spec section]
**What we need to know**: [Specific question from NEEDS CLARIFICATION marker]
**Suggested Answers**:
| Option | Answer | Implications |
|--------|--------|--------------|
| A | [First suggested answer] | [What this means for the feature] |
| B | [Second suggested answer] | [What this means for the feature] |
| C | [Third suggested answer] | [What this means for the feature] |
| Custom | Provide your own answer | [Explain how to provide custom input] |
**Your choice**: _[Wait for user response]_
```
4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
- Use consistent spacing with pipes aligned
- Each cell should have spaces around content: `| Content |` not `|Content|`
- Header separator must have at least 3 dashes: `|--------|`
- Test that the table renders correctly in markdown preview
5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
6. Present all questions together before waiting for responses
7. Wait for user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
9. Re-run validation after all clarifications are resolved
d. **Update Checklist**: After each validation iteration, update the checklist file with current pass/fail status
7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).
**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.
## Quick Guidelines
- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Written for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.
### Section Requirements
- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation
When creating this spec from a user prompt:
1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
- Significantly impact feature scope or user experience
- Have multiple reasonable interpretations with different implications
- Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
- Feature scope and boundaries (include/exclude specific use cases)
- User types and permissions (if multiple conflicting interpretations possible)
- Security/compliance requirements (when legally/financially significant)
**Examples of reasonable defaults** (don't ask about these):
- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: Use project-appropriate patterns (REST/GraphQL for web services, function calls for libraries, CLI args for tools, etc.)
### Success Criteria Guidelines
Success criteria must be:
1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details
**Good examples**:
- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"
**Bad examples** (implementation-focused):
- "API response time is under 200ms" (too technical, use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail, use user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)

View File

@@ -1,137 +0,0 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
- label: Analyze For Consistency
agent: speckit.analyze
prompt: Run a project analysis for consistency
send: true
- label: Implement Project
agent: speckit.implement
prompt: Start the implementation in phases
send: true
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Load design documents**: Read from FEATURE_DIR:
- **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
- **Optional**: data-model.md (entities), contracts/ (interface contracts), research.md (decisions), quickstart.md (test scenarios)
- Note: Not all projects have all documents. Generate tasks based on what's available.
3. **Execute task generation workflow**:
- Load plan.md and extract tech stack, libraries, project structure
- Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
- If data-model.md exists: Extract entities and map to user stories
- If contracts/ exists: Map interface contracts to user stories
- If research.md exists: Extract decisions for setup tasks
- Generate tasks organized by user story (see Task Generation Rules below)
- Generate dependency graph showing user story completion order
- Create parallel execution examples per user story
- Validate task completeness (each user story has all needed tasks, independently testable)
4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as structure, fill with:
- Correct feature name from plan.md
- Phase 1: Setup tasks (project initialization)
- Phase 2: Foundational tasks (blocking prerequisites for all user stories)
- Phase 3+: One phase per user story (in priority order from spec.md)
- Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
- Final Phase: Polish & cross-cutting concerns
- All tasks must follow the strict checklist format (see Task Generation Rules below)
- Clear file paths for each task
- Dependencies section showing story completion order
- Parallel execution examples per story
- Implementation strategy section (MVP first, incremental delivery)
5. **Report**: Output path to generated tasks.md and summary:
- Total task count
- Task count per user story
- Parallel opportunities identified
- Independent test criteria for each story
- Suggested MVP scope (typically just User Story 1)
- Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)
Context for task generation: $ARGUMENTS
The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
## Task Generation Rules
**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.
**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if user requests TDD approach.
### Checklist Format (REQUIRED)
Every task MUST strictly follow this format:
```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```
**Format Components**:
1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
- Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
- Setup phase: NO story label
- Foundational phase: NO story label
- User Story phases: MUST have story label
- Polish phase: NO story label
5. **Description**: Clear action with exact file path
**Examples**:
- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
### Task Organization
1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
- Each user story (P1, P2, P3...) gets its own phase
- Map all related components to their story:
- Models needed for that story
- Services needed for that story
- Interfaces/UI needed for that story
- If tests requested: Tests specific to that story
- Mark story dependencies (most stories should be independent)
2. **From Contracts**:
- Map each interface contract → to the user story it serves
- If tests requested: Each interface contract → contract test task [P] before implementation in that story's phase
3. **From Data Model**:
- Map each entity to the user story(ies) that need it
- If entity serves multiple stories: Put in earliest story or Setup phase
- Relationships → service layer tasks in appropriate story phase
4. **From Setup/Infrastructure**:
- Shared infrastructure → Setup phase (Phase 1)
- Foundational/blocking tasks → Foundational phase (Phase 2)
- Story-specific setup → within that story's phase
### Phase Structure
- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
- Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
- Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns

View File

@@ -1,30 +0,0 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Outline
1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
1. From the executed script, extract the path to **tasks**.
1. Get the Git remote by running:
```bash
git config --get remote.origin.url
```
> [!CAUTION]
> ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL
1. For each task in the list, use the GitHub MCP server to create a new issue in the repository that is representative of the Git remote.
> [!CAUTION]
> UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
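A hedged Python sketch of the GitHub-remote guard above (URL matching is simplified):
```python
import re
import subprocess

def is_github_remote() -> bool:
    url = subprocess.run(
        ["git", "config", "--get", "remote.origin.url"],
        capture_output=True, text=True, check=False,
    ).stdout.strip()
    # Accept https://github.com/... and git@github.com:... forms only.
    return bool(re.match(r"^(https://|git@)github\.com[:/]", url))

if not is_github_remote():
    raise SystemExit("Remote is not a GitHub URL; refusing to create issues.")
```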

View File

@@ -1,178 +0,0 @@
---
description: Generate tests, manage test documentation, and ensure maximum code coverage
---
## User Input
```text
$ARGUMENTS
```
You **MUST** consider the user input before proceeding (if not empty).
## Goal
Execute full testing cycle: analyze code for testable modules, write tests with proper coverage, maintain test documentation, and ensure no test duplication or deletion.
## Operating Constraints
1. **NEVER delete existing tests** - Only update if they fail due to bugs in the test or implementation
2. **NEVER duplicate tests** - Check existing tests first before creating new ones
3. **Use TEST_DATA fixtures** - For CRITICAL tier modules, read @TEST_DATA from .specify/memory/semantics.md
4. **Co-location required** - Write tests in `__tests__` directories relative to the code being tested
## Execution Steps
### 1. Analyze Context
Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and AVAILABLE_DOCS.
Determine:
- FEATURE_DIR - where the feature is located
- TASKS_FILE - path to tasks.md
- Which modules need testing based on task status
### 2. Load Relevant Artifacts
**From tasks.md:**
- Identify completed implementation tasks (not test tasks)
- Extract file paths that need tests
**From .specify/memory/semantics.md:**
- Read @TIER annotations for modules
- For CRITICAL modules: Read @TEST_DATA fixtures
**From existing tests:**
- Scan `__tests__` directories for existing tests
- Identify test patterns and coverage gaps
### 3. Test Coverage Analysis
Create coverage matrix:
| Module | File | Has Tests | TIER | TEST_DATA Available |
|--------|------|-----------|------|-------------------|
| ... | ... | ... | ... | ... |
### 4. Write Tests (TDD Approach)
For each module requiring tests:
1. **Check existing tests**: Scan `__tests__/` for duplicates
2. **Read TEST_DATA**: If CRITICAL tier, read @TEST_DATA from .specify/memory/semantics.md
3. **Write test**: Follow co-location strategy
- Python: `src/module/__tests__/test_module.py`
- Svelte: `src/lib/components/__tests__/test_component.test.js`
4. **Use mocks**: Use `unittest.mock.MagicMock` for external dependencies
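A minimal co-located test sketch using `unittest.mock.MagicMock` for an external dependency (the unit under test is hypothetical):
```python
from unittest.mock import MagicMock

def sync_status(api_client) -> str:
    """Hypothetical unit under test: thin wrapper over an external client."""
    return api_client.fetch()["status"]

def test_sync_status_calls_client_once():
    client = MagicMock()
    client.fetch.return_value = {"status": "ok"}
    assert sync_status(client) == "ok"
    client.fetch.assert_called_once()
```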
### 4a. UX Contract Testing (Frontend Components)
For Svelte components with `@UX_STATE`, `@UX_FEEDBACK`, `@UX_RECOVERY` tags:
1. **Parse UX tags**: Read component file and extract all `@UX_*` annotations
2. **Generate UX tests**: Create tests for each UX state transition
```javascript
// Example: Testing @UX_STATE: Idle -> Expanded
it('should transition from Idle to Expanded on toggle click', async () => {
render(Sidebar);
const toggleBtn = screen.getByRole('button', { name: /toggle/i });
await fireEvent.click(toggleBtn);
expect(screen.getByTestId('sidebar')).toHaveClass('expanded');
});
```
3. **Test @UX_FEEDBACK**: Verify visual feedback (toast, shake, color changes)
4. **Test @UX_RECOVERY**: Verify error recovery mechanisms (retry, clear input)
5. **Use @UX_TEST fixtures**: If component has `@UX_TEST` tags, use them as test specifications
**UX Test Template:**
```javascript
// [DEF:__tests__/test_Component:Module]
// @RELATION: VERIFIES -> ../Component.svelte
// @PURPOSE: Test UX states and transitions
describe('Component UX States', () => {
// @UX_STATE: Idle -> {action: click, expected: Active}
it('should transition Idle -> Active on click', async () => { ... });
// @UX_FEEDBACK: Toast on success
it('should show toast on successful action', async () => { ... });
// @UX_RECOVERY: Retry on error
it('should allow retry on error', async () => { ... });
});
```
### 5. Test Documentation
Create/update documentation in `specs/<feature>/tests/`:
```
tests/
├── README.md # Test strategy and overview
├── coverage.md # Coverage matrix and reports
└── reports/
└── YYYY-MM-DD-report.md
```
### 6. Execute Tests
Run tests and report results:
**Backend:**
```bash
cd backend && .venv/bin/python3 -m pytest -v
```
**Frontend:**
```bash
cd frontend && npm run test
```
### 7. Update Tasks
Mark test tasks as completed in tasks.md with:
- Test file path
- Coverage achieved
- Any issues found
## Output
Generate test execution report:
```markdown
# Test Report: [FEATURE]
**Date**: [YYYY-MM-DD]
**Executed by**: Tester Agent
## Coverage Summary
| Module | Tests | Coverage % |
|--------|-------|------------|
| ... | ... | ... |
## Test Results
- Total: [X]
- Passed: [X]
- Failed: [X]
- Skipped: [X]
## Issues Found
| Test | Error | Resolution |
|------|-------|------------|
| ... | ... | ... |
## Next Steps
- [ ] Fix failed tests
- [ ] Add more coverage for [module]
- [ ] Review TEST_DATA fixtures
```
## Context for Testing
$ARGUMENTS

View File

@@ -2,15 +2,40 @@
> High-level module structure for AI Context. Generated automatically.
**Generated:** 2026-02-24T21:04:43.328895
**Generated:** 2026-02-23T11:15:39.876570
## Summary
- **Total Modules:** 74
- **Total Entities:** 1571
- **Total Modules:** 71
- **Total Entities:** 1340
## Module Hierarchy
### 📁 `shots/`
- 🏗️ **Layers:** Domain (Business Logic), Domain (Core), Interface (API), UI (Presentation)
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 7, TRIVIAL: 1
- 📄 **Files:** 4
- 📦 **Entities:** 10
**Key Entities:**
- 🧩 **FrontendComponentShot** (Component) `[CRITICAL]`
- Action button to spawn a new task with full UX feedback cycl...
- 📦 **BackendRouteShot** (Module)
- Reference implementation of a task-based route using GRACE-P...
- 📦 **PluginExampleShot** (Module)
- Reference implementation of a plugin following GRACE standar...
- 📦 **TransactionCore** (Module) `[CRITICAL]`
- Core banking transaction processor with ACID guarantees.
**Dependencies:**
- 🔗 DEPENDS_ON -> [DEF:Infra:AuditLog]
- 🔗 DEPENDS_ON -> [DEF:Infra:PostgresDB]
- 🔗 IMPLEMENTS -> [DEF:Std:API_FastAPI]
- 🔗 INHERITS -> PluginBase
### 📁 `backend/`
- 🏗️ **Layers:** Unknown, Utility
@@ -28,7 +53,7 @@
### 📁 `src/`
- 🏗️ **Layers:** API, Core, UI (API)
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 19, TRIVIAL: 2
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 18, TRIVIAL: 3
- 📄 **Files:** 2
- 📦 **Entities:** 23
@@ -53,19 +78,13 @@
### 📁 `routes/`
- 🏗️ **Layers:** API, UI (API)
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 182, TRIVIAL: 4
- 📄 **Files:** 17
- 📦 **Entities:** 188
- 🏗️ **Layers:** API, UI (API), Unknown
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 140, TRIVIAL: 3
- 📄 **Files:** 16
- 📦 **Entities:** 145
**Key Entities:**
- **AssistantAction** (Class) `[TRIVIAL]`
- UI action descriptor returned with assistant responses.
- **AssistantMessageRequest** (Class) `[TRIVIAL]`
- Input payload for assistant message endpoint.
- **AssistantMessageResponse** (Class)
- Output payload contract for assistant interaction endpoints.
- **BranchCheckout** (Class)
- Schema for branch checkout requests.
- **BranchCreate** (Class)
@@ -76,10 +95,15 @@
- Schema for staging and committing changes.
- **CommitSchema** (Class)
- Schema for representing Git commit details.
- **ConfirmationRecord** (Class)
- In-memory confirmation token model for risky operation dispa...
- **ConflictResolution** (Class)
- Schema for resolving merge conflicts.
- **ConnectionCreate** (Class)
- Pydantic model for creating a connection.
- **ConnectionSchema** (Class)
- Pydantic model for connection response.
- **ConsolidatedSettingsResponse** (Class)
- **DeployRequest** (Class)
- Schema for dashboard deployment requests.
**Dependencies:**
@@ -87,52 +111,38 @@
- 🔗 DEPENDS_ON -> ConfigModels
- 🔗 DEPENDS_ON -> backend.src.core.database
- 🔗 DEPENDS_ON -> backend.src.core.superset_client
- 🔗 DEPENDS_ON -> backend.src.core.task_manager
- 🔗 DEPENDS_ON -> backend.src.dependencies
### 📁 `__tests__/`
- 🏗️ **Layers:** API, Domain (Tests), UI (API Tests)
- 📊 **Tiers:** CRITICAL: 3, STANDARD: 33, TRIVIAL: 81
- 📄 **Files:** 7
- 📦 **Entities:** 117
- 🏗️ **Layers:** API, Domain (Tests)
- 📊 **Tiers:** CRITICAL: 3, STANDARD: 16, TRIVIAL: 21
- 📄 **Files:** 5
- 📦 **Entities:** 40
**Key Entities:**
- **_FakeConfigManager** (Class) `[TRIVIAL]`
- Provide deterministic environment aliases required by intent...
- **_FakeConfigManager** (Class) `[TRIVIAL]`
- Environment config fixture with dev/prod aliases for parser ...
- **_FakeDb** (Class) `[TRIVIAL]`
- In-memory session substitute for assistant route persistence...
- **_FakeDb** (Class) `[TRIVIAL]`
- In-memory fake database implementing subset of Session inter...
- **_FakeQuery** (Class) `[TRIVIAL]`
- Minimal chainable query object for fake DB interactions.
- **_FakeQuery** (Class) `[TRIVIAL]`
- Minimal chainable query object for fake SQLAlchemy-like DB b...
- **_FakeTask** (Class) `[TRIVIAL]`
- Lightweight task model used for assistant authz tests.
- **_FakeTask** (Class) `[TRIVIAL]`
- Lightweight task stub used by assistant API tests.
- **_FakeTaskManager** (Class) `[TRIVIAL]`
- Minimal task manager for deterministic operation creation an...
- **_FakeTaskManager** (Class) `[TRIVIAL]`
- Minimal async-compatible TaskManager fixture for determinist...
**Dependencies:**
- 🔗 DEPENDS_ON -> backend.src.api.routes.assistant
- 📦 **backend.src.api.routes.__tests__.test_dashboards** (Module)
- Unit tests for Dashboards API endpoints
- 📦 **backend.src.api.routes.__tests__.test_datasets** (Module)
- Unit tests for Datasets API endpoints
- 📦 **backend.tests.test_reports_api** (Module) `[CRITICAL]`
- Contract tests for GET /api/reports defaults, pagination, an...
- 📦 **backend.tests.test_reports_detail_api** (Module) `[CRITICAL]`
- Contract tests for GET /api/reports/{report_id} detail endpo...
- 📦 **backend.tests.test_reports_openapi_conformance** (Module) `[CRITICAL]`
- Validate implemented reports payload shape against OpenAPI-r...
### 📁 `core/`
- 🏗️ **Layers:** Core
- 📊 **Tiers:** STANDARD: 116, TRIVIAL: 7
- 📊 **Tiers:** STANDARD: 112, TRIVIAL: 1
- 📄 **Files:** 9
- 📦 **Entities:** 123
- 📦 **Entities:** 113
**Key Entities:**
- **AuthSessionLocal** (Class) `[TRIVIAL]`
- **AuthSessionLocal** (Class)
- A session factory for the authentication database.
- **BeliefFormatter** (Class)
- Custom logging formatter that adds belief state prefixes to ...
@@ -150,7 +160,7 @@
- Scans a specified directory for Python modules, dynamically ...
- **SchedulerService** (Class)
- Provides a service to manage scheduled backup tasks.
- **SessionLocal** (Class) `[TRIVIAL]`
- **SessionLocal** (Class)
- A session factory for the main mappings database.
**Dependencies:**
@@ -158,8 +168,7 @@
- 🔗 DEPENDS_ON -> AppConfigRecord
- 🔗 DEPENDS_ON -> ConfigModels
- 🔗 DEPENDS_ON -> PyYAML
- 🔗 DEPENDS_ON -> backend.src.core.auth.config
- 🔗 DEPENDS_ON -> backend.src.models.mapping
- 🔗 DEPENDS_ON -> sqlalchemy
### 📁 `auth/`
@@ -222,7 +231,7 @@
### 📁 `task_manager/`
- 🏗️ **Layers:** Core
- 📊 **Tiers:** CRITICAL: 10, STANDARD: 63, TRIVIAL: 5
- 📊 **Tiers:** CRITICAL: 7, STANDARD: 63, TRIVIAL: 8
- 📄 **Files:** 7
- 📦 **Entities:** 78
@@ -296,9 +305,9 @@
### 📁 `models/`
- 🏗️ **Layers:** Domain, Model
- 📊 **Tiers:** CRITICAL: 9, STANDARD: 21, TRIVIAL: 21
- 📄 **Files:** 11
- 📦 **Entities:** 51
- 📊 **Tiers:** CRITICAL: 2, STANDARD: 24, TRIVIAL: 21
- 📄 **Files:** 10
- 📦 **Entities:** 47
**Key Entities:**
@@ -306,12 +315,6 @@
- Maps an Active Directory group to a local System Role.
- **AppConfigRecord** (Class)
- Stores the single source of truth for application configurat...
- **AssistantAuditRecord** (Class)
- Store audit decisions and outcomes produced by assistant com...
- **AssistantConfirmationRecord** (Class)
- Persist risky operation confirmation tokens with lifecycle s...
- **AssistantMessageRecord** (Class)
- Persist chat history entries for assistant conversations.
- **ConnectionConfig** (Class) `[TRIVIAL]`
- Stores credentials for external databases used for column ma...
- **DashboardMetadata** (Class) `[TRIVIAL]`
@@ -322,13 +325,18 @@
- Represents a mapping between source and target databases.
- **DeploymentEnvironment** (Class) `[TRIVIAL]`
- Target Superset environments for dashboard deployment.
- **Environment** (Class)
- Represents a Superset instance environment.
- **ErrorContext** (Class)
- Error and recovery context for failed/partial reports.
- **FileCategory** (Class) `[TRIVIAL]`
- Enumeration of supported file categories in the storage syst...
**Dependencies:**
- 🔗 DEPENDS_ON -> Role
- 🔗 DEPENDS_ON -> TaskRecord
- 🔗 DEPENDS_ON -> backend.src.core.task_manager.models
- 🔗 DEPENDS_ON -> backend.src.models.mapping
- 🔗 DEPENDS_ON -> sqlalchemy
### 📁 `__tests__/`
@@ -396,9 +404,9 @@
### 📁 `llm_analysis/`
- 🏗️ **Layers:** Unknown
- 📊 **Tiers:** STANDARD: 19, TRIVIAL: 24
- 📊 **Tiers:** STANDARD: 18, TRIVIAL: 23
- 📄 **Files:** 4
- 📦 **Entities:** 43
- 📦 **Entities:** 41
**Key Entities:**
@@ -503,9 +511,9 @@
### 📁 `services/`
- 🏗️ **Layers:** Core, Domain, Service
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 58, TRIVIAL: 5
- 📄 **Files:** 7
- 📦 **Entities:** 64
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 50, TRIVIAL: 5
- 📄 **Files:** 6
- 📦 **Entities:** 56
**Key Entities:**
@@ -527,41 +535,35 @@
- Orchestrates authentication business logic.
- 📦 **backend.src.services.git_service** (Module)
- Core Git logic using GitPython to manage dashboard repositor...
- 📦 **backend.src.services.llm_prompt_templates** (Module)
- Provide default LLM prompt templates and normalization helpe...
- 📦 **backend.src.services.llm_provider** (Module)
- Service for managing LLM provider configurations with encryp...
**Dependencies:**
- 🔗 DEPENDS_ON -> backend.src.core.config_manager
- 🔗 DEPENDS_ON -> backend.src.core.database
- 🔗 DEPENDS_ON -> backend.src.core.superset_client
- 🔗 DEPENDS_ON -> backend.src.core.task_manager
- 🔗 DEPENDS_ON -> backend.src.core.utils.matching
- 🔗 DEPENDS_ON -> backend.src.models.llm
### 📁 `__tests__/`
- 🏗️ **Layers:** Domain Tests, Service
- 📊 **Tiers:** STANDARD: 14
- 📄 **Files:** 2
- 📦 **Entities:** 14
- 🏗️ **Layers:** Service
- 📊 **Tiers:** STANDARD: 7
- 📄 **Files:** 1
- 📦 **Entities:** 7
**Key Entities:**
- 📦 **backend.src.services.__tests__.test_llm_prompt_templates** (Module)
- Validate normalization and rendering behavior for configurab...
- 📦 **backend.src.services.__tests__.test_resource_service** (Module)
- Unit tests for ResourceService
**Dependencies:**
- 🔗 DEPENDS_ON -> backend.src.services.llm_prompt_templates
### 📁 `reports/`
- 🏗️ **Layers:** Domain
- 📊 **Tiers:** CRITICAL: 5, STANDARD: 15
- 📊 **Tiers:** CRITICAL: 5, STANDARD: 13
- 📄 **Files:** 3
- 📦 **Entities:** 20
- 📦 **Entities:** 18
**Key Entities:**
@@ -622,10 +624,10 @@
### 📁 `components/`
- 🏗️ **Layers:** Component, Feature, UI, UI -->, Unknown
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 49, TRIVIAL: 4
- 🏗️ **Layers:** Component, Feature, UI, Unknown
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 45, TRIVIAL: 7
- 📄 **Files:** 13
- 📦 **Entities:** 54
- 📦 **Entities:** 53
**Key Entities:**
@@ -687,9 +689,9 @@
### 📁 `llm/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** STANDARD: 2, TRIVIAL: 11
- 📊 **Tiers:** STANDARD: 2, TRIVIAL: 10
- 📄 **Files:** 3
- 📦 **Entities:** 13
- 📦 **Entities:** 12
**Key Entities:**
@@ -704,18 +706,6 @@
- 📦 **ValidationReport** (Module) `[TRIVIAL]`
- Auto-generated module for frontend/src/components/llm/Valida...
### 📁 `__tests__/`
- 🏗️ **Layers:** UI Tests
- 📊 **Tiers:** STANDARD: 2
- 📄 **Files:** 1
- 📦 **Entities:** 2
**Key Entities:**
- 📦 **frontend.src.components.llm.__tests__.provider_config_integration** (Module)
- Protect edit-button interaction contract in LLM provider set...
### 📁 `storage/`
- 🏗️ **Layers:** UI
@@ -794,22 +784,19 @@
### 📁 `api/`
- 🏗️ **Layers:** Infra, Infra-API
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 10
- 📄 **Files:** 2
- 📦 **Entities:** 11
- 🏗️ **Layers:** Infra
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 4
- 📄 **Files:** 1
- 📦 **Entities:** 5
**Key Entities:**
- 📦 **frontend.src.lib.api.assistant** (Module)
- API client wrapper for assistant chat, confirmation actions,...
- 📦 **frontend.src.lib.api.reports** (Module) `[CRITICAL]`
- Wrapper-based reports API client for list/detail retrieval w...
**Dependencies:**
- 🔗 DEPENDS_ON -> [DEF:api_module]
- 🔗 DEPENDS_ON -> frontend.src.lib.api.api_module
### 📁 `auth/`
@@ -823,40 +810,12 @@
- 🗄️ **authStore** (Store)
- Manages the global authentication state on the frontend.
### 📁 `assistant/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 12, TRIVIAL: 4
- 📄 **Files:** 1
- 📦 **Entities:** 17
**Key Entities:**
- 🧩 **AssistantChatPanel** (Component) `[CRITICAL]`
- Slide-out assistant chat panel for natural language command ...
- 📦 **AssistantChatPanel** (Module) `[TRIVIAL]`
- Auto-generated module for frontend/src/lib/components/assist...
### 📁 `__tests__/`
- 🏗️ **Layers:** UI Tests
- 📊 **Tiers:** STANDARD: 5
- 📄 **Files:** 2
- 📦 **Entities:** 5
**Key Entities:**
- 📦 **frontend.src.lib.components.assistant.__tests__.assistant_chat_integration** (Module)
- Contract-level integration checks for assistant chat panel i...
- 📦 **frontend.src.lib.components.assistant.__tests__.assistant_confirmation_integration** (Module)
- Validate confirm/cancel UX contract bindings in assistant ch...
### 📁 `layout/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 3, STANDARD: 5, TRIVIAL: 26
- 📊 **Tiers:** CRITICAL: 3, STANDARD: 4, TRIVIAL: 24
- 📄 **Files:** 4
- 📦 **Entities:** 34
- 📦 **Entities:** 31
**Key Entities:**
@@ -892,9 +851,9 @@
### 📁 `reports/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 4, STANDARD: 1, TRIVIAL: 10
- 📊 **Tiers:** CRITICAL: 4, STANDARD: 1, TRIVIAL: 9
- 📄 **Files:** 4
- 📦 **Entities:** 15
- 📦 **Entities:** 14
**Key Entities:**
@@ -975,9 +934,9 @@
### 📁 `stores/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 7, TRIVIAL: 12
- 📄 **Files:** 4
- 📦 **Entities:** 20
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 2, TRIVIAL: 12
- 📄 **Files:** 3
- 📦 **Entities:** 15
**Key Entities:**
@@ -987,8 +946,6 @@
- Auto-generated module for frontend/src/lib/stores/taskDrawer...
- 🗄️ **activity** (Store)
- Track active task count for navbar indicator
- 🗄️ **assistantChat** (Store)
- Control assistant chat panel visibility and active conversat...
- 🗄️ **sidebar** (Store)
- Manage sidebar visibility and navigation state
- 🗄️ **taskDrawer** (Store) `[CRITICAL]`
@@ -1000,15 +957,13 @@
### 📁 `__tests__/`
- 🏗️ **Layers:** Domain (Tests), UI, UI Tests
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 10
- 📄 **Files:** 6
- 📦 **Entities:** 11
- 🏗️ **Layers:** Domain (Tests), UI
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 8
- 📄 **Files:** 5
- 📦 **Entities:** 9
**Key Entities:**
- 📦 **frontend.src.lib.stores.__tests__.assistantChat** (Module)
- Validate assistant chat store visibility and conversation bi...
- 📦 **frontend.src.lib.stores.__tests__.sidebar** (Module)
- Unit tests for sidebar store
- 📦 **frontend.src.lib.stores.__tests__.test_activity** (Module)
@@ -1022,7 +977,6 @@
**Dependencies:**
- 🔗 DEPENDS_ON -> assistantChatStore
- 🔗 DEPENDS_ON -> frontend.src.lib.stores.taskDrawer
### 📁 `mocks/`
@@ -1122,9 +1076,9 @@
### 📁 `llm/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** STANDARD: 1, TRIVIAL: 5
- 📊 **Tiers:** STANDARD: 1, TRIVIAL: 2
- 📄 **Files:** 1
- 📦 **Entities:** 6
- 📦 **Entities:** 3
**Key Entities:**
@@ -1145,30 +1099,6 @@
- 🧩 **AdminUsersPage** (Component)
- UI for managing system users and their roles.
### 📁 `dashboards/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 1, TRIVIAL: 27
- 📄 **Files:** 1
- 📦 **Entities:** 28
**Key Entities:**
- 📦 **+page** (Module) `[TRIVIAL]`
- Auto-generated module for frontend/src/routes/dashboards/+pa...
### 📁 `[id]/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 1, TRIVIAL: 5
- 📄 **Files:** 1
- 📦 **Entities:** 6
**Key Entities:**
- 📦 **+page** (Module) `[TRIVIAL]`
- Auto-generated module for frontend/src/routes/dashboards/[id...
### 📁 `datasets/`
- 🏗️ **Layers:** UI, Unknown
@@ -1259,9 +1189,9 @@
### 📁 `settings/`
- 🏗️ **Layers:** UI, Unknown
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 1, TRIVIAL: 14
- 📊 **Tiers:** CRITICAL: 1, STANDARD: 1, TRIVIAL: 10
- 📄 **Files:** 2
- 📦 **Entities:** 16
- 📦 **Entities:** 12
**Key Entities:**
@@ -1305,6 +1235,20 @@
- 📄 **Files:** 1
- 📦 **Entities:** 3
### 📁 `tasks/`
- 🏗️ **Layers:** Page, Unknown
- 📊 **Tiers:** STANDARD: 4, TRIVIAL: 5
- 📄 **Files:** 1
- 📦 **Entities:** 9
**Key Entities:**
- 🧩 **TaskManagementPage** (Component)
- Page for managing and monitoring tasks.
- 📦 **+page** (Module) `[TRIVIAL]`
- Auto-generated module for frontend/src/routes/tasks/+page.sv...
### 📁 `debug/`
- 🏗️ **Layers:** UI
@@ -1375,17 +1319,15 @@
### 📁 `root/`
- 🏗️ **Layers:** DevOps/Tooling, Domain, Unknown
- 📊 **Tiers:** CRITICAL: 14, STANDARD: 24, TRIVIAL: 10
- 📄 **Files:** 3
- 📦 **Entities:** 48
- 🏗️ **Layers:** DevOps/Tooling
- 📊 **Tiers:** CRITICAL: 12, STANDARD: 16, TRIVIAL: 7
- 📄 **Files:** 1
- 📦 **Entities:** 35
**Key Entities:**
- **ComplianceIssue** (Class) `[TRIVIAL]`
- Represents a single compliance issue with severity.
- **ReportsService** (Class) `[CRITICAL]`
- Service layer for list/detail report retrieval and normaliza...
- **SemanticEntity** (Class) `[CRITICAL]`
- Represents a code entity (Module, Function, Component) found...
- **SemanticMapGenerator** (Class) `[CRITICAL]`
@@ -1394,18 +1336,8 @@
- Severity levels for compliance issues.
- **Tier** (Class) `[TRIVIAL]`
- Enumeration of semantic tiers defining validation strictness...
- 📦 **backend.src.services.reports.report_service** (Module) `[CRITICAL]`
- Aggregate, normalize, filter, and paginate task reports for ...
- 📦 **generate_semantic_map** (Module)
- 📦 **generate_semantic_map** (Module) `[CRITICAL]`
- Scans the codebase to generate a Semantic Map, Module Map, a...
- 📦 **test_analyze** (Module) `[TRIVIAL]`
- Auto-generated module for test_analyze.py
**Dependencies:**
- 🔗 DEPENDS_ON -> backend.src.core.task_manager.manager.TaskManager
- 🔗 DEPENDS_ON -> backend.src.models.report
- 🔗 DEPENDS_ON -> backend.src.services.reports.normalizer
## Cross-Module Dependencies
@@ -1437,18 +1369,14 @@ graph TD
routes-->|DEPENDS_ON|backend
routes-->|DEPENDS_ON|backend
routes-->|DEPENDS_ON|backend
routes-->|DEPENDS_ON|backend
routes-->|DEPENDS_ON|backend
__tests__-->|TESTS|backend
__tests__-->|TESTS|backend
__tests__-->|TESTS|backend
__tests__-->|TESTS|backend
__tests__-->|DEPENDS_ON|backend
__tests__-->|DEPENDS_ON|backend
core-->|USES|backend
core-->|USES|backend
core-->|DEPENDS_ON|backend
core-->|DEPENDS_ON|backend
core-->|USES|backend
core-->|USES|backend
auth-->|USES|backend
auth-->|USES|backend
auth-->|USES|backend
@@ -1458,7 +1386,6 @@ graph TD
utils-->|DEPENDS_ON|backend
models-->|INHERITS_FROM|backend
models-->|DEPENDS_ON|backend
models-->|DEPENDS_ON|backend
models-->|USED_BY|backend
models-->|INHERITS_FROM|backend
llm_analysis-->|IMPLEMENTS|backend
@@ -1477,13 +1404,11 @@ graph TD
services-->|DEPENDS_ON|backend
services-->|DEPENDS_ON|backend
services-->|DEPENDS_ON|backend
services-->|DEPENDS_ON|backend
services-->|USES|backend
services-->|USES|backend
services-->|USES|backend
services-->|DEPENDS_ON|backend
services-->|DEPENDS_ON|backend
__tests__-->|DEPENDS_ON|backend
__tests__-->|TESTS|backend
reports-->|DEPENDS_ON|backend
reports-->|DEPENDS_ON|backend
@@ -1494,9 +1419,6 @@ graph TD
reports-->|DEPENDS_ON|backend
__tests__-->|TESTS|backend
tests-->|TESTS|backend
__tests__-->|VERIFIES|components
__tests__-->|VERIFIES|lib
__tests__-->|VERIFIES|lib
reports-->|DEPENDS_ON|lib
__tests__-->|TESTS|routes
__tests__-->|TESTS|routes
@@ -1504,7 +1426,4 @@ graph TD
__tests__-->|TESTS|lib
__tests__-->|TESTS|lib
__tests__-->|TESTS|routes
root-->|DEPENDS_ON|backend
root-->|DEPENDS_ON|backend
root-->|DEPENDS_ON|backend
```

File diff suppressed because it is too large

View File

@@ -32,7 +32,6 @@ Use these for code generation (Style Transfer).
## 3. DOMAIN MAP (Modules)
* **Module Map:** `.ai/MODULE_MAP.md` -> `[DEF:Module_Map]`
* **Project Map:** `.ai/PROJECT_MAP.md` -> `[DEF:Project_Map]`
* **Apache Superset OpenAPI:** `.ai/openapi.json` -> `[DEF:Doc:Superset_OpenAPI]`
* **Backend Core:** `backend/src/core` -> `[DEF:Module:Backend_Core]`
* **Backend API:** `backend/src/api` -> `[DEF:Module:Backend_API]`
* **Frontend Lib:** `frontend/src/lib` -> `[DEF:Module:Frontend_Lib]`

View File

@@ -1,63 +0,0 @@
# Backend Test Import Patterns
## Problem
The `ss-tools` backend uses **relative imports** inside packages (e.g., `from ...models.task import TaskRecord` in `persistence.py`). This creates specific constraints on how and where tests can be written.
## Key Rules
### 1. Packages with `__init__.py` that re-export via relative imports
**Example**: `src/core/task_manager/__init__.py` imports `.manager`, which pulls in `.persistence`, which uses `from ...models.task` (a 3-level relative import).
**Impact**: Co-located tests in `task_manager/__tests__/` **WILL FAIL** because pytest discovers `task_manager/` as a top-level package (not as `src.core.task_manager`), and the 3-level `from ...` goes beyond the top-level.
**Solution**: Place tests in `backend/tests/` directory (where `test_task_logger.py` already lives). Import using `from src.core.task_manager.XXX import ...` which works because `backend/` is the pytest rootdir.
### 2. Packages WITHOUT `__init__.py`:
**Example**: `src/core/auth/` has NO `__init__.py`.
**Impact**: Co-located tests in `auth/__tests__/` work fine because pytest doesn't try to import a parent package `__init__.py`.
### 3. Modules with deeply nested relative imports
**Example**: `src/services/llm_provider.py` uses `from ..models.llm import LLMProvider` and `from ..plugins.llm_analysis.models import LLMProviderConfig`.
**Impact**: A direct import (`from src.services.llm_provider import EncryptionManager`) **WILL FAIL** if the relative-import chain pulls in a module that cannot be resolved from `sys.path`, or if it climbs above the top-level package.
**Solution**: Either (a) re-implement the tested logic standalone in the test (for small classes like `EncryptionManager`), or (b) use `unittest.mock.patch` to mock the problematic imports before importing the module.
## Working Test Locations
| Package | `__init__.py`? | Relative imports? | Co-located OK? | Test location |
|---|---|---|---|---|
| `core/task_manager/` | YES | `from ...models.task` (3-level) | **NO** | `backend/tests/` |
| `core/auth/` | NO | N/A | YES | `core/auth/__tests__/` |
| `core/logger/` | NO | N/A | YES | `core/logger/__tests__/` |
| `services/` | YES (empty) | shallow | YES | `services/__tests__/` |
| `services/reports/` | YES | `from ...core.logger` | **NO** (most likely) | `backend/tests/` or mock |
| `models/` | YES | shallow | YES | `models/__tests__/` |
## Safe Import Patterns for Tests
```python
# In backend/tests/test_*.py:
import sys
from pathlib import Path
# Put backend/ (the pytest rootdir) on sys.path so `src.*` imports resolve.
sys.path.insert(0, str(Path(__file__).parent.parent))
# Then import:
from src.core.task_manager.models import Task, TaskStatus
from src.core.task_manager.persistence import TaskPersistenceService
from src.models.report import TaskReport, ReportQuery
```
## Plugin ID Mapping (for report tests)
The `resolve_task_type()` function uses **hyphenated** plugin IDs:
- `superset-backup` → `TaskType.BACKUP`
- `superset-migration` → `TaskType.MIGRATION`
- `llm_dashboard_validation` → `TaskType.LLM_VERIFICATION`
- `documentation` → `TaskType.DOCUMENTATION`
- anything else → `TaskType.UNKNOWN`
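A minimal sketch consistent with this mapping (enum values are illustrative, not the actual implementation):
```python
from enum import Enum

class TaskType(Enum):
    BACKUP = "backup"
    MIGRATION = "migration"
    LLM_VERIFICATION = "llm_verification"
    DOCUMENTATION = "documentation"
    UNKNOWN = "unknown"

_PLUGIN_TO_TYPE = {
    "superset-backup": TaskType.BACKUP,
    "superset-migration": TaskType.MIGRATION,
    "llm_dashboard_validation": TaskType.LLM_VERIFICATION,
    "documentation": TaskType.DOCUMENTATION,
}

def resolve_task_type(plugin_id: str) -> TaskType:
    return _PLUGIN_TO_TYPE.get(plugin_id, TaskType.UNKNOWN)
```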

File diff suppressed because it is too large

View File

@@ -79,35 +79,14 @@
3. **TRIVIAL** (DTO/**Atoms**):
- Requirement: [DEF] anchors and @PURPOSE only.
#### VI. LOGGING (THE DAO OF THE MOLECULE / MOLECULAR TOPOLOGY)
Goal: Tracing. Self-correction. Steering the Attention Matrix ("the chemistry of thinking").
A log is not text. A log is a reagent. Thought takes form through bond prefixes (Attention Energy):
1. **[EXPLORE]** (Van der Waals: Dispersion)
- *Essence:* Searching in the dark. Weaving alternatives together. If one path is closed, seek another.
- *When:* The SCAFFOLD phase, or a collision with the Unknown.
- *Act:* `logger.explore("Primary API has fallen. Knocking on the backup...")`
2. **[REASON]** (Covalence: Rigidity)
- *Essence:* The rigid thread of deduction. Step A inexorably begets Step B. The Contract becomes Code.
- *When:* The IMPLEMENTATION phase. Straightness of thought.
- *Act:* `logger.reason("The foundation is laid. The DB responds.")`
3. **[REFLECT]** (Hydrogen: Folding)
- *Essence:* A look back. Checking what is (@POST) against what was expected (@PRE). A guard against delirium.
- *When:* On the threshold of complex logic, and on the way out of it.
- *Act:* `logger.reflect("Peering into the cache: is what we seek already there?")`
4. **[COHERENCE:OK/FAILED]** (Stabilization: Truth/Falsehood)
- *Essence:* The molecule closing into a reliable form (`OK`) or falling apart (`FAILED`).
- *(Accomplished unseen through `belief_scope` and the `@believed` seal)*
**Instruments of the Path (`core.logger`):**
- **Function seal:** `@believed("ID")`, to wrap a function in a cocoon of attention.
- **Rite of context:** `with belief_scope("ID"):`, to mark out a local boundary.
- **Words of power:** `logger.explore()`, `logger.reason()`, `logger.reflect()`.
**Immutable rule:** Every system log bears a `source` brand. For the Outer World (Svelte), inscribe it by hand: `console.log("[ID][REFLECT] Msg")`.
#### VI. LOGGING (BELIEF STATE & TASK LOGS)
Goal: Tracing for self-correction, plus user-facing task monitoring.
Python:
- System logs: the context manager `with belief_scope("ID"):`.
- Task logs: `context.logger.info("msg", source="component")`.
Svelte: `console.log("[ID][STATE] Msg")`.
States: Entry -> Action -> Coherence:OK / Failed -> Exit.
Invariant: every task log must carry a `source` attribute for filtering.
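A minimal usage sketch under these rules (the `sync_dashboard` function and the `"migration"` source are hypothetical; the two-argument `belief_scope` form matches its use elsewhere in the codebase):
```python
from src.core.logger import logger, belief_scope

def sync_dashboard(context, dashboard_id: int):
    # belief_scope is assumed to emit the Entry/Coherence/Exit states.
    with belief_scope("sync_dashboard", f"dashboard_id={dashboard_id}"):
        logger.info(f"[sync_dashboard] fetching dashboard {dashboard_id}")
        # Task log: `source` is mandatory for user-facing filtering.
        context.logger.info("Dashboard fetched", source="migration")
```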
#### VII. GENERATION ALGORITHM
1. ANALYSIS. Assess the TIER, the layer, and the UX requirements.

5
.gitignore vendored
View File

@@ -59,9 +59,8 @@ keyring passwords.py
*github*
*tech_spec*
/dashboards
dashboards_example/**/dashboards/
backend/mappings.db
dashboards
backend/mappings.db
backend/tasks.db

View File

@@ -43,8 +43,6 @@ Auto-generated from all feature plans. Last updated: 2025-12-19
- SQLite (tasks.db, auth.db, migrations.db) - no new database tables required (019-superset-ux-redesign)
- Python 3.9+ (backend), Node.js 18+ (frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy/Pydantic task models, existing task/websocket stack (020-task-reports-design)
- SQLite task/result persistence (existing task DB), filesystem only for existing artifacts (no new primary store required) (020-task-reports-design)
- Node.js 18+ runtime, SvelteKit (existing frontend stack) + SvelteKit, Tailwind CSS, existing frontend UI primitives under `frontend/src/lib/components/ui` (001-unify-frontend-style)
- N/A (UI styling and component behavior only) (001-unify-frontend-style)
- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)
@@ -65,9 +63,9 @@ cd src; pytest; ruff check .
Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions
## Recent Changes
- 001-unify-frontend-style: Added Node.js 18+ runtime, SvelteKit (existing frontend stack) + SvelteKit, Tailwind CSS, existing frontend UI primitives under `frontend/src/lib/components/ui`
- 020-task-reports-design: Added Python 3.9+ (backend), Node.js 18+ (frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy/Pydantic task models, existing task/websocket stack
- 019-superset-ux-redesign: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy, WebSocket (existing)
- 017-llm-analysis-plugin: Added Python 3.9+ (Backend), Node.js 18+ (Frontend)
<!-- MANUAL ADDITIONS START -->

View File

@@ -30,12 +30,12 @@
#
# 5. Multi-Agent Support
# - Handles agent-specific file paths and naming conventions
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Amazon Q Developer CLI, or Antigravity
# - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, or Amazon Q Developer CLI
# - Can update single agents or all existing agent files
# - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|shai|q|bob|qoder
# Leave empty to update all existing agent files
set -e
@@ -74,7 +74,6 @@ QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
Q_FILE="$REPO_ROOT/AGENTS.md"
AGY_FILE="$REPO_ROOT/.agent/rules/specify-rules.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"
# Template file
@@ -619,7 +618,7 @@ update_specific_agent() {
codebuddy)
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
;;
qodercli)
qoder)
update_agent_file "$QODER_FILE" "Qoder CLI"
;;
amp)
@@ -631,18 +630,12 @@ update_specific_agent() {
q)
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
;;
agy)
update_agent_file "$AGY_FILE" "Antigravity"
;;
bob)
update_agent_file "$BOB_FILE" "IBM Bob"
;;
generic)
log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent."
;;
*)
log_error "Unknown agent type '$agent_type'"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli|generic"
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|amp|shai|q|bob|qoder"
exit 1
;;
esac
@@ -721,11 +714,7 @@ update_all_existing_agents() {
update_agent_file "$Q_FILE" "Amazon Q Developer CLI"
found_agent=true
fi
if [[ -f "$AGY_FILE" ]]; then
update_agent_file "$AGY_FILE" "Antigravity"
found_agent=true
fi
if [[ -f "$BOB_FILE" ]]; then
update_agent_file "$BOB_FILE" "IBM Bob"
found_agent=true
@@ -755,7 +744,7 @@ print_summary() {
echo
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|q|agy|bob|qodercli]"
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|codebuddy|shai|q|bob|qoder]"
}
#==============================================================================

View File

@@ -1,50 +0,0 @@
# [PROJECT_NAME] Constitution
<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->
## Core Principles
### [PRINCIPLE_1_NAME]
<!-- Example: I. Library-First -->
[PRINCIPLE_1_DESCRIPTION]
<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->
### [PRINCIPLE_2_NAME]
<!-- Example: II. CLI Interface -->
[PRINCIPLE_2_DESCRIPTION]
<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->
### [PRINCIPLE_3_NAME]
<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
[PRINCIPLE_3_DESCRIPTION]
<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->
### [PRINCIPLE_4_NAME]
<!-- Example: IV. Integration Testing -->
[PRINCIPLE_4_DESCRIPTION]
<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->
### [PRINCIPLE_5_NAME]
<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
[PRINCIPLE_5_DESCRIPTION]
<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->
## [SECTION_2_NAME]
<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->
[SECTION_2_CONTENT]
<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->
## [SECTION_3_NAME]
<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->
[SECTION_3_CONTENT]
<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->
## Governance
<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->
[GOVERNANCE_RULES]
<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->
**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->

View File

@@ -3,7 +3,7 @@
**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/plan-template.md` for the execution workflow.
**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/commands/plan.md` for the execution workflow.
## Summary
@@ -22,7 +22,7 @@
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [e.g., library/cli/web-service/mobile-app/compiler/desktop-app or NEEDS CLARIFICATION]
**Project Type**: [single/web/mobile - determines source structure]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]

View File

@@ -1,4 +1,16 @@
FROM python:3.11-slim
# Stage 1: Build frontend static assets
FROM node:20-alpine AS frontend-build
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
# Stage 2: Runtime image for backend + static frontend
FROM python:3.11-slim AS runtime
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
@@ -16,6 +28,7 @@ RUN pip install --no-cache-dir -r /app/backend/requirements.txt
RUN python -m playwright install --with-deps chromium
COPY backend/ /app/backend/
COPY --from=frontend-build /app/frontend/build /app/frontend/build
WORKDIR /app/backend

213
README.md
View File

@@ -1,143 +1,128 @@
# ss-tools
# Superset Automation Tools (ss-tools)
## Overview
**ss-tools** is a modern platform for automating and managing the Apache Superset ecosystem. The project has grown from a set of CLI scripts into a full web application with a Backend (FastAPI) + Frontend (SvelteKit) architecture, providing a convenient interface for complex operations.
## Key Features
### 🚀 Dashboard Migration and Management
- **Dashboard Grid**: Browse all dashboards across all environments (Dev, Sandbox, Prod) in a single interface.
- **Intelligent Mapping**: Automatic and manual matching of datasets, tables, and schemas when moving between environments.
- **Dependency Checks**: Validation that all required components exist before a migration.
### 📦 Backups
- **Scheduler**: Automatic, scheduled backups of dashboards and datasets.
- **Storage**: Local artifact storage, manageable through the UI.
### 🛠 Git Integration
- **Version Control**: Versioning of Superset assets.
- **Git Dashboard**: Manage branches, commits, and deployments directly from the interface.
- **Conflict Resolution**: Built-in tools for resolving conflicts in YAML configurations.
### 🤖 LLM Analysis (AI Plugin)
- **Automated Audit**: Dashboard health analysis based on screenshots and metadata.
- **Documentation Generation**: Automatic descriptions of datasets and columns via an LLM (OpenAI, OpenRouter, and others).
- **Smart Validation**: Detection of anomalies and errors in visualizations.
### 🔐 Security and Administration
- **Multi-user Auth**: Multi-user access with a role model (RBAC).
- **Connection Management**: Centralized configuration of access to multiple Superset instances.
- **Logging**: Detailed execution history for all background tasks.
## Technology Stack
- **Backend**: Python 3.9+, FastAPI, SQLAlchemy, APScheduler, Pydantic.
- **Frontend**: Node.js 18+, SvelteKit, Tailwind CSS.
- **Database**: PostgreSQL (stores metadata, tasks, logs, and configuration).
## Project Structure
- `backend/` — server side: API and plugin logic.
- `frontend/` — client side (SvelteKit application).
- `specs/` — feature specifications and implementation plans.
- `docs/` — additional documentation on mapping and plugin development.
## Quick Start
### Requirements
- Python 3.9+
- Node.js 18+
- Configured access to the Superset API
### Running
To set up the environments automatically and start both servers (Backend & Frontend), use the script:
```bash
./run.sh
```
*The script creates a Python virtual environment, installs the `pip` and `npm` dependencies, and starts the services.*
Options:
- `--skip-install`: Skip dependency installation.
- `--help`: Show help.
Environment variables:
- `BACKEND_PORT`: API port (default 8000).
- `FRONTEND_PORT`: UI port (default 5173).
- `POSTGRES_URL`: Default base PostgreSQL URL for all subsystems.
- `DATABASE_URL`: Main database URL (falls back to `POSTGRES_URL` if unset).
- `TASKS_DATABASE_URL`: Task/log database URL (falls back to `DATABASE_URL` if unset).
- `AUTH_DATABASE_URL`: Auth database URL (falls back to the PostgreSQL default if unset).
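The fallback chain reads as follows (a sketch, not the actual config code; the default `POSTGRES_URL` value is illustrative only):
```python
import os

# Each URL falls back to the previous link in the chain.
postgres_url = os.getenv("POSTGRES_URL", "postgresql://postgres:postgres@localhost:5432/ss_tools")
database_url = os.getenv("DATABASE_URL", postgres_url)
tasks_database_url = os.getenv("TASKS_DATABASE_URL", database_url)
auth_database_url = os.getenv("AUTH_DATABASE_URL", postgres_url)  # falls back to the PostgreSQL default
```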
## Development
The project follows strict development rules:
1. **Semantic Code Generation**: The `.ai/standards/semantics.md` protocol is used to keep generated code reliable.
2. **Design by Contract (DbC)**: Preconditions and postconditions are defined for key functions.
3. **Constitution**: The rules described in the project constitution under `.specify/` are binding.
### Useful Commands
- **Backend**: `cd backend && .venv/bin/python3 -m uvicorn src.app:app --reload`
- **Frontend**: `cd frontend && npm run dev`
- **Tests**: `cd backend && .venv/bin/pytest`
Automation tools for Apache Superset: migration, mapping, artifact storage, Git integration, task reports, and an LLM assistant.
## Features
- Migration of dashboards and datasets between environments.
- Manual and semi-automatic resource mapping.
- Background task logs and execution reports.
- Local file and backup storage.
- Git operations on Superset assets through the UI.
- LLM analysis module and assistant API.
- Multi-user authorization (RBAC).
## Stack
- Backend: Python, FastAPI, SQLAlchemy, APScheduler.
- Frontend: SvelteKit, Vite, Tailwind CSS.
- Database: PostgreSQL (primary configuration), with support for migrating off legacy SQLite.
## Repository Layout
- `backend/` — API, plugins, services, migration scripts, and tests.
- `frontend/` — SPA interface (SvelteKit).
- `docs/` — architecture and plugin documentation.
- `specs/` — specifications and implementation plans.
- `docker/` and `docker-compose.yml` — containerization.
## Quick Start (local)
### Requirements
- Python 3.9+
- Node.js 18+
- npm
### Starting backend + frontend with a single script
```bash
./run.sh
```
What `run.sh` does:
- checks the Python/npm versions;
- creates `backend/.venv` (if missing);
- installs `backend/requirements.txt` and the `frontend` dependencies;
- starts the backend and frontend in parallel.
Options:
- `./run.sh --skip-install` — skip dependency installation.
- `./run.sh --help` — show help.
Environment variables for local runs:
- `BACKEND_PORT` (default `8000`)
- `FRONTEND_PORT` (default `5173`)
- `POSTGRES_URL`
- `DATABASE_URL`
- `TASKS_DATABASE_URL`
- `AUTH_DATABASE_URL`
## Docker
### Running
## Docker and CI/CD
### Running locally in Docker (application + PostgreSQL)
```bash
docker compose up --build
```
Once started, the services are available at:
- Frontend: `http://localhost:8000`
- Backend API: `http://localhost:8001`
- PostgreSQL: `localhost:5432` (`postgres/postgres`, database `ss_tools`)
After startup:
- UI/API: `http://localhost:8000`
- PostgreSQL: `localhost:5432` (`postgres/postgres`, DB `ss_tools`)
### Stopping
To stop:
```bash
docker compose down
```
### Clearing the database volume
Full cleanup of the database volume:
```bash
docker compose down -v
```
### Alternative PostgreSQL image
If pulling `postgres:16-alpine` is problematic:
If `postgres:16-alpine` cannot be pulled from Docker Hub (TLS timeout), use a fallback image:
```bash
POSTGRES_IMAGE=mirror.gcr.io/library/postgres:16-alpine docker compose up -d db
```
or
or:
```bash
POSTGRES_IMAGE=bitnami/postgresql:latest docker compose up -d db
```
If port `5432` is busy:
If `5432` is already taken on the host, run Postgres on a different port:
```bash
POSTGRES_HOST_PORT=5433 docker compose up -d db
```
## Development
### Starting the services manually
### Migrating legacy data to PostgreSQL
If you need to move old data out of `tasks.db`/`config.json`:
```bash
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python3 -m uvicorn src.app:app --reload --port 8000
PYTHONPATH=. .venv/bin/python src/scripts/migrate_sqlite_to_postgres.py --sqlite-path tasks.db
```
In another terminal:
```bash
cd frontend
npm install
npm run dev -- --port 5173
```
### CI/CD
Workflow added: `.github/workflows/ci-cd.yml`
- backend smoke tests
- frontend build
- docker build
- image push to GHCR on `main/master`
### Tests
Backend:
```bash
cd backend
source .venv/bin/activate
pytest
```
Frontend:
```bash
cd frontend
npm run test
```
## Auth initialization (optional)
```bash
cd backend
source .venv/bin/activate
python src/scripts/init_auth_db.py
python src/scripts/create_admin.py --username admin --password admin
```
## Legacy data migration (optional)
```bash
cd backend
source .venv/bin/activate
PYTHONPATH=. python src/scripts/migrate_sqlite_to_postgres.py --sqlite-path tasks.db
```
## Additional Documentation
- `docs/plugin_dev.md`
- `docs/settings.md`
- `semantic_protocol.md`
## Contacts and Contributing
To add new features or fix bugs, please read `docs/plugin_dev.md` and create a corresponding specification in `specs/`.

File diff suppressed because it is too large

Binary file not shown.

View File

@@ -1,23 +1,10 @@
# [DEF:backend.src.api.routes.__init__:Module]
# @TIER: STANDARD
# @SEMANTICS: routes, lazy-import, module-registry
# @PURPOSE: Provide lazy route module loading to avoid heavyweight imports during tests.
# @LAYER: API
# @RELATION: DEPENDS_ON -> importlib
# @INVARIANT: Only names listed in __all__ are importable via __getattr__.
__all__ = ['plugins', 'tasks', 'settings', 'connections', 'environments', 'mappings', 'migration', 'git', 'storage', 'admin', 'reports', 'assistant']
# [DEF:__getattr__:Function]
# @TIER: TRIVIAL
# @PURPOSE: Lazily import route module by attribute name.
# @PRE: name is module candidate exposed in __all__.
# @POST: Returns imported submodule or raises AttributeError.
def __getattr__(name):
if name in __all__:
import importlib
return importlib.import_module(f".{name}", __name__)
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
# [/DEF:__getattr__:Function]
# [/DEF:backend.src.api.routes.__init__:Module]
# Lazy loading of route modules to avoid import issues in tests
# This allows tests to import routes without triggering all module imports
__all__ = ['plugins', 'tasks', 'settings', 'connections', 'environments', 'mappings', 'migration', 'git', 'storage', 'admin', 'reports']
def __getattr__(name):
if name in __all__:
import importlib
return importlib.import_module(f".{name}", __name__)
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
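A usage sketch of the lazy loading above (module-level `__getattr__`, PEP 562): the submodule import is deferred until first attribute access.
```python
from src.api import routes

tasks_module = routes.tasks      # triggers importlib.import_module(".tasks", ...)
reports_module = routes.reports  # each submodule loads only when referenced
# Names outside __all__ raise AttributeError instead of importing eagerly.
```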

View File

@@ -1,628 +0,0 @@
# [DEF:backend.src.api.routes.__tests__.test_assistant_api:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, assistant, api, confirmation, status
# @PURPOSE: Validate assistant API endpoint logic via direct async handler invocation.
# @LAYER: UI (API Tests)
# @RELATION: DEPENDS_ON -> backend.src.api.routes.assistant
# @INVARIANT: Every test clears assistant in-memory state before execution.
import os
import asyncio
from types import SimpleNamespace
from datetime import datetime, timedelta
# Force isolated sqlite databases for this test module before dependencies are imported.
os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/ss_tools_assistant_api.db")
os.environ.setdefault("TASKS_DATABASE_URL", "sqlite:////tmp/ss_tools_assistant_tasks.db")
os.environ.setdefault("AUTH_DATABASE_URL", "sqlite:////tmp/ss_tools_assistant_auth.db")
from src.api.routes import assistant as assistant_module
from src.models.assistant import (
AssistantAuditRecord,
AssistantConfirmationRecord,
AssistantMessageRecord,
)
# [DEF:_run_async:Function]
# @TIER: TRIVIAL
# @PURPOSE: Execute async endpoint handler in synchronous test context.
# @PRE: coroutine is awaitable endpoint invocation.
# @POST: Returns coroutine result or raises propagated exception.
def _run_async(coroutine):
return asyncio.run(coroutine)
# [/DEF:_run_async:Function]
# [DEF:_FakeTask:Class]
# @TIER: TRIVIAL
# @PURPOSE: Lightweight task stub used by assistant API tests.
class _FakeTask:
def __init__(self, task_id: str, status: str = "RUNNING", user_id: str = "u-admin"):
self.id = task_id
self.status = status
self.user_id = user_id
# [/DEF:_FakeTask:Class]
# [DEF:_FakeTaskManager:Class]
# @TIER: TRIVIAL
# @PURPOSE: Minimal async-compatible TaskManager fixture for deterministic test flows.
class _FakeTaskManager:
def __init__(self):
self._created = []
async def create_task(self, plugin_id, params, user_id=None):
task_id = f"task-{len(self._created) + 1}"
task = _FakeTask(task_id=task_id, status="RUNNING", user_id=user_id)
self._created.append((plugin_id, params, user_id, task))
return task
def get_task(self, task_id):
for _, _, _, task in self._created:
if task.id == task_id:
return task
return None
def get_tasks(self, limit=20, offset=0):
return [x[3] for x in self._created][offset : offset + limit]
# [/DEF:_FakeTaskManager:Class]
# [DEF:_FakeConfigManager:Class]
# @TIER: TRIVIAL
# @PURPOSE: Environment config fixture with dev/prod aliases for parser tests.
class _FakeConfigManager:
def get_environments(self):
return [
SimpleNamespace(id="dev", name="Development"),
SimpleNamespace(id="prod", name="Production"),
]
# [/DEF:_FakeConfigManager:Class]
# [DEF:_admin_user:Function]
# @TIER: TRIVIAL
# @PURPOSE: Build admin principal fixture.
# @PRE: Test harness requires authenticated admin-like principal object.
# @POST: Returns user stub with Admin role.
def _admin_user():
role = SimpleNamespace(name="Admin", permissions=[])
return SimpleNamespace(id="u-admin", username="admin", roles=[role])
# [/DEF:_admin_user:Function]
# [DEF:_limited_user:Function]
# @TIER: TRIVIAL
# @PURPOSE: Build non-admin principal fixture.
# @PRE: Test harness requires restricted principal for deny scenarios.
# @POST: Returns user stub without admin privileges.
def _limited_user():
role = SimpleNamespace(name="Operator", permissions=[])
return SimpleNamespace(id="u-limited", username="limited", roles=[role])
# [/DEF:_limited_user:Function]
# [DEF:_FakeQuery:Class]
# @TIER: TRIVIAL
# @PURPOSE: Minimal chainable query object for fake SQLAlchemy-like DB behavior in tests.
class _FakeQuery:
def __init__(self, rows):
self._rows = list(rows)
def filter(self, *args, **kwargs):
return self
def order_by(self, *args, **kwargs):
return self
def first(self):
return self._rows[0] if self._rows else None
def all(self):
return list(self._rows)
def count(self):
return len(self._rows)
def offset(self, offset):
self._rows = self._rows[offset:]
return self
def limit(self, limit):
self._rows = self._rows[:limit]
return self
# [/DEF:_FakeQuery:Class]
# [DEF:_FakeDb:Class]
# @TIER: TRIVIAL
# @PURPOSE: In-memory fake database implementing subset of Session interface used by assistant routes.
class _FakeDb:
def __init__(self):
self._messages = []
self._confirmations = []
self._audit = []
def add(self, row):
table = getattr(row, "__tablename__", "")
if table == "assistant_messages":
self._messages.append(row)
return
if table == "assistant_confirmations":
self._confirmations.append(row)
return
if table == "assistant_audit":
self._audit.append(row)
def merge(self, row):
table = getattr(row, "__tablename__", "")
if table != "assistant_confirmations":
self.add(row)
return row
for i, existing in enumerate(self._confirmations):
if getattr(existing, "id", None) == getattr(row, "id", None):
self._confirmations[i] = row
return row
self._confirmations.append(row)
return row
def query(self, model):
if model is AssistantMessageRecord:
return _FakeQuery(self._messages)
if model is AssistantConfirmationRecord:
return _FakeQuery(self._confirmations)
if model is AssistantAuditRecord:
return _FakeQuery(self._audit)
return _FakeQuery([])
def commit(self):
return None
def rollback(self):
return None
# [/DEF:_FakeDb:Class]
# [DEF:_clear_assistant_state:Function]
# @TIER: TRIVIAL
# @PURPOSE: Reset in-memory assistant registries for isolation between tests.
# @PRE: Assistant module globals may contain residues from previous test runs.
# @POST: In-memory conversation/confirmation/audit dictionaries are empty.
def _clear_assistant_state():
assistant_module.CONVERSATIONS.clear()
assistant_module.USER_ACTIVE_CONVERSATION.clear()
assistant_module.CONFIRMATIONS.clear()
assistant_module.ASSISTANT_AUDIT.clear()
# [/DEF:_clear_assistant_state:Function]
# [DEF:test_unknown_command_returns_needs_clarification:Function]
# @PURPOSE: Unknown command should return clarification state and unknown intent.
# @PRE: Fake dependencies provide admin user and deterministic task/config/db services.
# @POST: Response state is needs_clarification and no execution side-effect occurs.
def test_unknown_command_returns_needs_clarification():
_clear_assistant_state()
response = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(message="сделай что-нибудь"),
current_user=_admin_user(),
task_manager=_FakeTaskManager(),
config_manager=_FakeConfigManager(),
db=_FakeDb(),
)
)
assert response.state == "needs_clarification"
assert response.intent["domain"] == "unknown"
# [/DEF:test_unknown_command_returns_needs_clarification:Function]
# [DEF:test_capabilities_question_returns_successful_help:Function]
# @PURPOSE: Capability query should return deterministic help response, not clarification.
# @PRE: User sends natural-language "what can you do" style query.
# @POST: Response is successful and includes capabilities summary.
def test_capabilities_question_returns_successful_help():
_clear_assistant_state()
response = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(message="Что ты умеешь?"),
current_user=_admin_user(),
task_manager=_FakeTaskManager(),
config_manager=_FakeConfigManager(),
db=_FakeDb(),
)
)
assert response.state == "success"
assert "Вот что я могу сделать" in response.text
assert "Миграции" in response.text or "Git" in response.text
# [/DEF:test_capabilities_question_returns_successful_help:Function]
# [DEF:test_non_admin_command_returns_denied:Function]
# @PURPOSE: Non-admin user must receive denied state for privileged command.
# @PRE: Limited principal executes privileged git branch command.
# @POST: Response state is denied and operation is not executed.
def test_non_admin_command_returns_denied():
_clear_assistant_state()
response = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="создай ветку feature/test для дашборда 12"
),
current_user=_limited_user(),
task_manager=_FakeTaskManager(),
config_manager=_FakeConfigManager(),
db=_FakeDb(),
)
)
assert response.state == "denied"
# [/DEF:test_non_admin_command_returns_denied:Function]
# [DEF:test_migration_to_prod_requires_confirmation_and_can_be_confirmed:Function]
# @PURPOSE: Migration to prod must require confirmation and then start task after explicit confirm.
# @PRE: Admin principal submits dangerous migration command.
# @POST: Confirmation endpoint transitions flow to started state with task id.
def test_migration_to_prod_requires_confirmation_and_can_be_confirmed():
_clear_assistant_state()
task_manager = _FakeTaskManager()
db = _FakeDb()
first = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="запусти миграцию с dev на prod для дашборда 12"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert first.state == "needs_confirmation"
assert first.confirmation_id
second = _run_async(
assistant_module.confirm_operation(
confirmation_id=first.confirmation_id,
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert second.state == "started"
assert second.task_id.startswith("task-")
# [/DEF:test_migration_to_prod_requires_confirmation_and_can_be_confirmed:Function]
# [DEF:test_status_query_returns_task_status:Function]
# @PURPOSE: Task status command must surface current status text for existing task id.
# @PRE: At least one task exists after confirmed operation.
# @POST: Status query returns started/success and includes referenced task id.
def test_status_query_returns_task_status():
_clear_assistant_state()
task_manager = _FakeTaskManager()
db = _FakeDb()
start = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="запусти миграцию с dev на prod для дашборда 10"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
confirm = _run_async(
assistant_module.confirm_operation(
confirmation_id=start.confirmation_id,
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
task_id = confirm.task_id
status_resp = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message=f"проверь статус задачи {task_id}"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert status_resp.state in {"started", "success"}
assert task_id in status_resp.text
# [/DEF:test_status_query_returns_task_status:Function]
# [DEF:test_status_query_without_task_id_returns_latest_user_task:Function]
# @PURPOSE: Status command without explicit task_id should resolve to latest task for current user.
# @PRE: User has at least one created task in task manager history.
# @POST: Response references latest task status without explicit task id in command.
def test_status_query_without_task_id_returns_latest_user_task():
_clear_assistant_state()
task_manager = _FakeTaskManager()
db = _FakeDb()
start = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="запусти миграцию с dev на prod для дашборда 33"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
_run_async(
assistant_module.confirm_operation(
confirmation_id=start.confirmation_id,
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
status_resp = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="покажи статус последней задачи"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert status_resp.state in {"started", "success"}
assert "Последняя задача:" in status_resp.text
# [/DEF:test_status_query_without_task_id_returns_latest_user_task:Function]
# [DEF:test_llm_validation_with_dashboard_ref_requires_confirmation:Function]
# @PURPOSE: LLM validation with dashboard_ref should now require confirmation before dispatch.
# @PRE: User sends natural-language validation request with dashboard name (not numeric id).
# @POST: Response state is needs_confirmation since all state-changing operations are now gated.
def test_llm_validation_with_dashboard_ref_requires_confirmation():
_clear_assistant_state()
response = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="Я хочу сделать валидацию дашборда test1"
),
current_user=_admin_user(),
task_manager=_FakeTaskManager(),
config_manager=_FakeConfigManager(),
db=_FakeDb(),
)
)
assert response.state == "needs_confirmation"
assert response.confirmation_id is not None
action_types = {a.type for a in response.actions}
assert "confirm" in action_types
assert "cancel" in action_types
# [/DEF:test_llm_validation_with_dashboard_ref_requires_confirmation:Function]
# [DEF:test_list_conversations_groups_by_conversation_and_marks_archived:Function]
# @PURPOSE: Conversations endpoint must group messages and compute archived marker by inactivity threshold.
# @PRE: Fake DB contains two conversations with different update timestamps.
# @POST: Response includes both conversations with archived flag set for stale one.
def test_list_conversations_groups_by_conversation_and_marks_archived():
_clear_assistant_state()
db = _FakeDb()
now = datetime.utcnow()
db.add(
AssistantMessageRecord(
id="m-1",
user_id="u-admin",
conversation_id="conv-active",
role="user",
text="active chat",
created_at=now,
)
)
db.add(
AssistantMessageRecord(
id="m-2",
user_id="u-admin",
conversation_id="conv-old",
role="user",
text="old chat",
created_at=now - timedelta(days=assistant_module.ASSISTANT_ARCHIVE_AFTER_DAYS + 2),
)
)
result = _run_async(
assistant_module.list_conversations(
page=1,
page_size=20,
include_archived=True,
search=None,
current_user=_admin_user(),
db=db,
)
)
assert result["total"] == 2
by_id = {item["conversation_id"]: item for item in result["items"]}
assert by_id["conv-active"]["archived"] is False
assert by_id["conv-old"]["archived"] is True
# [/DEF:test_list_conversations_groups_by_conversation_and_marks_archived:Function]
# [DEF:test_history_from_latest_returns_recent_page_first:Function]
# @PURPOSE: History endpoint from_latest mode must return newest page while preserving chronological order in chunk.
# @PRE: Conversation has more messages than single page size.
# @POST: First page returns latest messages and has_next indicates older pages exist.
def test_history_from_latest_returns_recent_page_first():
_clear_assistant_state()
db = _FakeDb()
base_time = datetime.utcnow() - timedelta(minutes=10)
conv_id = "conv-paginated"
for i in range(4, -1, -1):
db.add(
AssistantMessageRecord(
id=f"msg-{i}",
user_id="u-admin",
conversation_id=conv_id,
role="user" if i % 2 == 0 else "assistant",
text=f"message-{i}",
created_at=base_time + timedelta(minutes=i),
)
)
result = _run_async(
assistant_module.get_history(
page=1,
page_size=2,
conversation_id=conv_id,
from_latest=True,
current_user=_admin_user(),
db=db,
)
)
assert result["from_latest"] is True
assert result["has_next"] is True
# Chunk is chronological while representing latest page.
assert [item["text"] for item in result["items"]] == ["message-3", "message-4"]
# [/DEF:test_history_from_latest_returns_recent_page_first:Function]
# [DEF:test_list_conversations_archived_only_filters_active:Function]
# @PURPOSE: archived_only mode must return only archived conversations.
# @PRE: Dataset includes one active and one archived conversation.
# @POST: Only archived conversation remains in response payload.
def test_list_conversations_archived_only_filters_active():
_clear_assistant_state()
db = _FakeDb()
now = datetime.utcnow()
db.add(
AssistantMessageRecord(
id="m-active",
user_id="u-admin",
conversation_id="conv-active-2",
role="user",
text="active",
created_at=now,
)
)
db.add(
AssistantMessageRecord(
id="m-archived",
user_id="u-admin",
conversation_id="conv-archived-2",
role="user",
text="archived",
created_at=now - timedelta(days=assistant_module.ASSISTANT_ARCHIVE_AFTER_DAYS + 3),
)
)
result = _run_async(
assistant_module.list_conversations(
page=1,
page_size=20,
include_archived=True,
archived_only=True,
search=None,
current_user=_admin_user(),
db=db,
)
)
assert result["total"] == 1
assert result["items"][0]["conversation_id"] == "conv-archived-2"
assert result["items"][0]["archived"] is True
# [/DEF:test_list_conversations_archived_only_filters_active:Function]
# [DEF:test_guarded_operation_always_requires_confirmation:Function]
# @PURPOSE: Non-dangerous (guarded) commands must still require confirmation before execution.
# @PRE: Admin user sends a backup command that was previously auto-executed.
# @POST: Response state is needs_confirmation with confirm and cancel actions.
def test_guarded_operation_always_requires_confirmation():
_clear_assistant_state()
response = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="сделай бэкап окружения dev"
),
current_user=_admin_user(),
task_manager=_FakeTaskManager(),
config_manager=_FakeConfigManager(),
db=_FakeDb(),
)
)
assert response.state == "needs_confirmation"
assert response.confirmation_id is not None
action_types = {a.type for a in response.actions}
assert "confirm" in action_types
assert "cancel" in action_types
assert "Выполнить" in response.text or "Подтвердите" in response.text
# [/DEF:test_guarded_operation_always_requires_confirmation:Function]
# [DEF:test_guarded_operation_confirm_roundtrip:Function]
# @PURPOSE: Guarded operation must execute successfully after explicit confirmation.
# @PRE: Admin user sends a non-dangerous migration command (dev → dev).
# @POST: After confirmation, response transitions to started/success with task_id.
def test_guarded_operation_confirm_roundtrip():
_clear_assistant_state()
task_manager = _FakeTaskManager()
db = _FakeDb()
first = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="запусти миграцию с dev на dev для дашборда 5"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert first.state == "needs_confirmation"
assert first.confirmation_id
second = _run_async(
assistant_module.confirm_operation(
confirmation_id=first.confirmation_id,
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert second.state == "started"
assert second.task_id is not None
# [/DEF:test_guarded_operation_confirm_roundtrip:Function]
# [/DEF:backend.src.api.routes.__tests__.test_assistant_api:Module]

View File

@@ -1,306 +0,0 @@
# [DEF:backend.src.api.routes.__tests__.test_assistant_authz:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, assistant, authz, confirmation, rbac
# @PURPOSE: Verify assistant confirmation ownership, expiration, and deny behavior for restricted users.
# @LAYER: UI (API Tests)
# @RELATION: DEPENDS_ON -> backend.src.api.routes.assistant
# @INVARIANT: Security-sensitive flows fail closed for unauthorized actors.
import os
import asyncio
from datetime import datetime, timedelta
from types import SimpleNamespace
import pytest
from fastapi import HTTPException
# Force isolated sqlite databases for this test module before dependencies are imported.
os.environ.setdefault("DATABASE_URL", "sqlite:////tmp/ss_tools_assistant_authz.db")
os.environ.setdefault("TASKS_DATABASE_URL", "sqlite:////tmp/ss_tools_assistant_authz_tasks.db")
os.environ.setdefault("AUTH_DATABASE_URL", "sqlite:////tmp/ss_tools_assistant_authz_auth.db")
from src.api.routes import assistant as assistant_module
from src.models.assistant import (
AssistantAuditRecord,
AssistantConfirmationRecord,
AssistantMessageRecord,
)
# [DEF:_run_async:Function]
# @TIER: TRIVIAL
# @PURPOSE: Execute async endpoint handler in synchronous test context.
# @PRE: coroutine is awaitable endpoint invocation.
# @POST: Returns coroutine result or raises propagated exception.
def _run_async(coroutine):
return asyncio.run(coroutine)
# [/DEF:_run_async:Function]
# [DEF:_FakeTask:Class]
# @TIER: TRIVIAL
# @PURPOSE: Lightweight task model used for assistant authz tests.
class _FakeTask:
def __init__(self, task_id: str, status: str = "RUNNING", user_id: str = "u-admin"):
self.id = task_id
self.status = status
self.user_id = user_id
# [/DEF:_FakeTask:Class]
# [DEF:_FakeTaskManager:Class]
# @TIER: TRIVIAL
# @PURPOSE: Minimal task manager for deterministic operation creation and lookup.
class _FakeTaskManager:
def __init__(self):
self._created = []
async def create_task(self, plugin_id, params, user_id=None):
task_id = f"task-{len(self._created) + 1}"
task = _FakeTask(task_id=task_id, status="RUNNING", user_id=user_id)
self._created.append((plugin_id, params, user_id, task))
return task
def get_task(self, task_id):
for _, _, _, task in self._created:
if task.id == task_id:
return task
return None
def get_tasks(self, limit=20, offset=0):
return [x[3] for x in self._created][offset : offset + limit]
# [/DEF:_FakeTaskManager:Class]
# [DEF:_FakeConfigManager:Class]
# @TIER: TRIVIAL
# @PURPOSE: Provide deterministic environment aliases required by intent parsing.
class _FakeConfigManager:
def get_environments(self):
return [
SimpleNamespace(id="dev", name="Development"),
SimpleNamespace(id="prod", name="Production"),
]
# [/DEF:_FakeConfigManager:Class]
# [DEF:_admin_user:Function]
# @TIER: TRIVIAL
# @PURPOSE: Build admin principal fixture.
# @PRE: Test requires privileged principal for risky operations.
# @POST: Returns admin-like user stub with Admin role.
def _admin_user():
role = SimpleNamespace(name="Admin", permissions=[])
return SimpleNamespace(id="u-admin", username="admin", roles=[role])
# [/DEF:_admin_user:Function]
# [DEF:_other_admin_user:Function]
# @TIER: TRIVIAL
# @PURPOSE: Build second admin principal fixture for ownership tests.
# @PRE: Ownership mismatch scenario needs distinct authenticated actor.
# @POST: Returns alternate admin-like user stub.
def _other_admin_user():
role = SimpleNamespace(name="Admin", permissions=[])
return SimpleNamespace(id="u-admin-2", username="admin2", roles=[role])
# [/DEF:_other_admin_user:Function]
# [DEF:_limited_user:Function]
# @TIER: TRIVIAL
# @PURPOSE: Build limited principal without required assistant execution privileges.
# @PRE: Permission denial scenario needs non-admin actor.
# @POST: Returns restricted user stub.
def _limited_user():
role = SimpleNamespace(name="Operator", permissions=[])
return SimpleNamespace(id="u-limited", username="limited", roles=[role])
# [/DEF:_limited_user:Function]
# [DEF:_FakeQuery:Class]
# @TIER: TRIVIAL
# @PURPOSE: Minimal chainable query object for fake DB interactions.
class _FakeQuery:
def __init__(self, rows):
self._rows = list(rows)
def filter(self, *args, **kwargs):
return self
def order_by(self, *args, **kwargs):
return self
def first(self):
return self._rows[0] if self._rows else None
def all(self):
return list(self._rows)
def limit(self, limit):
self._rows = self._rows[:limit]
return self
def offset(self, offset):
self._rows = self._rows[offset:]
return self
def count(self):
return len(self._rows)
# [/DEF:_FakeQuery:Class]
# [DEF:_FakeDb:Class]
# @TIER: TRIVIAL
# @PURPOSE: In-memory session substitute for assistant route persistence calls.
class _FakeDb:
def __init__(self):
self._messages = []
self._confirmations = []
self._audit = []
def add(self, row):
table = getattr(row, "__tablename__", "")
if table == "assistant_messages":
self._messages.append(row)
elif table == "assistant_confirmations":
self._confirmations.append(row)
elif table == "assistant_audit":
self._audit.append(row)
def merge(self, row):
if getattr(row, "__tablename__", "") != "assistant_confirmations":
self.add(row)
return row
for i, existing in enumerate(self._confirmations):
if getattr(existing, "id", None) == getattr(row, "id", None):
self._confirmations[i] = row
return row
self._confirmations.append(row)
return row
def query(self, model):
if model is AssistantMessageRecord:
return _FakeQuery(self._messages)
if model is AssistantConfirmationRecord:
return _FakeQuery(self._confirmations)
if model is AssistantAuditRecord:
return _FakeQuery(self._audit)
return _FakeQuery([])
def commit(self):
return None
def rollback(self):
return None
# [/DEF:_FakeDb:Class]
# [DEF:_clear_assistant_state:Function]
# @TIER: TRIVIAL
# @PURPOSE: Reset assistant process-local state between test cases.
# @PRE: Assistant globals may contain state from prior tests.
# @POST: Assistant in-memory state dictionaries are cleared.
def _clear_assistant_state():
assistant_module.CONVERSATIONS.clear()
assistant_module.USER_ACTIVE_CONVERSATION.clear()
assistant_module.CONFIRMATIONS.clear()
assistant_module.ASSISTANT_AUDIT.clear()
# [/DEF:_clear_assistant_state:Function]
# [DEF:test_confirmation_owner_mismatch_returns_403:Function]
# @PURPOSE: Confirm endpoint should reject requests from user that does not own the confirmation token.
# @PRE: Confirmation token is created by first admin actor.
# @POST: Second actor receives 403 on confirm operation.
def test_confirmation_owner_mismatch_returns_403():
_clear_assistant_state()
task_manager = _FakeTaskManager()
db = _FakeDb()
create = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="запусти миграцию с dev на prod для дашборда 18"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert create.state == "needs_confirmation"
with pytest.raises(HTTPException) as exc:
_run_async(
assistant_module.confirm_operation(
confirmation_id=create.confirmation_id,
current_user=_other_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert exc.value.status_code == 403
# [/DEF:test_confirmation_owner_mismatch_returns_403:Function]
# [DEF:test_expired_confirmation_cannot_be_confirmed:Function]
# @PURPOSE: Expired confirmation token should be rejected and not create task.
# @PRE: Confirmation token exists and is manually expired before confirm request.
# @POST: Confirm endpoint raises 400 and no task is created.
def test_expired_confirmation_cannot_be_confirmed():
_clear_assistant_state()
task_manager = _FakeTaskManager()
db = _FakeDb()
create = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="запусти миграцию с dev на prod для дашборда 19"
),
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assistant_module.CONFIRMATIONS[create.confirmation_id].expires_at = datetime.utcnow() - timedelta(minutes=1)
with pytest.raises(HTTPException) as exc:
_run_async(
assistant_module.confirm_operation(
confirmation_id=create.confirmation_id,
current_user=_admin_user(),
task_manager=task_manager,
config_manager=_FakeConfigManager(),
db=db,
)
)
assert exc.value.status_code == 400
assert task_manager.get_tasks(limit=10, offset=0) == []
# [/DEF:test_expired_confirmation_cannot_be_confirmed:Function]
# [DEF:test_limited_user_cannot_launch_restricted_operation:Function]
# @PURPOSE: Limited user should receive denied state for privileged operation.
# @PRE: Restricted user attempts dangerous deploy command.
# @POST: Assistant returns denied state and does not execute operation.
def test_limited_user_cannot_launch_restricted_operation():
_clear_assistant_state()
response = _run_async(
assistant_module.send_message(
request=assistant_module.AssistantMessageRequest(
message="задеплой дашборд 88 в production"
),
current_user=_limited_user(),
task_manager=_FakeTaskManager(),
config_manager=_FakeConfigManager(),
db=_FakeDb(),
)
)
assert response.state == "denied"
# [/DEF:test_limited_user_cannot_launch_restricted_operation:Function]
# [/DEF:backend.src.api.routes.__tests__.test_assistant_authz:Module]

View File

@@ -146,77 +146,6 @@ def test_get_dashboards_invalid_pagination():
# [/DEF:test_get_dashboards_invalid_pagination:Function]
# [DEF:test_get_dashboard_detail_success:Function]
# @TEST: GET /api/dashboards/{id} returns dashboard detail with charts and datasets
def test_get_dashboard_detail_success():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.has_permission") as mock_perm, \
patch("src.api.routes.dashboards.SupersetClient") as mock_client_cls:
mock_env = MagicMock()
mock_env.id = "prod"
mock_config.return_value.get_environments.return_value = [mock_env]
mock_perm.return_value = lambda: True
mock_client = MagicMock()
mock_client.get_dashboard_detail.return_value = {
"id": 42,
"title": "Revenue Dashboard",
"slug": "revenue-dashboard",
"url": "/superset/dashboard/42/",
"description": "Overview",
"last_modified": "2026-02-20T10:00:00+00:00",
"published": True,
"charts": [
{
"id": 100,
"title": "Revenue by Month",
"viz_type": "line",
"dataset_id": 7,
"last_modified": "2026-02-19T10:00:00+00:00",
"overview": "line"
}
],
"datasets": [
{
"id": 7,
"table_name": "fact_revenue",
"schema": "mart",
"database": "Analytics",
"last_modified": "2026-02-18T10:00:00+00:00",
"overview": "mart.fact_revenue"
}
],
"chart_count": 1,
"dataset_count": 1
}
mock_client_cls.return_value = mock_client
response = client.get("/api/dashboards/42?env_id=prod")
assert response.status_code == 200
payload = response.json()
assert payload["id"] == 42
assert payload["chart_count"] == 1
assert payload["dataset_count"] == 1
# [/DEF:test_get_dashboard_detail_success:Function]
# [DEF:test_get_dashboard_detail_env_not_found:Function]
# @TEST: GET /api/dashboards/{id} returns 404 for missing environment
def test_get_dashboard_detail_env_not_found():
with patch("src.api.routes.dashboards.get_config_manager") as mock_config, \
patch("src.api.routes.dashboards.has_permission") as mock_perm:
mock_config.return_value.get_environments.return_value = []
mock_perm.return_value = lambda: True
response = client.get("/api/dashboards/42?env_id=missing")
assert response.status_code == 404
assert "Environment not found" in response.json()["detail"]
# [/DEF:test_get_dashboard_detail_env_not_found:Function]
# [DEF:test_migrate_dashboards_success:Function]
# @TEST: POST /api/dashboards/migrate creates migration task
# @PRE: Valid source_env_id, target_env_id, dashboard_ids
@@ -354,4 +283,4 @@ def test_get_database_mappings_success():
# [/DEF:test_get_database_mappings_success:Function]
# [/DEF:backend.src.api.routes.__tests__.test_dashboards:Module]
# [/DEF:backend.src.api.routes.__tests__.test_dashboards:Module]

View File

@@ -4,7 +4,6 @@
# @PURPOSE: Contract tests for GET /api/reports/{report_id} detail endpoint behavior.
# @LAYER: Domain (Tests)
# @RELATION: TESTS -> backend.src.api.routes.reports
# @INVARIANT: Detail endpoint tests must keep deterministic assertions for success and not-found contracts.
from datetime import datetime, timedelta
from types import SimpleNamespace

File diff suppressed because it is too large

View File

@@ -16,7 +16,6 @@ from typing import List, Optional, Dict
from pydantic import BaseModel, Field
from ...dependencies import get_config_manager, get_task_manager, get_resource_service, get_mapping_service, has_permission
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
# [/SECTION]
router = APIRouter(prefix="/api/dashboards", tags=["Dashboards"])
@@ -53,55 +52,6 @@ class DashboardsResponse(BaseModel):
total_pages: int
# [/DEF:DashboardsResponse:DataClass]
# [DEF:DashboardChartItem:DataClass]
class DashboardChartItem(BaseModel):
id: int
title: str
viz_type: Optional[str] = None
dataset_id: Optional[int] = None
last_modified: Optional[str] = None
overview: Optional[str] = None
# [/DEF:DashboardChartItem:DataClass]
# [DEF:DashboardDatasetItem:DataClass]
class DashboardDatasetItem(BaseModel):
id: int
table_name: str
schema: Optional[str] = None
database: str
last_modified: Optional[str] = None
overview: Optional[str] = None
# [/DEF:DashboardDatasetItem:DataClass]
# [DEF:DashboardDetailResponse:DataClass]
class DashboardDetailResponse(BaseModel):
id: int
title: str
slug: Optional[str] = None
url: Optional[str] = None
description: Optional[str] = None
last_modified: Optional[str] = None
published: Optional[bool] = None
charts: List[DashboardChartItem]
datasets: List[DashboardDatasetItem]
chart_count: int
dataset_count: int
# [/DEF:DashboardDetailResponse:DataClass]
# [DEF:DatabaseMapping:DataClass]
class DatabaseMapping(BaseModel):
source_db: str
target_db: str
source_db_uuid: Optional[str] = None
target_db_uuid: Optional[str] = None
confidence: float
# [/DEF:DatabaseMapping:DataClass]
# [DEF:DatabaseMappingsResponse:DataClass]
class DatabaseMappingsResponse(BaseModel):
mappings: List[DatabaseMapping]
# [/DEF:DatabaseMappingsResponse:DataClass]
# [DEF:get_dashboards:Function]
# @PURPOSE: Fetch list of dashboards from a specific environment with Git status and last task status
# @PRE: env_id must be a valid environment ID
@@ -182,81 +132,6 @@ async def get_dashboards(
raise HTTPException(status_code=503, detail=f"Failed to fetch dashboards: {str(e)}")
# [/DEF:get_dashboards:Function]
# [DEF:get_database_mappings:Function]
# @PURPOSE: Get database mapping suggestions between source and target environments
# @PRE: User has permission plugin:migration:read
# @PRE: source_env_id and target_env_id are valid environment IDs
# @POST: Returns list of suggested database mappings with confidence scores
# @PARAM: source_env_id (str) - Source environment ID
# @PARAM: target_env_id (str) - Target environment ID
# @RETURN: DatabaseMappingsResponse - List of suggested mappings
# @RELATION: CALLS -> MappingService.get_suggestions
@router.get("/db-mappings", response_model=DatabaseMappingsResponse)
async def get_database_mappings(
source_env_id: str,
target_env_id: str,
mapping_service=Depends(get_mapping_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_database_mappings", f"source={source_env_id}, target={target_env_id}"):
try:
# Get mapping suggestions using MappingService
suggestions = await mapping_service.get_suggestions(source_env_id, target_env_id)
# Format suggestions as DatabaseMapping objects
mappings = [
DatabaseMapping(
source_db=s.get('source_db', ''),
target_db=s.get('target_db', ''),
source_db_uuid=s.get('source_db_uuid'),
target_db_uuid=s.get('target_db_uuid'),
confidence=s.get('confidence', 0.0)
)
for s in suggestions
]
logger.info(f"[get_database_mappings][Coherence:OK] Returning {len(mappings)} database mapping suggestions")
return DatabaseMappingsResponse(mappings=mappings)
except Exception as e:
logger.error(f"[get_database_mappings][Coherence:Failed] Failed to get database mappings: {e}")
raise HTTPException(status_code=503, detail=f"Failed to get database mappings: {str(e)}")
# [/DEF:get_database_mappings:Function]
# [DEF:get_dashboard_detail:Function]
# @PURPOSE: Fetch detailed dashboard info with related charts and datasets
# @PRE: env_id must be valid and dashboard_id must exist
# @POST: Returns dashboard detail payload for overview page
# @RELATION: CALLS -> SupersetClient.get_dashboard_detail
@router.get("/{dashboard_id:int}", response_model=DashboardDetailResponse)
async def get_dashboard_detail(
dashboard_id: int,
env_id: str,
config_manager=Depends(get_config_manager),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_dashboard_detail", f"dashboard_id={dashboard_id}, env_id={env_id}"):
environments = config_manager.get_environments()
env = next((e for e in environments if e.id == env_id), None)
if not env:
logger.error(f"[get_dashboard_detail][Coherence:Failed] Environment not found: {env_id}")
raise HTTPException(status_code=404, detail="Environment not found")
try:
client = SupersetClient(env)
detail = client.get_dashboard_detail(dashboard_id)
logger.info(
f"[get_dashboard_detail][Coherence:OK] Dashboard {dashboard_id}: {detail.get('chart_count', 0)} charts, {detail.get('dataset_count', 0)} datasets"
)
return DashboardDetailResponse(**detail)
except HTTPException:
raise
except Exception as e:
logger.error(f"[get_dashboard_detail][Coherence:Failed] Failed to fetch dashboard detail: {e}")
raise HTTPException(status_code=503, detail=f"Failed to fetch dashboard detail: {str(e)}")
# [/DEF:get_dashboard_detail:Function]
# [DEF:MigrateRequest:DataClass]
class MigrateRequest(BaseModel):
source_env_id: str = Field(..., description="Source environment ID")
@@ -393,4 +268,60 @@ async def backup_dashboards(
raise HTTPException(status_code=503, detail=f"Failed to create backup task: {str(e)}")
# [/DEF:backup_dashboards:Function]
# [DEF:DatabaseMapping:DataClass]
class DatabaseMapping(BaseModel):
source_db: str
target_db: str
source_db_uuid: Optional[str] = None
target_db_uuid: Optional[str] = None
confidence: float
# [/DEF:DatabaseMapping:DataClass]
# [DEF:DatabaseMappingsResponse:DataClass]
class DatabaseMappingsResponse(BaseModel):
mappings: List[DatabaseMapping]
# [/DEF:DatabaseMappingsResponse:DataClass]
# [DEF:get_database_mappings:Function]
# @PURPOSE: Get database mapping suggestions between source and target environments
# @PRE: User has permission plugin:migration:read
# @PRE: source_env_id and target_env_id are valid environment IDs
# @POST: Returns list of suggested database mappings with confidence scores
# @PARAM: source_env_id (str) - Source environment ID
# @PARAM: target_env_id (str) - Target environment ID
# @RETURN: DatabaseMappingsResponse - List of suggested mappings
# @RELATION: CALLS -> MappingService.get_suggestions
@router.get("/db-mappings", response_model=DatabaseMappingsResponse)
async def get_database_mappings(
source_env_id: str,
target_env_id: str,
mapping_service=Depends(get_mapping_service),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_database_mappings", f"source={source_env_id}, target={target_env_id}"):
try:
# Get mapping suggestions using MappingService
suggestions = await mapping_service.get_suggestions(source_env_id, target_env_id)
# Format suggestions as DatabaseMapping objects
mappings = [
DatabaseMapping(
source_db=s.get('source_db', ''),
target_db=s.get('target_db', ''),
source_db_uuid=s.get('source_db_uuid'),
target_db_uuid=s.get('target_db_uuid'),
confidence=s.get('confidence', 0.0)
)
for s in suggestions
]
logger.info(f"[get_database_mappings][Coherence:OK] Returning {len(mappings)} database mapping suggestions")
return DatabaseMappingsResponse(mappings=mappings)
except Exception as e:
logger.error(f"[get_database_mappings][Coherence:Failed] Failed to get database mappings: {e}")
raise HTTPException(status_code=503, detail=f"Failed to get database mappings: {str(e)}")
# [/DEF:get_database_mappings:Function]
# [/DEF:backend.src.api.routes.dashboards:Module]
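For orientation, a minimal sketch (not part of the diff) of the data contract the get_database_mappings route assumes: MappingService.get_suggestions yields dicts with the keys read via .get() in the handler, which map one-to-one onto DatabaseMapping. All sample values below are hypothetical.

from typing import Optional
from pydantic import BaseModel

class DatabaseMapping(BaseModel):  # same fields as the model defined above
    source_db: str
    target_db: str
    source_db_uuid: Optional[str] = None
    target_db_uuid: Optional[str] = None
    confidence: float

# Hypothetical suggestion dict as produced by MappingService.get_suggestions
suggestion = {
    "source_db": "analytics_dev",
    "target_db": "analytics_prod",
    "source_db_uuid": "11111111-aaaa",
    "target_db_uuid": "22222222-bbbb",
    "confidence": 0.92,
}
mapping = DatabaseMapping(
    source_db=suggestion.get("source_db", ""),
    target_db=suggestion.get("target_db", ""),
    source_db_uuid=suggestion.get("source_db_uuid"),
    target_db_uuid=suggestion.get("target_db_uuid"),
    confidence=suggestion.get("confidence", 0.0),
)
assert mapping.confidence == 0.92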

View File

@@ -25,11 +25,6 @@ from src.api.routes.git_schemas import (
)
from src.services.git_service import GitService
from src.core.logger import logger, belief_scope
from ...services.llm_prompt_templates import (
DEFAULT_LLM_PROMPTS,
normalize_llm_settings,
resolve_bound_provider_id,
)
router = APIRouter(tags=["git"])
git_service = GitService()
@@ -411,7 +406,6 @@ async def get_repository_diff(
async def generate_commit_message(
dashboard_id: int,
db: Session = Depends(get_db),
config_manager = Depends(get_config_manager),
_ = Depends(has_permission("plugin:git", "EXECUTE"))
):
with belief_scope("generate_commit_message"):
@@ -435,11 +429,7 @@ async def generate_commit_message(
llm_service = LLMProviderService(db)
providers = llm_service.get_all_providers()
llm_settings = normalize_llm_settings(config_manager.get_config().settings.llm)
bound_provider_id = resolve_bound_provider_id(llm_settings, "git_commit")
provider = next((p for p in providers if p.id == bound_provider_id), None)
if not provider:
provider = next((p for p in providers if p.is_active), None)
provider = next((p for p in providers if p.is_active), None)
if not provider:
raise HTTPException(status_code=400, detail="No active LLM provider found")
@@ -455,15 +445,7 @@ async def generate_commit_message(
# 4. Generate Message
from ...plugins.git.llm_extension import GitLLMExtension
extension = GitLLMExtension(client)
git_prompt = llm_settings["prompts"].get(
"git_commit_prompt",
DEFAULT_LLM_PROMPTS["git_commit_prompt"],
)
message = await extension.suggest_commit_message(
diff,
history,
prompt_template=git_prompt,
)
message = await extension.suggest_commit_message(diff, history)
return {"message": message}
except Exception as e:
@@ -471,4 +453,4 @@ async def generate_commit_message(
raise HTTPException(status_code=400, detail=str(e))
# [/DEF:generate_commit_message:Function]
# [/DEF:backend.src.api.routes.git:Module]

View File

@@ -6,16 +6,12 @@
# @RELATION: DEPENDS_ON -> backend.src.dependencies
# @RELATION: DEPENDS_ON -> backend.src.models.dashboard
from fastapi import APIRouter, Depends, HTTPException, Query
from typing import List, Dict, Any
from sqlalchemy.orm import Session
from fastapi import APIRouter, Depends, HTTPException
from typing import List
from ...dependencies import get_config_manager, get_task_manager, has_permission
from ...core.database import get_db
from ...models.dashboard import DashboardMetadata, DashboardSelection
from ...core.superset_client import SupersetClient
from ...core.logger import belief_scope
from ...core.mapping_service import IdMappingService
from ...models.mapping import ResourceMapping
router = APIRouter(prefix="/api", tags=["migration"])
@@ -65,10 +61,9 @@ async def execute_migration(
# Create migration task with debug logging
from ...core.logger import logger
# Include replace_db_config and fix_cross_filters in the task parameters
# Include replace_db_config in the task parameters
task_params = selection.dict()
task_params['replace_db_config'] = selection.replace_db_config
task_params['fix_cross_filters'] = selection.fix_cross_filters
logger.info(f"Creating migration task with params: {task_params}")
logger.info(f"Available environments: {env_ids}")
@@ -83,68 +78,4 @@ async def execute_migration(
raise HTTPException(status_code=500, detail=f"Failed to create migration task: {str(e)}")
# [/DEF:execute_migration:Function]
# [DEF:get_migration_settings:Function]
# @PURPOSE: Get the current migration cron expression.
@router.get("/settings", response_model=Dict[str, str])
async def get_migration_settings(
config_manager=Depends(get_config_manager),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_migration_settings"):
        # For simplicity in the MVP we assume the cron expression is stored in config,
        # defaulting to a valid cron if it is not set.
config = config_manager.get_config()
cron = config.get("migration_sync_cron", "0 2 * * *")
return {"cron": cron}
# [/DEF:get_migration_settings:Function]
# [DEF:update_migration_settings:Function]
# @PURPOSE: Update the migration cron expression.
@router.put("/settings", response_model=Dict[str, str])
async def update_migration_settings(
payload: Dict[str, str],
config_manager=Depends(get_config_manager),
_ = Depends(has_permission("plugin:migration", "WRITE"))
):
with belief_scope("update_migration_settings"):
if "cron" not in payload:
raise HTTPException(status_code=400, detail="Missing 'cron' field in payload")
cron_expr = payload["cron"]
        # Basic cron validation could go here (see the sketch after this function).
        # In a real system you would persist this to config and restart the scheduler;
        # for the MVP we simply patch the in-memory/file config as-is.
current_cfg = config_manager.get_config()
current_cfg["migration_sync_cron"] = cron_expr
config_manager.save_config(current_cfg)
return {"cron": cron_expr, "status": "updated"}
# [/DEF:update_migration_settings:Function]
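The validation placeholder above could be filled in along these lines; this is a sketch only, and it assumes the third-party croniter package, which this diff does not show as a dependency.

from fastapi import HTTPException
from croniter import croniter  # assumption: croniter is installed

def validate_cron_or_400(expr: str) -> None:
    # croniter.is_valid returns False for malformed cron expressions
    if not croniter.is_valid(expr):
        raise HTTPException(status_code=400, detail=f"Invalid cron expression: {expr}")

# validate_cron_or_400("0 2 * * *") passes; validate_cron_or_400("not-a-cron") raises 400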
# [DEF:get_resource_mappings:Function]
# @PURPOSE: Fetch all synchronized object mappings from the database.
@router.get("/mappings-data", response_model=List[Dict[str, Any]])
async def get_resource_mappings(
skip: int = Query(0, ge=0),
limit: int = Query(100, ge=1, le=1000),
db: Session = Depends(get_db),
_ = Depends(has_permission("plugin:migration", "READ"))
):
with belief_scope("get_resource_mappings"):
mappings = db.query(ResourceMapping).offset(skip).limit(limit).all()
result = []
for m in mappings:
result.append({
"id": m.id,
"environment_id": m.environment_id,
"resource_type": m.resource_type.value,
"uuid": m.uuid,
"remote_id": m.remote_integer_id,
"resource_name": m.resource_name,
"last_synced_at": m.last_synced_at.isoformat() if m.last_synced_at else None
})
return result
# [/DEF:get_resource_mappings:Function]
# [/DEF:backend.src.api.routes.migration:Module]

View File

@@ -32,28 +32,27 @@ router = APIRouter(prefix="/api/reports", tags=["Reports"])
# @PARAM: field_name (str) - Query field name for diagnostics.
# @RETURN: List - Parsed enum values.
def _parse_csv_enum_list(raw: Optional[str], enum_cls, field_name: str) -> List:
with belief_scope("_parse_csv_enum_list"):
if raw is None or not raw.strip():
return []
values = [item.strip() for item in raw.split(",") if item.strip()]
parsed = []
invalid = []
for value in values:
try:
parsed.append(enum_cls(value))
except ValueError:
invalid.append(value)
if invalid:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail={
"message": f"Invalid values for '{field_name}'",
"field": field_name,
"invalid_values": invalid,
"allowed_values": [item.value for item in enum_cls],
},
)
return parsed
if raw is None or not raw.strip():
return []
values = [item.strip() for item in raw.split(",") if item.strip()]
parsed = []
invalid = []
for value in values:
try:
parsed.append(enum_cls(value))
except ValueError:
invalid.append(value)
if invalid:
raise HTTPException(
status_code=status.HTTP_400_BAD_REQUEST,
detail={
"message": f"Invalid values for '{field_name}'",
"field": field_name,
"invalid_values": invalid,
"allowed_values": [item.value for item in enum_cls],
},
)
return parsed
# [/DEF:_parse_csv_enum_list:Function]
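A usage sketch for the helper above, with a hypothetical enum. The happy path returns parsed enum members; any unknown value triggers the structured 400 shown in the function body.

from enum import Enum

class ReportStatus(str, Enum):  # hypothetical enum, for illustration only
    PENDING = "pending"
    DONE = "done"

assert _parse_csv_enum_list(None, ReportStatus, "status") == []
assert _parse_csv_enum_list("pending, done", ReportStatus, "status") == [
    ReportStatus.PENDING,
    ReportStatus.DONE,
]
# _parse_csv_enum_list("pending,bogus", ReportStatus, "status") raises HTTP 400
# with detail["invalid_values"] == ["bogus"] and the allowed values listed.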

View File

@@ -16,10 +16,9 @@ from pydantic import BaseModel
from ...core.config_models import AppConfig, Environment, GlobalSettings, LoggingConfig
from ...models.storage import StorageConfig
from ...dependencies import get_config_manager, has_permission
from ...core.config_manager import ConfigManager
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
from ...services.llm_prompt_templates import normalize_llm_settings
from ...core.config_manager import ConfigManager
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
# [/SECTION]
# [DEF:LoggingConfigResponse:Class]
@@ -39,14 +38,13 @@ router = APIRouter()
# @POST: Returns masked AppConfig.
# @RETURN: AppConfig - The current configuration.
@router.get("", response_model=AppConfig)
async def get_settings(
config_manager: ConfigManager = Depends(get_config_manager),
_ = Depends(has_permission("admin:settings", "READ"))
):
with belief_scope("get_settings"):
logger.info("[get_settings][Entry] Fetching all settings")
config = config_manager.get_config().copy(deep=True)
config.settings.llm = normalize_llm_settings(config.settings.llm)
config = config_manager.get_config().copy(deep=True)
# Mask passwords
for env in config.environments:
if env.password:
@@ -281,7 +279,7 @@ async def update_logging_config(
# [/DEF:update_logging_config:Function]
# [DEF:ConsolidatedSettingsResponse:Class]
class ConsolidatedSettingsResponse(BaseModel):
environments: List[dict]
connections: List[dict]
llm: dict
@@ -296,7 +294,7 @@ class ConsolidatedSettingsResponse(BaseModel):
# @POST: Returns all consolidated settings.
# @RETURN: ConsolidatedSettingsResponse - All settings categories.
@router.get("/consolidated", response_model=ConsolidatedSettingsResponse)
async def get_consolidated_settings(
config_manager: ConfigManager = Depends(get_config_manager),
_ = Depends(has_permission("admin:settings", "READ"))
):
@@ -325,16 +323,14 @@ async def get_consolidated_settings(
finally:
db.close()
normalized_llm = normalize_llm_settings(config.settings.llm)
return ConsolidatedSettingsResponse(
environments=[env.dict() for env in config.environments],
connections=config.settings.connections,
llm=normalized_llm,
llm_providers=llm_providers_list,
logging=config.settings.logging.dict(),
storage=config.settings.storage.dict()
)
return ConsolidatedSettingsResponse(
environments=[env.dict() for env in config.environments],
connections=config.settings.connections,
llm=config.settings.llm,
llm_providers=llm_providers_list,
logging=config.settings.logging.dict(),
storage=config.settings.storage.dict()
)
# [/DEF:get_consolidated_settings:Function]
# [DEF:update_consolidated_settings:Function]
@@ -357,9 +353,9 @@ async def update_consolidated_settings(
if "connections" in settings_patch:
current_settings.connections = settings_patch["connections"]
# Update LLM if provided
if "llm" in settings_patch:
current_settings.llm = normalize_llm_settings(settings_patch["llm"])
# Update LLM if provided
if "llm" in settings_patch:
current_settings.llm = settings_patch["llm"]
# Update Logging if provided
if "logging" in settings_patch:

View File

@@ -9,15 +9,9 @@ from fastapi import APIRouter, Depends, HTTPException, status, Query
from pydantic import BaseModel
from ...core.logger import belief_scope
from ...core.task_manager import TaskManager, Task, TaskStatus, LogEntry
from ...core.task_manager.models import LogFilter, LogStats
from ...dependencies import get_task_manager, has_permission, get_current_user, get_config_manager
from ...core.config_manager import ConfigManager
from ...services.llm_prompt_templates import (
is_multimodal_model,
normalize_llm_settings,
resolve_bound_provider_id,
)
from ...core.task_manager import TaskManager, Task, TaskStatus, LogEntry
from ...core.task_manager.models import LogFilter, LogStats
from ...dependencies import get_task_manager, has_permission, get_current_user
router = APIRouter()
@@ -45,54 +39,32 @@ class ResumeTaskRequest(BaseModel):
# @PRE: plugin_id must exist and params must be valid for that plugin.
# @POST: A new task is created and started.
# @RETURN: Task - The created task instance.
async def create_task(
request: CreateTaskRequest,
task_manager: TaskManager = Depends(get_task_manager),
current_user = Depends(get_current_user),
config_manager: ConfigManager = Depends(get_config_manager),
):
async def create_task(
request: CreateTaskRequest,
task_manager: TaskManager = Depends(get_task_manager),
current_user = Depends(get_current_user)
):
# Dynamic permission check based on plugin_id
has_permission(f"plugin:{request.plugin_id}", "EXECUTE")(current_user)
"""
Create and start a new task for a given plugin.
"""
with belief_scope("create_task"):
try:
# Special handling for LLM tasks to resolve provider config by task binding.
if request.plugin_id in {"llm_dashboard_validation", "llm_documentation"}:
from ...core.database import SessionLocal
from ...services.llm_provider import LLMProviderService
db = SessionLocal()
try:
llm_service = LLMProviderService(db)
provider_id = request.params.get("provider_id")
if not provider_id:
llm_settings = normalize_llm_settings(config_manager.get_config().settings.llm)
binding_key = "dashboard_validation" if request.plugin_id == "llm_dashboard_validation" else "documentation"
provider_id = resolve_bound_provider_id(llm_settings, binding_key)
if provider_id:
request.params["provider_id"] = provider_id
if not provider_id:
providers = llm_service.get_all_providers()
active_provider = next((p for p in providers if p.is_active), None)
if active_provider:
provider_id = active_provider.id
request.params["provider_id"] = provider_id
if provider_id:
db_provider = llm_service.get_provider(provider_id)
if not db_provider:
raise ValueError(f"LLM Provider {provider_id} not found")
if request.plugin_id == "llm_dashboard_validation" and not is_multimodal_model(
db_provider.default_model,
db_provider.provider_type,
):
raise HTTPException(
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
detail="Selected provider model is not multimodal for dashboard validation",
)
finally:
db.close()
try:
# Special handling for validation task to include provider config
if request.plugin_id == "llm_dashboard_validation":
from ...core.database import SessionLocal
from ...services.llm_provider import LLMProviderService
db = SessionLocal()
try:
llm_service = LLMProviderService(db)
provider_id = request.params.get("provider_id")
if provider_id:
db_provider = llm_service.get_provider(provider_id)
if not db_provider:
raise ValueError(f"LLM Provider {provider_id} not found")
finally:
db.close()
task = await task_manager.create_task(
plugin_id=request.plugin_id,

View File

@@ -21,7 +21,7 @@ import asyncio
from .dependencies import get_task_manager, get_scheduler_service
from .core.utils.network import NetworkError
from .core.logger import logger, belief_scope
from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm, dashboards, datasets, reports, assistant
from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm, dashboards, datasets, reports
from .api import auth
# [DEF:App:Global]
@@ -72,12 +72,12 @@ app.add_middleware(
)
# [DEF:network_error_handler:Function]
# @PURPOSE: Global exception handler for NetworkError.
# [DEF:log_requests:Function]
# @PURPOSE: Middleware to log incoming HTTP requests and their response status.
# @PRE: request is a FastAPI Request object.
# @POST: Returns 503 HTTP Exception.
# @POST: Logs request and response details.
# @PARAM: request (Request) - The incoming request object.
# @PARAM: exc (NetworkError) - The exception instance.
# @PARAM: call_next (Callable) - The next middleware or route handler.
@app.exception_handler(NetworkError)
async def network_error_handler(request: Request, exc: NetworkError):
with belief_scope("network_error_handler"):
@@ -86,34 +86,26 @@ async def network_error_handler(request: Request, exc: NetworkError):
status_code=503,
detail="Environment unavailable. Please check if the Superset instance is running."
)
# [/DEF:network_error_handler:Function]
# [DEF:log_requests:Function]
# @PURPOSE: Middleware to log incoming HTTP requests and their response status.
# @PRE: request is a FastAPI Request object.
# @POST: Logs request and response details.
# @PARAM: request (Request) - The incoming request object.
# @PARAM: call_next (Callable) - The next middleware or route handler.
@app.middleware("http")
async def log_requests(request: Request, call_next):
with belief_scope("log_requests"):
# Avoid spamming logs for polling endpoints
is_polling = request.url.path.endswith("/api/tasks") and request.method == "GET"
# Avoid spamming logs for polling endpoints
is_polling = request.url.path.endswith("/api/tasks") and request.method == "GET"
if not is_polling:
logger.info(f"Incoming request: {request.method} {request.url.path}")
try:
response = await call_next(request)
if not is_polling:
logger.info(f"Incoming request: {request.method} {request.url.path}")
try:
response = await call_next(request)
if not is_polling:
logger.info(f"Response status: {response.status_code} for {request.url.path}")
return response
except NetworkError as e:
logger.error(f"Network error caught in middleware: {e}")
raise HTTPException(
status_code=503,
detail="Environment unavailable. Please check if the Superset instance is running."
)
logger.info(f"Response status: {response.status_code} for {request.url.path}")
return response
except NetworkError as e:
logger.error(f"Network error caught in middleware: {e}")
raise HTTPException(
status_code=503,
detail="Environment unavailable. Please check if the Superset instance is running."
)
# [/DEF:log_requests:Function]
# Include API routes
@@ -132,7 +124,6 @@ app.include_router(storage.router, prefix="/api/storage", tags=["Storage"])
app.include_router(dashboards.router)
app.include_router(datasets.router)
app.include_router(reports.router)
app.include_router(assistant.router, prefix="/api/assistant", tags=["Assistant"])
# [DEF:api.include_routers:Action]
@@ -257,13 +248,12 @@ if frontend_path.exists():
# @POST: Returns the requested file or index.html.
@app.get("/{file_path:path}", include_in_schema=False)
async def serve_spa(file_path: str):
with belief_scope("serve_spa"):
# Only serve SPA for non-API paths
# API routes are registered separately and should be matched by FastAPI first
if file_path and (file_path.startswith("api/") or file_path.startswith("/api/") or file_path == "api"):
# This should not happen if API routers are properly registered
# Return 404 instead of serving HTML
raise HTTPException(status_code=404, detail=f"API endpoint not found: {file_path}")
# Only serve SPA for non-API paths
# API routes are registered separately and should be matched by FastAPI first
if file_path and (file_path.startswith("api/") or file_path.startswith("/api/") or file_path == "api"):
# This should not happen if API routers are properly registered
# Return 404 instead of serving HTML
raise HTTPException(status_code=404, detail=f"API endpoint not found: {file_path}")
full_path = frontend_path / file_path
if file_path and full_path.is_file():

View File

@@ -6,14 +6,9 @@
# @RELATION: READS_FROM -> app_configurations (database)
# @RELATION: USED_BY -> ConfigManager
from pydantic import BaseModel, Field
from typing import List, Optional
from ..models.storage import StorageConfig
from ..services.llm_prompt_templates import (
DEFAULT_LLM_ASSISTANT_SETTINGS,
DEFAULT_LLM_PROMPTS,
DEFAULT_LLM_PROVIDER_BINDINGS,
)
from pydantic import BaseModel, Field
from typing import List, Optional
from ..models.storage import StorageConfig
# [DEF:Schedule:DataClass]
# @PURPOSE: Represents a backup schedule configuration.
@@ -49,20 +44,12 @@ class LoggingConfig(BaseModel):
# [DEF:GlobalSettings:DataClass]
# @PURPOSE: Represents global application settings.
class GlobalSettings(BaseModel):
class GlobalSettings(BaseModel):
storage: StorageConfig = Field(default_factory=StorageConfig)
default_environment_id: Optional[str] = None
logging: LoggingConfig = Field(default_factory=LoggingConfig)
connections: List[dict] = []
llm: dict = Field(
default_factory=lambda: {
"providers": [],
"default_provider": "",
"prompts": dict(DEFAULT_LLM_PROMPTS),
"provider_bindings": dict(DEFAULT_LLM_PROVIDER_BINDINGS),
**dict(DEFAULT_LLM_ASSISTANT_SETTINGS),
}
)
llm: dict = Field(default_factory=lambda: {"providers": [], "default_provider": ""})
# Task retention settings
task_retention_days: int = 30

View File

@@ -1,12 +1,11 @@
# [DEF:backend.src.core.database:Module]
#
# @TIER: STANDARD
# @SEMANTICS: database, postgresql, sqlalchemy, session, persistence
# @PURPOSE: Configures database connection and session management (PostgreSQL-first).
# @LAYER: Core
# @RELATION: DEPENDS_ON -> sqlalchemy
# @RELATION: DEPENDS_ON -> backend.src.models.mapping
# @RELATION: DEPENDS_ON -> backend.src.core.auth.config
# @RELATION: USES -> backend.src.models.mapping
# @RELATION: USES -> backend.src.core.auth.config
#
# @INVARIANT: A single engine instance is used for the entire application.
@@ -19,7 +18,6 @@ from ..models import task as _task_models # noqa: F401
from ..models import auth as _auth_models # noqa: F401
from ..models import config as _config_models # noqa: F401
from ..models import llm as _llm_models # noqa: F401
from ..models import assistant as _assistant_models # noqa: F401
from .logger import belief_scope
from .auth.config import auth_config
import os
@@ -74,21 +72,18 @@ auth_engine = _build_engine(AUTH_DATABASE_URL)
# [/DEF:auth_engine:Variable]
# [DEF:SessionLocal:Class]
# @TIER: TRIVIAL
# @PURPOSE: A session factory for the main mappings database.
# @PRE: engine is initialized.
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
# [/DEF:SessionLocal:Class]
# [DEF:TasksSessionLocal:Class]
# @TIER: TRIVIAL
# @PURPOSE: A session factory for the tasks execution database.
# @PRE: tasks_engine is initialized.
TasksSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=tasks_engine)
# [/DEF:TasksSessionLocal:Class]
# [DEF:AuthSessionLocal:Class]
# @TIER: TRIVIAL
# @PURPOSE: A session factory for the authentication database.
# @PRE: auth_engine is initialized.
AuthSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=auth_engine)

View File

@@ -35,19 +35,7 @@ class BeliefFormatter(logging.Formatter):
def format(self, record):
anchor_id = getattr(_belief_state, 'anchor_id', None)
if anchor_id:
msg = str(record.msg)
# Supported molecular topology markers
markers = ("[EXPLORE]", "[REASON]", "[REFLECT]", "[COHERENCE:", "[Action]", "[Entry]", "[Exit]")
# Avoid duplicating anchor or overriding explicit markers
if msg.startswith(f"[{anchor_id}]"):
pass
elif any(msg.startswith(m) for m in markers):
record.msg = f"[{anchor_id}]{msg}"
else:
# Default covalent bond
record.msg = f"[{anchor_id}][Action] {msg}"
record.msg = f"[{anchor_id}][Action] {record.msg}"
return super().format(record)
# [/DEF:format:Function]
# [/DEF:BeliefFormatter:Class]
@@ -87,12 +75,12 @@ def belief_scope(anchor_id: str, message: str = ""):
try:
yield
# Log Coherence OK and Exit (DEBUG level to reduce noise)
logger.debug("[COHERENCE:OK]")
logger.debug(f"[{anchor_id}][Coherence:OK]")
if _enable_belief_state:
logger.debug("[Exit]")
logger.debug(f"[{anchor_id}][Exit]")
except Exception as e:
# Log Coherence Failed (DEBUG level to reduce noise)
logger.debug(f"[COHERENCE:FAILED] {str(e)}")
logger.debug(f"[{anchor_id}][Coherence:Failed] {str(e)}")
raise
finally:
# Restore old anchor
@@ -287,33 +275,5 @@ logger.addHandler(websocket_log_handler)
# Example usage:
# logger.info("Application started", extra={"context_key": "context_value"})
# logger.error("An error occurred", exc_info=True)
import types
# [DEF:explore:Function]
# @PURPOSE: Logs an EXPLORE message (Van der Waals force) for searching, alternatives, and hypotheses.
# @SEMANTICS: log, explore, molecule
def explore(self, msg, *args, **kwargs):
self.warning(f"[EXPLORE] {msg}", *args, **kwargs)
# [/DEF:explore:Function]
# [DEF:reason:Function]
# @PURPOSE: Logs a REASON message (Covalent bond) for strict deduction and core logic.
# @SEMANTICS: log, reason, molecule
def reason(self, msg, *args, **kwargs):
self.info(f"[REASON] {msg}", *args, **kwargs)
# [/DEF:reason:Function]
# [DEF:reflect:Function]
# @PURPOSE: Logs a REFLECT message (Hydrogen bond) for self-check and structural validation.
# @SEMANTICS: log, reflect, molecule
def reflect(self, msg, *args, **kwargs):
self.debug(f"[REFLECT] {msg}", *args, **kwargs)
# [/DEF:reflect:Function]
logger.explore = types.MethodType(explore, logger)
logger.reason = types.MethodType(reason, logger)
logger.reflect = types.MethodType(reflect, logger)
# [/DEF:Logger:Global]
# [/DEF:LoggerModule:Module]
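Taken together, the belief_scope and BeliefFormatter changes above imply the following logging contract; the expected output lines are inferred from the diff, not captured from a runtime.

from src.core.logger import logger, belief_scope

with belief_scope("MyService.do_work", "batch=42"):
    logger.info("processing batch")
    # formatted as: "[MyService.do_work][Action] processing batch"
# on clean exit, at DEBUG: "[MyService.do_work][Coherence:OK]" then "[MyService.do_work][Exit]"
# on exception, at DEBUG: "[MyService.do_work][Coherence:Failed] <error>" before the exception re-raises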

View File

@@ -11,7 +11,6 @@ from pathlib import Path
sys.path.append(str(Path(__file__).parent.parent.parent.parent / "src"))
import pytest
import logging
from src.core.logger import (
belief_scope,
logger,
@@ -22,27 +21,6 @@ from src.core.logger import (
from src.core.config_models import LoggingConfig
@pytest.fixture(autouse=True)
def reset_logger_state():
"""Reset logger state before each test to avoid cross-test contamination."""
config = LoggingConfig(
level="INFO",
task_log_level="INFO",
enable_belief_state=True
)
configure_logger(config)
# Also reset the logger level for caplog to work correctly
logging.getLogger("superset_tools_app").setLevel(logging.DEBUG)
yield
# Reset after test too
config = LoggingConfig(
level="INFO",
task_log_level="INFO",
enable_belief_state=True
)
configure_logger(config)
# [DEF:test_belief_scope_logs_entry_action_exit_at_debug:Function]
# @PURPOSE: Test that belief_scope generates [ID][Entry], [ID][Action], and [ID][Exit] logs at DEBUG level.
# @PRE: belief_scope is available. caplog fixture is used. Logger configured to DEBUG.
@@ -98,7 +76,7 @@ def test_belief_scope_error_handling(caplog):
log_messages = [record.message for record in caplog.records]
assert any("[FailingFunction][Entry]" in msg for msg in log_messages), "Entry log not found"
assert any("[FailingFunction][COHERENCE:FAILED]" in msg for msg in log_messages), "Failed coherence log not found"
assert any("[FailingFunction][Coherence:Failed]" in msg for msg in log_messages), "Failed coherence log not found"
# Exit should not be logged on failure
# Reset to INFO
@@ -128,9 +106,11 @@ def test_belief_scope_success_coherence(caplog):
log_messages = [record.message for record in caplog.records]
assert any("[SuccessFunction][COHERENCE:OK]" in msg for msg in log_messages), "Success coherence log not found"
assert any("[SuccessFunction][Coherence:OK]" in msg for msg in log_messages), "Success coherence log not found"
# Reset to INFO
config = LoggingConfig(level="INFO", task_log_level="INFO", enable_belief_state=True)
configure_logger(config)
# [/DEF:test_belief_scope_success_coherence:Function]
@@ -152,7 +132,7 @@ def test_belief_scope_not_visible_at_info(caplog):
# Entry/Exit/Coherence should NOT be visible at INFO level
assert not any("[InfoLevelFunction][Entry]" in msg for msg in log_messages), "Entry log should not be visible at INFO"
assert not any("[InfoLevelFunction][Exit]" in msg for msg in log_messages), "Exit log should not be visible at INFO"
assert not any("[InfoLevelFunction][COHERENCE:OK]" in msg for msg in log_messages), "Coherence log should not be visible at INFO"
assert not any("[InfoLevelFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence log should not be visible at INFO"
# [/DEF:test_belief_scope_not_visible_at_info:Function]
@@ -161,7 +141,7 @@ def test_belief_scope_not_visible_at_info(caplog):
# @PRE: None.
# @POST: Default level is INFO.
def test_task_log_level_default():
"""Test that default task log level is INFO (after reset fixture)."""
"""Test that default task log level is INFO."""
level = get_task_log_level()
assert level == "INFO"
# [/DEF:test_task_log_level_default:Function]
@@ -196,6 +176,15 @@ def test_configure_logger_task_log_level():
assert get_task_log_level() == "DEBUG", "task_log_level should be DEBUG"
assert should_log_task_level("DEBUG") is True, "DEBUG should be logged at DEBUG threshold"
# Reset to INFO
config = LoggingConfig(
level="INFO",
task_log_level="INFO",
enable_belief_state=True
)
configure_logger(config)
assert get_task_log_level() == "INFO", "task_log_level should be reset to INFO"
# [/DEF:test_configure_logger_task_log_level:Function]
@@ -224,58 +213,16 @@ def test_enable_belief_state_flag(caplog):
assert not any("[DisabledFunction][Entry]" in msg for msg in log_messages), "Entry should not be logged when disabled"
assert not any("[DisabledFunction][Exit]" in msg for msg in log_messages), "Exit should not be logged when disabled"
# Coherence:OK should still be logged (internal tracking)
assert any("[DisabledFunction][COHERENCE:OK]" in msg for msg in log_messages), "Coherence should still be logged"
assert any("[DisabledFunction][Coherence:OK]" in msg for msg in log_messages), "Coherence should still be logged"
# [DEF:test_belief_scope_missing_anchor:Function]
# @PURPOSE: Test @PRE condition: anchor_id must be provided
def test_belief_scope_missing_anchor():
"""Test that belief_scope enforces anchor_id to be provided."""
import pytest
from src.core.logger import belief_scope
with pytest.raises(TypeError):
# Missing required positional argument 'anchor_id'
with belief_scope():
pass
# [/DEF:test_belief_scope_missing_anchor:Function]
# [DEF:test_configure_logger_post_conditions:Function]
# @PURPOSE: Test @POST condition: Logger level, handlers, belief state flag, and task log level are updated.
def test_configure_logger_post_conditions(tmp_path):
"""Test that configure_logger satisfies all @POST conditions."""
import logging
from logging.handlers import RotatingFileHandler
from src.core.config_models import LoggingConfig
from src.core.logger import configure_logger, logger, BeliefFormatter, get_task_log_level
import src.core.logger as logger_module
log_file = tmp_path / "test.log"
# Re-enable for other tests
config = LoggingConfig(
level="WARNING",
level="DEBUG",
task_log_level="DEBUG",
enable_belief_state=False,
file_path=str(log_file)
enable_belief_state=True
)
configure_logger(config)
# 1. Logger level is updated
assert logger.level == logging.WARNING
# 2. Handlers are updated (file handler removed old ones, added new one)
file_handlers = [h for h in logger.handlers if isinstance(h, RotatingFileHandler)]
assert len(file_handlers) == 1
import pathlib
assert pathlib.Path(file_handlers[0].baseFilename) == log_file.resolve()
# 3. Formatter is set to BeliefFormatter
for handler in logger.handlers:
assert isinstance(handler.formatter, BeliefFormatter)
# 4. Global states
assert getattr(logger_module, '_enable_belief_state') is False
assert get_task_log_level() == "DEBUG"
# [/DEF:test_configure_logger_post_conditions:Function]
# [/DEF:test_enable_belief_state_flag:Function]
# [/DEF:test_logger:Module]

View File

@@ -1,195 +0,0 @@
# [DEF:backend.src.core.mapping_service:Module]
#
# @TIER: CRITICAL
# @SEMANTICS: mapping, ids, synchronization, environments, cross-filters
# @PURPOSE: Service for tracking and synchronizing Superset Resource IDs (UUID <-> Integer ID)
# @LAYER: Core
# @RELATION: DEPENDS_ON -> backend.src.models.mapping (ResourceMapping, ResourceType)
# @RELATION: DEPENDS_ON -> backend.src.core.logger
# @TEST_DATA: mock_superset_resources -> {'chart': [{'id': 42, 'uuid': '1234', 'slice_name': 'test'}], 'dataset': [{'id': 99, 'uuid': '5678', 'table_name': 'data'}]}
#
# @INVARIANT: sync_environment must handle remote API failures gracefully.
# [SECTION: IMPORTS]
from typing import Dict, List, Optional
from datetime import datetime, timezone
from sqlalchemy.orm import Session
from apscheduler.schedulers.background import BackgroundScheduler
from apscheduler.triggers.cron import CronTrigger
from src.models.mapping import ResourceMapping, ResourceType
from src.core.logger import logger, belief_scope
# [/SECTION]
# [DEF:IdMappingService:Class]
# @TIER: CRITICAL
# @PURPOSE: Service handling the cataloging and retrieval of remote Superset Integer IDs.
class IdMappingService:
# [DEF:__init__:Function]
# @PURPOSE: Initializes the mapping service.
def __init__(self, db_session: Session):
self.db = db_session
self.scheduler = BackgroundScheduler()
self._sync_job = None
# [/DEF:__init__:Function]
# [DEF:start_scheduler:Function]
# @PURPOSE: Starts the background scheduler with a given cron string.
# @PARAM: cron_string (str) - Cron expression for the sync interval.
# @PARAM: environments (List[str]) - List of environment IDs to sync.
# @PARAM: superset_client_factory - Function to get a client for an environment.
def start_scheduler(self, cron_string: str, environments: List[str], superset_client_factory):
with belief_scope("IdMappingService.start_scheduler"):
if self._sync_job:
self.scheduler.remove_job(self._sync_job.id)
logger.info("[IdMappingService.start_scheduler][Reflect] Removed existing sync job.")
def sync_all():
for env_id in environments:
client = superset_client_factory(env_id)
if client:
self.sync_environment(env_id, client)
self._sync_job = self.scheduler.add_job(
sync_all,
CronTrigger.from_crontab(cron_string),
id='id_mapping_sync_job',
replace_existing=True
)
if not self.scheduler.running:
self.scheduler.start()
logger.info(f"[IdMappingService.start_scheduler][Coherence:OK] Started background scheduler with cron: {cron_string}")
else:
logger.info(f"[IdMappingService.start_scheduler][Coherence:OK] Updated background scheduler with cron: {cron_string}")
# [/DEF:start_scheduler:Function]
# [DEF:sync_environment:Function]
# @PURPOSE: Fully synchronizes mapping for a specific environment.
# @PARAM: environment_id (str) - Target environment ID.
# @PARAM: superset_client - Instance capable of hitting the Superset API.
# @PRE: environment_id exists in the database.
# @POST: ResourceMapping records for the environment are created or updated.
def sync_environment(self, environment_id: str, superset_client) -> None:
"""
Polls the Superset APIs for the target environment and updates the local mapping table.
"""
with belief_scope("IdMappingService.sync_environment"):
logger.info(f"[IdMappingService.sync_environment][Action] Starting sync for environment {environment_id}")
            # Implementation Note: In a real scenario, superset_client must be an instance
            # capable of authenticating and iterating over /api/v1/chart/, /api/v1/dataset/, /api/v1/dashboard/.
            # Here we structure the logic according to the spec.
types_to_poll = [
(ResourceType.CHART, "chart", "slice_name"),
(ResourceType.DATASET, "dataset", "table_name"),
(ResourceType.DASHBOARD, "dashboard", "slug") # Note: dashboard slug or dashboard_title
]
total_synced = 0
try:
for res_enum, endpoint, name_field in types_to_poll:
logger.debug(f"[IdMappingService.sync_environment][Explore] Polling {endpoint} endpoint")
# Simulated API Fetch (Would be: superset_client.get(f"/api/v1/{endpoint}/")... )
# This relies on the superset API structure, e.g. { "result": [{"id": 1, "uuid": "...", name_field: "..."}] }
# We assume superset_client provides a generic method to fetch all pages.
try:
resources = superset_client.get_all_resources(endpoint)
for res in resources:
res_uuid = res.get("uuid")
res_id = str(res.get("id")) # Store as string
res_name = res.get(name_field)
if not res_uuid or not res_id:
continue
# Upsert Logic
mapping = self.db.query(ResourceMapping).filter_by(
environment_id=environment_id,
resource_type=res_enum,
uuid=res_uuid
).first()
if mapping:
mapping.remote_integer_id = res_id
mapping.resource_name = res_name
mapping.last_synced_at = datetime.now(timezone.utc)
else:
new_mapping = ResourceMapping(
environment_id=environment_id,
resource_type=res_enum,
uuid=res_uuid,
remote_integer_id=res_id,
resource_name=res_name,
last_synced_at=datetime.now(timezone.utc)
)
self.db.add(new_mapping)
total_synced += 1
except Exception as loop_e:
logger.error(f"[IdMappingService.sync_environment][Reason] Error polling {endpoint}: {loop_e}")
# Continue to next resource type instead of blowing up the whole sync
self.db.commit()
logger.info(f"[IdMappingService.sync_environment][Coherence:OK] Successfully synced {total_synced} items.")
except Exception as e:
self.db.rollback()
logger.error(f"[IdMappingService.sync_environment][Coherence:Failed] Critical sync failure: {e}")
raise
# [/DEF:sync_environment:Function]
# [DEF:get_remote_id:Function]
# @PURPOSE: Retrieves the remote integer ID for a given universal UUID.
# @PARAM: environment_id (str)
# @PARAM: resource_type (ResourceType)
# @PARAM: uuid (str)
# @RETURN: Optional[int]
def get_remote_id(self, environment_id: str, resource_type: ResourceType, uuid: str) -> Optional[int]:
mapping = self.db.query(ResourceMapping).filter_by(
environment_id=environment_id,
resource_type=resource_type,
uuid=uuid
).first()
if mapping:
try:
return int(mapping.remote_integer_id)
except ValueError:
return None
return None
# [/DEF:get_remote_id:Function]
# [DEF:get_remote_ids_batch:Function]
# @PURPOSE: Retrieves remote integer IDs for a list of universal UUIDs efficiently.
# @PARAM: environment_id (str)
# @PARAM: resource_type (ResourceType)
# @PARAM: uuids (List[str])
# @RETURN: Dict[str, int] - Mapping of UUID -> Integer ID
def get_remote_ids_batch(self, environment_id: str, resource_type: ResourceType, uuids: List[str]) -> Dict[str, int]:
if not uuids:
return {}
mappings = self.db.query(ResourceMapping).filter(
ResourceMapping.environment_id == environment_id,
ResourceMapping.resource_type == resource_type,
ResourceMapping.uuid.in_(uuids)
).all()
result = {}
for m in mappings:
try:
result[m.uuid] = int(m.remote_integer_id)
except ValueError:
pass
return result
# [/DEF:get_remote_ids_batch:Function]
# [/DEF:IdMappingService:Class]
# [/DEF:backend.src.core.mapping_service:Module]
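For the record, a usage sketch of the batch lookup in the service deleted above; the environment ID and session handling are hypothetical, and the sample result reuses the UUIDs from the module's @TEST_DATA annotation.

from src.core.database import SessionLocal
from src.core.mapping_service import IdMappingService
from src.models.mapping import ResourceType

db = SessionLocal()
try:
    svc = IdMappingService(db)
    # Only UUIDs with a synced integer ID appear in the result; unknown UUIDs are absent.
    ids = svc.get_remote_ids_batch("prod", ResourceType.CHART, ["1234", "5678"])
    # e.g. {"1234": 42, "5678": 99} after a successful sync_environment run
finally:
    db.close()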

View File

@@ -11,41 +11,28 @@
import zipfile
import yaml
import os
import json
import re
import tempfile
from pathlib import Path
from typing import Dict, Optional, List
from typing import Dict
from .logger import logger, belief_scope
from src.core.mapping_service import IdMappingService
from src.models.mapping import ResourceType
# [/SECTION]
# [DEF:MigrationEngine:Class]
# @PURPOSE: Engine for transforming Superset export ZIPs.
class MigrationEngine:
# [DEF:__init__:Function]
# @PURPOSE: Initializes the migration engine with optional ID mapping service.
# @PARAM: mapping_service (Optional[IdMappingService]) - Used for resolving target environment integer IDs.
def __init__(self, mapping_service: Optional[IdMappingService] = None):
self.mapping_service = mapping_service
# [/DEF:__init__:Function]
# [DEF:transform_zip:Function]
# @PURPOSE: Extracts ZIP, replaces database UUIDs in YAMLs, patches cross-filters, and re-packages.
# @PURPOSE: Extracts ZIP, replaces database UUIDs in YAMLs, and re-packages.
# @PARAM: zip_path (str) - Path to the source ZIP file.
# @PARAM: output_path (str) - Path where the transformed ZIP will be saved.
# @PARAM: db_mapping (Dict[str, str]) - Mapping of source UUID to target UUID.
# @PARAM: strip_databases (bool) - Whether to remove the databases directory from the archive.
# @PARAM: target_env_id (Optional[str]) - Used if fix_cross_filters is True to know which environment map to use.
# @PARAM: fix_cross_filters (bool) - Whether to patch dashboard json_metadata.
# @PRE: zip_path must point to a valid Superset export archive.
# @POST: Transformed archive is saved to output_path.
# @RETURN: bool - True if successful.
def transform_zip(self, zip_path: str, output_path: str, db_mapping: Dict[str, str], strip_databases: bool = True, target_env_id: Optional[str] = None, fix_cross_filters: bool = False) -> bool:
def transform_zip(self, zip_path: str, output_path: str, db_mapping: Dict[str, str], strip_databases: bool = True) -> bool:
"""
Transform a Superset export ZIP by replacing database UUIDs and optionally fixing cross-filters.
Transform a Superset export ZIP by replacing database UUIDs.
"""
with belief_scope("MigrationEngine.transform_zip"):
with tempfile.TemporaryDirectory() as temp_dir_str:
@@ -57,7 +44,8 @@ class MigrationEngine:
with zipfile.ZipFile(zip_path, 'r') as zf:
zf.extractall(temp_dir)
# 2. Transform YAMLs (Databases)
# 2. Transform YAMLs
# Datasets are usually in datasets/*.yaml
dataset_files = list(temp_dir.glob("**/datasets/**/*.yaml")) + list(temp_dir.glob("**/datasets/*.yaml"))
dataset_files = list(set(dataset_files))
@@ -66,20 +54,6 @@ class MigrationEngine:
logger.info(f"[MigrationEngine.transform_zip][Action] Transforming dataset: {ds_file}")
self._transform_yaml(ds_file, db_mapping)
# 2.5 Patch Cross-Filters (Dashboards)
if fix_cross_filters and self.mapping_service and target_env_id:
dash_files = list(temp_dir.glob("**/dashboards/**/*.yaml")) + list(temp_dir.glob("**/dashboards/*.yaml"))
dash_files = list(set(dash_files))
logger.info(f"[MigrationEngine.transform_zip][State] Found {len(dash_files)} dashboard files for patching.")
# Gather all source UUID-to-ID mappings from the archive first
source_id_to_uuid_map = self._extract_chart_uuids_from_archive(temp_dir)
for dash_file in dash_files:
logger.info(f"[MigrationEngine.transform_zip][Action] Patching dashboard: {dash_file}")
self._patch_dashboard_metadata(dash_file, target_env_id, source_id_to_uuid_map)
# 3. Re-package
logger.info(f"[MigrationEngine.transform_zip][Action] Re-packaging ZIP to: {output_path} (strip_databases={strip_databases})")
with zipfile.ZipFile(output_path, 'w', zipfile.ZIP_DEFLATED) as zf:
@@ -123,100 +97,6 @@ class MigrationEngine:
yaml.dump(data, f)
# [/DEF:_transform_yaml:Function]
# [DEF:_extract_chart_uuids_from_archive:Function]
# @PURPOSE: Scans the unpacked ZIP to map local exported integer IDs back to their UUIDs.
# @PARAM: temp_dir (Path) - Root dir of unpacked archive
# @RETURN: Dict[int, str] - Mapping of source Integer ID to UUID.
def _extract_chart_uuids_from_archive(self, temp_dir: Path) -> Dict[int, str]:
        # Implementation Note: This is a placeholder for the logic that extracts
        # the actual source IDs. In a real scenario this involves parsing chart YAMLs
        # or inspecting the export manifest structure, where source IDs are stored.
        # For simplicity in the US1 MVP, we read them from chart files if present.
mapping = {}
chart_files = list(temp_dir.glob("**/charts/**/*.yaml")) + list(temp_dir.glob("**/charts/*.yaml"))
for cf in set(chart_files):
try:
with open(cf, 'r') as f:
cdata = yaml.safe_load(f)
if cdata and 'id' in cdata and 'uuid' in cdata:
mapping[cdata['id']] = cdata['uuid']
except Exception:
pass
return mapping
# [/DEF:_extract_chart_uuids_from_archive:Function]
# [DEF:_patch_dashboard_metadata:Function]
# @PURPOSE: Replaces integer IDs in json_metadata.
# @PARAM: file_path (Path)
# @PARAM: target_env_id (str)
# @PARAM: source_map (Dict[int, str])
def _patch_dashboard_metadata(self, file_path: Path, target_env_id: str, source_map: Dict[int, str]):
with belief_scope("MigrationEngine._patch_dashboard_metadata"):
try:
with open(file_path, 'r') as f:
data = yaml.safe_load(f)
if not data or 'json_metadata' not in data:
return
metadata_str = data['json_metadata']
if not metadata_str:
return
metadata = json.loads(metadata_str)
modified = False
# We need to deeply traverse and replace. For MVP, string replacement over the raw JSON is an option,
# but careful dict traversal is safer.
# Fetch target UUIDs for everything we know:
uuids_needed = list(source_map.values())
target_ids = self.mapping_service.get_remote_ids_batch(target_env_id, ResourceType.CHART, uuids_needed)
if not target_ids:
logger.info("[MigrationEngine._patch_dashboard_metadata][Reflect] No remote target IDs found in mapping database.")
return
# Map Source Int -> Target Int
source_to_target = {}
missing_targets = []
for s_id, s_uuid in source_map.items():
if s_uuid in target_ids:
source_to_target[s_id] = target_ids[s_uuid]
else:
missing_targets.append(s_id)
if missing_targets:
logger.warning(f"[MigrationEngine._patch_dashboard_metadata][Coherence:Recoverable] Missing target IDs for source IDs: {missing_targets}. Cross-filters for these IDs might break.")
if not source_to_target:
logger.info("[MigrationEngine._patch_dashboard_metadata][Reflect] No source IDs matched remotely. Skipping patch.")
return
# Complex metadata traversal would go here (e.g. for native_filter_configuration)
                # We use regex replacement over the raw string so we do not depend on the exact nested dict layout.
new_metadata_str = metadata_str
# Replace chartId and datasetId assignments explicitly.
# Pattern: "datasetId": 42 or "chartId": 42
for s_id, t_id in source_to_target.items():
# Replace in native_filter_configuration targets
new_metadata_str = re.sub(r'("datasetId"\s*:\s*)' + str(s_id) + r'(\b)', r'\g<1>' + str(t_id) + r'\g<2>', new_metadata_str)
new_metadata_str = re.sub(r'("chartId"\s*:\s*)' + str(s_id) + r'(\b)', r'\g<1>' + str(t_id) + r'\g<2>', new_metadata_str)
                # Re-parse to validate that the result is still well-formed JSON
data['json_metadata'] = json.dumps(json.loads(new_metadata_str))
with open(file_path, 'w') as f:
yaml.dump(data, f)
logger.info(f"[MigrationEngine._patch_dashboard_metadata][Reason] Re-serialized modified JSON metadata for dashboard.")
except Exception as e:
logger.error(f"[MigrationEngine._patch_dashboard_metadata][Coherence:Failed] Metadata patch failed: {e}")
# [/DEF:_patch_dashboard_metadata:Function]
# [/DEF:MigrationEngine:Class]
# [/DEF:backend.src.core.migration_engine:Module]
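The deleted _patch_dashboard_metadata relied on the regex substitution shown above; a standalone sketch of its effect on a sample json_metadata string (IDs hypothetical):

import json
import re

metadata_str = json.dumps({
    "chartId": 42,
    "native_filter_configuration": [{"targets": [{"datasetId": 42}]}],
})
source_to_target = {42: 107}  # hypothetical source -> target integer IDs
for s_id, t_id in source_to_target.items():
    # the (\b) word boundary keeps 42 from matching inside e.g. 420
    metadata_str = re.sub(r'("datasetId"\s*:\s*)' + str(s_id) + r'(\b)',
                          r'\g<1>' + str(t_id) + r'\g<2>', metadata_str)
    metadata_str = re.sub(r'("chartId"\s*:\s*)' + str(s_id) + r'(\b)',
                          r'\g<1>' + str(t_id) + r'\g<2>', metadata_str)
patched = json.loads(metadata_str)
assert patched["chartId"] == 107
assert patched["native_filter_configuration"][0]["targets"][0]["datasetId"] == 107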

View File

@@ -11,7 +11,6 @@
# [SECTION: IMPORTS]
import json
import re
import zipfile
from pathlib import Path
from typing import Dict, List, Optional, Tuple, Union, cast
@@ -121,252 +120,6 @@ class SupersetClient:
return result
# [/DEF:get_dashboards_summary:Function]
# [DEF:get_dashboard:Function]
# @PURPOSE: Fetches a single dashboard by ID.
# @PRE: Client is authenticated and dashboard_id exists.
# @POST: Returns dashboard payload from Superset API.
# @RETURN: Dict
def get_dashboard(self, dashboard_id: int) -> Dict:
with belief_scope("SupersetClient.get_dashboard", f"id={dashboard_id}"):
response = self.network.request(method="GET", endpoint=f"/dashboard/{dashboard_id}")
return cast(Dict, response)
# [/DEF:get_dashboard:Function]
# [DEF:get_chart:Function]
# @PURPOSE: Fetches a single chart by ID.
# @PRE: Client is authenticated and chart_id exists.
# @POST: Returns chart payload from Superset API.
# @RETURN: Dict
def get_chart(self, chart_id: int) -> Dict:
with belief_scope("SupersetClient.get_chart", f"id={chart_id}"):
response = self.network.request(method="GET", endpoint=f"/chart/{chart_id}")
return cast(Dict, response)
# [/DEF:get_chart:Function]
# [DEF:get_dashboard_detail:Function]
# @PURPOSE: Fetches detailed dashboard information including related charts and datasets.
# @PRE: Client is authenticated and dashboard_id exists.
# @POST: Returns dashboard metadata with charts and datasets lists.
# @RETURN: Dict
def get_dashboard_detail(self, dashboard_id: int) -> Dict:
with belief_scope("SupersetClient.get_dashboard_detail", f"id={dashboard_id}"):
dashboard_response = self.get_dashboard(dashboard_id)
dashboard_data = dashboard_response.get("result", dashboard_response)
charts: List[Dict] = []
datasets: List[Dict] = []
def extract_dataset_id_from_form_data(form_data: Optional[Dict]) -> Optional[int]:
if not isinstance(form_data, dict):
return None
datasource = form_data.get("datasource")
if isinstance(datasource, str):
matched = re.match(r"^(\d+)__", datasource)
if matched:
try:
return int(matched.group(1))
except ValueError:
return None
if isinstance(datasource, dict):
ds_id = datasource.get("id")
try:
return int(ds_id) if ds_id is not None else None
except (TypeError, ValueError):
return None
ds_id = form_data.get("datasource_id")
try:
return int(ds_id) if ds_id is not None else None
except (TypeError, ValueError):
return None
# Canonical endpoints from Superset OpenAPI:
# /dashboard/{id_or_slug}/charts and /dashboard/{id_or_slug}/datasets.
try:
charts_response = self.network.request(
method="GET",
endpoint=f"/dashboard/{dashboard_id}/charts"
)
charts_payload = charts_response.get("result", []) if isinstance(charts_response, dict) else []
for chart_obj in charts_payload:
if not isinstance(chart_obj, dict):
continue
chart_id = chart_obj.get("id")
if chart_id is None:
continue
form_data = chart_obj.get("form_data")
if isinstance(form_data, str):
try:
form_data = json.loads(form_data)
except Exception:
form_data = {}
dataset_id = extract_dataset_id_from_form_data(form_data) or chart_obj.get("datasource_id")
charts.append({
"id": int(chart_id),
"title": chart_obj.get("slice_name") or chart_obj.get("name") or f"Chart {chart_id}",
"viz_type": (form_data.get("viz_type") if isinstance(form_data, dict) else None),
"dataset_id": int(dataset_id) if dataset_id is not None else None,
"last_modified": chart_obj.get("changed_on"),
"overview": chart_obj.get("description") or (form_data.get("viz_type") if isinstance(form_data, dict) else None) or "Chart",
})
except Exception as e:
app_logger.warning("[get_dashboard_detail][Warning] Failed to fetch dashboard charts: %s", e)
try:
datasets_response = self.network.request(
method="GET",
endpoint=f"/dashboard/{dashboard_id}/datasets"
)
datasets_payload = datasets_response.get("result", []) if isinstance(datasets_response, dict) else []
for dataset_obj in datasets_payload:
if not isinstance(dataset_obj, dict):
continue
dataset_id = dataset_obj.get("id")
if dataset_id is None:
continue
db_payload = dataset_obj.get("database")
db_name = db_payload.get("database_name") if isinstance(db_payload, dict) else None
table_name = dataset_obj.get("table_name") or dataset_obj.get("datasource_name") or dataset_obj.get("name") or f"Dataset {dataset_id}"
schema = dataset_obj.get("schema")
fq_name = f"{schema}.{table_name}" if schema else table_name
datasets.append({
"id": int(dataset_id),
"table_name": table_name,
"schema": schema,
"database": db_name or dataset_obj.get("database_name") or "Unknown",
"last_modified": dataset_obj.get("changed_on"),
"overview": fq_name,
})
except Exception as e:
app_logger.warning("[get_dashboard_detail][Warning] Failed to fetch dashboard datasets: %s", e)
# Fallback: derive chart IDs from layout metadata if dashboard charts endpoint fails.
if not charts:
raw_position_json = dashboard_data.get("position_json")
chart_ids_from_position = set()
if isinstance(raw_position_json, str) and raw_position_json:
try:
parsed_position = json.loads(raw_position_json)
chart_ids_from_position.update(self._extract_chart_ids_from_layout(parsed_position))
except Exception:
pass
elif isinstance(raw_position_json, dict):
chart_ids_from_position.update(self._extract_chart_ids_from_layout(raw_position_json))
raw_json_metadata = dashboard_data.get("json_metadata")
if isinstance(raw_json_metadata, str) and raw_json_metadata:
try:
parsed_metadata = json.loads(raw_json_metadata)
chart_ids_from_position.update(self._extract_chart_ids_from_layout(parsed_metadata))
except Exception:
pass
elif isinstance(raw_json_metadata, dict):
chart_ids_from_position.update(self._extract_chart_ids_from_layout(raw_json_metadata))
app_logger.info(
"[get_dashboard_detail][State] Extracted %s fallback chart IDs from layout (dashboard_id=%s)",
len(chart_ids_from_position),
dashboard_id,
)
for chart_id in sorted(chart_ids_from_position):
try:
chart_response = self.get_chart(int(chart_id))
chart_data = chart_response.get("result", chart_response)
charts.append({
"id": int(chart_id),
"title": chart_data.get("slice_name") or chart_data.get("name") or f"Chart {chart_id}",
"viz_type": chart_data.get("viz_type"),
"dataset_id": chart_data.get("datasource_id"),
"last_modified": chart_data.get("changed_on"),
"overview": chart_data.get("description") or chart_data.get("viz_type") or "Chart",
})
except Exception as e:
app_logger.warning("[get_dashboard_detail][Warning] Failed to resolve fallback chart %s: %s", chart_id, e)
# Backfill datasets from chart datasource IDs.
dataset_ids_from_charts = {
c.get("dataset_id")
for c in charts
if c.get("dataset_id") is not None
}
known_dataset_ids = {d.get("id") for d in datasets}
missing_dataset_ids = [ds_id for ds_id in dataset_ids_from_charts if ds_id not in known_dataset_ids]
for dataset_id in missing_dataset_ids:
try:
dataset_response = self.get_dataset(int(dataset_id))
dataset_data = dataset_response.get("result", dataset_response)
db_payload = dataset_data.get("database")
db_name = db_payload.get("database_name") if isinstance(db_payload, dict) else None
table_name = dataset_data.get("table_name") or f"Dataset {dataset_id}"
schema = dataset_data.get("schema")
fq_name = f"{schema}.{table_name}" if schema else table_name
datasets.append({
"id": int(dataset_id),
"table_name": table_name,
"schema": schema,
"database": db_name or "Unknown",
"last_modified": dataset_data.get("changed_on_utc") or dataset_data.get("changed_on"),
"overview": fq_name,
})
except Exception as e:
app_logger.warning("[get_dashboard_detail][Warning] Failed to resolve dataset %s: %s", dataset_id, e)
unique_charts = {}
for chart in charts:
unique_charts[chart["id"]] = chart
unique_datasets = {}
for dataset in datasets:
unique_datasets[dataset["id"]] = dataset
return {
"id": dashboard_data.get("id", dashboard_id),
"title": dashboard_data.get("dashboard_title") or dashboard_data.get("title") or f"Dashboard {dashboard_id}",
"slug": dashboard_data.get("slug"),
"url": dashboard_data.get("url"),
"description": dashboard_data.get("description") or "",
"last_modified": dashboard_data.get("changed_on_utc") or dashboard_data.get("changed_on"),
"published": dashboard_data.get("published"),
"charts": list(unique_charts.values()),
"datasets": list(unique_datasets.values()),
"chart_count": len(unique_charts),
"dataset_count": len(unique_datasets),
}
# [/DEF:get_dashboard_detail:Function]
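The nested extract_dataset_id_from_form_data helper above handles three datasource shapes; a standalone restatement for illustration (the function name here is not from the source):

import re
from typing import Optional

def dataset_id_from_datasource(value) -> Optional[int]:
    # legacy "<id>__<type>" string, e.g. "3__table"
    if isinstance(value, str):
        matched = re.match(r"^(\d+)__", value)
        return int(matched.group(1)) if matched else None
    # dict datasource, e.g. {"id": 7}
    if isinstance(value, dict):
        ds_id = value.get("id")
        try:
            return int(ds_id) if ds_id is not None else None
        except (TypeError, ValueError):
            return None
    return None

assert dataset_id_from_datasource("3__table") == 3
assert dataset_id_from_datasource({"id": "7"}) == 7
assert dataset_id_from_datasource("garbage") is None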
# [DEF:_extract_chart_ids_from_layout:Function]
# @PURPOSE: Traverses dashboard layout metadata and extracts chart IDs from common keys.
# @PRE: payload can be dict/list/scalar.
# @POST: Returns a set of chart IDs found in nested structures.
def _extract_chart_ids_from_layout(self, payload: Union[Dict, List, str, int, None]) -> set:
with belief_scope("_extract_chart_ids_from_layout"):
found = set()
def walk(node):
if isinstance(node, dict):
for key, value in node.items():
if key in ("chartId", "chart_id", "slice_id", "sliceId"):
try:
found.add(int(value))
except (TypeError, ValueError):
pass
if key == "id" and isinstance(value, str):
match = re.match(r"^CHART-(\d+)$", value)
if match:
try:
found.add(int(match.group(1)))
except ValueError:
pass
walk(value)
elif isinstance(node, list):
for item in node:
walk(item)
walk(payload)
return found
# [/DEF:_extract_chart_ids_from_layout:Function]
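A quick sketch of what the layout walker above extracts from a typical position_json fragment; the layout keys are hypothetical but follow Superset's CHART-<id> naming convention, and client is assumed to be a constructed SupersetClient.

layout = {
    "CHART-explore-7": {"id": "CHART-7", "meta": {"chartId": 7}},
    "GRID_ID": {"children": [{"meta": {"slice_id": "13"}}]},
}
assert client._extract_chart_ids_from_layout(layout) == {7, 13}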
# [DEF:export_dashboard:Function]
    # @PURPOSE: Exports a dashboard as a ZIP archive.
    # @PARAM: dashboard_id (int) - ID of the dashboard to export.
@@ -493,15 +246,6 @@ class SupersetClient:
# @RELATION: CALLS -> self.network.request (for related_objects)
def get_dataset_detail(self, dataset_id: int) -> Dict:
with belief_scope("SupersetClient.get_dataset_detail", f"id={dataset_id}"):
def as_bool(value, default=False):
if value is None:
return default
if isinstance(value, bool):
return value
if isinstance(value, str):
return value.strip().lower() in ("1", "true", "yes", "y", "on")
return bool(value)
# Get base dataset info
response = self.get_dataset(dataset_id)
@@ -515,15 +259,12 @@ class SupersetClient:
columns = dataset.get("columns", [])
column_info = []
for col in columns:
col_id = col.get("id")
if col_id is None:
continue
column_info.append({
"id": int(col_id),
"id": col.get("id"),
"name": col.get("column_name"),
"type": col.get("type"),
"is_dttm": as_bool(col.get("is_dttm"), default=False),
"is_active": as_bool(col.get("is_active"), default=True),
"is_dttm": col.get("is_dttm", False),
"is_active": col.get("is_active", True),
"description": col.get("description", "")
})
@@ -545,25 +286,11 @@ class SupersetClient:
dashboards_data = []
for dash in dashboards_data:
if isinstance(dash, dict):
dash_id = dash.get("id")
if dash_id is None:
continue
linked_dashboards.append({
"id": int(dash_id),
"title": dash.get("dashboard_title") or dash.get("title", f"Dashboard {dash_id}"),
"slug": dash.get("slug")
})
else:
try:
dash_id = int(dash)
except (TypeError, ValueError):
continue
linked_dashboards.append({
"id": dash_id,
"title": f"Dashboard {dash_id}",
"slug": None
})
linked_dashboards.append({
"id": dash.get("id"),
"title": dash.get("dashboard_title") or dash.get("title", "Unknown"),
"slug": dash.get("slug")
})
except Exception as e:
app_logger.warning(f"[get_dataset_detail][Warning] Failed to fetch related dashboards: {e}")
linked_dashboards = []
@@ -575,18 +302,14 @@ class SupersetClient:
"id": dataset.get("id"),
"table_name": dataset.get("table_name"),
"schema": dataset.get("schema"),
"database": (
dataset.get("database", {}).get("database_name", "Unknown")
if isinstance(dataset.get("database"), dict)
else dataset.get("database_name") or "Unknown"
),
"database": dataset.get("database", {}).get("database_name", "Unknown"),
"description": dataset.get("description", ""),
"columns": column_info,
"column_count": len(column_info),
"sql": sql,
"linked_dashboards": linked_dashboards,
"linked_dashboard_count": len(linked_dashboards),
"is_sqllab_view": as_bool(dataset.get("is_sqllab_view"), default=False),
"is_sqllab_view": dataset.get("is_sqllab_view", False),
"created_on": dataset.get("created_on"),
"changed_on": dataset.get("changed_on")
}

View File

@@ -6,11 +6,9 @@
# @TIER: CRITICAL
# @INVARIANT: Each TaskContext is bound to a single task execution.
# [SECTION: IMPORTS]
from typing import Dict, Any, Callable
from .task_logger import TaskLogger
from ..logger import belief_scope
# [/SECTION]
# [DEF:TaskContext:Class]
@@ -46,14 +44,13 @@ class TaskContext:
params: Dict[str, Any],
default_source: str = "plugin"
):
with belief_scope("__init__"):
self._task_id = task_id
self._params = params
self._logger = TaskLogger(
task_id=task_id,
add_log_fn=add_log_fn,
source=default_source
)
self._task_id = task_id
self._params = params
self._logger = TaskLogger(
task_id=task_id,
add_log_fn=add_log_fn,
source=default_source
)
# [/DEF:__init__:Function]
# [DEF:task_id:Function]
@@ -63,8 +60,7 @@ class TaskContext:
# @RETURN: str - The task ID.
@property
def task_id(self) -> str:
with belief_scope("task_id"):
return self._task_id
return self._task_id
# [/DEF:task_id:Function]
# [DEF:logger:Function]
@@ -74,8 +70,7 @@ class TaskContext:
# @RETURN: TaskLogger - The logger instance.
@property
def logger(self) -> TaskLogger:
with belief_scope("logger"):
return self._logger
return self._logger
# [/DEF:logger:Function]
# [DEF:params:Function]
@@ -85,8 +80,7 @@ class TaskContext:
# @RETURN: Dict[str, Any] - The task parameters.
@property
def params(self) -> Dict[str, Any]:
with belief_scope("params"):
return self._params
return self._params
# [/DEF:params:Function]
# [DEF:get_param:Function]
@@ -97,8 +91,7 @@ class TaskContext:
# @PARAM: default (Any) - Default value if key not found.
# @RETURN: Any - Parameter value or default.
def get_param(self, key: str, default: Any = None) -> Any:
with belief_scope("get_param"):
return self._params.get(key, default)
return self._params.get(key, default)
# [/DEF:get_param:Function]
# [DEF:create_sub_context:Function]
@@ -109,13 +102,12 @@ class TaskContext:
# @RETURN: TaskContext - New context with different source.
def create_sub_context(self, source: str) -> "TaskContext":
"""Create a sub-context with a different default source for logging."""
with belief_scope("create_sub_context"):
return TaskContext(
task_id=self._task_id,
add_log_fn=self._logger._add_log,
params=self._params,
default_source=source
)
return TaskContext(
task_id=self._task_id,
add_log_fn=self._logger._add_log,
params=self._params,
default_source=source
)
# [/DEF:create_sub_context:Function]
# [/DEF:TaskContext:Class]
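# Illustrative sketch of the sub-context pattern with simplified stand-ins (these
# are not the real TaskContext/TaskLogger; the sink is assumed to take one
# log-entry dict per call):
from typing import Any, Callable, Dict

class MiniLogger:
    def __init__(self, task_id: str, add_log_fn: Callable, source: str):
        self.task_id, self._add_log, self.source = task_id, add_log_fn, source

    def info(self, message: str):
        self._add_log({"task_id": self.task_id, "source": self.source, "message": message})

class MiniContext:
    def __init__(self, task_id: str, add_log_fn: Callable, params: Dict[str, Any], source: str = "plugin"):
        self._task_id, self._params = task_id, params
        self._logger = MiniLogger(task_id, add_log_fn, source)

    def create_sub_context(self, source: str) -> "MiniContext":
        # Same task_id, params, and sink; only the default source changes.
        return MiniContext(self._task_id, self._logger._add_log, self._params, source)

captured = []
ctx = MiniContext("task-1", captured.append, {"env": "dev"})
ctx.create_sub_context("backup-plugin")._logger.info("started")
assert captured == [{"task_id": "task-1", "source": "backup-plugin", "message": "started"}]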

View File

@@ -1,5 +1,4 @@
# [DEF:TaskManagerModule:Module]
# @TIER: CRITICAL
# @SEMANTICS: task, manager, lifecycle, execution, state
# @PURPOSE: Manages the lifecycle of tasks, including their creation, execution, and state tracking. It uses a thread pool to run plugins asynchronously.
# @LAYER: Core
@@ -75,10 +74,9 @@ class TaskManager:
# @POST: Logs are batch-written to database every LOG_FLUSH_INTERVAL seconds.
def _flusher_loop(self):
"""Background thread that flushes log buffer to database."""
with belief_scope("_flusher_loop"):
while not self._flusher_stop_event.is_set():
self._flush_logs()
self._flusher_stop_event.wait(self.LOG_FLUSH_INTERVAL)
while not self._flusher_stop_event.is_set():
self._flush_logs()
self._flusher_stop_event.wait(self.LOG_FLUSH_INTERVAL)
# [/DEF:_flusher_loop:Function]
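# Minimal sketch of the interval-flush loop above: Event.wait() doubles as the
# sleep, so a stop request interrupts the wait instead of running out the clock.
import threading
import time

stop_event = threading.Event()
flushes = []

def flusher_loop(interval: float = 0.05):
    while not stop_event.is_set():
        flushes.append(time.monotonic())   # stands in for _flush_logs()
        stop_event.wait(interval)          # returns early once set() is called

t = threading.Thread(target=flusher_loop, daemon=True)
t.start()
time.sleep(0.12)        # let a couple of cycles run
stop_event.set()        # shutdown: the wait() unblocks immediately
t.join(timeout=1)
assert flushes          # at least one flush happened before stopping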
# [DEF:_flush_logs:Function]
@@ -87,24 +85,23 @@ class TaskManager:
# @POST: All buffered logs are written to task_logs table.
def _flush_logs(self):
"""Flush all buffered logs to the database."""
with belief_scope("_flush_logs"):
with self._log_buffer_lock:
task_ids = list(self._log_buffer.keys())
for task_id in task_ids:
with self._log_buffer_lock:
task_ids = list(self._log_buffer.keys())
logs = self._log_buffer.pop(task_id, [])
for task_id in task_ids:
with self._log_buffer_lock:
logs = self._log_buffer.pop(task_id, [])
if logs:
try:
self.log_persistence_service.add_logs(task_id, logs)
except Exception as e:
logger.error(f"Failed to flush logs for task {task_id}: {e}")
# Re-add logs to buffer on failure
with self._log_buffer_lock:
if task_id not in self._log_buffer:
self._log_buffer[task_id] = []
self._log_buffer[task_id].extend(logs)
if logs:
try:
self.log_persistence_service.add_logs(task_id, logs)
except Exception as e:
logger.error(f"Failed to flush logs for task {task_id}: {e}")
# Re-add logs to buffer on failure
with self._log_buffer_lock:
if task_id not in self._log_buffer:
self._log_buffer[task_id] = []
self._log_buffer[task_id].extend(logs)
# [/DEF:_flush_logs:Function]
# [DEF:_flush_task_logs:Function]
@@ -114,15 +111,14 @@ class TaskManager:
# @PARAM: task_id (str) - The task ID.
def _flush_task_logs(self, task_id: str):
"""Flush logs for a specific task immediately."""
with belief_scope("_flush_task_logs"):
with self._log_buffer_lock:
logs = self._log_buffer.pop(task_id, [])
if logs:
try:
self.log_persistence_service.add_logs(task_id, logs)
except Exception as e:
logger.error(f"Failed to flush logs for task {task_id}: {e}")
with self._log_buffer_lock:
logs = self._log_buffer.pop(task_id, [])
if logs:
try:
self.log_persistence_service.add_logs(task_id, logs)
except Exception as e:
logger.error(f"Failed to flush logs for task {task_id}: {e}")
# [/DEF:_flush_task_logs:Function]
# [DEF:create_task:Function]

View File

@@ -1,5 +1,4 @@
# [DEF:TaskPersistenceModule:Module]
# @TIER: CRITICAL
# @SEMANTICS: persistence, sqlite, sqlalchemy, task, storage
# @PURPOSE: Handles the persistence of tasks using SQLAlchemy and the tasks.db database.
# @LAYER: Core
@@ -20,65 +19,42 @@ from ..logger import logger, belief_scope
# [/SECTION]
# [DEF:TaskPersistenceService:Class]
# @TIER: CRITICAL
# @SEMANTICS: persistence, service, database, sqlalchemy
# @PURPOSE: Provides methods to save and load tasks from the tasks.db database using SQLAlchemy.
# @INVARIANT: Persistence must handle potentially missing task fields gracefully.
class TaskPersistenceService:
# [DEF:_json_load_if_needed:Function]
# @PURPOSE: Safely load JSON strings from DB if necessary
# @PRE: value is an arbitrary database value
# @POST: Returns parsed JSON object, list, string, or primitive
@staticmethod
def _json_load_if_needed(value):
with belief_scope("TaskPersistenceService._json_load_if_needed"):
if value is None:
return None
if isinstance(value, (dict, list)):
return value
if isinstance(value, str):
stripped = value.strip()
if stripped == "" or stripped.lower() == "null":
return None
try:
return json.loads(stripped)
except json.JSONDecodeError:
return value
if value is None:
return None
if isinstance(value, (dict, list)):
return value
# [/DEF:_json_load_if_needed:Function]
if isinstance(value, str):
stripped = value.strip()
if stripped == "" or stripped.lower() == "null":
return None
try:
return json.loads(stripped)
except json.JSONDecodeError:
return value
return value
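# Illustrative behavioral check (local copy of the helper so it runs standalone):
import json

def json_load_if_needed(value):
    if value is None or isinstance(value, (dict, list)):
        return value
    if isinstance(value, str):
        stripped = value.strip()
        if stripped == "" or stripped.lower() == "null":
            return None
        try:
            return json.loads(stripped)
        except json.JSONDecodeError:
            return value  # non-JSON strings pass through unchanged
    return value

assert json_load_if_needed('{"a": 1}') == {"a": 1}
assert json_load_if_needed("not json") == "not json"
assert json_load_if_needed("  null ") is None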
# [DEF:_parse_datetime:Function]
# @PURPOSE: Safely parse a datetime string from the database
# @PRE: value is an ISO string or datetime object
# @POST: Returns datetime object or None
@staticmethod
def _parse_datetime(value):
with belief_scope("TaskPersistenceService._parse_datetime"):
if value is None or isinstance(value, datetime):
return value
if isinstance(value, str):
try:
return datetime.fromisoformat(value)
except ValueError:
return None
return None
# [/DEF:_parse_datetime:Function]
if value is None or isinstance(value, datetime):
return value
if isinstance(value, str):
try:
return datetime.fromisoformat(value)
except ValueError:
return None
return None
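# Illustrative behavioral check (local copy, runnable standalone):
from datetime import datetime

def parse_datetime(value):
    if value is None or isinstance(value, datetime):
        return value
    if isinstance(value, str):
        try:
            return datetime.fromisoformat(value)
        except ValueError:
            return None  # malformed strings degrade to None, not an exception
    return None

assert parse_datetime("2024-01-15T10:00:00") == datetime(2024, 1, 15, 10, 0, 0)
assert parse_datetime("not-a-date") is None
assert parse_datetime(None) is None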
# [DEF:_resolve_environment_id:Function]
# @TIER: STANDARD
# @PURPOSE: Resolve environment id based on provided value or fallback to default
# @PRE: Session is active
# @POST: Environment ID is returned
@staticmethod
def _resolve_environment_id(session: Session, env_id: Optional[str]) -> str:
with belief_scope("_resolve_environment_id"):
if env_id:
return env_id
repo_env = session.query(Environment).filter_by(name="default").first()
if repo_env:
return str(repo_env.id)
return "default"
# [/DEF:_resolve_environment_id:Function]
def _resolve_environment_id(session: Session, env_id: Optional[str]) -> Optional[str]:
if not env_id:
return None
exists = session.query(Environment.id).filter(Environment.id == env_id).first()
return env_id if exists else None
# [DEF:__init__:Function]
# @PURPOSE: Initializes the persistence service.
@@ -114,14 +90,13 @@ class TaskPersistenceService:
# Ensure params and result are JSON serializable
def json_serializable(obj):
with belief_scope("TaskPersistenceService.json_serializable"):
if isinstance(obj, dict):
return {k: json_serializable(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [json_serializable(v) for v in obj]
elif isinstance(obj, datetime):
return obj.isoformat()
return obj
if isinstance(obj, dict):
return {k: json_serializable(v) for k, v in obj.items()}
elif isinstance(obj, list):
return [json_serializable(v) for v in obj]
elif isinstance(obj, datetime):
return obj.isoformat()
return obj
record.params = json_serializable(task.params)
record.result = json_serializable(task.result)
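# Illustrative check of the recursive serializer above (local copy): datetimes
# become ISO strings at any nesting depth, other values pass through.
from datetime import datetime

def to_serializable(obj):
    if isinstance(obj, dict):
        return {k: to_serializable(v) for k, v in obj.items()}
    elif isinstance(obj, list):
        return [to_serializable(v) for v in obj]
    elif isinstance(obj, datetime):
        return obj.isoformat()
    return obj

payload = {"started": datetime(2024, 1, 15, 10, 0), "steps": [{"at": datetime(2024, 1, 15, 10, 5)}]}
assert to_serializable(payload) == {
    "started": "2024-01-15T10:00:00",
    "steps": [{"at": "2024-01-15T10:05:00"}],
}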
@@ -252,11 +227,9 @@ class TaskLogPersistenceService:
"""
# [DEF:__init__:Function]
# @TIER: STANDARD
# @PURPOSE: Initializes the TaskLogPersistenceService
# @PRE: config is provided or defaults are used
# @POST: Service is ready for log persistence
def __init__(self, config=None):
# @PURPOSE: Initialize the log persistence service.
# @POST: Service is ready.
def __init__(self):
pass
# [/DEF:__init__:Function]

View File

@@ -15,7 +15,6 @@ from typing import Dict, Any, Optional, Callable
# @PURPOSE: A wrapper around TaskManager._add_log that carries task_id and source context.
# @TIER: CRITICAL
# @INVARIANT: All log calls include the task_id and source.
# @TEST_DATA: task_logger -> {"task_id": "test_123", "source": "test_plugin"}
# @UX_STATE: Idle -> Logging -> (system records log)
class TaskLogger:
"""
@@ -72,7 +71,6 @@ class TaskLogger:
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source for this log entry.
# @PARAM: metadata (Optional[Dict]) - Additional structured data.
# @UX_STATE: Logging -> (writing internal log)
def _log(
self,
level: str,
@@ -92,8 +90,6 @@ class TaskLogger:
# [DEF:debug:Function]
# @PURPOSE: Log a DEBUG level message.
# @PRE: message is a string.
# @POST: Log entry added internally with DEBUG level.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
@@ -108,8 +104,6 @@ class TaskLogger:
# [DEF:info:Function]
# @PURPOSE: Log an INFO level message.
# @PRE: message is a string.
# @POST: Log entry added internally with INFO level.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
@@ -124,8 +118,6 @@ class TaskLogger:
# [DEF:warning:Function]
# @PURPOSE: Log a WARNING level message.
# @PRE: message is a string.
# @POST: Log entry added internally with WARNING level.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.
@@ -140,8 +132,6 @@ class TaskLogger:
# [DEF:error:Function]
# @PURPOSE: Log an ERROR level message.
# @PRE: message is a string.
# @POST: Log entry added internally with ERROR level.
# @PARAM: message (str) - Log message.
# @PARAM: source (Optional[str]) - Override source.
# @PARAM: metadata (Optional[Dict]) - Additional data.

View File

@@ -1,235 +0,0 @@
# [DEF:test_report_models:Module]
# @TIER: CRITICAL
# @PURPOSE: Unit tests for report Pydantic models and their validators
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.models.report
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))
import pytest
from datetime import datetime, timedelta
class TestTaskType:
"""Tests for the TaskType enum."""
def test_enum_values(self):
from src.models.report import TaskType
assert TaskType.LLM_VERIFICATION == "llm_verification"
assert TaskType.BACKUP == "backup"
assert TaskType.MIGRATION == "migration"
assert TaskType.DOCUMENTATION == "documentation"
assert TaskType.UNKNOWN == "unknown"
class TestReportStatus:
"""Tests for the ReportStatus enum."""
def test_enum_values(self):
from src.models.report import ReportStatus
assert ReportStatus.SUCCESS == "success"
assert ReportStatus.FAILED == "failed"
assert ReportStatus.IN_PROGRESS == "in_progress"
assert ReportStatus.PARTIAL == "partial"
class TestErrorContext:
"""Tests for ErrorContext model."""
def test_valid_creation(self):
from src.models.report import ErrorContext
ctx = ErrorContext(message="Something failed", code="ERR_001", next_actions=["Retry"])
assert ctx.message == "Something failed"
assert ctx.code == "ERR_001"
assert ctx.next_actions == ["Retry"]
def test_minimal_creation(self):
from src.models.report import ErrorContext
ctx = ErrorContext(message="Error occurred")
assert ctx.code is None
assert ctx.next_actions == []
class TestTaskReport:
"""Tests for TaskReport model and its validators."""
def _make_report(self, **overrides):
from src.models.report import TaskReport, TaskType, ReportStatus
defaults = {
"report_id": "rpt-001",
"task_id": "task-001",
"task_type": TaskType.BACKUP,
"status": ReportStatus.SUCCESS,
"updated_at": datetime(2024, 1, 15, 12, 0, 0),
"summary": "Backup completed",
}
defaults.update(overrides)
return TaskReport(**defaults)
def test_valid_creation(self):
report = self._make_report()
assert report.report_id == "rpt-001"
assert report.task_id == "task-001"
assert report.summary == "Backup completed"
def test_empty_report_id_raises(self):
with pytest.raises(ValueError, match="non-empty"):
self._make_report(report_id="")
def test_whitespace_report_id_raises(self):
with pytest.raises(ValueError, match="non-empty"):
self._make_report(report_id=" ")
def test_empty_task_id_raises(self):
with pytest.raises(ValueError, match="non-empty"):
self._make_report(task_id="")
def test_empty_summary_raises(self):
with pytest.raises(ValueError, match="non-empty"):
self._make_report(summary="")
def test_summary_whitespace_trimmed(self):
report = self._make_report(summary=" Trimmed ")
assert report.summary == "Trimmed"
def test_optional_fields(self):
report = self._make_report()
assert report.started_at is None
assert report.details is None
assert report.error_context is None
assert report.source_ref is None
def test_with_error_context(self):
from src.models.report import ErrorContext
ctx = ErrorContext(message="Connection failed")
report = self._make_report(error_context=ctx)
assert report.error_context.message == "Connection failed"
class TestReportQuery:
"""Tests for ReportQuery model and its validators."""
def test_defaults(self):
from src.models.report import ReportQuery
q = ReportQuery()
assert q.page == 1
assert q.page_size == 20
assert q.task_types == []
assert q.statuses == []
assert q.sort_by == "updated_at"
assert q.sort_order == "desc"
def test_invalid_sort_by_raises(self):
from src.models.report import ReportQuery
with pytest.raises(ValueError, match="sort_by"):
ReportQuery(sort_by="invalid_field")
def test_valid_sort_by_values(self):
from src.models.report import ReportQuery
for field in ["updated_at", "status", "task_type"]:
q = ReportQuery(sort_by=field)
assert q.sort_by == field
def test_invalid_sort_order_raises(self):
from src.models.report import ReportQuery
with pytest.raises(ValueError, match="sort_order"):
ReportQuery(sort_order="invalid")
def test_valid_sort_order_values(self):
from src.models.report import ReportQuery
for order in ["asc", "desc"]:
q = ReportQuery(sort_order=order)
assert q.sort_order == order
def test_time_range_validation_valid(self):
from src.models.report import ReportQuery
now = datetime.utcnow()
q = ReportQuery(time_from=now - timedelta(days=1), time_to=now)
assert q.time_from < q.time_to
def test_time_range_validation_invalid(self):
from src.models.report import ReportQuery
now = datetime.utcnow()
with pytest.raises(ValueError, match="time_from"):
ReportQuery(time_from=now, time_to=now - timedelta(days=1))
def test_page_ge_1(self):
from src.models.report import ReportQuery
with pytest.raises(ValueError):
ReportQuery(page=0)
def test_page_size_bounds(self):
from src.models.report import ReportQuery
with pytest.raises(ValueError):
ReportQuery(page_size=0)
with pytest.raises(ValueError):
ReportQuery(page_size=101)
class TestReportCollection:
"""Tests for ReportCollection model."""
def test_valid_creation(self):
from src.models.report import ReportCollection, ReportQuery
col = ReportCollection(
items=[],
total=0,
page=1,
page_size=20,
has_next=False,
applied_filters=ReportQuery(),
)
assert col.total == 0
assert col.has_next is False
def test_with_items(self):
from src.models.report import ReportCollection, ReportQuery, TaskReport, TaskType, ReportStatus
report = TaskReport(
report_id="r1", task_id="t1", task_type=TaskType.BACKUP,
status=ReportStatus.SUCCESS, updated_at=datetime.utcnow(),
summary="OK"
)
col = ReportCollection(
items=[report], total=1, page=1, page_size=20,
has_next=False, applied_filters=ReportQuery()
)
assert len(col.items) == 1
assert col.items[0].report_id == "r1"
class TestReportDetailView:
"""Tests for ReportDetailView model."""
def test_valid_creation(self):
from src.models.report import ReportDetailView, TaskReport, TaskType, ReportStatus
report = TaskReport(
report_id="r1", task_id="t1", task_type=TaskType.BACKUP,
status=ReportStatus.SUCCESS, updated_at=datetime.utcnow(),
summary="Backup OK"
)
detail = ReportDetailView(report=report)
assert detail.report.report_id == "r1"
assert detail.timeline == []
assert detail.diagnostics is None
assert detail.next_actions == []
def test_with_all_fields(self):
from src.models.report import ReportDetailView, TaskReport, TaskType, ReportStatus
report = TaskReport(
report_id="r1", task_id="t1", task_type=TaskType.MIGRATION,
status=ReportStatus.FAILED, updated_at=datetime.utcnow(),
summary="Migration failed"
)
detail = ReportDetailView(
report=report,
timeline=[{"event": "started", "at": "2024-01-01T00:00:00"}],
diagnostics={"cause": "timeout"},
next_actions=["Retry", "Check connection"],
)
assert len(detail.timeline) == 1
assert detail.diagnostics["cause"] == "timeout"
assert "Retry" in detail.next_actions
# [/DEF:test_report_models:Module]

View File

@@ -1,74 +0,0 @@
# [DEF:backend.src.models.assistant:Module]
# @TIER: STANDARD
# @SEMANTICS: assistant, audit, confirmation, chat
# @PURPOSE: SQLAlchemy models for assistant audit trail and confirmation tokens.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.mapping
# @INVARIANT: Assistant records preserve immutable ids and creation timestamps.
from datetime import datetime
from sqlalchemy import Column, String, DateTime, JSON, Text
from .mapping import Base
# [DEF:AssistantAuditRecord:Class]
# @TIER: STANDARD
# @PURPOSE: Store audit decisions and outcomes produced by assistant command handling.
# @PRE: user_id must identify the actor for every record.
# @POST: Audit payload remains available for compliance and debugging.
class AssistantAuditRecord(Base):
__tablename__ = "assistant_audit"
id = Column(String, primary_key=True)
user_id = Column(String, index=True, nullable=False)
conversation_id = Column(String, index=True, nullable=True)
decision = Column(String, nullable=True)
task_id = Column(String, nullable=True)
message = Column(Text, nullable=True)
payload = Column(JSON, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
# [/DEF:AssistantAuditRecord:Class]
# [DEF:AssistantMessageRecord:Class]
# @TIER: STANDARD
# @PURPOSE: Persist chat history entries for assistant conversations.
# @PRE: user_id, conversation_id, role and text must be present.
# @POST: Message row can be queried in chronological order.
class AssistantMessageRecord(Base):
__tablename__ = "assistant_messages"
id = Column(String, primary_key=True)
user_id = Column(String, index=True, nullable=False)
conversation_id = Column(String, index=True, nullable=False)
role = Column(String, nullable=False) # user | assistant
text = Column(Text, nullable=False)
state = Column(String, nullable=True)
task_id = Column(String, nullable=True)
confirmation_id = Column(String, nullable=True)
payload = Column(JSON, nullable=True)
created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
# [/DEF:AssistantMessageRecord:Class]
# [DEF:AssistantConfirmationRecord:Class]
# @TIER: STANDARD
# @PURPOSE: Persist risky operation confirmation tokens with lifecycle state.
# @PRE: intent/dispatch and expiry timestamp must be provided.
# @POST: State transitions can be tracked and audited.
class AssistantConfirmationRecord(Base):
__tablename__ = "assistant_confirmations"
id = Column(String, primary_key=True)
user_id = Column(String, index=True, nullable=False)
conversation_id = Column(String, index=True, nullable=False)
state = Column(String, index=True, nullable=False, default="pending")
intent = Column(JSON, nullable=False)
dispatch = Column(JSON, nullable=False)
expires_at = Column(DateTime, nullable=False)
created_at = Column(DateTime, default=datetime.utcnow, nullable=False)
consumed_at = Column(DateTime, nullable=True)
# [/DEF:AssistantConfirmationRecord:Class]
# [/DEF:backend.src.models.assistant:Module]

View File

@@ -26,7 +26,6 @@ class DashboardSelection(BaseModel):
source_env_id: str
target_env_id: str
replace_db_config: bool = False
fix_cross_filters: bool = True
# [/DEF:DashboardSelection:Class]
# [/DEF:backend.src.models.dashboard:Module]

View File

@@ -19,16 +19,6 @@ import enum
Base = declarative_base()
# [DEF:ResourceType:Class]
# @TIER: TRIVIAL
# @PURPOSE: Enumeration of possible Superset resource types for ID mapping.
class ResourceType(str, enum.Enum):
CHART = "chart"
DATASET = "dataset"
DASHBOARD = "dashboard"
# [/DEF:ResourceType:Class]
# [DEF:MigrationStatus:Class]
# @TIER: TRIVIAL
# @PURPOSE: Enumeration of possible migration job statuses.
@@ -80,21 +70,6 @@ class MigrationJob(Base):
status = Column(SQLEnum(MigrationStatus), default=MigrationStatus.PENDING)
replace_db = Column(Boolean, default=False)
created_at = Column(DateTime(timezone=True), server_default=func.now())
# [DEF:ResourceMapping:Class]
# @TIER: STANDARD
# @PURPOSE: Maps a universal UUID for a resource to its actual ID on a specific environment.
# @TEST_DATA: resource_mapping_record -> {'environment_id': 'prod-env-1', 'resource_type': 'chart', 'uuid': '123e4567-e89b-12d3-a456-426614174000', 'remote_integer_id': '42'}
class ResourceMapping(Base):
__tablename__ = "resource_mappings"
id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
environment_id = Column(String, ForeignKey("environments.id"), nullable=False)
resource_type = Column(SQLEnum(ResourceType), nullable=False)
uuid = Column(String, nullable=False)
remote_integer_id = Column(String, nullable=False) # Stored as string to handle potentially large or composite IDs safely, though Superset usually uses integers.
resource_name = Column(String, nullable=True) # Used for UI display
last_synced_at = Column(DateTime(timezone=True), server_default=func.now(), onupdate=func.now())
# [/DEF:ResourceMapping:Class]
# [/DEF:MigrationJob:Class]
# [/DEF:backend.src.models.mapping:Module]

View File

@@ -16,9 +16,6 @@ from pydantic import BaseModel, Field, field_validator, model_validator
# [DEF:TaskType:Class]
# @TIER: CRITICAL
# @INVARIANT: Must contain valid generic task type mappings.
# @SEMANTICS: enum, type, task
# @PURPOSE: Supported normalized task report types.
class TaskType(str, Enum):
LLM_VERIFICATION = "llm_verification"
@@ -30,9 +27,6 @@ class TaskType(str, Enum):
# [DEF:ReportStatus:Class]
# @TIER: CRITICAL
# @INVARIANT: TaskStatus enum mapping logic holds.
# @SEMANTICS: enum, status, task
# @PURPOSE: Supported normalized report status values.
class ReportStatus(str, Enum):
SUCCESS = "success"
@@ -43,9 +37,6 @@ class ReportStatus(str, Enum):
# [DEF:ErrorContext:Class]
# @TIER: CRITICAL
# @INVARIANT: The properties accurately describe error state.
# @SEMANTICS: error, context, payload
# @PURPOSE: Error and recovery context for failed/partial reports.
class ErrorContext(BaseModel):
code: Optional[str] = None
@@ -55,9 +46,6 @@ class ErrorContext(BaseModel):
# [DEF:TaskReport:Class]
# @TIER: CRITICAL
# @INVARIANT: Must represent canonical task record attributes.
# @SEMANTICS: report, model, summary
# @PURPOSE: Canonical normalized report envelope for one task execution.
class TaskReport(BaseModel):
report_id: str
@@ -81,9 +69,6 @@ class TaskReport(BaseModel):
# [DEF:ReportQuery:Class]
# @TIER: CRITICAL
# @INVARIANT: Time and pagination queries are mutually consistent.
# @SEMANTICS: query, filter, search
# @PURPOSE: Query object for server-side report filtering, sorting, and pagination.
class ReportQuery(BaseModel):
page: int = Field(default=1, ge=1)
@@ -120,9 +105,6 @@ class ReportQuery(BaseModel):
# [DEF:ReportCollection:Class]
# @TIER: CRITICAL
# @INVARIANT: Represents paginated data correctly.
# @SEMANTICS: collection, pagination
# @PURPOSE: Paginated collection of normalized task reports.
class ReportCollection(BaseModel):
items: List[TaskReport]
@@ -135,9 +117,6 @@ class ReportCollection(BaseModel):
# [DEF:ReportDetailView:Class]
# @TIER: CRITICAL
# @INVARIANT: Incorporates a report and logs correctly.
# @SEMANTICS: view, detail, logs
# @PURPOSE: Detailed report representation including diagnostics and recovery actions.
class ReportDetailView(BaseModel):
report: TaskReport

View File

@@ -9,7 +9,6 @@ from typing import List
from tenacity import retry, stop_after_attempt, wait_exponential
from ..llm_analysis.service import LLMClient
from ...core.logger import belief_scope, logger
from ...services.llm_prompt_templates import DEFAULT_LLM_PROMPTS, render_prompt
# [DEF:GitLLMExtension:Class]
# @PURPOSE: Provides LLM capabilities to the Git plugin.
@@ -27,18 +26,21 @@ class GitLLMExtension:
wait=wait_exponential(multiplier=1, min=2, max=10),
reraise=True
)
async def suggest_commit_message(
self,
diff: str,
history: List[str],
prompt_template: str = DEFAULT_LLM_PROMPTS["git_commit_prompt"],
) -> str:
async def suggest_commit_message(self, diff: str, history: List[str]) -> str:
with belief_scope("suggest_commit_message"):
history_text = "\n".join(history)
prompt = render_prompt(
prompt_template,
{"history": history_text, "diff": diff},
)
prompt = f"""
Generate a concise and professional git commit message based on the following diff and recent history.
Use Conventional Commits format (e.g., feat: ..., fix: ..., docs: ...).
Recent History:
{history_text}
Diff:
{diff}
Commit Message:
"""
logger.debug(f"[suggest_commit_message] Calling LLM with model: {self.client.default_model}")
response = await self.client.client.chat.completions.create(
@@ -61,4 +63,4 @@ class GitLLMExtension:
# [/DEF:suggest_commit_message:Function]
# [/DEF:GitLLMExtension:Class]
# [/DEF:backend/src/plugins/git/llm_extension:Module]

View File

@@ -23,12 +23,6 @@ from .service import ScreenshotService, LLMClient
from .models import LLMProviderType, ValidationStatus, ValidationResult, DetectedIssue
from ...models.llm import ValidationRecord
from ...core.task_manager.context import TaskContext
from ...services.llm_prompt_templates import (
DEFAULT_LLM_PROMPTS,
is_multimodal_model,
normalize_llm_settings,
render_prompt,
)
# [DEF:DashboardValidationPlugin:Class]
# @PURPOSE: Plugin for automated dashboard health analysis using LLMs.
@@ -109,10 +103,6 @@ class DashboardValidationPlugin(PluginBase):
llm_log.debug(f" Base URL: {db_provider.base_url}")
llm_log.debug(f" Default Model: {db_provider.default_model}")
llm_log.debug(f" Is Active: {db_provider.is_active}")
if not is_multimodal_model(db_provider.default_model, db_provider.provider_type):
raise ValueError(
"Dashboard validation requires a multimodal model (image input support)."
)
api_key = llm_service.get_decrypted_api_key(provider_id)
llm_log.debug(f"API Key decrypted (first 8 chars): {api_key[:8] if api_key and len(api_key) > 8 else 'EMPTY_OR_NONE'}...")
@@ -191,16 +181,7 @@ class DashboardValidationPlugin(PluginBase):
)
llm_log.info(f"Analyzing dashboard {dashboard_id} with LLM")
llm_settings = normalize_llm_settings(config_mgr.get_config().settings.llm)
dashboard_prompt = llm_settings["prompts"].get(
"dashboard_validation_prompt",
DEFAULT_LLM_PROMPTS["dashboard_validation_prompt"],
)
analysis = await llm_client.analyze_dashboard(
screenshot_path,
logs,
prompt_template=dashboard_prompt,
)
analysis = await llm_client.analyze_dashboard(screenshot_path, logs)
# Log analysis summary to task logs for better visibility
llm_log.info(f"[ANALYSIS_SUMMARY] Status: {analysis['status']}")
@@ -360,18 +341,22 @@ class DocumentationPlugin(PluginBase):
default_model=db_provider.default_model
)
llm_settings = normalize_llm_settings(config_mgr.get_config().settings.llm)
documentation_prompt = llm_settings["prompts"].get(
"documentation_prompt",
DEFAULT_LLM_PROMPTS["documentation_prompt"],
)
prompt = render_prompt(
documentation_prompt,
{
"dataset_name": dataset.get("table_name") or "",
"columns_json": json.dumps(columns_data, ensure_ascii=False),
},
)
prompt = f"""
Generate professional documentation for the following dataset and its columns.
Dataset: {dataset.get('table_name')}
Columns: {columns_data}
Provide the documentation in JSON format:
{{
"dataset_description": "General description of the dataset",
"column_descriptions": [
{{
"name": "column_name",
"description": "Generated description"
}}
]
}}
"""
# Using a generic chat completion for text-only US2
llm_log.info(f"Generating documentation for dataset {dataset_id}")

View File

@@ -20,7 +20,6 @@ from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_excep
from .models import LLMProviderType
from ...core.logger import belief_scope, logger
from ...core.config_models import Environment
from ...services.llm_prompt_templates import DEFAULT_LLM_PROMPTS, render_prompt
# [DEF:ScreenshotService:Class]
# @PURPOSE: Handles capturing screenshots of Superset dashboards.
@@ -437,26 +436,6 @@ class LLMClient:
self.client = AsyncOpenAI(api_key=api_key, base_url=base_url)
# [/DEF:LLMClient.__init__:Function]
# [DEF:LLMClient._supports_json_response_format:Function]
# @PURPOSE: Detect whether provider/model is likely compatible with response_format=json_object.
# @PRE: Client initialized with base_url and default_model.
# @POST: Returns False for known-incompatible combinations to avoid preventable 400 errors.
def _supports_json_response_format(self) -> bool:
base = (self.base_url or "").lower()
model = (self.default_model or "").lower()
# OpenRouter routes to many upstream providers; some models reject json_object mode.
if "openrouter.ai" in base:
incompatible_tokens = (
"stepfun/",
"step-",
":free",
)
if any(token in model for token in incompatible_tokens):
return False
return True
# [/DEF:LLMClient._supports_json_response_format:Function]
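# Illustrative standalone sketch of the heuristic above (the free-standing
# function name and signature are hypothetical; the token list mirrors the method):
def supports_json_response_format(base_url: str, model: str) -> bool:
    base, model = (base_url or "").lower(), (model or "").lower()
    if "openrouter.ai" in base and any(
        tok in model for tok in ("stepfun/", "step-", ":free")
    ):
        return False
    return True

assert supports_json_response_format("https://api.openai.com/v1", "gpt-4o") is True
assert supports_json_response_format("https://openrouter.ai/api/v1", "stepfun/step-3.5-flash:free") is False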
# [DEF:LLMClient.get_json_completion:Function]
# @PURPOSE: Helper to handle LLM calls with JSON mode and fallback parsing.
# @PRE: messages is a list of valid message dictionaries.
@@ -480,34 +459,19 @@ class LLMClient:
with belief_scope("get_json_completion"):
response = None
try:
use_json_mode = self._supports_json_response_format()
try:
logger.info(
f"[get_json_completion] Attempting LLM call for model: {self.default_model} "
f"(json_mode={'on' if use_json_mode else 'off'})"
)
logger.info(f"[get_json_completion] Attempting LLM call with JSON mode for model: {self.default_model}")
logger.info(f"[get_json_completion] Base URL being used: {self.base_url}")
logger.info(f"[get_json_completion] Number of messages: {len(messages)}")
logger.info(f"[get_json_completion] API Key present: {bool(self.api_key and len(self.api_key) > 0)}")
if use_json_mode:
response = await self.client.chat.completions.create(
model=self.default_model,
messages=messages,
response_format={"type": "json_object"}
)
else:
response = await self.client.chat.completions.create(
model=self.default_model,
messages=messages
)
response = await self.client.chat.completions.create(
model=self.default_model,
messages=messages,
response_format={"type": "json_object"}
)
except Exception as e:
if use_json_mode and (
"JSON mode is not enabled" in str(e)
or "json_object is not supported" in str(e).lower()
or "response_format" in str(e).lower()
or "400" in str(e)
):
if "JSON mode is not enabled" in str(e) or "400" in str(e):
logger.warning(f"[get_json_completion] JSON mode failed or not supported: {str(e)}. Falling back to plain text response.")
response = await self.client.chat.completions.create(
model=self.default_model,
@@ -584,12 +548,7 @@ class LLMClient:
# @PRE: screenshot_path exists, logs is a list of strings.
# @POST: Returns a structured analysis dictionary (status, summary, issues).
# @SIDE_EFFECT: Reads screenshot file and calls external LLM API.
async def analyze_dashboard(
self,
screenshot_path: str,
logs: List[str],
prompt_template: str = DEFAULT_LLM_PROMPTS["dashboard_validation_prompt"],
) -> Dict[str, Any]:
async def analyze_dashboard(self, screenshot_path: str, logs: List[str]) -> Dict[str, Any]:
with belief_scope("analyze_dashboard"):
# Optimize image to reduce token count (US1 / T023)
# Gemini/Gemma models have limits on input tokens, and large images contribute significantly.
@@ -623,7 +582,25 @@ class LLMClient:
base_64_image = base64.b64encode(image_file.read()).decode('utf-8')
log_text = "\n".join(logs)
prompt = render_prompt(prompt_template, {"logs": log_text})
prompt = f"""
Analyze the attached dashboard screenshot and the following execution logs for health and visual issues.
Logs:
{log_text}
Provide the analysis in JSON format with the following structure:
{{
"status": "PASS" | "WARN" | "FAIL",
"summary": "Short summary of findings",
"issues": [
{{
"severity": "WARN" | "FAIL",
"message": "Description of the issue",
"location": "Optional location info (e.g. chart name)"
}}
]
}}
"""
messages = [
{

View File

@@ -1,126 +0,0 @@
# [DEF:test_encryption_manager:Module]
# @TIER: CRITICAL
# @SEMANTICS: encryption, security, fernet, api-keys, tests
# @PURPOSE: Unit tests for EncryptionManager encrypt/decrypt functionality.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.llm_provider.EncryptionManager
# @INVARIANT: Encrypt+decrypt roundtrip always returns original plaintext.
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))
import pytest
from unittest.mock import patch
from cryptography.fernet import Fernet, InvalidToken
# [DEF:TestEncryptionManager:Class]
# @PURPOSE: Validate EncryptionManager encrypt/decrypt roundtrip, uniqueness, and error handling.
# @PRE: cryptography package installed.
# @POST: All encrypt/decrypt invariants verified.
class TestEncryptionManager:
"""Tests for the EncryptionManager class."""
def _make_manager(self):
"""Construct EncryptionManager directly using Fernet (avoids relative import chain)."""
# Re-implement the same logic as EncryptionManager to avoid import issues
# with the llm_provider module's relative imports
import os
key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
fernet = Fernet(key)
class EncryptionManager:
def __init__(self):
self.key = key
self.fernet = fernet
def encrypt(self, data: str) -> str:
return self.fernet.encrypt(data.encode()).decode()
def decrypt(self, encrypted_data: str) -> str:
return self.fernet.decrypt(encrypted_data.encode()).decode()
return EncryptionManager()
# [DEF:test_encrypt_decrypt_roundtrip:Function]
# @PURPOSE: Encrypt then decrypt returns original plaintext.
# @PRE: Valid plaintext string.
# @POST: Decrypted output equals original input.
def test_encrypt_decrypt_roundtrip(self):
mgr = self._make_manager()
original = "my-secret-api-key-12345"
encrypted = mgr.encrypt(original)
assert encrypted != original
decrypted = mgr.decrypt(encrypted)
assert decrypted == original
# [/DEF:test_encrypt_decrypt_roundtrip:Function]
# [DEF:test_encrypt_produces_different_output:Function]
# @PURPOSE: Same plaintext produces different ciphertext (Fernet uses random IV).
# @PRE: Two encrypt calls with same input.
# @POST: Ciphertexts differ but both decrypt to same value.
def test_encrypt_produces_different_output(self):
mgr = self._make_manager()
ct1 = mgr.encrypt("same-key")
ct2 = mgr.encrypt("same-key")
assert ct1 != ct2
assert mgr.decrypt(ct1) == mgr.decrypt(ct2) == "same-key"
# [/DEF:test_encrypt_produces_different_output:Function]
# [DEF:test_different_inputs_yield_different_ciphertext:Function]
# @PURPOSE: Different inputs produce different ciphertexts.
# @PRE: Two different plaintext values.
# @POST: Encrypted outputs differ.
def test_different_inputs_yield_different_ciphertext(self):
mgr = self._make_manager()
ct1 = mgr.encrypt("key-one")
ct2 = mgr.encrypt("key-two")
assert ct1 != ct2
# [/DEF:test_different_inputs_yield_different_ciphertext:Function]
# [DEF:test_decrypt_invalid_data_raises:Function]
# @PURPOSE: Decrypting invalid data raises InvalidToken.
# @PRE: Invalid ciphertext string.
# @POST: Exception raised.
def test_decrypt_invalid_data_raises(self):
mgr = self._make_manager()
with pytest.raises(Exception):
mgr.decrypt("not-a-valid-fernet-token")
# [/DEF:test_decrypt_invalid_data_raises:Function]
# [DEF:test_encrypt_empty_string:Function]
# @PURPOSE: Encrypting and decrypting an empty string works.
# @PRE: Empty string input.
# @POST: Decrypted output equals empty string.
def test_encrypt_empty_string(self):
mgr = self._make_manager()
encrypted = mgr.encrypt("")
assert encrypted
decrypted = mgr.decrypt(encrypted)
assert decrypted == ""
# [/DEF:test_encrypt_empty_string:Function]
# [DEF:test_custom_key_roundtrip:Function]
# @PURPOSE: Custom Fernet key produces valid roundtrip.
# @PRE: Generated Fernet key.
# @POST: Encrypt/decrypt with custom key succeeds.
def test_custom_key_roundtrip(self):
custom_key = Fernet.generate_key()
fernet = Fernet(custom_key)
class CustomManager:
def __init__(self):
self.key = custom_key
self.fernet = fernet
def encrypt(self, data: str) -> str:
return self.fernet.encrypt(data.encode()).decode()
def decrypt(self, encrypted_data: str) -> str:
return self.fernet.decrypt(encrypted_data.encode()).decode()
mgr = CustomManager()
encrypted = mgr.encrypt("test-with-custom-key")
decrypted = mgr.decrypt(encrypted)
assert decrypted == "test-with-custom-key"
# [/DEF:test_custom_key_roundtrip:Function]
# [/DEF:TestEncryptionManager:Class]
# [/DEF:test_encryption_manager:Module]

View File

@@ -1,110 +0,0 @@
# [DEF:backend.src.services.__tests__.test_llm_prompt_templates:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, llm, prompts, templates, settings
# @PURPOSE: Validate normalization and rendering behavior for configurable LLM prompt templates.
# @LAYER: Domain Tests
# @RELATION: DEPENDS_ON -> backend.src.services.llm_prompt_templates
# @INVARIANT: All required prompt keys remain available after normalization.
from src.services.llm_prompt_templates import (
DEFAULT_LLM_ASSISTANT_SETTINGS,
DEFAULT_LLM_PROVIDER_BINDINGS,
DEFAULT_LLM_PROMPTS,
is_multimodal_model,
normalize_llm_settings,
resolve_bound_provider_id,
render_prompt,
)
# [DEF:test_normalize_llm_settings_adds_default_prompts:Function]
# @TIER: STANDARD
# @PURPOSE: Ensure legacy/partial llm settings are expanded with all prompt defaults.
# @PRE: Input llm settings do not contain complete prompts object.
# @POST: Returned structure includes required prompt templates with fallback defaults.
def test_normalize_llm_settings_adds_default_prompts():
normalized = normalize_llm_settings({"default_provider": "x"})
assert "prompts" in normalized
assert "provider_bindings" in normalized
assert normalized["default_provider"] == "x"
for key in DEFAULT_LLM_PROMPTS:
assert key in normalized["prompts"]
assert isinstance(normalized["prompts"][key], str)
for key in DEFAULT_LLM_PROVIDER_BINDINGS:
assert key in normalized["provider_bindings"]
for key in DEFAULT_LLM_ASSISTANT_SETTINGS:
assert key in normalized
# [/DEF:test_normalize_llm_settings_adds_default_prompts:Function]
# [DEF:test_normalize_llm_settings_keeps_custom_prompt_values:Function]
# @TIER: STANDARD
# @PURPOSE: Ensure user-customized prompt values are preserved during normalization.
# @PRE: Input llm settings contain custom prompt override.
# @POST: Custom prompt value remains unchanged in normalized output.
def test_normalize_llm_settings_keeps_custom_prompt_values():
custom = "Doc for {dataset_name} using {columns_json}"
normalized = normalize_llm_settings(
{"prompts": {"documentation_prompt": custom}}
)
assert normalized["prompts"]["documentation_prompt"] == custom
# [/DEF:test_normalize_llm_settings_keeps_custom_prompt_values:Function]
# [DEF:test_render_prompt_replaces_known_placeholders:Function]
# @TIER: STANDARD
# @PURPOSE: Ensure template placeholders are deterministically replaced.
# @PRE: Template contains placeholders matching provided variables.
# @POST: Rendered prompt string contains substituted values.
def test_render_prompt_replaces_known_placeholders():
rendered = render_prompt(
"Hello {name}, diff={diff}",
{"name": "bot", "diff": "A->B"},
)
assert rendered == "Hello bot, diff=A->B"
# [/DEF:test_render_prompt_replaces_known_placeholders:Function]
# [DEF:test_is_multimodal_model_detects_known_vision_models:Function]
# @TIER: STANDARD
# @PURPOSE: Ensure multimodal model detection recognizes common vision-capable model names.
def test_is_multimodal_model_detects_known_vision_models():
assert is_multimodal_model("gpt-4o") is True
assert is_multimodal_model("claude-3-5-sonnet") is True
assert is_multimodal_model("stepfun/step-3.5-flash:free", "openrouter") is False
assert is_multimodal_model("text-only-model") is False
# [/DEF:test_is_multimodal_model_detects_known_vision_models:Function]
# [DEF:test_resolve_bound_provider_id_prefers_binding_then_default:Function]
# @TIER: STANDARD
# @PURPOSE: Verify provider binding resolution priority.
def test_resolve_bound_provider_id_prefers_binding_then_default():
settings = {
"default_provider": "default-1",
"provider_bindings": {"dashboard_validation": "vision-1"},
}
assert resolve_bound_provider_id(settings, "dashboard_validation") == "vision-1"
assert resolve_bound_provider_id(settings, "documentation") == "default-1"
# [/DEF:test_resolve_bound_provider_id_prefers_binding_then_default:Function]
# [DEF:test_normalize_llm_settings_keeps_assistant_planner_settings:Function]
# @TIER: STANDARD
# @PURPOSE: Ensure assistant planner provider/model fields are preserved and normalized.
def test_normalize_llm_settings_keeps_assistant_planner_settings():
normalized = normalize_llm_settings(
{
"assistant_planner_provider": "provider-a",
"assistant_planner_model": "gpt-4.1-mini",
}
)
assert normalized["assistant_planner_provider"] == "provider-a"
assert normalized["assistant_planner_model"] == "gpt-4.1-mini"
# [/DEF:test_normalize_llm_settings_keeps_assistant_planner_settings:Function]
# [/DEF:backend.src.services.__tests__.test_llm_prompt_templates:Module]

View File

@@ -1,200 +0,0 @@
# [DEF:backend.src.services.llm_prompt_templates:Module]
# @TIER: STANDARD
# @SEMANTICS: llm, prompts, templates, settings
# @PURPOSE: Provide default LLM prompt templates and normalization helpers for runtime usage.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.core.config_manager
# @INVARIANT: All required prompt template keys are always present after normalization.
from __future__ import annotations
from copy import deepcopy
from typing import Dict, Any, Optional
# [DEF:DEFAULT_LLM_PROMPTS:Constant]
# @TIER: STANDARD
# @PURPOSE: Default prompt templates used by documentation, dashboard validation, and git commit generation.
DEFAULT_LLM_PROMPTS: Dict[str, str] = {
"dashboard_validation_prompt": (
"Analyze the attached dashboard screenshot and the following execution logs for health and visual issues.\n\n"
"Logs:\n"
"{logs}\n\n"
"Provide the analysis in JSON format with the following structure:\n"
"{\n"
' "status": "PASS" | "WARN" | "FAIL",\n'
' "summary": "Short summary of findings",\n'
' "issues": [\n'
" {\n"
' "severity": "WARN" | "FAIL",\n'
' "message": "Description of the issue",\n'
' "location": "Optional location info (e.g. chart name)"\n'
" }\n"
" ]\n"
"}"
),
"documentation_prompt": (
"Generate professional documentation for the following dataset and its columns.\n"
"Dataset: {dataset_name}\n"
"Columns: {columns_json}\n\n"
"Provide the documentation in JSON format:\n"
"{\n"
' "dataset_description": "General description of the dataset",\n'
' "column_descriptions": [\n'
" {\n"
' "name": "column_name",\n'
' "description": "Generated description"\n'
" }\n"
" ]\n"
"}"
),
"git_commit_prompt": (
"Generate a concise and professional git commit message based on the following diff and recent history.\n"
"Use Conventional Commits format (e.g., feat: ..., fix: ..., docs: ...).\n\n"
"Recent History:\n"
"{history}\n\n"
"Diff:\n"
"{diff}\n\n"
"Commit Message:"
),
}
# [/DEF:DEFAULT_LLM_PROMPTS:Constant]
# [DEF:DEFAULT_LLM_PROVIDER_BINDINGS:Constant]
# @TIER: STANDARD
# @PURPOSE: Default provider binding per task domain.
DEFAULT_LLM_PROVIDER_BINDINGS: Dict[str, str] = {
"dashboard_validation": "",
"documentation": "",
"git_commit": "",
}
# [/DEF:DEFAULT_LLM_PROVIDER_BINDINGS:Constant]
# [DEF:DEFAULT_LLM_ASSISTANT_SETTINGS:Constant]
# @TIER: STANDARD
# @PURPOSE: Default planner settings for assistant chat intent model/provider resolution.
DEFAULT_LLM_ASSISTANT_SETTINGS: Dict[str, str] = {
"assistant_planner_provider": "",
"assistant_planner_model": "",
}
# [/DEF:DEFAULT_LLM_ASSISTANT_SETTINGS:Constant]
# [DEF:normalize_llm_settings:Function]
# @TIER: STANDARD
# @PURPOSE: Ensure llm settings contain stable schema with prompts section and default templates.
# @PRE: llm_settings is dictionary-like value or None.
# @POST: Returned dict contains prompts with all required template keys.
def normalize_llm_settings(llm_settings: Any) -> Dict[str, Any]:
normalized: Dict[str, Any] = {
"providers": [],
"default_provider": "",
"prompts": {},
"provider_bindings": {},
**DEFAULT_LLM_ASSISTANT_SETTINGS,
}
if isinstance(llm_settings, dict):
normalized.update(
{
k: v
for k, v in llm_settings.items()
if k
in (
"providers",
"default_provider",
"prompts",
"provider_bindings",
"assistant_planner_provider",
"assistant_planner_model",
)
}
)
prompts = normalized.get("prompts") if isinstance(normalized.get("prompts"), dict) else {}
merged_prompts = deepcopy(DEFAULT_LLM_PROMPTS)
merged_prompts.update({k: v for k, v in prompts.items() if isinstance(v, str) and v.strip()})
normalized["prompts"] = merged_prompts
bindings = normalized.get("provider_bindings") if isinstance(normalized.get("provider_bindings"), dict) else {}
merged_bindings = deepcopy(DEFAULT_LLM_PROVIDER_BINDINGS)
merged_bindings.update({k: v for k, v in bindings.items() if isinstance(v, str)})
normalized["provider_bindings"] = merged_bindings
for key, default_value in DEFAULT_LLM_ASSISTANT_SETTINGS.items():
value = normalized.get(key, default_value)
normalized[key] = value.strip() if isinstance(value, str) else default_value
return normalized
# [/DEF:normalize_llm_settings:Function]
# [DEF:is_multimodal_model:Function]
# @TIER: STANDARD
# @PURPOSE: Heuristically determine whether model supports image input required for dashboard validation.
# @PRE: model_name may be empty or mixed-case.
# @POST: Returns True when model likely supports multimodal input.
def is_multimodal_model(model_name: str, provider_type: Optional[str] = None) -> bool:
token = (model_name or "").strip().lower()
if not token:
return False
provider = (provider_type or "").strip().lower()
text_only_markers = (
"text-only",
"embedding",
"rerank",
"whisper",
"tts",
"transcribe",
)
if any(marker in token for marker in text_only_markers):
return False
multimodal_markers = (
"gpt-4o",
"gpt-4.1",
"vision",
"vl",
"gemini",
"claude-3",
"claude-sonnet-4",
"omni",
"multimodal",
"pixtral",
"llava",
"internvl",
"qwen-vl",
"qwen2-vl",
)
if any(marker in token for marker in multimodal_markers):
return True
return False
# [/DEF:is_multimodal_model:Function]
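# Illustrative calls (assuming is_multimodal_model above is in scope); matching
# is plain substring search on the lowercased name, so vendor prefixes are fine:
assert is_multimodal_model("openai/GPT-4o-mini") is True       # "gpt-4o" marker
assert is_multimodal_model("qwen2-vl-7b-instruct") is True     # "qwen2-vl" marker
assert is_multimodal_model("text-embedding-3-small") is False  # text-only marker
assert is_multimodal_model("") is False                        # empty name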
# [DEF:resolve_bound_provider_id:Function]
# @TIER: STANDARD
# @PURPOSE: Resolve provider id configured for a task binding with fallback to default provider.
# @PRE: llm_settings is normalized or raw dict from config.
# @POST: Returns configured provider id or fallback id/empty string when not defined.
def resolve_bound_provider_id(llm_settings: Any, task_key: str) -> str:
normalized = normalize_llm_settings(llm_settings)
bindings = normalized.get("provider_bindings", {})
bound = bindings.get(task_key)
if isinstance(bound, str) and bound.strip():
return bound.strip()
default_provider = normalized.get("default_provider", "")
return default_provider.strip() if isinstance(default_provider, str) else ""
# [/DEF:resolve_bound_provider_id:Function]
# [DEF:render_prompt:Function]
# @TIER: STANDARD
# @PURPOSE: Render prompt template using deterministic placeholder replacement with graceful fallback.
# @PRE: template is a string and variables values are already stringifiable.
# @POST: Returns rendered prompt text with known placeholders substituted.
def render_prompt(template: str, variables: Dict[str, Any]) -> str:
rendered = template
for key, value in variables.items():
rendered = rendered.replace("{" + key + "}", str(value))
return rendered
# [/DEF:render_prompt:Function]
# [/DEF:backend.src.services.llm_prompt_templates:Module]
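# Why render_prompt uses replace() rather than str.format(): the default
# templates above embed literal JSON braces, which format() would try to parse
# as fields. Illustrative sketch (local copy, hypothetical template):
template = 'Report as JSON:\n{\n  "status": "PASS" | "FAIL"\n}\nLogs:\n{logs}'

def render(template, variables):
    rendered = template
    for key, value in variables.items():
        rendered = rendered.replace("{" + key + "}", str(value))
    return rendered

out = render(template, {"logs": "chart X timed out"})
assert '"status"' in out and "chart X timed out" in out
try:
    template.format(logs="chart X timed out")  # literal braces break format()
except (KeyError, ValueError):
    pass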

View File

@@ -24,7 +24,7 @@ class EncryptionManager:
# @PRE: ENCRYPTION_KEY env var must be set or use default dev key.
# @POST: Fernet instance ready for encryption/decryption.
def __init__(self):
self.key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
self.key = os.getenv("ENCRYPTION_KEY", "REMOVED_HISTORICAL_SECRET_DO_NOT_USE").encode()
self.fernet = Fernet(self.key)
# [/DEF:EncryptionManager.__init__:Function]
@@ -33,8 +33,7 @@ class EncryptionManager:
# @PRE: data must be a non-empty string.
# @POST: Returns encrypted string.
def encrypt(self, data: str) -> str:
with belief_scope("encrypt"):
return self.fernet.encrypt(data.encode()).decode()
return self.fernet.encrypt(data.encode()).decode()
# [/DEF:EncryptionManager.encrypt:Function]
# [DEF:EncryptionManager.decrypt:Function]
@@ -42,8 +41,7 @@ class EncryptionManager:
# @PRE: encrypted_data must be a valid Fernet-encrypted string.
# @POST: Returns original plaintext string.
def decrypt(self, encrypted_data: str) -> str:
with belief_scope("decrypt"):
return self.fernet.decrypt(encrypted_data.encode()).decode()
return self.fernet.decrypt(encrypted_data.encode()).decode()
# [/DEF:EncryptionManager.decrypt:Function]
# [/DEF:EncryptionManager:Class]

View File

@@ -1,181 +0,0 @@
# [DEF:test_report_service:Module]
# @TIER: CRITICAL
# @PURPOSE: Unit tests for ReportsService list/detail operations
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.reports.report_service.ReportsService
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))
import pytest
from unittest.mock import MagicMock, patch
from datetime import datetime, timezone, timedelta
def _make_task(task_id="task-1", plugin_id="superset-backup", status_value="SUCCESS",
started_at=None, finished_at=None, result=None, params=None, logs=None):
"""Create a mock Task object matching the Task model interface."""
from src.core.task_manager.models import Task, TaskStatus
task = Task(plugin_id=plugin_id, params=params or {})
task.id = task_id
task.status = TaskStatus(status_value)
task.started_at = started_at or datetime(2024, 1, 15, 10, 0, 0)
task.finished_at = finished_at or datetime(2024, 1, 15, 10, 5, 0)
task.result = result
if logs is not None:
task.logs = logs
return task
class TestReportsServiceList:
"""Tests for ReportsService.list_reports."""
def _make_service(self, tasks):
from src.services.reports.report_service import ReportsService
mock_tm = MagicMock()
mock_tm.get_all_tasks.return_value = tasks
return ReportsService(task_manager=mock_tm)
def test_empty_tasks_returns_empty_collection(self):
from src.models.report import ReportQuery
svc = self._make_service([])
result = svc.list_reports(ReportQuery())
assert result.total == 0
assert result.items == []
assert result.has_next is False
def test_single_task_normalized(self):
from src.models.report import ReportQuery
task = _make_task(result={"summary": "Backup completed"})
svc = self._make_service([task])
result = svc.list_reports(ReportQuery())
assert result.total == 1
assert result.items[0].task_id == "task-1"
assert result.items[0].summary == "Backup completed"
def test_pagination_first_page(self):
from src.models.report import ReportQuery
tasks = [
_make_task(task_id=f"task-{i}",
finished_at=datetime(2024, 1, 15, 10, i, 0))
for i in range(5)
]
svc = self._make_service(tasks)
result = svc.list_reports(ReportQuery(page=1, page_size=2))
assert len(result.items) == 2
assert result.total == 5
assert result.has_next is True
def test_pagination_last_page(self):
from src.models.report import ReportQuery
tasks = [
_make_task(task_id=f"task-{i}",
finished_at=datetime(2024, 1, 15, 10, i, 0))
for i in range(5)
]
svc = self._make_service(tasks)
result = svc.list_reports(ReportQuery(page=3, page_size=2))
assert len(result.items) == 1
assert result.has_next is False
def test_filter_by_status(self):
from src.models.report import ReportQuery, ReportStatus
tasks = [
_make_task(task_id="ok", status_value="SUCCESS"),
_make_task(task_id="fail", status_value="FAILED"),
]
svc = self._make_service(tasks)
result = svc.list_reports(ReportQuery(statuses=[ReportStatus.SUCCESS]))
assert result.total == 1
assert result.items[0].task_id == "ok"
def test_filter_by_task_type(self):
from src.models.report import ReportQuery, TaskType
tasks = [
_make_task(task_id="backup", plugin_id="superset-backup"),
_make_task(task_id="migrate", plugin_id="superset-migration"),
]
svc = self._make_service(tasks)
result = svc.list_reports(ReportQuery(task_types=[TaskType.BACKUP]))
assert result.total == 1
assert result.items[0].task_id == "backup"
def test_search_filter(self):
from src.models.report import ReportQuery
tasks = [
_make_task(task_id="t1", plugin_id="superset-migration",
result={"summary": "Migration complete"}),
_make_task(task_id="t2", plugin_id="documentation",
result={"summary": "Docs generated"}),
]
svc = self._make_service(tasks)
result = svc.list_reports(ReportQuery(search="migration"))
assert result.total == 1
assert result.items[0].task_id == "t1"
def test_sort_by_status(self):
from src.models.report import ReportQuery
tasks = [
_make_task(task_id="t1", status_value="SUCCESS"),
_make_task(task_id="t2", status_value="FAILED"),
]
svc = self._make_service(tasks)
result = svc.list_reports(ReportQuery(sort_by="status", sort_order="asc"))
statuses = [item.status.value for item in result.items]
assert statuses == sorted(statuses)
def test_applied_filters_echoed(self):
from src.models.report import ReportQuery
query = ReportQuery(page=2, page_size=5)
svc = self._make_service([])
result = svc.list_reports(query)
assert result.applied_filters.page == 2
assert result.applied_filters.page_size == 5
class TestReportsServiceDetail:
"""Tests for ReportsService.get_report_detail."""
def _make_service(self, tasks):
from src.services.reports.report_service import ReportsService
mock_tm = MagicMock()
mock_tm.get_all_tasks.return_value = tasks
return ReportsService(task_manager=mock_tm)
def test_detail_found(self):
task = _make_task(task_id="detail-task", result={"summary": "Done"})
svc = self._make_service([task])
detail = svc.get_report_detail("detail-task")
assert detail is not None
assert detail.report.task_id == "detail-task"
def test_detail_not_found(self):
svc = self._make_service([])
detail = svc.get_report_detail("nonexistent")
assert detail is None
def test_detail_includes_timeline(self):
task = _make_task(task_id="tl-task",
started_at=datetime(2024, 1, 15, 10, 0, 0),
finished_at=datetime(2024, 1, 15, 10, 5, 0))
svc = self._make_service([task])
detail = svc.get_report_detail("tl-task")
events = [e["event"] for e in detail.timeline]
assert "started" in events
assert "updated" in events
def test_detail_failed_task_has_next_actions(self):
task = _make_task(task_id="fail-task", status_value="FAILED")
svc = self._make_service([task])
detail = svc.get_report_detail("fail-task")
assert len(detail.next_actions) > 0
def test_detail_success_task_no_error_next_actions(self):
task = _make_task(task_id="ok-task", status_value="SUCCESS",
result={"summary": "All good"})
svc = self._make_service([task])
detail = svc.get_report_detail("ok-task")
assert detail.next_actions == []
# [/DEF:test_report_service:Module]
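As a quick cross-check of the pagination cases above, the slice arithmetic in ReportsService.list_reports (shown later in this diff) reduces to a few lines; a standalone sketch:

total = 5                             # five tasks, as in the pagination tests
page, page_size = 3, 2                # the "last page" scenario
start = (page - 1) * page_size        # 4
end = start + page_size               # 6
items = list(range(total))[start:end]
assert items == [4]                   # one remaining item on the last page
assert not end < total                # so has_next is False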

View File

@@ -12,7 +12,6 @@
from datetime import datetime
from typing import Any, Dict, Optional
from ...core.logger import belief_scope
from ...core.task_manager.models import Task, TaskStatus
from ...models.report import ErrorContext, ReportStatus, TaskReport
from .type_profiles import get_type_profile, resolve_task_type
@@ -26,15 +25,14 @@ from .type_profiles import get_type_profile, resolve_task_type
# @PARAM: status (Any) - Internal task status value.
# @RETURN: ReportStatus - Canonical report status.
def status_to_report_status(status: Any) -> ReportStatus:
with belief_scope("status_to_report_status"):
raw = str(status.value if isinstance(status, TaskStatus) else status).upper()
if raw == TaskStatus.SUCCESS.value:
return ReportStatus.SUCCESS
if raw == TaskStatus.FAILED.value:
return ReportStatus.FAILED
if raw in {TaskStatus.PENDING.value, TaskStatus.RUNNING.value, TaskStatus.AWAITING_INPUT.value, TaskStatus.AWAITING_MAPPING.value}:
return ReportStatus.IN_PROGRESS
return ReportStatus.PARTIAL
raw = str(status.value if isinstance(status, TaskStatus) else status).upper()
if raw == TaskStatus.SUCCESS.value:
return ReportStatus.SUCCESS
if raw == TaskStatus.FAILED.value:
return ReportStatus.FAILED
if raw in {TaskStatus.PENDING.value, TaskStatus.RUNNING.value, TaskStatus.AWAITING_INPUT.value, TaskStatus.AWAITING_MAPPING.value}:
return ReportStatus.IN_PROGRESS
return ReportStatus.PARTIAL
# [/DEF:status_to_report_status:Function]
@@ -46,20 +44,19 @@ def status_to_report_status(status: Any) -> ReportStatus:
# @PARAM: report_status (ReportStatus) - Canonical status.
# @RETURN: str - Normalized summary.
def build_summary(task: Task, report_status: ReportStatus) -> str:
with belief_scope("build_summary"):
result = task.result
if isinstance(result, dict):
for key in ("summary", "message", "status_message", "description"):
value = result.get(key)
if isinstance(value, str) and value.strip():
return value.strip()
if report_status == ReportStatus.SUCCESS:
return "Task completed successfully"
if report_status == ReportStatus.FAILED:
return "Task failed"
if report_status == ReportStatus.IN_PROGRESS:
return "Task is in progress"
return "Task completed with partial data"
result = task.result
if isinstance(result, dict):
for key in ("summary", "message", "status_message", "description"):
value = result.get(key)
if isinstance(value, str) and value.strip():
return value.strip()
if report_status == ReportStatus.SUCCESS:
return "Task completed successfully"
if report_status == ReportStatus.FAILED:
return "Task failed"
if report_status == ReportStatus.IN_PROGRESS:
return "Task is in progress"
return "Task completed with partial data"
# [/DEF:build_summary:Function]
@@ -71,39 +68,38 @@ def build_summary(task: Task, report_status: ReportStatus) -> str:
# @PARAM: report_status (ReportStatus) - Canonical status.
# @RETURN: Optional[ErrorContext] - Error context block.
def extract_error_context(task: Task, report_status: ReportStatus) -> Optional[ErrorContext]:
with belief_scope("extract_error_context"):
if report_status not in {ReportStatus.FAILED, ReportStatus.PARTIAL}:
return None
if report_status not in {ReportStatus.FAILED, ReportStatus.PARTIAL}:
return None
result = task.result if isinstance(task.result, dict) else {}
message = None
code = None
next_actions = []
result = task.result if isinstance(task.result, dict) else {}
message = None
code = None
next_actions = []
if isinstance(result.get("error"), dict):
error_obj = result.get("error", {})
message = error_obj.get("message") or message
code = error_obj.get("code") or code
actions = error_obj.get("next_actions")
if isinstance(actions, list):
next_actions = [str(action) for action in actions if str(action).strip()]
if isinstance(result.get("error"), dict):
error_obj = result.get("error", {})
message = error_obj.get("message") or message
code = error_obj.get("code") or code
actions = error_obj.get("next_actions")
if isinstance(actions, list):
next_actions = [str(action) for action in actions if str(action).strip()]
if not message:
message = result.get("error_message") if isinstance(result.get("error_message"), str) else None
if not message:
message = result.get("error_message") if isinstance(result.get("error_message"), str) else None
if not message:
for log in reversed(task.logs):
if str(log.level).upper() == "ERROR" and log.message:
message = log.message
break
if not message:
for log in reversed(task.logs):
if str(log.level).upper() == "ERROR" and log.message:
message = log.message
break
if not message:
message = "Not provided"
if not message:
message = "Not provided"
if not next_actions:
next_actions = ["Review task diagnostics", "Retry the operation"]
if not next_actions:
next_actions = ["Review task diagnostics", "Retry the operation"]
return ErrorContext(code=code, message=message, next_actions=next_actions)
return ErrorContext(code=code, message=message, next_actions=next_actions)
# [/DEF:extract_error_context:Function]
@@ -114,44 +110,43 @@ def extract_error_context(task: Task, report_status: ReportStatus) -> Optional[E
# @PARAM: task (Task) - Source task.
# @RETURN: TaskReport - Canonical normalized report.
def normalize_task_report(task: Task) -> TaskReport:
with belief_scope("normalize_task_report"):
task_type = resolve_task_type(task.plugin_id)
report_status = status_to_report_status(task.status)
profile = get_type_profile(task_type)
task_type = resolve_task_type(task.plugin_id)
report_status = status_to_report_status(task.status)
profile = get_type_profile(task_type)
started_at = task.started_at if isinstance(task.started_at, datetime) else None
updated_at = task.finished_at if isinstance(task.finished_at, datetime) else None
if not updated_at:
updated_at = started_at or datetime.utcnow()
started_at = task.started_at if isinstance(task.started_at, datetime) else None
updated_at = task.finished_at if isinstance(task.finished_at, datetime) else None
if not updated_at:
updated_at = started_at or datetime.utcnow()
details: Dict[str, Any] = {
"profile": {
"display_label": profile.get("display_label"),
"visual_variant": profile.get("visual_variant"),
"icon_token": profile.get("icon_token"),
"emphasis_rules": profile.get("emphasis_rules", []),
},
"result": task.result if task.result is not None else {"note": "Not provided"},
}
details: Dict[str, Any] = {
"profile": {
"display_label": profile.get("display_label"),
"visual_variant": profile.get("visual_variant"),
"icon_token": profile.get("icon_token"),
"emphasis_rules": profile.get("emphasis_rules", []),
},
"result": task.result if task.result is not None else {"note": "Not provided"},
}
source_ref: Dict[str, Any] = {}
if isinstance(task.params, dict):
for key in ("environment_id", "source_env_id", "target_env_id", "dashboard_id", "dataset_id", "resource_id"):
if key in task.params:
source_ref[key] = task.params.get(key)
source_ref: Dict[str, Any] = {}
if isinstance(task.params, dict):
for key in ("environment_id", "source_env_id", "target_env_id", "dashboard_id", "dataset_id", "resource_id"):
if key in task.params:
source_ref[key] = task.params.get(key)
return TaskReport(
report_id=task.id,
task_id=task.id,
task_type=task_type,
status=report_status,
started_at=started_at,
updated_at=updated_at,
summary=build_summary(task, report_status),
details=details,
error_context=extract_error_context(task, report_status),
source_ref=source_ref or None,
)
return TaskReport(
report_id=task.id,
task_id=task.id,
task_type=task_type,
status=report_status,
started_at=started_at,
updated_at=updated_at,
summary=build_summary(task, report_status),
details=details,
error_context=extract_error_context(task, report_status),
source_ref=source_ref or None,
)
# [/DEF:normalize_task_report:Function]
# [/DEF:backend.src.services.reports.normalizer:Module]
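For orientation, the status mapping above folds every internal status into four report states; a minimal sketch, assuming TaskStatus values are the upper-case strings the comparisons imply:

from src.models.report import ReportStatus
from src.services.reports.normalizer import status_to_report_status

assert status_to_report_status("success") is ReportStatus.SUCCESS        # input is upper-cased first
assert status_to_report_status("RUNNING") is ReportStatus.IN_PROGRESS
assert status_to_report_status("ANYTHING_ELSE") is ReportStatus.PARTIAL  # fallback branch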

View File

@@ -12,8 +12,6 @@
from datetime import datetime, timezone
from typing import List, Optional
from ...core.logger import belief_scope
from ...core.task_manager import TaskManager
from ...models.report import ReportCollection, ReportDetailView, ReportQuery, ReportStatus, TaskReport, TaskType
from .normalizer import normalize_task_report
@@ -35,8 +33,7 @@ class ReportsService:
# @INVARIANT: Constructor performs no task mutations.
# @PARAM: task_manager (TaskManager) - Task manager providing source task history.
def __init__(self, task_manager: TaskManager):
with belief_scope("__init__"):
self.task_manager = task_manager
self.task_manager = task_manager
# [/DEF:__init__:Function]
# [DEF:_load_normalized_reports:Function]
@@ -46,10 +43,9 @@ class ReportsService:
# @INVARIANT: Every returned item is a TaskReport.
# @RETURN: List[TaskReport] - Reports sorted later by list logic.
def _load_normalized_reports(self) -> List[TaskReport]:
with belief_scope("_load_normalized_reports"):
tasks = self.task_manager.get_all_tasks()
reports = [normalize_task_report(task) for task in tasks]
return reports
tasks = self.task_manager.get_all_tasks()
reports = [normalize_task_report(task) for task in tasks]
return reports
# [/DEF:_load_normalized_reports:Function]
# [DEF:_to_utc_datetime:Function]
@@ -60,12 +56,11 @@ class ReportsService:
# @PARAM: value (Optional[datetime]) - Source datetime value.
# @RETURN: Optional[datetime] - UTC-aware datetime or None.
def _to_utc_datetime(self, value: Optional[datetime]) -> Optional[datetime]:
with belief_scope("_to_utc_datetime"):
if value is None:
return None
if value.tzinfo is None:
return value.replace(tzinfo=timezone.utc)
return value.astimezone(timezone.utc)
if value is None:
return None
if value.tzinfo is None:
return value.replace(tzinfo=timezone.utc)
return value.astimezone(timezone.utc)
# [/DEF:_to_utc_datetime:Function]
# [DEF:_datetime_sort_key:Function]
@@ -76,11 +71,10 @@ class ReportsService:
# @PARAM: report (TaskReport) - Report item.
# @RETURN: float - UTC timestamp key.
def _datetime_sort_key(self, report: TaskReport) -> float:
with belief_scope("_datetime_sort_key"):
updated = self._to_utc_datetime(report.updated_at)
if updated is None:
return 0.0
return updated.timestamp()
updated = self._to_utc_datetime(report.updated_at)
if updated is None:
return 0.0
return updated.timestamp()
# [/DEF:_datetime_sort_key:Function]
# [DEF:_matches_query:Function]
@@ -92,25 +86,24 @@ class ReportsService:
# @PARAM: query (ReportQuery) - Applied query.
# @RETURN: bool - True if report matches all filters.
def _matches_query(self, report: TaskReport, query: ReportQuery) -> bool:
with belief_scope("_matches_query"):
if query.task_types and report.task_type not in query.task_types:
return False
if query.statuses and report.status not in query.statuses:
return False
report_updated_at = self._to_utc_datetime(report.updated_at)
query_time_from = self._to_utc_datetime(query.time_from)
query_time_to = self._to_utc_datetime(query.time_to)
if query.task_types and report.task_type not in query.task_types:
return False
if query.statuses and report.status not in query.statuses:
return False
report_updated_at = self._to_utc_datetime(report.updated_at)
query_time_from = self._to_utc_datetime(query.time_from)
query_time_to = self._to_utc_datetime(query.time_to)
if query_time_from and report_updated_at and report_updated_at < query_time_from:
if query_time_from and report_updated_at and report_updated_at < query_time_from:
return False
if query_time_to and report_updated_at and report_updated_at > query_time_to:
return False
if query.search:
needle = query.search.lower()
haystack = f"{report.summary} {report.task_type.value} {report.status.value}".lower()
if needle not in haystack:
return False
if query_time_to and report_updated_at and report_updated_at > query_time_to:
return False
if query.search:
needle = query.search.lower()
haystack = f"{report.summary} {report.task_type.value} {report.status.value}".lower()
if needle not in haystack:
return False
return True
return True
# [/DEF:_matches_query:Function]
# [DEF:_sort_reports:Function]
@@ -122,17 +115,16 @@ class ReportsService:
# @PARAM: query (ReportQuery) - Sort config.
# @RETURN: List[TaskReport] - Sorted reports.
def _sort_reports(self, reports: List[TaskReport], query: ReportQuery) -> List[TaskReport]:
with belief_scope("_sort_reports"):
reverse = query.sort_order == "desc"
reverse = query.sort_order == "desc"
if query.sort_by == "status":
reports.sort(key=lambda item: item.status.value, reverse=reverse)
elif query.sort_by == "task_type":
reports.sort(key=lambda item: item.task_type.value, reverse=reverse)
else:
reports.sort(key=self._datetime_sort_key, reverse=reverse)
if query.sort_by == "status":
reports.sort(key=lambda item: item.status.value, reverse=reverse)
elif query.sort_by == "task_type":
reports.sort(key=lambda item: item.task_type.value, reverse=reverse)
else:
reports.sort(key=self._datetime_sort_key, reverse=reverse)
return reports
return reports
# [/DEF:_sort_reports:Function]
# [DEF:list_reports:Function]
@@ -142,25 +134,24 @@ class ReportsService:
# @PARAM: query (ReportQuery) - List filters and pagination.
# @RETURN: ReportCollection - Paginated unified reports payload.
def list_reports(self, query: ReportQuery) -> ReportCollection:
with belief_scope("list_reports"):
reports = self._load_normalized_reports()
filtered = [report for report in reports if self._matches_query(report, query)]
sorted_reports = self._sort_reports(filtered, query)
reports = self._load_normalized_reports()
filtered = [report for report in reports if self._matches_query(report, query)]
sorted_reports = self._sort_reports(filtered, query)
total = len(sorted_reports)
start = (query.page - 1) * query.page_size
end = start + query.page_size
items = sorted_reports[start:end]
has_next = end < total
total = len(sorted_reports)
start = (query.page - 1) * query.page_size
end = start + query.page_size
items = sorted_reports[start:end]
has_next = end < total
return ReportCollection(
items=items,
total=total,
page=query.page,
page_size=query.page_size,
has_next=has_next,
applied_filters=query,
)
return ReportCollection(
items=items,
total=total,
page=query.page,
page_size=query.page_size,
has_next=has_next,
applied_filters=query,
)
# [/DEF:list_reports:Function]
# [DEF:get_report_detail:Function]
@@ -170,35 +161,34 @@ class ReportsService:
# @PARAM: report_id (str) - Stable report identifier.
# @RETURN: Optional[ReportDetailView] - Detailed report or None if not found.
def get_report_detail(self, report_id: str) -> Optional[ReportDetailView]:
with belief_scope("get_report_detail"):
reports = self._load_normalized_reports()
target = next((report for report in reports if report.report_id == report_id), None)
if not target:
return None
reports = self._load_normalized_reports()
target = next((report for report in reports if report.report_id == report_id), None)
if not target:
return None
timeline = []
if target.started_at:
timeline.append({"event": "started", "at": target.started_at.isoformat()})
timeline.append({"event": "updated", "at": target.updated_at.isoformat()})
timeline = []
if target.started_at:
timeline.append({"event": "started", "at": target.started_at.isoformat()})
timeline.append({"event": "updated", "at": target.updated_at.isoformat()})
diagnostics = target.details or {}
if not diagnostics:
diagnostics = {"note": "Not provided"}
if target.error_context:
diagnostics["error_context"] = target.error_context.model_dump()
diagnostics = target.details or {}
if not diagnostics:
diagnostics = {"note": "Not provided"}
if target.error_context:
diagnostics["error_context"] = target.error_context.model_dump()
next_actions = []
if target.error_context and target.error_context.next_actions:
next_actions = target.error_context.next_actions
elif target.status in {ReportStatus.FAILED, ReportStatus.PARTIAL}:
next_actions = ["Review diagnostics", "Retry task if applicable"]
next_actions = []
if target.error_context and target.error_context.next_actions:
next_actions = target.error_context.next_actions
elif target.status in {ReportStatus.FAILED, ReportStatus.PARTIAL}:
next_actions = ["Review diagnostics", "Retry task if applicable"]
return ReportDetailView(
report=target,
timeline=timeline,
diagnostics=diagnostics,
next_actions=next_actions,
)
return ReportDetailView(
report=target,
timeline=timeline,
diagnostics=diagnostics,
next_actions=next_actions,
)
# [/DEF:get_report_detail:Function]
# [/DEF:ReportsService:Class]
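The timezone handling in _to_utc_datetime above is worth pinning down: naive datetimes are tagged as UTC without shifting, aware ones are converted. A runnable sketch using the same MagicMock construction as the tests:

from datetime import datetime, timedelta, timezone
from unittest.mock import MagicMock
from src.services.reports.report_service import ReportsService

svc = ReportsService(task_manager=MagicMock())
naive = datetime(2024, 1, 15, 10, 0, 0)
aware = datetime(2024, 1, 15, 13, 0, 0, tzinfo=timezone(timedelta(hours=3)))
expected = datetime(2024, 1, 15, 10, 0, 0, tzinfo=timezone.utc)
assert svc._to_utc_datetime(naive) == expected  # tagged as UTC, not shifted
assert svc._to_utc_datetime(aware) == expected  # 13:00+03:00 is 10:00 UTC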

View File

@@ -9,7 +9,6 @@
# [SECTION: IMPORTS]
from typing import Any, Dict, Optional
from ...core.logger import belief_scope
from ...models.report import TaskType
# [/SECTION]
@@ -72,11 +71,10 @@ TASK_TYPE_PROFILES: Dict[TaskType, Dict[str, Any]] = {
# @PARAM: plugin_id (Optional[str]) - Source plugin/task identifier from task record.
# @RETURN: TaskType - Resolved canonical type or UNKNOWN fallback.
def resolve_task_type(plugin_id: Optional[str]) -> TaskType:
with belief_scope("resolve_task_type"):
normalized = (plugin_id or "").strip()
if not normalized:
return TaskType.UNKNOWN
return PLUGIN_TO_TASK_TYPE.get(normalized, TaskType.UNKNOWN)
normalized = (plugin_id or "").strip()
if not normalized:
return TaskType.UNKNOWN
return PLUGIN_TO_TASK_TYPE.get(normalized, TaskType.UNKNOWN)
# [/DEF:resolve_task_type:Function]
@@ -87,8 +85,7 @@ def resolve_task_type(plugin_id: Optional[str]) -> TaskType:
# @PARAM: task_type (TaskType) - Canonical task type.
# @RETURN: Dict[str, Any] - Profile metadata used by normalization and UI contracts.
def get_type_profile(task_type: TaskType) -> Dict[str, Any]:
with belief_scope("get_type_profile"):
return TASK_TYPE_PROFILES.get(task_type, TASK_TYPE_PROFILES[TaskType.UNKNOWN])
return TASK_TYPE_PROFILES.get(task_type, TASK_TYPE_PROFILES[TaskType.UNKNOWN])
# [/DEF:get_type_profile:Function]
# [/DEF:backend.src.services.reports.type_profiles:Module]
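A short sketch of the fallback contract above: blank or unmapped plugin ids resolve to UNKNOWN, and get_type_profile can never miss because UNKNOWN is a guaranteed key of TASK_TYPE_PROFILES:

from src.models.report import TaskType
from src.services.reports.type_profiles import (
    TASK_TYPE_PROFILES, get_type_profile, resolve_task_type,
)

assert resolve_task_type(None) is TaskType.UNKNOWN
assert resolve_task_type("   ") is TaskType.UNKNOWN  # whitespace-only id
assert get_type_profile(resolve_task_type("no-such-plugin")) is TASK_TYPE_PROFILES[TaskType.UNKNOWN]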

Binary file not shown.

View File

@@ -1,76 +0,0 @@
#!/usr/bin/env python3
"""Debug script to test Superset API authentication"""
from pprint import pprint
from src.core.superset_client import SupersetClient
from src.core.config_manager import ConfigManager
def main():
print("Debugging Superset API authentication...")
config = ConfigManager()
# Select first available environment
environments = config.get_environments()
if not environments:
print("No environments configured")
return
env = environments[0]
print(f"\nTesting environment: {env.name}")
print(f"URL: {env.url}")
try:
# Test API client authentication
print("\n--- Testing API Authentication ---")
client = SupersetClient(env)
tokens = client.authenticate()
print("\nAPI Auth Success!")
print(f"Access Token: {tokens.get('access_token', 'N/A')}")
print(f"CSRF Token: {tokens.get('csrf_token', 'N/A')}")
# Debug cookies from session
print("\n--- Session Cookies ---")
for cookie in client.network.session.cookies:
print(f"{cookie.name}={cookie.value}")
# Test accessing UI via requests
print("\n--- Testing UI Access ---")
ui_url = env.url.rstrip('/').replace('/api/v1', '')
print(f"UI URL: {ui_url}")
# Try to access UI home page
ui_response = client.network.session.get(ui_url, timeout=30, allow_redirects=True)
print(f"Status Code: {ui_response.status_code}")
print(f"URL: {ui_response.url}")
# Check response headers
print("\n--- Response Headers ---")
pprint(dict(ui_response.headers))
print("\n--- Response Content Preview (200 chars) ---")
print(repr(ui_response.text[:200]))
if ui_response.status_code == 200:
print("\nUI Access: Success")
# Try to access a dashboard
# For testing, just use the home page
print("\n--- Checking if login is required ---")
if "login" in ui_response.url.lower() or "login" in ui_response.text.lower():
print("❌ Not logged in to UI")
else:
print("✅ Logged in to UI")
except Exception as e:
print(f"\n❌ Error: {type(e).__name__}: {e}")
import traceback
print("\nStack Trace:")
print(traceback.format_exc())
if __name__ == "__main__":
main()
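A side note on the login check in the script above: substring-matching the whole response body for "login" is brittle, since many logged-in pages also contain that word. A tighter heuristic, assuming Superset's login route lives at /login/, would inspect only the final redirect target:

from urllib.parse import urlparse

# Hypothetical refinement: decide login state from the final URL path alone.
final_path = urlparse(ui_response.url).path.lower()
logged_in = not final_path.rstrip("/").endswith("login")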

View File

@@ -1,44 +0,0 @@
#!/usr/bin/env python3
"""Test script to debug API key decryption issue."""
from src.core.database import SessionLocal
from src.models.llm import LLMProvider
from cryptography.fernet import Fernet
import os
# Get the encryption key
key = os.getenv("ENCRYPTION_KEY", "ZcytYzi0iHIl4Ttr-GdAEk117aGRogkGvN3wiTxrPpE=").encode()
print(f"Encryption key (first 20 chars): {key[:20]}")
print(f"Encryption key length: {len(key)}")
# Create Fernet instance
fernet = Fernet(key)
# Get provider from database
db = SessionLocal()
provider = db.query(LLMProvider).filter(LLMProvider.id == '6c899741-4108-4196-aea4-f38ad2f0150e').first()
if provider:
print("\nProvider found:")
print(f" ID: {provider.id}")
print(f" Name: {provider.name}")
print(f" Encrypted API Key (first 50 chars): {provider.api_key[:50]}")
print(f" Encrypted API Key Length: {len(provider.api_key)}")
# Test decryption
print("\nAttempting decryption...")
try:
decrypted = fernet.decrypt(provider.api_key.encode()).decode()
print("Decryption successful!")
print(f" Decrypted key length: {len(decrypted)}")
print(f" Decrypted key (first 8 chars): {decrypted[:8]}")
print(f" Decrypted key is empty: {len(decrypted) == 0}")
except Exception as e:
print(f"Decryption failed with error: {e}")
print(f"Error type: {type(e).__name__}")
import traceback
traceback.print_exc()
else:
print("Provider not found")
db.close()
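One background fact worth keeping next to debug scripts like the one above: a Fernet key is 32 random bytes, url-safe base64-encoded to exactly 44 characters, and Fernet() rejects anything else before decryption is even attempted:

from cryptography.fernet import Fernet

fresh_key = Fernet.generate_key()  # 32 random bytes, url-safe base64-encoded
assert len(fresh_key) == 44        # 32 bytes always encode to 44 characters
Fernet(fresh_key)                  # a malformed key would raise ValueError here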

View File

@@ -1 +0,0 @@
[{"key[": 20, ")\n\n# Create Fernet instance\nfernet = Fernet(key)\n\n# Test encrypting an empty string\nempty_encrypted = fernet.encrypt(b\"": ".", "print(f": "nEncrypted empty string: {empty_encrypted"}, {"test-api-key-12345\"\ntest_encrypted = fernet.encrypt(test_key.encode()).decode()\nprint(f": "nEncrypted test key: {test_encrypted"}, {"gAAAAABphhwSZie0OwXjJ78Fk-c4Uo6doNJXipX49AX7Bypzp4ohiRX3hXPXKb45R1vhNUOqbm6Ke3-eRwu_KdWMZ9chFBKmqw==\"\nprint(f": "nStored encrypted key: {stored_key"}, {"len(stored_key)}": "Check if stored key matches empty string encryption\nif stored_key == empty_encrypted:\n print(", "string!": "else:\n print(", "print(f": "mpty string encryption: {empty_encrypted"}, {"stored_key}": "Try to decrypt the stored key\ntry:\n decrypted = fernet.decrypt(stored_key.encode()).decode()\n print(f", "print(f": "ecrypted key length: {len(decrypted)"}, {")\nexcept Exception as e:\n print(f": "nDecryption failed with error: {e"}]

View File

@@ -1,99 +0,0 @@
# [DEF:backend.tests.core.test_mapping_service:Module]
#
# @TIER: STANDARD
# @PURPOSE: Unit tests for the IdMappingService matching UUIDs to integer IDs.
# @LAYER: Domain
# @RELATION: VERIFIES -> backend.src.core.mapping_service
#
import pytest
from datetime import datetime, timezone
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
import sys
import os
from pathlib import Path
# Add backend directory to sys.path so 'src' can be resolved
backend_dir = str(Path(__file__).parent.parent.parent.resolve())
if backend_dir not in sys.path:
sys.path.insert(0, backend_dir)
from src.models.mapping import Base, ResourceMapping, ResourceType
from src.core.mapping_service import IdMappingService
@pytest.fixture
def db_session():
# In-memory SQLite for testing
engine = create_engine('sqlite:///:memory:')
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)
session = Session()
yield session
session.close()
class MockSupersetClient:
def __init__(self, resources):
self.resources = resources
def get_all_resources(self, endpoint):
return self.resources.get(endpoint, [])
def test_sync_environment_upserts_correctly(db_session):
service = IdMappingService(db_session)
mock_client = MockSupersetClient({
"chart": [
{"id": 42, "uuid": "123e4567-e89b-12d3-a456-426614174000", "slice_name": "Test Chart"}
]
})
service.sync_environment("test-env", mock_client)
mapping = db_session.query(ResourceMapping).first()
assert mapping is not None
assert mapping.environment_id == "test-env"
assert mapping.resource_type == ResourceType.CHART
assert mapping.uuid == "123e4567-e89b-12d3-a456-426614174000"
assert mapping.remote_integer_id == "42"
assert mapping.resource_name == "Test Chart"
def test_get_remote_id_returns_integer(db_session):
service = IdMappingService(db_session)
mapping = ResourceMapping(
environment_id="test-env",
resource_type=ResourceType.DATASET,
uuid="uuid-1",
remote_integer_id="99",
resource_name="Test DS",
last_synced_at=datetime.now(timezone.utc)
)
db_session.add(mapping)
db_session.commit()
result = service.get_remote_id("test-env", ResourceType.DATASET, "uuid-1")
assert result == 99
def test_get_remote_ids_batch_returns_dict(db_session):
service = IdMappingService(db_session)
m1 = ResourceMapping(
environment_id="test-env",
resource_type=ResourceType.DASHBOARD,
uuid="uuid-1",
remote_integer_id="11"
)
m2 = ResourceMapping(
environment_id="test-env",
resource_type=ResourceType.DASHBOARD,
uuid="uuid-2",
remote_integer_id="22"
)
db_session.add_all([m1, m2])
db_session.commit()
result = service.get_remote_ids_batch("test-env", ResourceType.DASHBOARD, ["uuid-1", "uuid-2", "uuid-missing"])
assert len(result) == 2
assert result["uuid-1"] == 11
assert result["uuid-2"] == 22
assert "uuid-missing" not in result
# [/DEF:backend.tests.core.test_mapping_service:Module]
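The upsert test above only exercises the initial insert. A hedged follow-up sketch of the update path the test name implies; it assumes sync_environment keys rows on (environment_id, resource_type, uuid), which the removed tests never verify:

def test_sync_environment_updates_existing(db_session):
    # Assumption, not taken from this diff: re-syncing a renamed chart updates
    # the existing row instead of inserting a duplicate.
    service = IdMappingService(db_session)
    chart = {"id": 42, "uuid": "123e4567-e89b-12d3-a456-426614174000", "slice_name": "Old"}
    service.sync_environment("test-env", MockSupersetClient({"chart": [chart]}))
    chart["slice_name"] = "New"
    service.sync_environment("test-env", MockSupersetClient({"chart": [chart]}))
    rows = db_session.query(ResourceMapping).all()
    assert len(rows) == 1
    assert rows[0].resource_name == "New"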

View File

@@ -1,66 +0,0 @@
# [DEF:backend.tests.core.test_migration_engine:Module]
#
# @TIER: STANDARD
# @PURPOSE: Unit tests for MigrationEngine's cross-filter patching algorithms.
# @LAYER: Domain
# @RELATION: VERIFIES -> backend.src.core.migration_engine
#
import pytest
import tempfile
import json
import yaml
import sys
import os
from pathlib import Path
backend_dir = str(Path(__file__).parent.parent.parent.resolve())
if backend_dir not in sys.path:
sys.path.insert(0, backend_dir)
from src.core.migration_engine import MigrationEngine
from src.core.mapping_service import IdMappingService
from src.models.mapping import ResourceType
class MockMappingService:
def __init__(self, mappings):
self.mappings = mappings
def get_remote_ids_batch(self, env_id, resource_type, uuids):
result = {}
for uuid in uuids:
if uuid in self.mappings:
result[uuid] = self.mappings[uuid]
return result
def test_patch_dashboard_metadata_replaces_ids():
engine = MigrationEngine(MockMappingService({"uuid-target-1": 999}))
with tempfile.TemporaryDirectory() as td:
file_path = Path(td) / "dash.yaml"
# Setup mock dashboard file
original_metadata = {
"native_filter_configuration": [
{
"targets": [{"datasetId": 10}, {"datasetId": 42}] # 42 is our source ID
}
]
}
with open(file_path, 'w') as f:
yaml.dump({"json_metadata": json.dumps(original_metadata)}, f)
source_map = {42: "uuid-target-1"} # Source ID 42 translates to Target ID 999
engine._patch_dashboard_metadata(file_path, "test-env", source_map)
with open(file_path, 'r') as f:
data = yaml.safe_load(f)
new_metadata = json.loads(data["json_metadata"])
# Dataset-id replacement is still a placeholder (`pass`) inside the engine, so this
# test currently verifies only the read/patch/write plumbing around
# _patch_dashboard_metadata; the datasetId assertions can be added once the mapping
# is implemented (see the sketch after this module).
# [/DEF:backend.tests.core.test_migration_engine:Module]
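For reference, a hedged sketch of the assertions the placeholder comment anticipates, meant for the end of test_patch_dashboard_metadata_replaces_ids once dataset replacement is implemented (indices come from the test's own fixture):

targets = new_metadata["native_filter_configuration"][0]["targets"]
assert targets[0]["datasetId"] == 10   # unmapped id stays untouched
assert targets[1]["datasetId"] == 999  # 42 -> uuid-target-1 -> 999 via the mock mapping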

View File

@@ -3,24 +3,20 @@
# @PURPOSE: Unit tests for TaskLogPersistenceService.
# @LAYER: Test
# @RELATION: TESTS -> TaskLogPersistenceService
# @TIER: CRITICAL
# @TIER: STANDARD
# [SECTION: IMPORTS]
from datetime import datetime
from unittest.mock import patch
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.models.mapping import Base
from src.core.task_manager.persistence import TaskLogPersistenceService
from src.core.task_manager.models import LogEntry, LogFilter
from src.core.task_manager.models import LogEntry
# [/SECTION]
# [DEF:TestLogPersistence:Class]
# @PURPOSE: Test suite for TaskLogPersistenceService.
# @TIER: CRITICAL
# @TEST_DATA: log_entry -> {"task_id": "test-task-1", "level": "INFO", "source": "test_source", "message": "Test message"}
# @TIER: STANDARD
class TestLogPersistence:
# [DEF:setup_class:Function]
@@ -31,9 +27,8 @@ class TestLogPersistence:
def setup_class(cls):
"""Create an in-memory database for testing."""
cls.engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(bind=cls.engine)
cls.TestSessionLocal = sessionmaker(bind=cls.engine)
cls.service = TaskLogPersistenceService()
cls.SessionLocal = sessionmaker(bind=cls.engine)
cls.service = TaskLogPersistenceService(cls.engine)
# [/DEF:setup_class:Function]
# [DEF:teardown_class:Function]
@@ -47,108 +42,111 @@ class TestLogPersistence:
# [/DEF:teardown_class:Function]
# [DEF:setup_method:Function]
# @PURPOSE: Setup for each test method — clean task_logs table.
# @PURPOSE: Setup for each test method.
# @PRE: None.
# @POST: task_logs table is empty.
# @POST: Fresh database session created.
def setup_method(self):
"""Clean task_logs table before each test."""
session = self.TestSessionLocal()
from src.models.task import TaskLogRecord
session.query(TaskLogRecord).delete()
session.commit()
session.close()
"""Create a new session for each test."""
self.session = self.SessionLocal()
# [/DEF:setup_method:Function]
def _patched(self, method_name):
"""Helper: returns a patch context for TasksSessionLocal."""
return patch(
"src.core.task_manager.persistence.TasksSessionLocal",
self.TestSessionLocal
)
# [DEF:teardown_method:Function]
# @PURPOSE: Cleanup after each test method.
# @PRE: None.
# @POST: Session closed and rolled back.
def teardown_method(self):
"""Close the session after each test."""
self.session.close()
# [/DEF:teardown_method:Function]
# [DEF:test_add_logs_single:Function]
# [DEF:test_add_log_single:Function]
# @PURPOSE: Test adding a single log entry.
# @PRE: Service and session initialized.
# @POST: Log entry persisted to database.
def test_add_logs_single(self):
"""Test adding a single log entry via add_logs."""
def test_add_log_single(self):
"""Test adding a single log entry."""
entry = LogEntry(
timestamp=datetime.utcnow(),
task_id="test-task-1",
timestamp=datetime.now(),
level="INFO",
source="test_source",
message="Test message"
)
with self._patched("add_logs"):
self.service.add_logs("test-task-1", [entry])
self.service.add_log(entry)
# Query the database
from src.models.task import TaskLogRecord
session = self.TestSessionLocal()
result = session.query(TaskLogRecord).filter_by(task_id="test-task-1").first()
session.close()
result = self.session.query(LogEntry).filter_by(task_id="test-task-1").first()
assert result is not None
assert result.level == "INFO"
assert result.source == "test_source"
assert result.message == "Test message"
# [/DEF:test_add_logs_single:Function]
# [/DEF:test_add_log_single:Function]
# [DEF:test_add_logs_batch:Function]
# [DEF:test_add_log_batch:Function]
# @PURPOSE: Test adding multiple log entries in batch.
# @PRE: Service and session initialized.
# @POST: All log entries persisted to database.
def test_add_logs_batch(self):
def test_add_log_batch(self):
"""Test adding multiple log entries in batch."""
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="source1", message="Message 1"),
LogEntry(timestamp=datetime.utcnow(), level="WARNING", source="source2", message="Message 2"),
LogEntry(timestamp=datetime.utcnow(), level="ERROR", source="source3", message="Message 3"),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="INFO",
source="source1",
message="Message 1"
),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="WARNING",
source="source2",
message="Message 2"
),
LogEntry(
task_id="test-task-2",
timestamp=datetime.now(),
level="ERROR",
source="source3",
message="Message 3"
),
]
with self._patched("add_logs"):
self.service.add_logs("test-task-2", entries)
from src.models.task import TaskLogRecord
session = self.TestSessionLocal()
results = session.query(TaskLogRecord).filter_by(task_id="test-task-2").all()
session.close()
self.service.add_logs(entries)
# Query the database
results = self.session.query(LogEntry).filter_by(task_id="test-task-2").all()
assert len(results) == 3
# [/DEF:test_add_logs_batch:Function]
# [DEF:test_add_logs_empty:Function]
# @PURPOSE: Test adding empty log list (should be no-op).
# @PRE: Service initialized.
# @POST: No logs added.
def test_add_logs_empty(self):
"""Test adding empty log list is a no-op."""
with self._patched("add_logs"):
self.service.add_logs("test-task-X", [])
from src.models.task import TaskLogRecord
session = self.TestSessionLocal()
results = session.query(TaskLogRecord).filter_by(task_id="test-task-X").all()
session.close()
assert len(results) == 0
# [/DEF:test_add_logs_empty:Function]
assert results[0].level == "INFO"
assert results[1].level == "WARNING"
assert results[2].level == "ERROR"
# [/DEF:test_add_log_batch:Function]
# [DEF:test_get_logs_by_task_id:Function]
# @PURPOSE: Test retrieving logs by task ID.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns logs for the specified task.
def test_get_logs_by_task_id(self):
"""Test retrieving logs by task ID using LogFilter."""
"""Test retrieving logs by task ID."""
# Add test logs
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="src1", message=f"Message {i}")
LogEntry(
task_id="test-task-3",
timestamp=datetime.now(),
level="INFO",
source="source1",
message=f"Message {i}"
)
for i in range(5)
]
with self._patched("add_logs"):
self.service.add_logs("test-task-3", entries)
with self._patched("get_logs"):
logs = self.service.get_logs("test-task-3", LogFilter())
self.service.add_logs(entries)
# Retrieve logs
logs = self.service.get_logs("test-task-3")
assert len(logs) == 5
assert all(log.task_id == "test-task-3" for log in logs)
# [/DEF:test_get_logs_by_task_id:Function]
@@ -159,25 +157,45 @@ class TestLogPersistence:
# @POST: Returns filtered logs.
def test_get_logs_with_filters(self):
"""Test retrieving logs with level and source filters."""
# Add test logs with different levels and sources
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="api", message="Info message"),
LogEntry(timestamp=datetime.utcnow(), level="WARNING", source="api", message="Warning message"),
LogEntry(timestamp=datetime.utcnow(), level="ERROR", source="storage", message="Error message"),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info message"
),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="WARNING",
source="api",
message="Warning message"
),
LogEntry(
task_id="test-task-4",
timestamp=datetime.now(),
level="ERROR",
source="storage",
message="Error message"
),
]
with self._patched("add_logs"):
self.service.add_logs("test-task-4", entries)
self.service.add_logs(entries)
# Test level filter
with self._patched("get_logs"):
warning_logs = self.service.get_logs("test-task-4", LogFilter(level="WARNING"))
warning_logs = self.service.get_logs("test-task-4", level="WARNING")
assert len(warning_logs) == 1
assert warning_logs[0].level == "WARNING"
# Test source filter
with self._patched("get_logs"):
api_logs = self.service.get_logs("test-task-4", LogFilter(source="api"))
api_logs = self.service.get_logs("test-task-4", source="api")
assert len(api_logs) == 2
assert all(log.source == "api" for log in api_logs)
# Test combined filters
api_warning_logs = self.service.get_logs("test-task-4", level="WARNING", source="api")
assert len(api_warning_logs) == 1
# [/DEF:test_get_logs_with_filters:Function]
# [DEF:test_get_logs_with_pagination:Function]
@@ -186,19 +204,25 @@ class TestLogPersistence:
# @POST: Returns paginated logs.
def test_get_logs_with_pagination(self):
"""Test retrieving logs with pagination."""
# Add 15 test logs
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="test", message=f"Message {i}")
LogEntry(
task_id="test-task-5",
timestamp=datetime.now(),
level="INFO",
source="test",
message=f"Message {i}"
)
for i in range(15)
]
with self._patched("add_logs"):
self.service.add_logs("test-task-5", entries)
with self._patched("get_logs"):
page1 = self.service.get_logs("test-task-5", LogFilter(limit=10, offset=0))
self.service.add_logs(entries)
# Test first page
page1 = self.service.get_logs("test-task-5", limit=10, offset=0)
assert len(page1) == 10
with self._patched("get_logs"):
page2 = self.service.get_logs("test-task-5", LogFilter(limit=10, offset=10))
# Test second page
page2 = self.service.get_logs("test-task-5", limit=10, offset=10)
assert len(page2) == 5
# [/DEF:test_get_logs_with_pagination:Function]
@@ -208,131 +232,164 @@ class TestLogPersistence:
# @POST: Returns logs matching search query.
def test_get_logs_with_search(self):
"""Test retrieving logs with search query."""
# Add test logs
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="api", message="User authentication successful"),
LogEntry(timestamp=datetime.utcnow(), level="ERROR", source="api", message="Failed to connect to database"),
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="storage", message="File saved successfully"),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="INFO",
source="api",
message="User authentication successful"
),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="ERROR",
source="api",
message="Failed to connect to database"
),
LogEntry(
task_id="test-task-6",
timestamp=datetime.now(),
level="INFO",
source="storage",
message="File saved successfully"
),
]
with self._patched("add_logs"):
self.service.add_logs("test-task-6", entries)
with self._patched("get_logs"):
auth_logs = self.service.get_logs("test-task-6", LogFilter(search="authentication"))
self.service.add_logs(entries)
# Test search for "authentication"
auth_logs = self.service.get_logs("test-task-6", search="authentication")
assert len(auth_logs) == 1
assert "authentication" in auth_logs[0].message.lower()
# Test search for "failed"
failed_logs = self.service.get_logs("test-task-6", search="failed")
assert len(failed_logs) == 1
assert "failed" in failed_logs[0].message.lower()
# [/DEF:test_get_logs_with_search:Function]
# [DEF:test_get_log_stats:Function]
# @PURPOSE: Test retrieving log statistics.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns LogStats model with counts by level and source.
# @POST: Returns statistics grouped by level and source.
def test_get_log_stats(self):
"""Test retrieving log statistics as LogStats model."""
"""Test retrieving log statistics."""
# Add test logs
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="api", message="Info 1"),
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="api", message="Info 2"),
LogEntry(timestamp=datetime.utcnow(), level="WARNING", source="api", message="Warning 1"),
LogEntry(timestamp=datetime.utcnow(), level="ERROR", source="storage", message="Error 1"),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info 1"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Info 2"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="WARNING",
source="api",
message="Warning 1"
),
LogEntry(
task_id="test-task-7",
timestamp=datetime.now(),
level="ERROR",
source="storage",
message="Error 1"
),
]
with self._patched("add_logs"):
self.service.add_logs("test-task-7", entries)
with self._patched("get_log_stats"):
stats = self.service.get_log_stats("test-task-7")
self.service.add_logs(entries)
# Get stats
stats = self.service.get_log_stats("test-task-7")
assert stats is not None
assert stats.total_count == 4
assert stats.by_level["INFO"] == 2
assert stats.by_level["WARNING"] == 1
assert stats.by_level["ERROR"] == 1
assert stats.by_source["api"] == 3
assert stats.by_source["storage"] == 1
assert stats["by_level"]["INFO"] == 2
assert stats["by_level"]["WARNING"] == 1
assert stats["by_level"]["ERROR"] == 1
assert stats["by_source"]["api"] == 3
assert stats["by_source"]["storage"] == 1
# [/DEF:test_get_log_stats:Function]
# [DEF:test_get_sources:Function]
# [DEF:test_get_log_sources:Function]
# @PURPOSE: Test retrieving unique log sources.
# @PRE: Service and session initialized, logs exist.
# @POST: Returns list of unique sources.
def test_get_sources(self):
def test_get_log_sources(self):
"""Test retrieving unique log sources."""
# Add test logs
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="api", message="Message 1"),
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="storage", message="Message 2"),
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="git", message="Message 3"),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="api",
message="Message 1"
),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="storage",
message="Message 2"
),
LogEntry(
task_id="test-task-8",
timestamp=datetime.now(),
level="INFO",
source="git",
message="Message 3"
),
]
with self._patched("add_logs"):
self.service.add_logs("test-task-8", entries)
with self._patched("get_sources"):
sources = self.service.get_sources("test-task-8")
self.service.add_logs(entries)
# Get sources
sources = self.service.get_log_sources("test-task-8")
assert len(sources) == 3
assert "api" in sources
assert "storage" in sources
assert "git" in sources
# [/DEF:test_get_sources:Function]
# [/DEF:test_get_log_sources:Function]
# [DEF:test_delete_logs_for_task:Function]
# [DEF:test_delete_logs_by_task_id:Function]
# @PURPOSE: Test deleting logs by task ID.
# @PRE: Service and session initialized, logs exist.
# @POST: Logs for the task are deleted.
def test_delete_logs_for_task(self):
def test_delete_logs_by_task_id(self):
"""Test deleting logs by task ID."""
# Add test logs
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="test", message=f"Message {i}")
LogEntry(
task_id="test-task-9",
timestamp=datetime.now(),
level="INFO",
source="test",
message=f"Message {i}"
)
for i in range(3)
]
with self._patched("add_logs"):
self.service.add_logs("test-task-9", entries)
self.service.add_logs(entries)
# Verify logs exist
with self._patched("get_logs"):
logs_before = self.service.get_logs("test-task-9", LogFilter())
logs_before = self.service.get_logs("test-task-9")
assert len(logs_before) == 3
# Delete logs
with self._patched("delete_logs_for_task"):
self.service.delete_logs_for_task("test-task-9")
self.service.delete_logs("test-task-9")
# Verify logs are deleted
with self._patched("get_logs"):
logs_after = self.service.get_logs("test-task-9", LogFilter())
logs_after = self.service.get_logs("test-task-9")
assert len(logs_after) == 0
# [/DEF:test_delete_logs_for_task:Function]
# [DEF:test_delete_logs_for_tasks:Function]
# @PURPOSE: Test deleting logs for multiple tasks.
# @PRE: Service and session initialized, logs exist.
# @POST: Logs for all specified tasks are deleted.
def test_delete_logs_for_tasks(self):
"""Test deleting logs for multiple tasks at once."""
for task_id in ["multi-1", "multi-2", "multi-3"]:
entries = [
LogEntry(timestamp=datetime.utcnow(), level="INFO", source="test", message="msg")
]
with self._patched("add_logs"):
self.service.add_logs(task_id, entries)
with self._patched("delete_logs_for_tasks"):
self.service.delete_logs_for_tasks(["multi-1", "multi-2"])
from src.models.task import TaskLogRecord
session = self.TestSessionLocal()
remaining = session.query(TaskLogRecord).all()
session.close()
assert len(remaining) == 1
assert remaining[0].task_id == "multi-3"
# [/DEF:test_delete_logs_for_tasks:Function]
# [DEF:test_delete_logs_for_tasks_empty:Function]
# @PURPOSE: Test deleting with empty list (no-op).
# @PRE: Service initialized.
# @POST: No error, no deletion.
def test_delete_logs_for_tasks_empty(self):
"""Test deleting with empty list is a no-op."""
with self._patched("delete_logs_for_tasks"):
self.service.delete_logs_for_tasks([]) # Should not raise
# [/DEF:test_delete_logs_for_tasks_empty:Function]
# [/DEF:test_delete_logs_by_task_id:Function]
# [/DEF:TestLogPersistence:Class]
# [/DEF:test_log_persistence:Module]
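Taken together, the changes above move TaskLogPersistenceService from a task-scoped, LogFilter-based surface to entry-scoped calls with plain keyword filters. A side-by-side sketch, with signatures inferred only from the tests in this diff:

# Old surface (removed tests): task_id passed separately, filters via LogFilter.
service.add_logs("task-1", [entry])
logs = service.get_logs("task-1", LogFilter(level="WARNING", limit=10, offset=0))

# New surface (added tests): each LogEntry carries its own task_id.
service.add_log(entry)
service.add_logs(entries)
logs = service.get_logs("task-1", level="WARNING", limit=10, offset=0)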

View File

@@ -1,495 +0,0 @@
# [DEF:test_task_manager:Module]
# @TIER: CRITICAL
# @SEMANTICS: task-manager, lifecycle, CRUD, log-buffer, filtering, tests
# @PURPOSE: Unit tests for TaskManager lifecycle, CRUD, log buffering, and filtering.
# @LAYER: Core
# @RELATION: TESTS -> backend.src.core.task_manager.manager.TaskManager
# @INVARIANT: TaskManager state changes are deterministic and testable with mocked dependencies.
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent.parent / "src"))
import pytest
import asyncio
from unittest.mock import MagicMock, patch, AsyncMock
from datetime import datetime
# Helper to create a TaskManager with mocked dependencies
def _make_manager():
"""Create TaskManager with mocked plugin_loader and persistence services."""
mock_plugin_loader = MagicMock()
mock_plugin_loader.has_plugin.return_value = True
mock_plugin = MagicMock()
mock_plugin.name = "test_plugin"
mock_plugin.execute = MagicMock(return_value={"status": "ok"})
mock_plugin_loader.get_plugin.return_value = mock_plugin
with patch("src.core.task_manager.manager.TaskPersistenceService") as MockPersistence, \
patch("src.core.task_manager.manager.TaskLogPersistenceService") as MockLogPersistence:
MockPersistence.return_value.load_tasks.return_value = []
MockLogPersistence.return_value.add_logs = MagicMock()
MockLogPersistence.return_value.get_logs = MagicMock(return_value=[])
MockLogPersistence.return_value.get_log_stats = MagicMock()
MockLogPersistence.return_value.get_sources = MagicMock(return_value=[])
MockLogPersistence.return_value.delete_logs_for_tasks = MagicMock()
manager = None
try:
from src.core.task_manager.manager import TaskManager
manager = TaskManager(mock_plugin_loader)
except RuntimeError:
# No event loop — create one
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
from src.core.task_manager.manager import TaskManager
manager = TaskManager(mock_plugin_loader)
return manager, mock_plugin_loader, MockPersistence.return_value, MockLogPersistence.return_value
def _cleanup_manager(manager):
"""Stop the flusher thread."""
manager._flusher_stop_event.set()
manager._flusher_thread.join(timeout=2)
class TestTaskManagerInit:
"""Tests for TaskManager initialization."""
def test_init_creates_empty_tasks(self):
mgr, _, _, _ = _make_manager()
try:
assert isinstance(mgr.tasks, dict)
finally:
_cleanup_manager(mgr)
def test_init_loads_persisted_tasks(self):
mgr, _, persist_svc, _ = _make_manager()
try:
persist_svc.load_tasks.assert_called_once_with(limit=100)
finally:
_cleanup_manager(mgr)
def test_init_starts_flusher_thread(self):
mgr, _, _, _ = _make_manager()
try:
assert mgr._flusher_thread.is_alive()
finally:
_cleanup_manager(mgr)
class TestTaskManagerCRUD:
"""Tests for TaskManager task retrieval methods."""
def test_get_task_returns_none_for_missing(self):
mgr, _, _, _ = _make_manager()
try:
assert mgr.get_task("nonexistent") is None
finally:
_cleanup_manager(mgr)
def test_get_task_returns_existing(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task
task = Task(plugin_id="test", params={})
mgr.tasks[task.id] = task
assert mgr.get_task(task.id) is task
finally:
_cleanup_manager(mgr)
def test_get_all_tasks(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task
t1 = Task(plugin_id="p1", params={})
t2 = Task(plugin_id="p2", params={})
mgr.tasks[t1.id] = t1
mgr.tasks[t2.id] = t2
assert len(mgr.get_all_tasks()) == 2
finally:
_cleanup_manager(mgr)
def test_get_tasks_with_status_filter(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
t1 = Task(plugin_id="p1", params={})
t1.status = TaskStatus.SUCCESS
t1.started_at = datetime(2024, 1, 1, 12, 0, 0)
t2 = Task(plugin_id="p2", params={})
t2.status = TaskStatus.FAILED
t2.started_at = datetime(2024, 1, 1, 13, 0, 0)
mgr.tasks[t1.id] = t1
mgr.tasks[t2.id] = t2
result = mgr.get_tasks(status=TaskStatus.SUCCESS)
assert len(result) == 1
assert result[0].status == TaskStatus.SUCCESS
finally:
_cleanup_manager(mgr)
def test_get_tasks_with_plugin_filter(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task
t1 = Task(plugin_id="backup", params={})
t1.started_at = datetime(2024, 1, 1, 12, 0, 0)
t2 = Task(plugin_id="migrate", params={})
t2.started_at = datetime(2024, 1, 1, 13, 0, 0)
mgr.tasks[t1.id] = t1
mgr.tasks[t2.id] = t2
result = mgr.get_tasks(plugin_ids=["backup"])
assert len(result) == 1
assert result[0].plugin_id == "backup"
finally:
_cleanup_manager(mgr)
def test_get_tasks_with_pagination(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task
for i in range(5):
t = Task(plugin_id=f"p{i}", params={})
t.started_at = datetime(2024, 1, 1, i, 0, 0)
mgr.tasks[t.id] = t
result = mgr.get_tasks(limit=2, offset=0)
assert len(result) == 2
result2 = mgr.get_tasks(limit=2, offset=4)
assert len(result2) == 1
finally:
_cleanup_manager(mgr)
def test_get_tasks_completed_only(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
t1 = Task(plugin_id="p1", params={})
t1.status = TaskStatus.SUCCESS
t1.started_at = datetime(2024, 1, 1)
t2 = Task(plugin_id="p2", params={})
t2.status = TaskStatus.RUNNING
t2.started_at = datetime(2024, 1, 2)
t3 = Task(plugin_id="p3", params={})
t3.status = TaskStatus.FAILED
t3.started_at = datetime(2024, 1, 3)
mgr.tasks[t1.id] = t1
mgr.tasks[t2.id] = t2
mgr.tasks[t3.id] = t3
result = mgr.get_tasks(completed_only=True)
assert len(result) == 2 # SUCCESS + FAILED
statuses = {t.status for t in result}
assert TaskStatus.RUNNING not in statuses
finally:
_cleanup_manager(mgr)
class TestTaskManagerCreateTask:
"""Tests for TaskManager.create_task."""
@pytest.mark.asyncio
async def test_create_task_success(self):
mgr, loader, persist_svc, _ = _make_manager()
try:
task = await mgr.create_task("test_plugin", {"key": "value"})
assert task.plugin_id == "test_plugin"
assert task.params == {"key": "value"}
assert task.id in mgr.tasks
persist_svc.persist_task.assert_called()
finally:
_cleanup_manager(mgr)
@pytest.mark.asyncio
async def test_create_task_unknown_plugin_raises(self):
mgr, loader, _, _ = _make_manager()
try:
loader.has_plugin.return_value = False
with pytest.raises(ValueError, match="not found"):
await mgr.create_task("unknown_plugin", {})
finally:
_cleanup_manager(mgr)
@pytest.mark.asyncio
async def test_create_task_invalid_params_raises(self):
mgr, _, _, _ = _make_manager()
try:
with pytest.raises(ValueError, match="dictionary"):
await mgr.create_task("test_plugin", "not-a-dict")
finally:
_cleanup_manager(mgr)
class TestTaskManagerLogBuffer:
"""Tests for log buffering and flushing."""
def test_add_log_appends_to_task_and_buffer(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task
task = Task(plugin_id="p1", params={})
mgr.tasks[task.id] = task
mgr._add_log(task.id, "INFO", "Test log message", source="test")
assert len(task.logs) == 1
assert task.logs[0].message == "Test log message"
assert task.id in mgr._log_buffer
assert len(mgr._log_buffer[task.id]) == 1
finally:
_cleanup_manager(mgr)
def test_add_log_skips_nonexistent_task(self):
mgr, _, _, _ = _make_manager()
try:
mgr._add_log("nonexistent", "INFO", "Should not crash")
# No error raised
finally:
_cleanup_manager(mgr)
def test_flush_logs_writes_to_persistence(self):
mgr, _, _, log_persist = _make_manager()
try:
from src.core.task_manager.models import Task
task = Task(plugin_id="p1", params={})
mgr.tasks[task.id] = task
mgr._add_log(task.id, "INFO", "Log 1", source="test")
mgr._add_log(task.id, "INFO", "Log 2", source="test")
mgr._flush_logs()
log_persist.add_logs.assert_called_once()
args = log_persist.add_logs.call_args
assert args[0][0] == task.id # task_id
assert len(args[0][1]) == 2 # 2 log entries
finally:
_cleanup_manager(mgr)
def test_flush_task_logs_writes_single_task(self):
mgr, _, _, log_persist = _make_manager()
try:
from src.core.task_manager.models import Task
task = Task(plugin_id="p1", params={})
mgr.tasks[task.id] = task
mgr._add_log(task.id, "INFO", "Log 1", source="test")
mgr._flush_task_logs(task.id)
log_persist.add_logs.assert_called_once()
# Buffer should be empty now
assert task.id not in mgr._log_buffer
finally:
_cleanup_manager(mgr)
def test_flush_logs_requeues_on_failure(self):
mgr, _, _, log_persist = _make_manager()
try:
from src.core.task_manager.models import Task
task = Task(plugin_id="p1", params={})
mgr.tasks[task.id] = task
mgr._add_log(task.id, "INFO", "Log 1", source="test")
log_persist.add_logs.side_effect = Exception("DB error")
mgr._flush_logs()
# Logs should be re-added to buffer
assert task.id in mgr._log_buffer
assert len(mgr._log_buffer[task.id]) == 1
finally:
_cleanup_manager(mgr)
class TestTaskManagerClearTasks:
"""Tests for TaskManager.clear_tasks."""
def test_clear_all_non_active(self):
mgr, _, persist_svc, log_persist = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
t1 = Task(plugin_id="p1", params={})
t1.status = TaskStatus.SUCCESS
t2 = Task(plugin_id="p2", params={})
t2.status = TaskStatus.RUNNING
t3 = Task(plugin_id="p3", params={})
t3.status = TaskStatus.FAILED
mgr.tasks[t1.id] = t1
mgr.tasks[t2.id] = t2
mgr.tasks[t3.id] = t3
removed = mgr.clear_tasks()
assert removed == 2 # SUCCESS + FAILED
assert t2.id in mgr.tasks # RUNNING kept
assert t1.id not in mgr.tasks
persist_svc.delete_tasks.assert_called_once()
log_persist.delete_logs_for_tasks.assert_called_once()
finally:
_cleanup_manager(mgr)
def test_clear_by_status(self):
mgr, _, persist_svc, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
t1 = Task(plugin_id="p1", params={})
t1.status = TaskStatus.SUCCESS
t2 = Task(plugin_id="p2", params={})
t2.status = TaskStatus.FAILED
mgr.tasks[t1.id] = t1
mgr.tasks[t2.id] = t2
removed = mgr.clear_tasks(status=TaskStatus.FAILED)
assert removed == 1
assert t1.id in mgr.tasks
assert t2.id not in mgr.tasks
finally:
_cleanup_manager(mgr)
def test_clear_preserves_awaiting_input(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
t1 = Task(plugin_id="p1", params={})
t1.status = TaskStatus.AWAITING_INPUT
mgr.tasks[t1.id] = t1
removed = mgr.clear_tasks()
assert removed == 0
assert t1.id in mgr.tasks
finally:
_cleanup_manager(mgr)
class TestTaskManagerSubscriptions:
"""Tests for log subscription management."""
@pytest.mark.asyncio
async def test_subscribe_creates_queue(self):
mgr, _, _, _ = _make_manager()
try:
queue = await mgr.subscribe_logs("task-1")
assert isinstance(queue, asyncio.Queue)
assert "task-1" in mgr.subscribers
assert queue in mgr.subscribers["task-1"]
finally:
_cleanup_manager(mgr)
@pytest.mark.asyncio
async def test_unsubscribe_removes_queue(self):
mgr, _, _, _ = _make_manager()
try:
queue = await mgr.subscribe_logs("task-1")
mgr.unsubscribe_logs("task-1", queue)
assert "task-1" not in mgr.subscribers
finally:
_cleanup_manager(mgr)
@pytest.mark.asyncio
async def test_multiple_subscribers(self):
mgr, _, _, _ = _make_manager()
try:
q1 = await mgr.subscribe_logs("task-1")
q2 = await mgr.subscribe_logs("task-1")
assert len(mgr.subscribers["task-1"]) == 2
mgr.unsubscribe_logs("task-1", q1)
assert len(mgr.subscribers["task-1"]) == 1
finally:
_cleanup_manager(mgr)
class TestTaskManagerInput:
"""Tests for await_input and resume_task_with_password."""
def test_await_input_sets_status(self):
mgr, _, persist_svc, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
task = Task(plugin_id="p1", params={})
task.status = TaskStatus.RUNNING
mgr.tasks[task.id] = task
# NOTE: the source code has a bug where await_input passes a dict as the
# 4th positional arg (source) of _add_log, which triggers a Pydantic
# ValidationError. We patch _add_log here to test the state transition.
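# (Presumably the offending call is shaped like
#  self._add_log(task.id, "INFO", msg, input_request), which puts the dict
#  into LogEntry's str-typed `source` field; this is inferred from the
#  error described above, not verified against the source.)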
mgr._add_log = MagicMock()
mgr.await_input(task.id, {"prompt": "Enter password"})
assert task.status == TaskStatus.AWAITING_INPUT
assert task.input_required is True
assert task.input_request == {"prompt": "Enter password"}
persist_svc.persist_task.assert_called()
finally:
_cleanup_manager(mgr)
def test_await_input_not_running_raises(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
task = Task(plugin_id="p1", params={})
task.status = TaskStatus.PENDING
mgr.tasks[task.id] = task
with pytest.raises(ValueError, match="not RUNNING"):
mgr.await_input(task.id, {})
finally:
_cleanup_manager(mgr)
def test_await_input_nonexistent_raises(self):
mgr, _, _, _ = _make_manager()
try:
with pytest.raises(ValueError, match="not found"):
mgr.await_input("nonexistent", {})
finally:
_cleanup_manager(mgr)
def test_resume_with_password(self):
mgr, _, persist_svc, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
task = Task(plugin_id="p1", params={})
task.status = TaskStatus.AWAITING_INPUT
mgr.tasks[task.id] = task
# NOTE: the source code has the same _add_log positional-arg bug in resume.
mgr._add_log = MagicMock()
mgr.resume_task_with_password(task.id, {"db1": "pass123"})
assert task.status == TaskStatus.RUNNING
assert task.params["passwords"] == {"db1": "pass123"}
assert task.input_required is False
assert task.input_request is None
finally:
_cleanup_manager(mgr)
def test_resume_not_awaiting_raises(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
task = Task(plugin_id="p1", params={})
task.status = TaskStatus.RUNNING
mgr.tasks[task.id] = task
with pytest.raises(ValueError, match="not AWAITING_INPUT"):
mgr.resume_task_with_password(task.id, {"db": "pass"})
finally:
_cleanup_manager(mgr)
def test_resume_empty_passwords_raises(self):
mgr, _, _, _ = _make_manager()
try:
from src.core.task_manager.models import Task, TaskStatus
task = Task(plugin_id="p1", params={})
task.status = TaskStatus.AWAITING_INPUT
mgr.tasks[task.id] = task
with pytest.raises(ValueError, match="non-empty"):
mgr.resume_task_with_password(task.id, {})
finally:
_cleanup_manager(mgr)
# [/DEF:test_task_manager:Module]

View File

@@ -1,406 +0,0 @@
# [DEF:test_task_persistence:Module]
# @SEMANTICS: test, task, persistence, unit_test
# @PURPOSE: Unit tests for TaskPersistenceService.
# @LAYER: Test
# @RELATION: TESTS -> TaskPersistenceService
# @TIER: CRITICAL
# @TEST_DATA: valid_task -> {"id": "test-uuid-1", "plugin_id": "backup", "status": "PENDING"}
# [SECTION: IMPORTS]
from datetime import datetime, timedelta
from unittest.mock import patch
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from src.models.mapping import Base
from src.models.task import TaskRecord
from src.core.task_manager.persistence import TaskPersistenceService
from src.core.task_manager.models import Task, TaskStatus, LogEntry
# [/SECTION]
# [DEF:TestTaskPersistenceHelpers:Class]
# @PURPOSE: Test suite for TaskPersistenceService static helper methods.
# @TIER: CRITICAL
class TestTaskPersistenceHelpers:
# [DEF:test_json_load_if_needed_none:Function]
# @PURPOSE: Test _json_load_if_needed with None input.
def test_json_load_if_needed_none(self):
assert TaskPersistenceService._json_load_if_needed(None) is None
# [/DEF:test_json_load_if_needed_none:Function]
# [DEF:test_json_load_if_needed_dict:Function]
# @PURPOSE: Test _json_load_if_needed with dict input.
def test_json_load_if_needed_dict(self):
data = {"key": "value"}
assert TaskPersistenceService._json_load_if_needed(data) == data
# [/DEF:test_json_load_if_needed_dict:Function]
# [DEF:test_json_load_if_needed_list:Function]
# @PURPOSE: Test _json_load_if_needed with list input.
def test_json_load_if_needed_list(self):
data = [1, 2, 3]
assert TaskPersistenceService._json_load_if_needed(data) == data
# [/DEF:test_json_load_if_needed_list:Function]
# [DEF:test_json_load_if_needed_json_string:Function]
# @PURPOSE: Test _json_load_if_needed with JSON string.
def test_json_load_if_needed_json_string(self):
result = TaskPersistenceService._json_load_if_needed('{"key": "value"}')
assert result == {"key": "value"}
# [/DEF:test_json_load_if_needed_json_string:Function]
# [DEF:test_json_load_if_needed_empty_string:Function]
# @PURPOSE: Test _json_load_if_needed with empty/null strings.
def test_json_load_if_needed_empty_string(self):
assert TaskPersistenceService._json_load_if_needed("") is None
assert TaskPersistenceService._json_load_if_needed("null") is None
assert TaskPersistenceService._json_load_if_needed(" null ") is None
# [/DEF:test_json_load_if_needed_empty_string:Function]
# [DEF:test_json_load_if_needed_plain_string:Function]
# @PURPOSE: Test _json_load_if_needed with non-JSON string.
def test_json_load_if_needed_plain_string(self):
result = TaskPersistenceService._json_load_if_needed("not json")
assert result == "not json"
# [/DEF:test_json_load_if_needed_plain_string:Function]
# [DEF:test_json_load_if_needed_integer:Function]
# @PURPOSE: Test _json_load_if_needed with integer.
def test_json_load_if_needed_integer(self):
assert TaskPersistenceService._json_load_if_needed(42) == 42
# [/DEF:test_json_load_if_needed_integer:Function]
# [DEF:test_parse_datetime_none:Function]
# @PURPOSE: Test _parse_datetime with None.
def test_parse_datetime_none(self):
assert TaskPersistenceService._parse_datetime(None) is None
# [/DEF:test_parse_datetime_none:Function]
# [DEF:test_parse_datetime_datetime_object:Function]
# @PURPOSE: Test _parse_datetime with datetime object.
def test_parse_datetime_datetime_object(self):
dt = datetime(2024, 1, 1, 12, 0, 0)
assert TaskPersistenceService._parse_datetime(dt) == dt
# [/DEF:test_parse_datetime_datetime_object:Function]
# [DEF:test_parse_datetime_iso_string:Function]
# @PURPOSE: Test _parse_datetime with ISO string.
def test_parse_datetime_iso_string(self):
result = TaskPersistenceService._parse_datetime("2024-01-01T12:00:00")
assert isinstance(result, datetime)
assert result.year == 2024
# [/DEF:test_parse_datetime_iso_string:Function]
# [DEF:test_parse_datetime_invalid_string:Function]
# @PURPOSE: Test _parse_datetime with invalid string.
def test_parse_datetime_invalid_string(self):
assert TaskPersistenceService._parse_datetime("not-a-date") is None
# [/DEF:test_parse_datetime_invalid_string:Function]
# [DEF:test_parse_datetime_integer:Function]
# @PURPOSE: Test _parse_datetime with non-string, non-datetime.
def test_parse_datetime_integer(self):
assert TaskPersistenceService._parse_datetime(12345) is None
# [/DEF:test_parse_datetime_integer:Function]
# [/DEF:TestTaskPersistenceHelpers:Class]
# [DEF:TestTaskPersistenceService:Class]
# @PURPOSE: Test suite for TaskPersistenceService CRUD operations.
# @TIER: CRITICAL
# @TEST_DATA: valid_task -> {"id": "test-uuid-1", "plugin_id": "backup", "status": "PENDING"}
class TestTaskPersistenceService:
# [DEF:setup_class:Function]
# @PURPOSE: Setup in-memory test database.
@classmethod
def setup_class(cls):
"""Create an in-memory SQLite database for testing."""
cls.engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(bind=cls.engine)
cls.TestSessionLocal = sessionmaker(bind=cls.engine)
cls.service = TaskPersistenceService()
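# Each test routes the service's DB sessions to this in-memory engine
# via the _patched() context helper defined below.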
# [/DEF:setup_class:Function]
# [DEF:teardown_class:Function]
# @PURPOSE: Dispose of test database.
@classmethod
def teardown_class(cls):
cls.engine.dispose()
# [/DEF:teardown_class:Function]
# [DEF:setup_method:Function]
# @PURPOSE: Clean task_records table before each test.
def setup_method(self):
session = self.TestSessionLocal()
session.query(TaskRecord).delete()
session.commit()
session.close()
# [/DEF:setup_method:Function]
def _patched(self):
"""Helper: returns a patch context for TasksSessionLocal."""
return patch(
"src.core.task_manager.persistence.TasksSessionLocal",
self.TestSessionLocal
)
def _make_task(self, **kwargs):
"""Helper: create a Task with test defaults."""
defaults = {
"id": "test-uuid-1",
"plugin_id": "backup",
"status": TaskStatus.PENDING,
"params": {"source_env_id": "env-1"},
}
defaults.update(kwargs)
return Task(**defaults)
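# (kwargs override the defaults, so individual tests can tweak id/status)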
# [DEF:test_persist_task_new:Function]
# @PURPOSE: Test persisting a new task creates a record.
# @PRE: Empty database.
# @POST: TaskRecord exists in database.
def test_persist_task_new(self):
"""Test persisting a new task creates a record."""
task = self._make_task()
with self._patched():
self.service.persist_task(task)
session = self.TestSessionLocal()
record = session.query(TaskRecord).filter_by(id="test-uuid-1").first()
session.close()
assert record is not None
assert record.type == "backup"
assert record.status == "PENDING"
# [/DEF:test_persist_task_new:Function]
# [DEF:test_persist_task_update:Function]
# @PURPOSE: Test updating an existing task.
# @PRE: Task already persisted.
# @POST: Task record updated with new status.
def test_persist_task_update(self):
"""Test persisting an existing task updates the record."""
task = self._make_task(status=TaskStatus.PENDING)
with self._patched():
self.service.persist_task(task)
# Update status
task.status = TaskStatus.RUNNING
task.started_at = datetime.utcnow()
with self._patched():
self.service.persist_task(task)
session = self.TestSessionLocal()
record = session.query(TaskRecord).filter_by(id="test-uuid-1").first()
session.close()
assert record.status == "RUNNING"
assert record.started_at is not None
# [/DEF:test_persist_task_update:Function]
# [DEF:test_persist_task_with_logs:Function]
# @PURPOSE: Test persisting a task with log entries.
# @PRE: Task has logs attached.
# @POST: Logs serialized as JSON in task record.
def test_persist_task_with_logs(self):
"""Test persisting a task with log entries."""
task = self._make_task()
task.logs = [
LogEntry(message="Step 1", level="INFO", source="plugin"),
LogEntry(message="Step 2", level="INFO", source="plugin"),
]
with self._patched():
self.service.persist_task(task)
session = self.TestSessionLocal()
record = session.query(TaskRecord).filter_by(id="test-uuid-1").first()
session.close()
assert record.logs is not None
assert len(record.logs) == 2
# [/DEF:test_persist_task_with_logs:Function]
# [DEF:test_persist_task_failed_extracts_error:Function]
# @PURPOSE: Test that FAILED task extracts last error message.
# @PRE: Task has FAILED status with ERROR logs.
# @POST: record.error contains last error message.
def test_persist_task_failed_extracts_error(self):
"""Test that FAILED tasks extract the last error message."""
task = self._make_task(status=TaskStatus.FAILED)
task.logs = [
LogEntry(message="Started OK", level="INFO", source="plugin"),
LogEntry(message="Connection failed", level="ERROR", source="plugin"),
LogEntry(message="Retrying...", level="INFO", source="plugin"),
LogEntry(message="Fatal: timeout", level="ERROR", source="plugin"),
]
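# "Fatal: timeout" is the last ERROR-level entry, so per @POST it should
# win over the earlier "Connection failed" when the error is extracted.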
with self._patched():
self.service.persist_task(task)
session = self.TestSessionLocal()
record = session.query(TaskRecord).filter_by(id="test-uuid-1").first()
session.close()
assert record.error == "Fatal: timeout"
# [/DEF:test_persist_task_failed_extracts_error:Function]
# [DEF:test_persist_tasks_batch:Function]
# @PURPOSE: Test persisting multiple tasks.
# @PRE: Empty database.
# @POST: All task records created.
def test_persist_tasks_batch(self):
"""Test persisting multiple tasks at once."""
tasks = [
self._make_task(id=f"batch-{i}", plugin_id="migration")
for i in range(3)
]
with self._patched():
self.service.persist_tasks(tasks)
session = self.TestSessionLocal()
count = session.query(TaskRecord).count()
session.close()
assert count == 3
# [/DEF:test_persist_tasks_batch:Function]
# [DEF:test_load_tasks:Function]
# @PURPOSE: Test loading tasks from database.
# @PRE: Tasks persisted.
# @POST: Returns list of Task objects with correct data.
def test_load_tasks(self):
"""Test loading tasks from database (round-trip)."""
task = self._make_task(
status=TaskStatus.SUCCESS,
started_at=datetime.utcnow(),
finished_at=datetime.utcnow(),
)
task.params = {"key": "value"}
task.result = {"output": "done"}
task.logs = [LogEntry(message="Done", level="INFO", source="plugin")]
with self._patched():
self.service.persist_task(task)
with self._patched():
loaded = self.service.load_tasks(limit=10)
assert len(loaded) == 1
assert loaded[0].id == "test-uuid-1"
assert loaded[0].plugin_id == "backup"
assert loaded[0].status == TaskStatus.SUCCESS
assert loaded[0].params == {"key": "value"}
# [/DEF:test_load_tasks:Function]
# [DEF:test_load_tasks_with_status_filter:Function]
# @PURPOSE: Test loading tasks filtered by status.
# @PRE: Tasks with different statuses persisted.
# @POST: Returns only tasks matching status filter.
def test_load_tasks_with_status_filter(self):
"""Test loading tasks filtered by status."""
tasks = [
self._make_task(id="s1", status=TaskStatus.SUCCESS),
self._make_task(id="s2", status=TaskStatus.FAILED),
self._make_task(id="s3", status=TaskStatus.SUCCESS),
]
with self._patched():
self.service.persist_tasks(tasks)
with self._patched():
failed_tasks = self.service.load_tasks(status=TaskStatus.FAILED)
assert len(failed_tasks) == 1
assert failed_tasks[0].id == "s2"
assert failed_tasks[0].status == TaskStatus.FAILED
# [/DEF:test_load_tasks_with_status_filter:Function]
# [DEF:test_load_tasks_with_limit:Function]
# @PURPOSE: Test loading tasks with limit.
# @PRE: Multiple tasks persisted.
# @POST: Returns at most `limit` tasks.
def test_load_tasks_with_limit(self):
"""Test loading tasks respects limit parameter."""
tasks = [
self._make_task(id=f"lim-{i}")
for i in range(10)
]
with self._patched():
self.service.persist_tasks(tasks)
with self._patched():
loaded = self.service.load_tasks(limit=3)
assert len(loaded) == 3
# [/DEF:test_load_tasks_with_limit:Function]
# [DEF:test_delete_tasks:Function]
# @PURPOSE: Test deleting tasks by ID list.
# @PRE: Tasks persisted.
# @POST: Specified tasks deleted, others remain.
def test_delete_tasks(self):
"""Test deleting tasks by ID list."""
tasks = [
self._make_task(id="del-1"),
self._make_task(id="del-2"),
self._make_task(id="keep-1"),
]
with self._patched():
self.service.persist_tasks(tasks)
with self._patched():
self.service.delete_tasks(["del-1", "del-2"])
session = self.TestSessionLocal()
remaining = session.query(TaskRecord).all()
session.close()
assert len(remaining) == 1
assert remaining[0].id == "keep-1"
# [/DEF:test_delete_tasks:Function]
# [DEF:test_delete_tasks_empty_list:Function]
# @PURPOSE: Test deleting with empty list (no-op).
# @PRE: None.
# @POST: No error, no changes.
def test_delete_tasks_empty_list(self):
"""Test deleting with empty list is a no-op."""
with self._patched():
self.service.delete_tasks([]) # Should not raise
# [/DEF:test_delete_tasks_empty_list:Function]
# [DEF:test_persist_task_with_datetime_in_params:Function]
# @PURPOSE: Test json_serializable handles datetime in params.
# @PRE: Task params contain datetime values.
# @POST: Params serialized correctly.
def test_persist_task_with_datetime_in_params(self):
"""Test that datetime values in params are serialized to ISO format."""
dt = datetime(2024, 6, 15, 10, 30, 0)
task = self._make_task(params={"timestamp": dt, "name": "test"})
with self._patched():
self.service.persist_task(task)
session = self.TestSessionLocal()
record = session.query(TaskRecord).filter_by(id="test-uuid-1").first()
session.close()
assert record.params is not None
assert record.params["timestamp"] == "2024-06-15T10:30:00"
assert record.params["name"] == "test"
# [/DEF:test_persist_task_with_datetime_in_params:Function]
# [/DEF:TestTaskPersistenceService:Class]
# [/DEF:test_task_persistence:Module]

View File

@@ -17,11 +17,11 @@ services:
timeout: 5s
retries: 10
backend:
app:
build:
context: .
dockerfile: docker/backend.Dockerfile
container_name: ss_tools_backend
dockerfile: Dockerfile
container_name: ss_tools_app
restart: unless-stopped
depends_on:
db:
@@ -33,22 +33,11 @@ services:
AUTH_DATABASE_URL: postgresql+psycopg2://postgres:postgres@db:5432/ss_tools
BACKEND_PORT: 8000
ports:
- "${BACKEND_HOST_PORT:-8001}:8000"
- "8000:8000"
volumes:
- ./config.json:/app/config.json
- ./backups:/app/backups
- ./backend/git_repos:/app/backend/git_repos
frontend:
build:
context: .
dockerfile: docker/frontend.Dockerfile
container_name: ss_tools_frontend
restart: unless-stopped
depends_on:
- backend
ports:
- "${FRONTEND_HOST_PORT:-8000}:80"
volumes:
postgres_data:

View File

@@ -1,16 +0,0 @@
FROM node:20-alpine AS build
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build
FROM nginx:1.27-alpine
COPY docker/nginx.conf /etc/nginx/conf.d/default.conf
COPY --from=build /app/frontend/build /usr/share/nginx/html
EXPOSE 80

View File

@@ -1,31 +0,0 @@
server {
listen 80;
server_name _;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
location /api/ {
proxy_pass http://backend:8000/api/;
proxy_http_version 1.1;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
location /ws/ {
proxy_pass http://backend:8000/ws/;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}

View File

@@ -26,18 +26,18 @@
// [/SECTION]
// [SECTION: STATE]
let filterText = $state("");
let currentPage = $state(0);
let pageSize = $state(20);
let sortColumn: keyof DashboardMetadata = $state("title");
let sortDirection: "asc" | "desc" = $state("asc");
let filterText = "";
let currentPage = 0;
let pageSize = 20;
let sortColumn: keyof DashboardMetadata = "title";
let sortDirection: "asc" | "desc" = "asc";
// [/SECTION]
// [SECTION: UI STATE]
let showGitManager = $state(false);
let gitDashboardId: number | null = $state(null);
let gitDashboardTitle = $state("");
let validatingIds: Set<number> = $state(new Set());
let showGitManager = false;
let gitDashboardId: number | null = null;
let gitDashboardTitle = "";
let validatingIds: Set<number> = new Set();
// [/SECTION]
// [DEF:handleValidate:Function]
@@ -48,14 +48,33 @@
if (validatingIds.has(dashboard.id)) return;
validatingIds.add(dashboard.id);
validatingIds = new Set(validatingIds);
validatingIds = validatingIds; // Trigger reactivity
try {
// TODO: Get provider_id from settings or prompt the user.
// For now we assume a default provider or let the backend handle it if
// possible, but the plugin requires provider_id.
// A real implementation might open a modal to select a provider when none
// is configured globally, or simply pick the first active one.
// Fetch active provider first
const providers = await api.fetchApi("/llm/providers");
const activeProvider = providers.find((p: any) => p.is_active);
if (!activeProvider) {
toast(
"No active LLM provider found. Please configure one in settings.",
"error",
);
return;
}
await api.postApi("/tasks", {
plugin_id: "llm_dashboard_validation",
params: {
dashboard_id: dashboard.id.toString(),
environment_id: environmentId,
provider_id: activeProvider.id,
},
});
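// Creates a background task for the llm_dashboard_validation plugin via
// the generic /tasks endpoint; progress presumably surfaces through the
// usual task status/log views.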
@@ -64,7 +83,7 @@
toast(e.message || "Validation failed to start", "error");
} finally {
validatingIds.delete(dashboard.id);
validatingIds = new Set(validatingIds);
validatingIds = validatingIds;
}
}
// [/DEF:handleValidate:Function]
@@ -202,14 +221,14 @@
type="checkbox"
checked={allSelected}
indeterminate={someSelected && !allSelected}
onchange={(e) =>
on:change={(e) =>
handleSelectAll((e.target as HTMLInputElement).checked)}
class="h-4 w-4 text-blue-600 border-gray-300 rounded focus:ring-blue-500"
/>
</th>
<th
class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider cursor-pointer hover:text-gray-700 transition-colors"
onclick={() => handleSort("title")}
on:click={() => handleSort("title")}
>
{$t.dashboard.title}
{sortColumn === "title"
@@ -220,7 +239,7 @@
</th>
<th
class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider cursor-pointer hover:text-gray-700 transition-colors"
onclick={() => handleSort("last_modified")}
on:click={() => handleSort("last_modified")}
>
{$t.dashboard.last_modified}
{sortColumn === "last_modified"
@@ -231,7 +250,7 @@
</th>
<th
class="px-6 py-3 text-left text-xs font-medium text-gray-500 uppercase tracking-wider cursor-pointer hover:text-gray-700 transition-colors"
onclick={() => handleSort("status")}
on:click={() => handleSort("status")}
>
{$t.dashboard.status}
{sortColumn === "status"
@@ -257,7 +276,7 @@
<input
type="checkbox"
checked={selectedIds.includes(dashboard.id)}
onchange={(e) =>
on:change={(e) =>
handleSelectionChange(
dashboard.id,
(e.target as HTMLInputElement).checked,

View File

@@ -11,7 +11,6 @@
<script lang="ts">
// [SECTION: IMPORTS]
import { onMount, createEventDispatcher } from 'svelte';
import { t } from '../lib/i18n';
// [/SECTION]
// [SECTION: PROPS]
@@ -49,7 +48,7 @@
value={selectedId}
on:change={handleSelect}
>
<option value="" disabled>{$t.common?.choose_environment || "-- Choose an environment --"}</option>
<option value="" disabled>-- Choose an environment --</option>
{#each environments as env}
<option value={env.id}>{env.name} ({env.url})</option>
{/each}
@@ -58,4 +57,4 @@
<!-- [/SECTION] -->
<!-- [/DEF:EnvSelector:Component] -->
<!-- [/DEF:EnvSelector:Component] -->

View File

@@ -23,7 +23,7 @@
// [/SECTION]
let selectedTargetUuid = $state("");
let selectedTargetUuid = "";
const dispatch = createEventDispatcher();
// [DEF:resolve:Function]
@@ -94,7 +94,7 @@
<div class="mt-5 sm:mt-6 sm:grid sm:grid-cols-2 sm:gap-3 sm:grid-flow-row-dense">
<button
type="button"
onclick={resolve}
on:click={resolve}
disabled={!selectedTargetUuid}
class="w-full inline-flex justify-center rounded-md border border-transparent shadow-sm px-4 py-2 bg-indigo-600 text-base font-medium text-white hover:bg-indigo-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-indigo-500 sm:col-start-2 sm:text-sm disabled:bg-gray-400"
>
@@ -102,7 +102,7 @@
</button>
<button
type="button"
onclick={cancel}
on:click={cancel}
class="mt-3 w-full inline-flex justify-center rounded-md border border-gray-300 shadow-sm px-4 py-2 bg-white text-base font-medium text-gray-700 hover:bg-gray-50 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-indigo-500 sm:mt-0 sm:col-start-1 sm:text-sm"
>
Cancel Migration

View File

@@ -12,8 +12,6 @@
/**
* @TIER CRITICAL
* @PURPOSE Displays detailed logs for a specific task inline or in a modal using TaskLogPanel.
* @PRE Needs a valid taskId to fetch logs for.
* @POST task logs are displayed and updated in real time.
* @UX_STATE Loading -> Shows spinner/text while fetching initial logs
* @UX_STATE Streaming -> Displays logs with auto-scroll, real-time appending
* @UX_STATE Error -> Shows error message with recovery option
@@ -44,9 +42,6 @@
let shouldShow = $derived(inline || show);
// [DEF:handleRealTimeLogs:Action]
// @PURPOSE: Sync real-time logs to the current log list
// @PRE: None
// @POST: logs are updated with new real-time log entries
$effect(() => {
if (realTimeLogs && realTimeLogs.length > 0) {
const lastLog = realTimeLogs[realTimeLogs.length - 1];
@@ -63,20 +58,11 @@
// [/DEF:handleRealTimeLogs:Action]
// [DEF:fetchLogs:Function]
// @PURPOSE: Fetches logs for a given task ID
// @PRE: taskId is set
// @POST: logs are populated with API response
async function fetchLogs() {
if (!taskId) return;
try {
console.log(`[TaskLogViewer][API][fetchLogs:STARTED] id=${taskId}`);
logs = await getTaskLogs(taskId);
console.log(`[TaskLogViewer][API][fetchLogs:SUCCESS] id=${taskId}`);
} catch (e) {
console.error(
`[TaskLogViewer][API][fetchLogs:FAILED] id=${taskId}`,
e,
);
error = e.message;
} finally {
loading = false;
@@ -84,25 +70,13 @@
}
// [/DEF:fetchLogs:Function]
// [DEF:handleFilterChange:Function]
// @PURPOSE: Updates filter conditions for the log viewer
// @PRE: event contains detail with source and level
// @POST: Log viewer filters updated
function handleFilterChange(event) {
console.log("[TaskLogViewer][UI][handleFilterChange:START]");
const { source, level } = event.detail;
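// (source and level appear unused after this cleanup; the filter
//  application logic was removed along with the logging above)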
}
// [/DEF:handleFilterChange:Function]
// [DEF:handleRefresh:Function]
// @PURPOSE: Refreshes the logs by polling the API
// @PRE: None
// @POST: Logs refetched
function handleRefresh() {
console.log("[TaskLogViewer][UI][handleRefresh:START]");
fetchLogs();
}
// [/DEF:handleRefresh:Function]
$effect(() => {
if (shouldShow && taskId) {
@@ -130,11 +104,6 @@
</script>
{#if shouldShow}
<!-- [DEF:showInline:Component] -->
<!-- @PURPOSE: Shows inline logs -->
<!-- @LAYER: UI -->
<!-- @SEMANTICS: logs, inline -->
<!-- @TIER: STANDARD -->
{#if inline}
<div class="flex flex-col h-full w-full">
{#if loading && logs.length === 0}
@@ -167,13 +136,7 @@
/>
{/if}
</div>
<!-- [/DEF:showInline:Component] -->
{:else}
<!-- [DEF:showModal:Component] -->
<!-- @PURPOSE: Shows modal logs -->
<!-- @LAYER: UI -->
<!-- @SEMANTICS: logs, modal -->
<!-- @TIER: STANDARD -->
<div
class="fixed inset-0 z-50 overflow-y-auto"
aria-labelledby="modal-title"
@@ -236,6 +199,5 @@
</div>
{/if}
{/if}
// [/DEF:showModal:Component]
<!-- [/DEF:TaskLogViewer:Component] -->

View File

@@ -1,128 +0,0 @@
// [DEF:frontend.src.components.__tests__.task_log_viewer:Module]
// @TIER: CRITICAL
// @SEMANTICS: tests, task-log, viewer, mount, components
// @PURPOSE: Unit tests for TaskLogViewer component by mounting it and observing the DOM.
// @LAYER: UI (Tests)
// @RELATION: TESTS -> frontend.src.components.TaskLogViewer
// @INVARIANT: Duplicate logs are never appended. Polling only active for in-progress tasks.
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { render, screen, waitFor } from '@testing-library/svelte';
import TaskLogViewer from '../TaskLogViewer.svelte';
import { getTaskLogs } from '../../services/taskService.js';
vi.mock('../../services/taskService.js', () => ({
getTaskLogs: vi.fn()
}));
vi.mock('../../lib/i18n', () => ({
t: {
subscribe: (fn) => {
fn({
tasks: {
loading: 'Loading...'
}
});
return () => { };
}
}
}));
describe('TaskLogViewer Component', () => {
beforeEach(() => {
vi.clearAllMocks();
});
it('renders loading state initially', () => {
getTaskLogs.mockResolvedValue([]);
render(TaskLogViewer, { inline: true, taskId: 'task-123' });
expect(screen.getByText('Loading...')).toBeDefined();
});
it('fetches and displays historical logs', async () => {
getTaskLogs.mockResolvedValue([
{ timestamp: '2024-01-01T00:00:00', level: 'INFO', message: 'Historical log entry' }
]);
render(TaskLogViewer, { inline: true, taskId: 'task-123' });
await waitFor(() => {
expect(screen.getByText(/Historical log entry/)).toBeDefined();
});
expect(getTaskLogs).toHaveBeenCalledWith('task-123');
});
it('displays error message on fetch failure', async () => {
getTaskLogs.mockRejectedValue(new Error('Network error fetching logs'));
render(TaskLogViewer, { inline: true, taskId: 'task-123' });
await waitFor(() => {
expect(screen.getByText('Network error fetching logs')).toBeDefined();
expect(screen.getByText('Retry')).toBeDefined();
});
});
it('appends real-time logs passed as props', async () => {
getTaskLogs.mockResolvedValue([
{ timestamp: '2024-01-01T00:00:00', level: 'INFO', message: 'Historical log entry' }
]);
const { rerender } = render(TaskLogViewer, {
inline: true,
taskId: 'task-123',
realTimeLogs: []
});
await waitFor(() => {
expect(screen.getByText(/Historical log entry/)).toBeDefined();
});
// Simulate receiving a new real-time log
await rerender({
inline: true,
taskId: 'task-123',
realTimeLogs: [
{ timestamp: '2024-01-01T00:00:01', level: 'DEBUG', message: 'Realtime log entry' }
]
});
await waitFor(() => {
expect(screen.getByText(/Realtime log entry/)).toBeDefined();
});
});
it('deduplicates real-time logs that are already in historical logs', async () => {
getTaskLogs.mockResolvedValue([
{ timestamp: '2024-01-01T00:00:00', level: 'INFO', message: 'Duplicate log entry' }
]);
const { rerender } = render(TaskLogViewer, {
inline: true,
taskId: 'task-123',
realTimeLogs: []
});
await waitFor(() => {
expect(screen.getByText(/Duplicate log entry/)).toBeDefined();
});
// Pass the exact same log as realtime
await rerender({
inline: true,
taskId: 'task-123',
realTimeLogs: [
{ timestamp: '2024-01-01T00:00:00', level: 'INFO', message: 'Duplicate log entry' }
]
});
// Wait briefly to ensure no runaway re-renders or duplicate appends occur
await new Promise((r) => setTimeout(r, 50));
// In Testing Library, duplicates would make getAllByText return more than
// one element; getByText throws unless there is exactly *one* match.
expect(() => screen.getByText(/Duplicate log entry/)).not.toThrow();
});
});
// [/DEF:frontend.src.components.__tests__.task_log_viewer:Module]

View File

@@ -27,10 +27,10 @@
// [/SECTION]
// [SECTION: STATE]
let branches = $state([]);
let loading = $state(false);
let showCreate = $state(false);
let newBranchName = $state('');
let branches = [];
let loading = false;
let showCreate = false;
let newBranchName = '';
// [/SECTION]
const dispatch = createEventDispatcher();
@@ -129,7 +129,7 @@
<div class="flex-grow">
<Select
bind:value={currentBranch}
onchange={handleSelect}
on:change={handleSelect}
disabled={loading}
options={branches.map(b => ({ value: b.name, label: b.name }))}
/>
@@ -138,7 +138,7 @@
<Button
variant="ghost"
size="sm"
onclick={() => showCreate = !showCreate}
on:click={() => showCreate = !showCreate}
disabled={loading}
class="text-blue-600"
>
@@ -158,7 +158,7 @@
<Button
variant="primary"
size="sm"
onclick={handleCreate}
on:click={handleCreate}
disabled={loading || !newBranchName}
isLoading={loading}
class="bg-green-600 hover:bg-green-700"
@@ -168,7 +168,7 @@
<Button
variant="ghost"
size="sm"
onclick={() => showCreate = false}
on:click={() => showCreate = false}
disabled={loading}
>
{$t.common.cancel}
@@ -178,4 +178,4 @@
</div>
<!-- [/SECTION] -->
<!-- [/DEF:BranchSelector:Component] -->
<!-- [/DEF:BranchSelector:Component] -->

View File

@@ -23,8 +23,8 @@
// [/SECTION]
// [SECTION: STATE]
let history = $state([]);
let loading = $state(false);
let history = [];
let loading = false;
// [/SECTION]
// [DEF:onMount:Function]
@@ -66,7 +66,7 @@
<h3 class="text-sm font-semibold text-gray-400 uppercase tracking-wider">
{$t.git.history}
</h3>
<Button variant="ghost" size="sm" onclick={loadHistory} class="text-blue-600">
<Button variant="ghost" size="sm" on:click={loadHistory} class="text-blue-600">
{$t.git.refresh}
</Button>
</div>
@@ -95,4 +95,4 @@
</div>
<!-- [/SECTION] -->
<!-- [/DEF:CommitHistory:Component] -->
<!-- [/DEF:CommitHistory:Component] -->

View File

@@ -16,7 +16,6 @@
import { gitService } from "../../services/gitService";
import { addToast as toast } from "../../lib/toasts.js";
import { api } from "../../lib/api";
import { t } from "../../lib/i18n";
// [/SECTION]
// [SECTION: PROPS]
@@ -25,12 +24,12 @@
// [/SECTION]
// [SECTION: STATE]
let message = $state("");
let committing = $state(false);
let status = $state(null);
let diff = $state("");
let loading = $state(false);
let generatingMessage = $state(false);
let message = "";
let committing = false;
let status = null;
let diff = "";
let loading = false;
let generatingMessage = false;
// [/SECTION]
const dispatch = createEventDispatcher();
@@ -50,10 +49,10 @@
`/git/repositories/${dashboardId}/generate-message`,
);
message = data.message;
toast($t.git?.commit_message_generated || "Commit message generated", "success");
toast("Commit message generated", "success");
} catch (e) {
console.error(`[CommitModal][Coherence:Failed] ${e.message}`);
toast(e.message || ($t.git?.commit_message_failed || "Failed to generate message"), "error");
toast(e.message || "Failed to generate message", "error");
} finally {
generatingMessage = false;
}
@@ -94,7 +93,7 @@
if (!diff) diff = "";
} catch (e) {
console.error(`[CommitModal][Coherence:Failed] ${e.message}`);
toast($t.git?.load_changes_failed || "Failed to load changes", "error");
toast("Failed to load changes", "error");
} finally {
loading = false;
}
@@ -115,7 +114,7 @@
committing = true;
try {
await gitService.commit(dashboardId, message, []);
toast($t.git?.commit_success || "Changes committed successfully", "success");
toast("Changes committed successfully", "success");
dispatch("commit");
show = false;
message = "";
@@ -142,7 +141,7 @@
<div
class="bg-white p-6 rounded-lg shadow-xl w-full max-w-4xl max-h-[90vh] flex flex-col"
>
<h2 class="text-xl font-bold mb-4">{$t.git?.commit || "Commit Changes"}</h2>
<h2 class="text-xl font-bold mb-4">Commit Changes</h2>
<div class="flex flex-col md:flex-row gap-4 flex-1 overflow-hidden">
<!-- Left: Message and Files -->
@@ -151,24 +150,24 @@
<div class="flex justify-between items-center mb-1">
<label
class="block text-sm font-medium text-gray-700"
>{$t.git?.commit_message || "Commit Message"}</label
>Commit Message</label
>
<button
onclick={handleGenerateMessage}
on:click={handleGenerateMessage}
disabled={generatingMessage || loading}
class="text-xs text-blue-600 hover:text-blue-800 disabled:opacity-50 flex items-center"
>
{#if generatingMessage}
<span class="animate-spin mr-1"></span> {$t.mapper?.generating || "Generating..."}
<span class="animate-spin mr-1"></span> Generating...
{:else}
<span class="mr-1"></span> {$t.git?.generate_with_ai || "Generate with AI"}
<span class="mr-1"></span> Generate with AI
{/if}
</button>
</div>
<textarea
bind:value={message}
class="w-full border rounded p-2 h-32 focus:ring-2 focus:ring-blue-500 outline-none resize-none"
placeholder={$t.git?.describe_changes || "Describe your changes..."}
placeholder="Describe your changes..."
></textarea>
</div>
@@ -177,7 +176,7 @@
<h3
class="text-sm font-bold text-gray-500 uppercase mb-2"
>
{$t.git?.changed_files || "Changed Files"}
Changed Files
</h3>
<ul class="text-xs space-y-1">
{#each status.staged_files as file}
@@ -219,14 +218,14 @@
<div
class="bg-gray-200 px-3 py-1 text-xs font-bold text-gray-600 border-b"
>
{$t.git?.changes_preview || "Changes Preview"}
Changes Preview
</div>
<div class="flex-1 overflow-auto p-2">
{#if loading}
<div
class="flex items-center justify-center h-full text-gray-500"
>
{$t.git?.loading_diff || "Loading diff..."}
Loading diff...
</div>
{:else if diff}
<pre
@@ -235,7 +234,7 @@
<div
class="flex items-center justify-center h-full text-gray-500 italic"
>
{$t.git?.no_changes || "No changes detected"}
No changes detected
</div>
{/if}
</div>
@@ -244,13 +243,13 @@
<div class="flex justify-end space-x-3 mt-6 pt-4 border-t">
<button
onclick={() => (show = false)}
on:click={() => (show = false)}
class="px-4 py-2 text-gray-600 hover:bg-gray-100 rounded"
>
{$t.common?.cancel || "Cancel"}
Cancel
</button>
<button
onclick={handleCommit}
on:click={handleCommit}
disabled={committing ||
!message ||
loading ||
@@ -258,7 +257,7 @@
status?.staged_files?.length === 0)}
class="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 disabled:opacity-50"
>
{committing ? ($t.git?.committing || "Committing...") : ($t.git?.commit || "Commit")}
{committing ? "Committing..." : "Commit"}
</button>
</div>
</div>

View File

@@ -26,7 +26,7 @@
// [SECTION: STATE]
const dispatch = createEventDispatcher();
/** @type {Object.<string, 'mine' | 'theirs' | 'manual'>} */
let resolutions = $state({});
let resolutions = {};
// [/SECTION]
// [DEF:resolve:Function]
@@ -126,7 +126,7 @@
] === 'mine'
? 'bg-blue-600 text-white'
: 'bg-gray-50 hover:bg-blue-50 text-blue-600'}"
onclick={() =>
on:click={() =>
resolve(conflict.file_path, "mine")}
>
Keep Mine
@@ -148,7 +148,7 @@
] === 'theirs'
? 'bg-green-600 text-white'
: 'bg-gray-50 hover:bg-green-50 text-green-600'}"
onclick={() =>
on:click={() =>
resolve(conflict.file_path, "theirs")}
>
Keep Theirs
@@ -161,13 +161,13 @@
<div class="flex justify-end space-x-3 pt-4 border-t">
<button
onclick={() => (show = false)}
on:click={() => (show = false)}
class="px-4 py-2 text-gray-600 hover:bg-gray-100 rounded transition-colors"
>
Cancel
</button>
<button
onclick={handleSave}
on:click={handleSave}
class="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 transition-colors shadow-sm"
>
Resolve & Continue

View File

@@ -14,7 +14,6 @@
import { onMount, createEventDispatcher } from "svelte";
import { gitService } from "../../services/gitService";
import { addToast as toast } from "../../lib/toasts.js";
import { t } from "../../lib/i18n";
// [/SECTION]
// [SECTION: PROPS]
@@ -23,10 +22,10 @@
// [/SECTION]
// [SECTION: STATE]
let environments = $state([]);
let selectedEnv = $state("");
let loading = $state(false);
let deploying = $state(false);
let environments = [];
let selectedEnv = "";
let loading = false;
let deploying = false;
// [/SECTION]
const dispatch = createEventDispatcher();
@@ -56,7 +55,7 @@
);
} catch (e) {
console.error(`[DeploymentModal][Coherence:Failed] ${e.message}`);
toast($t.migration?.loading_envs_failed || "Failed to load environments", "error");
toast("Failed to load environments", "error");
} finally {
loading = false;
}
@@ -77,7 +76,7 @@
try {
const result = await gitService.deploy(dashboardId, selectedEnv);
toast(
result.message || ($t.git?.deploy_success || "Deployment triggered successfully"),
result.message || "Deployment triggered successfully",
"success",
);
dispatch("deploy");
@@ -99,26 +98,26 @@
class="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50"
>
<div class="bg-white p-6 rounded-lg shadow-xl w-96">
<h2 class="text-xl font-bold mb-4">{$t.git?.deploy || "Deploy Dashboard"}</h2>
<h2 class="text-xl font-bold mb-4">Deploy Dashboard</h2>
{#if loading}
<p class="text-gray-500">{$t.migration?.loading_envs || "Loading environments..."}</p>
<p class="text-gray-500">Loading environments...</p>
{:else if environments.length === 0}
<p class="text-red-500 mb-4">
{$t.git?.no_deploy_envs || "No deployment environments configured."}
No deployment environments configured.
</p>
<div class="flex justify-end">
<button
onclick={() => (show = false)}
on:click={() => (show = false)}
class="px-4 py-2 bg-gray-200 text-gray-800 rounded hover:bg-gray-300"
>
{$t.common?.close || "Close"}
Close
</button>
</div>
{:else}
<div class="mb-6">
<label class="block text-sm font-medium text-gray-700 mb-2"
>{$t.migration?.target_env || "Select Target Environment"}</label
>Select Target Environment</label
>
<select
bind:value={selectedEnv}
@@ -134,13 +133,13 @@
<div class="flex justify-end space-x-3">
<button
onclick={() => (show = false)}
on:click={() => (show = false)}
class="px-4 py-2 text-gray-600 hover:bg-gray-100 rounded"
>
{$t.common?.cancel || "Cancel"}
Cancel
</button>
<button
onclick={handleDeploy}
on:click={handleDeploy}
disabled={deploying || !selectedEnv}
class="px-4 py-2 bg-green-600 text-white rounded hover:bg-green-700 disabled:opacity-50 flex items-center"
>
@@ -165,9 +164,9 @@
d="M4 12a8 8 0 018-8V0C5.373 0 0 5.373 0 12h4zm2 5.291A7.962 7.962 0 014 12H0c0 3.042 1.135 5.824 3 7.938l3-2.647z"
></path>
</svg>
{$t.git?.deploying || "Deploying..."}
Deploying...
{:else}
{$t.git?.deploy || "Deploy"}
Deploy
{/if}
</button>
</div>

View File

@@ -35,20 +35,20 @@
// [/SECTION]
// [SECTION: STATE]
let currentBranch = $state('main');
let showCommitModal = $state(false);
let showDeployModal = $state(false);
let currentBranch = 'main';
let showCommitModal = false;
let showDeployModal = false;
let showHistory = true;
let showConflicts = $state(false);
let showConflicts = false;
let conflicts = [];
let loading = $state(false);
let initialized = $state(false);
let checkingStatus = $state(true);
let loading = false;
let initialized = false;
let checkingStatus = true;
// Initialization form state
let configs = $state([]);
let selectedConfigId = $state("");
let remoteUrl = $state("");
let configs = [];
let selectedConfigId = "";
let remoteUrl = "";
// [/SECTION]
// [DEF:checkStatus:Function]
@@ -82,13 +82,13 @@
*/
async function handleInit() {
if (!selectedConfigId || !remoteUrl) {
toast($t.git?.init_validation_error || 'Please select a Git server and provide remote URL', 'error');
toast('Please select a Git server and provide remote URL', 'error');
return;
}
loading = true;
try {
await gitService.initRepository(dashboardId, selectedConfigId, remoteUrl);
toast($t.git?.init_success || 'Repository initialized successfully', 'success');
toast('Repository initialized successfully', 'success');
initialized = true;
} catch (e) {
toast(e.message, 'error');
@@ -110,7 +110,7 @@
// Try to get selected environment from localStorage (set by EnvSelector)
const sourceEnvId = localStorage.getItem('selected_env_id');
await gitService.sync(dashboardId, sourceEnvId);
toast($t.git?.sync_success || 'Dashboard state synced to Git', 'success');
toast('Dashboard state synced to Git', 'success');
} catch (e) {
toast(e.message, 'error');
} finally {
@@ -129,7 +129,7 @@
loading = true;
try {
await gitService.push(dashboardId);
toast($t.git?.push_success || 'Changes pushed to remote', 'success');
toast('Changes pushed to remote', 'success');
} catch (e) {
toast(e.message, 'error');
} finally {
@@ -148,7 +148,7 @@
loading = true;
try {
await gitService.pull(dashboardId);
toast($t.git?.pull_success || 'Changes pulled from remote', 'success');
toast('Changes pulled from remote', 'success');
} catch (e) {
toast(e.message, 'error');
} finally {
@@ -164,10 +164,10 @@
{#if show}
<div class="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
<div class="bg-white p-6 rounded-lg shadow-2xl w-full max-w-4xl max-h-[90vh] overflow-y-auto">
<PageHeader title={`${$t.git?.management || "Git Management"}: ${dashboardTitle}`}>
<div slot="subtitle" class="text-sm text-gray-500">{$t.common?.id || "ID"}: {dashboardId}</div>
<PageHeader title="{$t.git.management}: {dashboardTitle}">
<div slot="subtitle" class="text-sm text-gray-500">ID: {dashboardId}</div>
<div slot="actions">
<button onclick={() => show = false} class="text-gray-400 hover:text-gray-600 transition-colors">
<button on:click={() => show = false} class="text-gray-400 hover:text-gray-600 transition-colors">
<svg xmlns="http://www.w3.org/2000/svg" class="h-6 w-6" fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />
</svg>
@@ -193,17 +193,17 @@
options={configs.map(c => ({ value: c.id, label: `${c.name} (${c.provider})` }))}
/>
{#if configs.length === 0}
<p class="text-xs text-red-500 -mt-4">{$t.git?.no_servers_configured || "No Git servers configured. Go to Settings -> Git to add one."}</p>
<p class="text-xs text-red-500 -mt-4">No Git servers configured. Go to Settings -> Git to add one.</p>
{/if}
<Input
label={$t.git.remote_url}
bind:value={remoteUrl}
placeholder={$t.git?.remote_url_placeholder || "https://github.com/org/repo.git"}
placeholder="https://github.com/org/repo.git"
/>
<Button
onclick={handleInit}
on:click={handleInit}
disabled={loading || configs.length === 0}
isLoading={loading}
class="w-full"
@@ -226,14 +226,14 @@
<h3 class="text-sm font-semibold text-gray-400 uppercase tracking-wider mb-3">{$t.git.actions}</h3>
<Button
variant="secondary"
onclick={handleSync}
on:click={handleSync}
disabled={loading}
class="w-full"
>
{$t.git.sync}
</Button>
<Button
onclick={() => showCommitModal = true}
on:click={() => showCommitModal = true}
disabled={loading}
class="w-full"
>
@@ -242,7 +242,7 @@
<div class="grid grid-cols-2 gap-3">
<Button
variant="ghost"
onclick={handlePull}
on:click={handlePull}
disabled={loading}
class="border border-gray-200"
>
@@ -250,7 +250,7 @@
</Button>
<Button
variant="ghost"
onclick={handlePush}
on:click={handlePush}
disabled={loading}
class="border border-gray-200"
>
@@ -263,7 +263,7 @@
<h3 class="text-sm font-semibold text-gray-400 uppercase tracking-wider mb-3">{$t.git.deployment}</h3>
<Button
variant="primary"
onclick={() => showDeployModal = true}
on:click={() => showDeployModal = true}
disabled={loading}
class="w-full bg-green-600 hover:bg-green-700 focus-visible:ring-green-500"
>
@@ -300,4 +300,4 @@
/>
<!-- [/SECTION] -->
<!-- [/DEF:GitManager:Component] -->
<!-- [/DEF:GitManager:Component] -->

View File

@@ -11,17 +11,13 @@
/** @type {Object} */
let {
documentation = null,
content = "",
type = 'markdown',
format = 'text',
onSave = async () => {},
onCancel = () => {},
} = $props();
let previewDoc = $derived(documentation || content);
let isSaving = $state(false);
let isSaving = false;
async function handleSave() {
isSaving = true;
@@ -35,14 +31,14 @@
}
</script>
{#if previewDoc}
{#if documentation}
<div class="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50">
<div class="bg-white p-6 rounded-lg shadow-xl w-full max-w-2xl max-h-[90vh] flex flex-col">
<h3 class="text-lg font-semibold mb-4">{$t.llm.doc_preview_title}</h3>
<div class="flex-1 overflow-y-auto mb-6 prose prose-sm max-w-none border rounded p-4 bg-gray-50">
<h4 class="text-md font-bold text-gray-800 mb-2">{$t.llm.dataset_desc}</h4>
<p class="text-gray-700 mb-4 whitespace-pre-wrap">{previewDoc.description || 'No description generated.'}</p>
<p class="text-gray-700 mb-4 whitespace-pre-wrap">{documentation.description || 'No description generated.'}</p>
<h4 class="text-md font-bold text-gray-800 mb-2">{$t.llm.column_doc}</h4>
<table class="min-w-full divide-y divide-gray-200">
@@ -53,7 +49,7 @@
</tr>
</thead>
<tbody class="divide-y divide-gray-200">
{#each Object.entries(previewDoc.columns || {}) as [name, desc]}
{#each Object.entries(documentation.columns || {}) as [name, desc]}
<tr>
<td class="px-3 py-2 text-sm font-mono text-gray-900">{name}</td>
<td class="px-3 py-2 text-sm text-gray-700">{desc}</td>
@@ -66,14 +62,14 @@
<div class="flex justify-end gap-3">
<button
class="px-4 py-2 border rounded hover:bg-gray-50"
onclick={onCancel}
on:click={onCancel}
disabled={isSaving}
>
{$t.llm.cancel}
</button>
<button
class="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 disabled:opacity-50"
onclick={handleSave}
on:click={handleSave}
disabled={isSaving}
>
{isSaving ? $t.llm.applying : $t.llm.apply_doc}
@@ -83,4 +79,4 @@
</div>
{/if}
<!-- [/DEF:DocPreview:Component] -->
<!-- [/DEF:DocPreview:Component] -->

View File

@@ -7,12 +7,12 @@
-->
<script>
import { onMount } from "svelte";
import { t } from "../../lib/i18n";
import { requestApi } from "../../lib/api";
/** @type {Array} */
export let providers = [];
export let onSave = () => {};
let { providers = [], onSave = () => {} } = $props();
let editingProvider = null;
let showForm = false;
@@ -29,20 +29,6 @@
let testStatus = { type: "", message: "" };
let isTesting = false;
function isMultimodalModel(modelName) {
const token = (modelName || "").toLowerCase();
if (!token) return false;
return (
token.includes("gpt-4o") ||
token.includes("gpt-4.1") ||
token.includes("vision") ||
token.includes("vl") ||
token.includes("gemini") ||
token.includes("claude-3") ||
token.includes("claude-sonnet-4")
);
}
function resetForm() {
formData = {
name: "",
@@ -57,18 +43,8 @@
}
function handleEdit(provider) {
console.log("[ProviderConfig][Action] Editing provider", provider?.id);
editingProvider = provider;
// Normalize provider fields to editable form shape.
formData = {
name: provider?.name ?? "",
provider_type: provider?.provider_type ?? "openai",
base_url: provider?.base_url ?? "https://api.openai.com/v1",
api_key: "",
default_model: provider?.default_model ?? "gpt-4o",
is_active: Boolean(provider?.is_active),
};
testStatus = { type: "", message: "" };
formData = { ...provider, api_key: "" }; // Don't populate key for security
showForm = true;
}
@@ -145,22 +121,19 @@
<div class="flex justify-between items-center mb-6">
<h2 class="text-xl font-bold">{$t.llm.providers_title}</h2>
<button
type="button"
class="bg-blue-600 text-white px-4 py-2 rounded hover:bg-blue-700 transition"
on:click|preventDefault={() => {
on:click={() => {
resetForm();
showForm = true;
}}
>
{$t.llm.add_provider}
{$t.llm.add_provider}
</button>
</div>
{#if showForm}
<div
class="fixed inset-0 bg-black bg-opacity-50 flex items-center justify-center z-50"
role="dialog"
aria-modal="true"
>
<div class="bg-white p-6 rounded-lg shadow-xl w-full max-w-md">
<h3 class="text-lg font-semibold mb-4">
@@ -268,26 +241,23 @@
<div class="mt-6 flex justify-between gap-2">
<button
type="button"
class="px-4 py-2 border rounded hover:bg-gray-50 flex-1"
on:click|preventDefault={() => {
on:click={() => {
showForm = false;
}}
>
{$t.llm.cancel}
</button>
<button
type="button"
class="px-4 py-2 bg-gray-600 text-white rounded hover:bg-gray-700 flex-1"
disabled={isTesting}
on:click|preventDefault={testConnection}
on:click={testConnection}
>
{isTesting ? $t.llm.testing : $t.llm.test}
</button>
<button
type="button"
class="px-4 py-2 bg-blue-600 text-white rounded hover:bg-blue-700 flex-1"
on:click|preventDefault={handleSubmit}
on:click={handleSubmit}
>
{$t.llm.save}
</button>
@@ -309,13 +279,6 @@
>
{provider.is_active ? $t.llm.active : "Inactive"}
</span>
<span
class={`text-xs px-2 py-0.5 rounded-full ${isMultimodalModel(provider.default_model) ? "bg-sky-100 text-sky-800" : "bg-amber-100 text-amber-800"}`}
>
{isMultimodalModel(provider.default_model)
? ($t.llm?.multimodal || "Multimodal")
: ($t.llm?.text_only || "Text only")}
</span>
</div>
<div class="text-sm text-gray-500">
{provider.provider_type} · {provider.default_model}
@@ -323,16 +286,14 @@
</div>
<div class="flex gap-2">
<button
type="button"
class="text-sm text-blue-600 hover:underline"
on:click|preventDefault|stopPropagation={() => handleEdit(provider)}
on:click={() => handleEdit(provider)}
>
{$t.common.edit}
</button>
<button
type="button"
class={`text-sm ${provider.is_active ? "text-orange-600" : "text-green-600"} hover:underline`}
on:click|preventDefault|stopPropagation={() => toggleActive(provider)}
on:click={() => toggleActive(provider)}
>
{provider.is_active ? "Deactivate" : "Activate"}
</button>

View File

@@ -1,44 +0,0 @@
// [DEF:frontend.src.components.llm.__tests__.provider_config_integration:Module]
// @TIER: STANDARD
// @SEMANTICS: llm, provider-config, integration-test, edit-flow
// @PURPOSE: Protect edit-button interaction contract in LLM provider settings UI.
// @LAYER: UI Tests
// @RELATION: VERIFIES -> frontend/src/components/llm/ProviderConfig.svelte
// @INVARIANT: Edit action keeps explicit click handler and opens normalized edit form.
import { describe, it, expect } from 'vitest';
import fs from 'node:fs';
import path from 'node:path';
const COMPONENT_PATH = path.resolve(
process.cwd(),
'src/components/llm/ProviderConfig.svelte',
);
// [DEF:provider_config_edit_contract_tests:Function]
// @TIER: STANDARD
// @PURPOSE: Validate edit button handler wiring and normalized edit form state mapping.
// @PRE: ProviderConfig component source exists in expected path.
// @POST: Contract checks ensure edit click cannot degrade into no-op flow.
describe('ProviderConfig edit interaction contract', () => {
it('keeps explicit edit click handler with guarded button semantics', () => {
const source = fs.readFileSync(COMPONENT_PATH, 'utf-8');
expect(source).toContain('type="button"');
expect(source).toContain(
"on:click|preventDefault|stopPropagation={() => handleEdit(provider)}",
);
});
it('normalizes provider payload into editable form shape', () => {
const source = fs.readFileSync(COMPONENT_PATH, 'utf-8');
expect(source).toContain('formData = {');
expect(source).toContain('name: provider?.name ?? ""');
expect(source).toContain('provider_type: provider?.provider_type ?? "openai"');
expect(source).toContain('default_model: provider?.default_model ?? "gpt-4o"');
expect(source).toContain('showForm = true;');
});
});
// [/DEF:provider_config_edit_contract_tests:Function]
// [/DEF:frontend.src.components.llm.__tests__.provider_config_integration:Module]

View File

@@ -31,8 +31,8 @@
path = '',
} = $props();
let isUploading = $state(false);
let dragOver = $state(false);
let isUploading = false;
let dragOver = false;
async function handleUpload() {
const file = fileInput.files[0];
@@ -94,9 +94,9 @@
<div
class="mt-1 flex justify-center px-6 pt-5 pb-6 border-2 border-dashed rounded-md transition-colors
{dragOver ? 'border-indigo-500 bg-indigo-50' : 'border-gray-300'}"
ondragover={(event) => { event.preventDefault(); dragOver = true; }}
ondragleave={(event) => { event.preventDefault(); dragOver = false; }}
ondrop={(event) => { event.preventDefault(); handleDrop(event); }}
on:dragover|preventDefault={() => dragOver = true}
on:dragleave|preventDefault={() => dragOver = false}
on:drop|preventDefault={handleDrop}
>
<div class="space-y-1 text-center">
<svg class="mx-auto h-12 w-12 text-gray-400" stroke="currentColor" fill="none" viewBox="0 0 48 48" aria-hidden="true">
@@ -111,7 +111,7 @@
type="file"
class="sr-only"
bind:this={fileInput}
onchange={handleUpload}
on:change={handleUpload}
disabled={isUploading}
>
</label>
@@ -132,4 +132,4 @@
<!-- [/SECTION: TEMPLATE] -->
<!-- [/DEF:FileUpload:Component] -->
<!-- [/DEF:FileUpload:Component] -->

View File

@@ -31,7 +31,7 @@
// @POST: A new connection is created via the connection service and a success event is dispatched.
async function handleSubmit() {
if (!name || !host || !database || !username || !password) {
addToast($t.connections?.required_fields || 'Please fill in all required fields', 'warning');
addToast('Please fill in all required fields', 'warning');
return;
}
@@ -40,7 +40,7 @@
const newConnection = await createConnection({
name, type, host, port: Number(port), database, username, password
});
addToast($t.connections?.created_success || 'Connection created successfully', 'success');
addToast('Connection created successfully', 'success');
dispatch('success', newConnection);
resetForm();
} catch (e) {
@@ -70,10 +70,10 @@
<!-- [SECTION: TEMPLATE] -->
<Card title={$t.connections?.add_new || "Add New Connection"}>
<form on:submit|preventDefault={handleSubmit} class="space-y-6">
<Input label={$t.connections?.name || "Connection Name"} bind:value={name} placeholder={$t.connections?.name_placeholder || "e.g. Production DWH"} />
<Input label={$t.connections?.name || "Connection Name"} bind:value={name} placeholder="e.g. Production DWH" />
<div class="grid grid-cols-1 md:grid-cols-2 gap-6">
<Input label={$t.connections?.host || "Host"} bind:value={host} placeholder={$t.connections?.host_placeholder || "10.0.0.1"} />
<Input label={$t.connections?.host || "Host"} bind:value={host} placeholder="10.0.0.1" />
<Input label={$t.connections?.port || "Port"} type="number" bind:value={port} />
</div>
@@ -92,4 +92,4 @@
</form>
</Card>
<!-- [/SECTION] -->
<!-- [/DEF:ConnectionForm:Component] -->
<!-- [/DEF:ConnectionForm:Component] -->

View File

@@ -28,7 +28,7 @@
try {
connections = await getConnections();
} catch (e) {
addToast($t.connections?.fetch_failed || 'Failed to fetch connections', 'error');
addToast('Failed to fetch connections', 'error');
} finally {
isLoading = false;
}
@@ -40,11 +40,11 @@
// @PRE: id is provided and user confirms deletion.
// @POST: Connection is deleted from backend and list is reloaded.
async function handleDelete(id) {
if (!confirm($t.connections?.delete_confirm || 'Are you sure you want to delete this connection?')) return;
if (!confirm('Are you sure you want to delete this connection?')) return;
try {
await deleteConnection(id);
addToast($t.connections?.deleted_success || 'Connection deleted', 'success');
addToast('Connection deleted', 'success');
await fetchConnections();
} catch (e) {
addToast(e.message, 'error');
@@ -85,4 +85,4 @@
</ul>
</Card>
<!-- [/SECTION] -->
<!-- [/DEF:ConnectionList:Component] -->
<!-- [/DEF:ConnectionList:Component] -->

View File

@@ -12,7 +12,6 @@
 import { runTask, getTaskStatus } from '../../services/toolsService.js';
 import { selectedTask } from '../../lib/stores.js';
 import { addToast } from '../../lib/toasts.js';
-import { t } from '../../lib/i18n';
 // [/SECTION]
 let envs = [];
@@ -36,7 +35,7 @@
   try {
     envs = await api.getEnvironmentsList();
   } catch (e) {
-    addToast($t.debug?.fetch_env_failed || 'Failed to fetch environments', 'error');
+    addToast('Failed to fetch environments', 'error');
   }
 }
 // [/DEF:fetchEnvironments:Function]
@@ -55,7 +54,7 @@
   let params = { action };
   if (action === 'test-db-api') {
     if (!sourceEnv || !targetEnv) {
-      addToast($t.debug?.source_target_required || 'Source and Target environments are required', 'warning');
+      addToast('Source and Target environments are required', 'warning');
       isRunning = false;
       return;
     }
@@ -65,7 +64,7 @@
     params.target_env = tEnv.name;
   } else {
     if (!selectedEnv || !datasetId) {
-      addToast($t.debug?.env_dataset_required || 'Environment and Dataset ID are required', 'warning');
+      addToast('Environment and Dataset ID are required', 'warning');
       isRunning = false;
       return;
     }
@@ -102,11 +101,11 @@
       clearInterval(pollInterval);
       isRunning = false;
       results = task.result;
-      addToast($t.debug?.completed || 'Debug task completed', 'success');
+      addToast('Debug task completed', 'success');
     } else if (task.status === 'FAILED') {
       clearInterval(pollInterval);
       isRunning = false;
-      addToast($t.debug?.failed || 'Debug task failed', 'error');
+      addToast('Debug task failed', 'error');
     }
   } catch (e) {
     clearInterval(pollInterval);
@@ -121,31 +120,31 @@
 <div class="space-y-6">
   <div class="bg-white p-6 rounded-lg shadow-sm border border-gray-200">
-    <h3 class="text-lg font-medium text-gray-900 mb-4">{$t.debug?.title || 'System Diagnostics'}</h3>
+    <h3 class="text-lg font-medium text-gray-900 mb-4">System Diagnostics</h3>
     <div class="mb-4">
-      <label class="block text-sm font-medium text-gray-700">{$t.debug?.action || 'Debug Action'}</label>
+      <label class="block text-sm font-medium text-gray-700">Debug Action</label>
       <select bind:value={action} class="mt-1 block w-full border-gray-300 rounded-md shadow-sm focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm">
-        <option value="test-db-api">{$t.debug?.test_db_api || 'Test Database API (Compare Envs)'}</option>
-        <option value="get-dataset-structure">{$t.debug?.get_dataset_structure || 'Get Dataset Structure (JSON)'}</option>
+        <option value="test-db-api">Test Database API (Compare Envs)</option>
+        <option value="get-dataset-structure">Get Dataset Structure (JSON)</option>
       </select>
     </div>
     {#if action === 'test-db-api'}
       <div class="grid grid-cols-1 md:grid-cols-2 gap-4">
         <div>
-          <label for="src-env" class="block text-sm font-medium text-gray-700">{$t.migration?.source_env || 'Source Environment'}</label>
+          <label for="src-env" class="block text-sm font-medium text-gray-700">Source Environment</label>
           <select id="src-env" bind:value={sourceEnv} class="mt-1 block w-full border-gray-300 rounded-md shadow-sm focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm">
-            <option value="" disabled>{$t.debug?.select_source || '-- Select Source --'}</option>
+            <option value="" disabled>-- Select Source --</option>
             {#each envs as env}
              <option value={env.id}>{env.name}</option>
            {/each}
          </select>
        </div>
        <div>
-          <label for="tgt-env" class="block text-sm font-medium text-gray-700">{$t.migration?.target_env || 'Target Environment'}</label>
+          <label for="tgt-env" class="block text-sm font-medium text-gray-700">Target Environment</label>
          <select id="tgt-env" bind:value={targetEnv} class="mt-1 block w-full border-gray-300 rounded-md shadow-sm focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm">
-            <option value="" disabled>{$t.debug?.select_target || '-- Select Target --'}</option>
+            <option value="" disabled>-- Select Target --</option>
            {#each envs as env}
              <option value={env.id}>{env.name}</option>
            {/each}
@@ -155,16 +154,16 @@
     {:else}
       <div class="grid grid-cols-1 md:grid-cols-2 gap-4">
         <div>
-          <label for="debug-env" class="block text-sm font-medium text-gray-700">{$t.dashboard?.environment || 'Environment'}</label>
+          <label for="debug-env" class="block text-sm font-medium text-gray-700">Environment</label>
           <select id="debug-env" bind:value={selectedEnv} class="mt-1 block w-full border-gray-300 rounded-md shadow-sm focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm">
-            <option value="" disabled>{$t.common?.choose_environment || '-- Select Environment --'}</option>
+            <option value="" disabled>-- Select Environment --</option>
            {#each envs as env}
              <option value={env.id}>{env.name}</option>
            {/each}
          </select>
        </div>
        <div>
-          <label for="debug-ds-id" class="block text-sm font-medium text-gray-700">{$t.mapper?.dataset_id || 'Dataset ID'}</label>
+          <label for="debug-ds-id" class="block text-sm font-medium text-gray-700">Dataset ID</label>
          <input type="number" id="debug-ds-id" bind:value={datasetId} class="mt-1 block w-full border-gray-300 rounded-md shadow-sm focus:ring-indigo-500 focus:border-indigo-500 sm:text-sm" />
        </div>
      </div>
@@ -172,7 +171,7 @@
     <div class="mt-4 flex justify-end">
       <button on:click={handleRunDebug} disabled={isRunning} class="inline-flex items-center px-4 py-2 border border-transparent text-sm font-medium rounded-md shadow-sm text-white bg-indigo-600 hover:bg-indigo-700 focus:outline-none focus:ring-2 focus:ring-offset-2 focus:ring-indigo-500 disabled:opacity-50">
-        {isRunning ? ($t.dashboard?.running || 'Running...') : ($t.debug?.run || 'Run Diagnostics')}
+        {isRunning ? 'Running...' : 'Run Diagnostics'}
       </button>
     </div>
   </div>
@@ -180,7 +179,7 @@
   {#if results}
     <div class="bg-white shadow overflow-hidden sm:rounded-md border border-gray-200">
       <div class="px-4 py-5 sm:px-6 bg-gray-50 border-b border-gray-200">
-        <h3 class="text-lg leading-6 font-medium text-gray-900">{$t.debug?.output || 'Debug Output'}</h3>
+        <h3 class="text-lg leading-6 font-medium text-gray-900">Debug Output</h3>
       </div>
       <div class="p-4">
         <pre class="text-xs text-gray-600 bg-gray-900 text-green-400 p-4 rounded-md overflow-x-auto h-96">{JSON.stringify(results, null, 2)}</pre>
@@ -188,4 +187,4 @@
     </div>
   {/if}
 </div>
-<!-- [/DEF:DebugTool:Component] -->
+<!-- [/DEF:DebugTool:Component] -->
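The SUCCESS/FAILED branches above are the tail of a poll loop around `getTaskStatus`. A condensed sketch of how the pieces visible in these hunks fit together — the `runTask` call shape, the `task_id` field, and the 2-second interval are assumptions, not taken from this diff:

  // Hypothetical condensed sketch of the polling flow in DebugTool.
  // isRunning, results, and params are component-level state, as in the hunks above.
  async function handleRunDebug() {
    isRunning = true;
    results = null;
    // Assumed call shape: the real runTask signature is not shown in this diff.
    const { task_id } = await runTask(params);
    const pollInterval = setInterval(async () => {
      try {
        const task = await getTaskStatus(task_id);
        if (task.status === 'SUCCESS') {
          clearInterval(pollInterval);
          isRunning = false;
          results = task.result;
          addToast('Debug task completed', 'success');
        } else if (task.status === 'FAILED') {
          clearInterval(pollInterval);
          isRunning = false;
          addToast('Debug task failed', 'error');
        }
      } catch (e) {
        // On any polling error, stop the loop and surface the message.
        clearInterval(pollInterval);
        isRunning = false;
        addToast(e.message, 'error');
      }
    }, 2000); // Interval length is an assumption.
  }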

Some files were not shown because too many files have changed in this diff.