Compare commits
27 commits
020-clean-
...
025-clean-

| SHA1 |
|---|
| 82435822eb |
| 3a8c82918a |
| 87b81a365a |
| 6ee54d95a8 |
| 4f74bb8afb |
| 309dfdba86 |
| c7e9b5b6c5 |
| 603256eeaf |
| 589fab37d8 |
| eb7305ecda |
| e864a9e08b |
| 12d17ec35e |
| 5bd20c74fe |
| 633c4948f1 |
| e7cb5237d3 |
| a5086f3eef |
| f066d5561b |
| 7ff0dfa8c6 |
| 4fec2e02ad |
| c5a0823b00 |
| de1f04406f |
| c473a09402 |
| a15a2aed25 |
| a8f1a376ab |
| 1eb4b26254 |
| a9c0d55ec8 |
| 8406628360 |
.ai/PROJECT_MAP.md — 1769 lines changed (file diff suppressed because it is too large)
@@ -8,7 +8,7 @@

## 1. CORE PRINCIPLES

### I. Semantic Protocol Compliance

- * **Ref:** `[DEF:Std:Semantics]` (formerly `semantic_protocol.md`)
+ * **Ref:** `[DEF:Std:Semantics]` (`ai/standards/semantic.md`)
* **Law:** All code must adhere to the Axioms (Meaning First, Contract First, etc.).
* **Compliance:** Strict matching of Anchors (`[DEF]`), Tags (`@KEY`), and structures is mandatory.

@@ -45,6 +45,10 @@ Auto-generated from all feature plans. Last updated: 2025-12-19

- SQLite task/result persistence (existing task DB), filesystem only for existing artifacts (no new primary store required) (020-task-reports-design)
- Node.js 18+ runtime, SvelteKit (existing frontend stack) + SvelteKit, Tailwind CSS, existing frontend UI primitives under `frontend/src/lib/components/ui` (001-unify-frontend-style)
- N/A (UI styling and component behavior only) (001-unify-frontend-style)
- Python 3.9+ (backend scripts/services), Shell (release tooling) + FastAPI stack (existing backend), ConfigManager, TaskManager, file utilities, internal artifact registries (020-clean-repo-enterprise)
- PostgreSQL (configurations/metadata), filesystem (distribution artifacts, verification reports) (020-clean-repo-enterprise)
- Python 3.9+ (backend), Node.js 18+ + SvelteKit (frontend) + FastAPI, SQLAlchemy, Pydantic, existing auth stack (`get_current_user`), existing dashboards route/service, Svelte runes (`$state`, `$derived`, `$effect`), Tailwind CSS, frontend `api` wrapper (024-user-dashboard-filter)
- Existing auth database (`AUTH_DATABASE_URL`) with a dedicated per-user preference entity (024-user-dashboard-filter)
- Python 3.9+ (Backend), Node.js 18+ (Frontend Build) (001-plugin-arch-svelte-ui)

@@ -65,9 +69,9 @@ cd src; pytest; ruff check .

Python 3.9+ (Backend), Node.js 18+ (Frontend Build): Follow standard conventions

## Recent Changes

- 024-user-dashboard-filter: Added Python 3.9+ (backend), Node.js 18+ + SvelteKit (frontend) + FastAPI, SQLAlchemy, Pydantic, existing auth stack (`get_current_user`), existing dashboards route/service, Svelte runes (`$state`, `$derived`, `$effect`), Tailwind CSS, frontend `api` wrapper
- 020-clean-repo-enterprise: Added Python 3.9+ (backend scripts/services), Shell (release tooling) + FastAPI stack (existing backend), ConfigManager, TaskManager, file utilities, internal artifact registries
- 001-unify-frontend-style: Added Node.js 18+ runtime, SvelteKit (existing frontend stack) + SvelteKit, Tailwind CSS, existing frontend UI primitives under `frontend/src/lib/components/ui`
- 020-task-reports-design: Added Python 3.9+ (backend), Node.js 18+ (frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy/Pydantic task models, existing task/websocket stack
- 019-superset-ux-redesign: Added Python 3.9+ (Backend), Node.js 18+ (Frontend) + FastAPI, SvelteKit, Tailwind CSS, SQLAlchemy, WebSocket (existing)

<!-- MANUAL ADDITIONS START -->
README.md — 390 lines
@@ -1,143 +1,331 @@

# ss-tools

Automation tools for Apache Superset: migration, mapping, artifact storage, Git integration, task reports, and an LLM assistant.

**Automation tools for Apache Superset: migration, versioning, analytics, and data management**

## Features

- Dashboard and dataset migration between environments.
- Manual and semi-automatic resource mapping.
- Background task logs and execution reports.
- Local file and backup storage.
- Git operations on Superset assets via the UI.
- LLM analysis module and assistant API.
- Multi-user authorization (RBAC).

## 📋 About the project

## Stack

- Backend: Python, FastAPI, SQLAlchemy, APScheduler.
- Frontend: SvelteKit, Vite, Tailwind CSS.
- Database: PostgreSQL (primary configuration), with support for migrating off legacy SQLite.

ss-tools is a comprehensive platform for automating work with Apache Superset, providing tools for dashboard migration, Git-based version control, LLM data analysis, and multi-user access control. The system is built on a modular architecture with a plugin extension system.

## Repository structure

- `backend/` — API, plugins, services, migration scripts, and tests.
- `frontend/` — SPA interface (SvelteKit).
- `docs/` — architecture and plugin documentation.
- `specs/` — specifications and implementation plans.
- `docker/` and `docker-compose.yml` — containerization.

### 🎯 Key features

## Quick start (local)

#### 🔄 Data migration
- **Dashboard and dataset migration** between environments (dev/staging/prod)
- **Dry-run mode** with detailed risk analysis and change preview
- **Automatic mapping** of databases and resources between environments
- **Legacy data support** with migration from SQLite to PostgreSQL

#### 🌿 Git integration
- **Versioning** of dashboards via Git repositories
- **Branch and commit management** assisted by an LLM
- **Deployment** of dashboards from Git to target environments
- **Change history** with detailed diffs

#### 🤖 LLM analytics
- **Automatic validation** of dashboards using AI
- **Documentation generation** for datasets
- **Assistant API** for natural-language commands
- **Intelligent commits** with message suggestions

#### 📊 Management and monitoring
- **Multi-user authorization** (RBAC)
- **Background tasks** with real-time logging over WebSocket
- **Unified reports** on completed tasks
- **Artifact storage** with retention policies
- **Audit logging** of all actions

#### 🔌 Plugins
- **MigrationPlugin** — dashboard migration
- **BackupPlugin** — backups
- **GitPlugin** — version control
- **LLMAnalysisPlugin** — analytics and documentation
- **MapperPlugin** — column mapping
- **DebugPlugin** — system diagnostics
- **SearchPlugin** — dataset search
## 🏗️ Architecture

### Technology stack

**Backend:**
- Python 3.9+ (FastAPI, SQLAlchemy, APScheduler)
- PostgreSQL (primary database)
- GitPython for Git operations
- OpenAI API for LLM features
- Playwright for screenshots

**Frontend:**
- SvelteKit (Svelte 5.x)
- Vite
- Tailwind CSS
- WebSocket for real-time logging

**DevOps:**
- Docker & Docker Compose
- PostgreSQL 16

### Module layout

```
ss-tools/
├── backend/                  # Backend API
│   ├── src/
│   │   ├── api/              # API routes
│   │   ├── core/             # System core
│   │   │   ├── task_manager/ # Task management
│   │   │   ├── auth/         # Authorization
│   │   │   ├── migration/    # Data migration
│   │   │   └── plugins/      # Plugins
│   │   ├── models/           # Data models
│   │   ├── services/         # Business logic
│   │   └── schemas/          # Pydantic schemas
│   └── tests/                # Tests
├── frontend/                 # SvelteKit application
│   ├── src/
│   │   ├── routes/           # Pages
│   │   ├── lib/
│   │   │   ├── components/   # UI components
│   │   │   ├── stores/       # Svelte stores
│   │   │   └── api/          # API client
│   │   └── i18n/             # Localization
│   └── tests/
├── docker/                   # Docker configuration
├── docs/                     # Documentation
└── specs/                    # Specifications
```
## 🚀 Quick start

### Requirements

**Local development:**
- Python 3.9+
- Node.js 18+
- npm
- 2 GB RAM (minimum)
- 5 GB of free disk space

**Docker (recommended):**
- Docker Engine 24+
- Docker Compose v2
- 4 GB RAM (for stable operation)

### Installation and launch

#### Option 1: Docker (recommended)

### Start backend + frontend with one script

```bash
./run.sh
```

```bash
# Clone the repository
git clone <repository-url>
cd ss-tools
```

What `run.sh` does:

- checks the Python/npm versions;
- creates `backend/.venv` (if it does not exist);
- installs `backend/requirements.txt` and the `frontend` dependencies;
- starts the backend and frontend in parallel.

Options:

- `./run.sh --skip-install` — skip dependency installation.
- `./run.sh --help` — show help.

Environment variables for a local run:

- `BACKEND_PORT` (default `8000`)
- `FRONTEND_PORT` (default `5173`)
- `POSTGRES_URL`
- `DATABASE_URL`
- `TASKS_DATABASE_URL`
- `AUTH_DATABASE_URL`
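The two port variables above have documented defaults; a minimal sketch of that default resolution (the variable names and defaults come from the list above — the Python helper itself is only illustrative, not the actual `run.sh` logic):

```python
import os

# Resolve the documented defaults; any value set in the environment wins.
backend_port = int(os.environ.get("BACKEND_PORT", "8000"))
frontend_port = int(os.environ.get("FRONTEND_PORT", "5173"))
```

Setting `BACKEND_PORT=8080 ./run.sh` would then move the backend without touching the frontend port.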
## Docker

### Start

```bash
# Start all services
docker compose up --build

# After startup:
# Frontend: http://localhost:8000
# Backend API: http://localhost:8001
# PostgreSQL: localhost:5432
```

After startup the services are available at:

- Frontend: `http://localhost:8000`
- Backend API: `http://localhost:8001`
- PostgreSQL: `localhost:5432` (`postgres/postgres`, database `ss_tools`)

#### Option 2: Local

### Stop

```bash
docker compose down
```

### Removing the database volume

```bash
docker compose down -v
```

### Alternative PostgreSQL image

If pulling `postgres:16-alpine` fails:

```bash
POSTGRES_IMAGE=mirror.gcr.io/library/postgres:16-alpine docker compose up -d db
```

or

```bash
POSTGRES_IMAGE=bitnami/postgresql:latest docker compose up -d db
```

If port `5432` is already in use:

```bash
POSTGRES_HOST_PORT=5433 docker compose up -d db
```
## Development

### Running services manually

```bash
# Backend
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python3 -m uvicorn src.app:app --reload --port 8000
```

In another terminal:

```bash
# Frontend (in a new terminal)
cd frontend
npm install
npm run dev -- --port 5173
```

### Tests

Backend:

```bash
cd backend
source .venv/bin/activate
pytest
```

### Initial setup

Frontend:

```bash
cd frontend
npm run test
```
## Auth initialization (optional)

```bash
# Initialize the DB
cd backend
source .venv/bin/activate
python src/scripts/init_auth_db.py

# Create an administrator
python src/scripts/create_admin.py --username admin --password admin
```
## Legacy data migration (optional)

```bash
cd backend
source .venv/bin/activate
PYTHONPATH=. python src/scripts/migrate_sqlite_to_postgres.py --sqlite-path tasks.db
```

## 🏢 Enterprise Clean Deployment (internal-only)

For deployments inside a corporate network, use the enterprise clean profile:

- a cleaned distribution with no test/demo/load-test data;
- external internet sources are forbidden;
- resources are downloaded only from the company's internal servers;
- a mandatory blocking clean/compliance check before release.

### Operational workflow (CLI/API/TUI)

#### 1) Headless flow via the CLI (recommended for CI/CD)

```bash
cd backend

# 1. Register a candidate
.venv/bin/python3 -m src.scripts.clean_release_cli candidate-register \
  --candidate-id 2026.03.09-rc1 \
  --version 1.0.0 \
  --source-snapshot-ref git:release/2026.03.09-rc1 \
  --created-by release-operator

# 2. Import artifacts
.venv/bin/python3 -m src.scripts.clean_release_cli artifact-import \
  --candidate-id 2026.03.09-rc1 \
  --artifact-id artifact-001 \
  --path backend/dist/package.tar.gz \
  --sha256 deadbeef \
  --size 1024

# 3. Build the manifest
.venv/bin/python3 -m src.scripts.clean_release_cli manifest-build \
  --candidate-id 2026.03.09-rc1 \
  --created-by release-operator

# 4. Run compliance
.venv/bin/python3 -m src.scripts.clean_release_cli compliance-run \
  --candidate-id 2026.03.09-rc1 \
  --actor release-operator
```
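In the `artifact-import` step above, `--sha256 deadbeef` and `--size 1024` are placeholders; in a real pipeline both values would be computed from the artifact file. A hedged sketch of that computation (the helper is an assumption, not part of `clean_release_cli`):

```python
import hashlib
import os

def artifact_digest(path: str):
    """Return (sha256 hex digest, size in bytes) for an artifact file."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large distribution archives are not loaded whole.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest(), os.path.getsize(path)
```

The resulting pair would feed the `--sha256` and `--size` flags.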
## Additional documentation

- `docs/plugin_dev.md`
- `docs/settings.md`
- `semantic_protocol.md`

#### 2) API flow (automation via services)

- V2 candidate/artifact/manifest API:
  - `POST /api/clean-release/candidates`
  - `POST /api/clean-release/candidates/{candidate_id}/artifacts`
  - `POST /api/clean-release/candidates/{candidate_id}/manifests`
  - `GET /api/clean-release/candidates/{candidate_id}/overview`
- Legacy compatibility API (kept while clients migrate):
  - `POST /api/clean-release/candidates/prepare`
  - `POST /api/clean-release/checks`
  - `GET /api/clean-release/checks/{check_run_id}`
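The v2 endpoints above are all keyed by `candidate_id`; a tiny helper that builds the per-candidate paths (the path strings are the ones documented above — the helper itself is illustrative and not part of the codebase):

```python
BASE = "/api/clean-release"

def v2_candidate_paths(candidate_id: str) -> dict:
    """Build the v2 artifact/manifest/overview paths for one release candidate."""
    prefix = f"{BASE}/candidates/{candidate_id}"
    return {
        "artifacts": f"{prefix}/artifacts",  # POST
        "manifests": f"{prefix}/manifests",  # POST
        "overview": f"{prefix}/overview",    # GET
    }
```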
#### 3) TUI flow (thin client over the facade)

```bash
cd /home/busya/dev/ss-tools
./run_clean_tui.sh 2026.03.09-rc1
```

Hotkeys:

- `F5`: Run Compliance
- `F6`: Build Manifest
- `F7`: Reset Draft
- `F8`: Approve
- `F9`: Publish
- `F10`: Refresh Overview

Note: the TUI requires a valid TTY. Without a TTY, startup is refused with a message pointing to the CLI/API.

Typical internal sources:

- `repo.intra.company.local`
- `artifacts.intra.company.local`
- `pypi.intra.company.local`

If an external endpoint is found, the release is set to `BLOCKED` until it is fixed.
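The rule above — only internal hosts are allowed, anything else blocks the release — can be sketched as an allow-list check. The hosts are the ones listed above; the function shape is an assumption rather than the project's actual compliance code:

```python
INTERNAL_HOSTS = {
    "repo.intra.company.local",
    "artifacts.intra.company.local",
    "pypi.intra.company.local",
}

def source_check_status(sources):
    """Return 'BLOCKED' if any source host is outside the internal registry, else 'OK'."""
    external = [host for host in sources if host not in INTERNAL_HOSTS]
    return "BLOCKED" if external else "OK"
```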
## 📖 Documentation

- [Installation and setup](docs/installation.md)
- [System architecture](docs/architecture.md)
- [Plugin development](docs/plugin_dev.md)
- [API documentation](http://localhost:8001/docs)
- [Environment configuration](docs/settings.md)

## 🧪 Testing

```bash
# Backend tests
cd backend
source .venv/bin/activate
pytest

# Frontend tests
cd frontend
npm run test

# Run a specific test
pytest tests/test_auth.py::test_create_user
```
## 🔐 Authorization

The system supports two authentication methods:

1. **Local authentication** (username/password)
2. **ADFS SSO** (Active Directory Federation Services)

### Managing users and roles

```bash
# List users
GET /api/admin/users

# Create a user
POST /api/admin/users
{
  "username": "newuser",
  "email": "user@example.com",
  "password": "password123",
  "roles": ["analyst"]
}

# Create a role
POST /api/admin/roles
{
  "name": "analyst",
  "permissions": ["dashboards:read", "dashboards:write"]
}
```
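The role payload above grants permissions in a `resource:action` form; a minimal sketch of checking such strings (a hypothetical helper — the real RBAC check lives in the backend auth stack and may well differ):

```python
def has_permission(granted, required: str) -> bool:
    """True when the exact 'resource:action' string was granted to the role."""
    return required in set(granted)

# Permissions for the 'analyst' role from the example payload above.
analyst_permissions = ["dashboards:read", "dashboards:write"]
```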
## 📊 Monitoring

### Task reports

```bash
# List all reports
GET /api/reports?page=1&page_size=20

# Report details
GET /api/reports/{report_id}

# Filters
GET /api/reports?status=failed&task_type=validation&date_from=2024-01-01
```

### Activity

- **Dashboard Hub** — dashboard management with Git status
- **Dataset Hub** — dataset management with mapping progress
- **Task Drawer** — background task monitoring
- **Unified Reports** — unified reports across all task types
## 🔄 Updating the system

```bash
# Update Docker containers
docker compose pull
docker compose up -d

# Update Python dependencies
cd backend
source .venv/bin/activate
pip install -r requirements.txt --upgrade

# Update Node.js dependencies
cd frontend
npm install
```
artifacts.json — 14 lines (new file)

@@ -0,0 +1,14 @@
[
  {
    "path": "src/main.py",
    "category": "core"
  },
  {
    "path": "src/api/routes/clean_release.py",
    "category": "core"
  },
  {
    "path": "docs/installation.md",
    "category": "docs"
  }
]

Submodule backend/git_repos/10 deleted from 3c0ade67f9

backend/logs/app.log.1 — 93776 lines (file diff suppressed because it is too large)
@@ -6,7 +6,7 @@
# @RELATION: DEPENDS_ON -> importlib
# @INVARIANT: Only names listed in __all__ are importable via __getattr__.

-__all__ = ['plugins', 'tasks', 'settings', 'connections', 'environments', 'mappings', 'migration', 'git', 'storage', 'admin', 'reports', 'assistant']
+__all__ = ['plugins', 'tasks', 'settings', 'connections', 'environments', 'mappings', 'migration', 'git', 'storage', 'admin', 'reports', 'assistant', 'clean_release', 'profile']

# [DEF:__getattr__:Function]
backend/src/api/routes/__tests__/test_clean_release_api.py — 157 lines (new file)

@@ -0,0 +1,157 @@
# [DEF:backend.tests.api.routes.test_clean_release_api:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, api, clean-release, checks, reports
# @PURPOSE: Contract tests for clean release checks and reports endpoints.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.api.routes.clean_release
# @INVARIANT: API returns deterministic payload shapes for checks and reports.

from datetime import datetime, timezone

from fastapi.testclient import TestClient

from src.app import app
from src.dependencies import get_clean_release_repository
from src.models.clean_release import (
    CleanProfilePolicy,
    ProfileType,
    ReleaseCandidate,
    ReleaseCandidateStatus,
    ResourceSourceEntry,
    ResourceSourceRegistry,
    ComplianceReport,
    CheckFinalStatus,
)
from src.services.clean_release.repository import CleanReleaseRepository


def _repo_with_seed_data() -> CleanReleaseRepository:
    repo = CleanReleaseRepository()
    repo.save_candidate(
        ReleaseCandidate(
            candidate_id="2026.03.03-rc1",
            version="2026.03.03",
            profile=ProfileType.ENTERPRISE_CLEAN,
            created_at=datetime.now(timezone.utc),
            created_by="tester",
            source_snapshot_ref="git:abc123",
            status=ReleaseCandidateStatus.PREPARED,
        )
    )
    repo.save_registry(
        ResourceSourceRegistry(
            registry_id="registry-internal-v1",
            name="Internal",
            entries=[
                ResourceSourceEntry(
                    source_id="src-1",
                    host="repo.intra.company.local",
                    protocol="https",
                    purpose="artifact-repo",
                    enabled=True,
                )
            ],
            updated_at=datetime.now(timezone.utc),
            updated_by="tester",
            status="active",
        )
    )
    repo.save_policy(
        CleanProfilePolicy(
            policy_id="policy-enterprise-clean-v1",
            policy_version="1.0.0",
            active=True,
            prohibited_artifact_categories=["test-data"],
            required_system_categories=["system-init"],
            external_source_forbidden=True,
            internal_source_registry_ref="registry-internal-v1",
            effective_from=datetime.now(timezone.utc),
            profile=ProfileType.ENTERPRISE_CLEAN,
        )
    )
    return repo


def test_start_check_and_get_status_contract():
    repo = _repo_with_seed_data()
    app.dependency_overrides[get_clean_release_repository] = lambda: repo
    try:
        client = TestClient(app)

        start = client.post(
            "/api/clean-release/checks",
            json={
                "candidate_id": "2026.03.03-rc1",
                "profile": "enterprise-clean",
                "execution_mode": "tui",
                "triggered_by": "tester",
            },
        )
        assert start.status_code == 202
        payload = start.json()
        assert set(["check_run_id", "candidate_id", "status", "started_at"]).issubset(payload.keys())

        check_run_id = payload["check_run_id"]
        status_resp = client.get(f"/api/clean-release/checks/{check_run_id}")
        assert status_resp.status_code == 200
        status_payload = status_resp.json()
        assert status_payload["check_run_id"] == check_run_id
        assert "final_status" in status_payload
        assert "checks" in status_payload
    finally:
        app.dependency_overrides.clear()


def test_get_report_not_found_returns_404():
    repo = _repo_with_seed_data()
    app.dependency_overrides[get_clean_release_repository] = lambda: repo
    try:
        client = TestClient(app)
        resp = client.get("/api/clean-release/reports/unknown-report")
        assert resp.status_code == 404
    finally:
        app.dependency_overrides.clear()


def test_get_report_success():
    repo = _repo_with_seed_data()
    report = ComplianceReport(
        report_id="rep-1",
        check_run_id="run-1",
        candidate_id="2026.03.03-rc1",
        generated_at=datetime.now(timezone.utc),
        final_status=CheckFinalStatus.COMPLIANT,
        operator_summary="all systems go",
        structured_payload_ref="manifest-1",
        violations_count=0,
        blocking_violations_count=0,
    )
    repo.save_report(report)
    app.dependency_overrides[get_clean_release_repository] = lambda: repo
    try:
        client = TestClient(app)
        resp = client.get("/api/clean-release/reports/rep-1")
        assert resp.status_code == 200
        assert resp.json()["report_id"] == "rep-1"
    finally:
        app.dependency_overrides.clear()


def test_prepare_candidate_api_success():
    repo = _repo_with_seed_data()
    app.dependency_overrides[get_clean_release_repository] = lambda: repo
    try:
        client = TestClient(app)
        response = client.post(
            "/api/clean-release/candidates/prepare",
            json={
                "candidate_id": "2026.03.03-rc1",
                "artifacts": [{"path": "file1.txt", "category": "system-init", "reason": "core"}],
                "sources": ["repo.intra.company.local"],
                "operator_id": "operator-1",
            },
        )
        assert response.status_code == 200
        data = response.json()
        assert data["status"] == "prepared"
        assert "manifest_id" in data
    finally:
        app.dependency_overrides.clear()
@@ -0,0 +1,165 @@
# [DEF:backend.src.api.routes.__tests__.test_clean_release_legacy_compat:Module]
# @TIER: STANDARD
# @PURPOSE: Compatibility tests for legacy clean-release API paths retained during v2 migration.
# @LAYER: Tests
# @RELATION: TESTS -> backend.src.api.routes.clean_release

from __future__ import annotations

import os
from datetime import datetime, timezone

from fastapi.testclient import TestClient

os.environ.setdefault("DATABASE_URL", "sqlite:///./test_clean_release_legacy_compat.db")
os.environ.setdefault("AUTH_DATABASE_URL", "sqlite:///./test_clean_release_legacy_auth.db")

from src.app import app
from src.dependencies import get_clean_release_repository
from src.models.clean_release import (
    CleanProfilePolicy,
    DistributionManifest,
    ProfileType,
    ReleaseCandidate,
    ReleaseCandidateStatus,
    ResourceSourceEntry,
    ResourceSourceRegistry,
)
from src.services.clean_release.repository import CleanReleaseRepository


# [DEF:_seed_legacy_repo:Function]
# @PURPOSE: Seed in-memory repository with minimum trusted data for legacy endpoint contracts.
# @PRE: Repository is empty.
# @POST: Candidate, policy, registry and manifest are available for legacy checks flow.
def _seed_legacy_repo() -> CleanReleaseRepository:
    repo = CleanReleaseRepository()
    now = datetime.now(timezone.utc)

    repo.save_candidate(
        ReleaseCandidate(
            id="legacy-rc-001",
            version="1.0.0",
            source_snapshot_ref="git:legacy-001",
            created_at=now,
            created_by="compat-tester",
            status=ReleaseCandidateStatus.DRAFT,
        )
    )

    registry = ResourceSourceRegistry(
        registry_id="legacy-reg-1",
        name="Legacy Internal Registry",
        entries=[
            ResourceSourceEntry(
                source_id="legacy-src-1",
                host="repo.intra.company.local",
                protocol="https",
                purpose="artifact-repo",
                enabled=True,
            )
        ],
        updated_at=now,
        updated_by="compat-tester",
        status="ACTIVE",
    )
    setattr(registry, "immutable", True)
    setattr(registry, "allowed_hosts", ["repo.intra.company.local"])
    setattr(registry, "allowed_schemes", ["https"])
    setattr(registry, "allowed_source_types", ["artifact-repo"])
    repo.save_registry(registry)

    policy = CleanProfilePolicy(
        policy_id="legacy-pol-1",
        policy_version="1.0.0",
        profile=ProfileType.ENTERPRISE_CLEAN,
        active=True,
        internal_source_registry_ref="legacy-reg-1",
        prohibited_artifact_categories=["test-data"],
        required_system_categories=["core"],
        effective_from=now,
    )
    setattr(policy, "immutable", True)
    setattr(
        policy,
        "content_json",
        {
            "profile": "enterprise-clean",
            "prohibited_artifact_categories": ["test-data"],
            "required_system_categories": ["core"],
            "external_source_forbidden": True,
        },
    )
    repo.save_policy(policy)

    repo.save_manifest(
        DistributionManifest(
            id="legacy-manifest-1",
            candidate_id="legacy-rc-001",
            manifest_version=1,
            manifest_digest="sha256:legacy-manifest",
            artifacts_digest="sha256:legacy-artifacts",
            created_at=now,
            created_by="compat-tester",
            source_snapshot_ref="git:legacy-001",
            content_json={"items": [], "summary": {"included_count": 0, "prohibited_detected_count": 0}},
            immutable=True,
        )
    )

    return repo
# [/DEF:_seed_legacy_repo:Function]


def test_legacy_prepare_endpoint_still_available() -> None:
    repo = _seed_legacy_repo()
    app.dependency_overrides[get_clean_release_repository] = lambda: repo
    try:
        client = TestClient(app)
        response = client.post(
            "/api/clean-release/candidates/prepare",
            json={
                "candidate_id": "legacy-rc-001",
                "artifacts": [{"path": "src/main.py", "category": "core", "reason": "required"}],
                "sources": ["repo.intra.company.local"],
                "operator_id": "compat-tester",
            },
        )
        assert response.status_code == 200
        payload = response.json()
        assert "status" in payload
        assert payload["status"] in {"prepared", "blocked", "PREPARED", "BLOCKED"}
    finally:
        app.dependency_overrides.clear()


def test_legacy_checks_endpoints_still_available() -> None:
    repo = _seed_legacy_repo()
    app.dependency_overrides[get_clean_release_repository] = lambda: repo
    try:
        client = TestClient(app)
        start_response = client.post(
            "/api/clean-release/checks",
            json={
                "candidate_id": "legacy-rc-001",
                "profile": "enterprise-clean",
                "execution_mode": "api",
                "triggered_by": "compat-tester",
            },
        )
        assert start_response.status_code == 202
        start_payload = start_response.json()
        assert "check_run_id" in start_payload
        assert start_payload["candidate_id"] == "legacy-rc-001"

        status_response = client.get(f"/api/clean-release/checks/{start_payload['check_run_id']}")
        assert status_response.status_code == 200
        status_payload = status_response.json()
        assert status_payload["check_run_id"] == start_payload["check_run_id"]
        assert "final_status" in status_payload
        assert "checks" in status_payload
    finally:
        app.dependency_overrides.clear()


# [/DEF:backend.src.api.routes.__tests__.test_clean_release_legacy_compat:Module]
@@ -0,0 +1,97 @@
# [DEF:backend.tests.api.routes.test_clean_release_source_policy:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, api, clean-release, source-policy
# @PURPOSE: Validate API behavior for source isolation violations in clean release preparation.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.api.routes.clean_release
# @INVARIANT: External endpoints must produce blocking violation entries.

from datetime import datetime, timezone
from fastapi.testclient import TestClient

from src.app import app
from src.dependencies import get_clean_release_repository
from src.models.clean_release import (
    CleanProfilePolicy,
    ProfileType,
    ReleaseCandidate,
    ReleaseCandidateStatus,
    ResourceSourceEntry,
    ResourceSourceRegistry,
)
from src.services.clean_release.repository import CleanReleaseRepository


def _repo_with_seed_data() -> CleanReleaseRepository:
    repo = CleanReleaseRepository()

    repo.save_candidate(
        ReleaseCandidate(
            candidate_id="2026.03.03-rc1",
            version="2026.03.03",
            profile=ProfileType.ENTERPRISE_CLEAN,
            created_at=datetime.now(timezone.utc),
            created_by="tester",
            source_snapshot_ref="git:abc123",
            status=ReleaseCandidateStatus.DRAFT,
        )
    )

    repo.save_registry(
        ResourceSourceRegistry(
            registry_id="registry-internal-v1",
            name="Internal",
            entries=[
                ResourceSourceEntry(
                    source_id="src-1",
                    host="repo.intra.company.local",
                    protocol="https",
                    purpose="artifact-repo",
                    enabled=True,
                )
            ],
            updated_at=datetime.now(timezone.utc),
            updated_by="tester",
            status="active",
        )
    )

    repo.save_policy(
        CleanProfilePolicy(
            policy_id="policy-enterprise-clean-v1",
            policy_version="1.0.0",
            active=True,
            prohibited_artifact_categories=["test-data"],
            required_system_categories=["system-init"],
            external_source_forbidden=True,
            internal_source_registry_ref="registry-internal-v1",
            effective_from=datetime.now(timezone.utc),
            profile=ProfileType.ENTERPRISE_CLEAN,
        )
    )
    return repo


def test_prepare_candidate_blocks_external_source():
    repo = _repo_with_seed_data()
    app.dependency_overrides[get_clean_release_repository] = lambda: repo

    try:
        client = TestClient(app)
        response = client.post(
            "/api/clean-release/candidates/prepare",
            json={
                "candidate_id": "2026.03.03-rc1",
                "artifacts": [
                    {"path": "cfg/system.yaml", "category": "system-init", "reason": "required"}
                ],
                "sources": ["repo.intra.company.local", "pypi.org"],
                "operator_id": "release-manager",
            },
        )
        assert response.status_code == 200
        data = response.json()
        assert data["status"] == "blocked"
        assert any(v["category"] == "external-source" for v in data["violations"])
    finally:
        app.dependency_overrides.clear()
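The blocking-violation behavior exercised by `test_prepare_candidate_blocks_external_source` can be sketched in isolation. This is a minimal illustrative sketch, not the repository's implementation: `find_source_violations` and the `INTERNAL_HOSTS` allowlist are assumed names.

```python
# Hypothetical sketch: flag any requested source host that is not in the
# internal registry as a blocking "external-source" violation.
INTERNAL_HOSTS = {"repo.intra.company.local"}

def find_source_violations(sources):
    return [
        {"category": "external-source", "host": host, "blocking": True}
        for host in sources
        if host not in INTERNAL_HOSTS
    ]

# Mirrors the test payload: one internal host plus pypi.org, which should block.
violations = find_source_violations(["repo.intra.company.local", "pypi.org"])
```

With the test's source list, only `pypi.org` yields a violation entry, which is why the candidate ends up in `blocked` status.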
@@ -0,0 +1,93 @@
# [DEF:test_clean_release_v2_api:Module]
# @TIER: STANDARD
# @PURPOSE: API contract tests for redesigned clean release endpoints.
# @LAYER: Domain

from datetime import datetime, timezone
from types import SimpleNamespace
from uuid import uuid4

import pytest
from fastapi.testclient import TestClient

from src.app import app
from src.dependencies import get_clean_release_repository, get_config_manager
from src.models.clean_release import (
    CleanPolicySnapshot,
    DistributionManifest,
    ReleaseCandidate,
    SourceRegistrySnapshot,
)
from src.services.clean_release.enums import CandidateStatus

client = TestClient(app)

# [REASON] Implementing API contract tests for candidate/artifact/manifest endpoints (T012).
def test_candidate_registration_contract():
    """
    @TEST_SCENARIO: candidate_registration -> Should return 201 and candidate DTO.
    @TEST_CONTRACT: POST /api/v2/clean-release/candidates -> CandidateDTO
    """
    payload = {
        "id": "rc-test-001",
        "version": "1.0.0",
        "source_snapshot_ref": "git:sha123",
        "created_by": "test-user"
    }
    response = client.post("/api/v2/clean-release/candidates", json=payload)
    assert response.status_code == 201
    data = response.json()
    assert data["id"] == "rc-test-001"
    assert data["status"] == CandidateStatus.DRAFT.value


def test_artifact_import_contract():
    """
    @TEST_SCENARIO: artifact_import -> Should return 200 and success status.
    @TEST_CONTRACT: POST /api/v2/clean-release/candidates/{id}/artifacts -> SuccessDTO
    """
    candidate_id = "rc-test-001-art"
    bootstrap_candidate = {
        "id": candidate_id,
        "version": "1.0.0",
        "source_snapshot_ref": "git:sha123",
        "created_by": "test-user"
    }
    create_response = client.post("/api/v2/clean-release/candidates", json=bootstrap_candidate)
    assert create_response.status_code == 201

    payload = {
        "artifacts": [
            {
                "id": "art-1",
                "path": "bin/app.exe",
                "sha256": "hash123",
                "size": 1024
            }
        ]
    }
    response = client.post(f"/api/v2/clean-release/candidates/{candidate_id}/artifacts", json=payload)
    assert response.status_code == 200
    assert response.json()["status"] == "success"


def test_manifest_build_contract():
    """
    @TEST_SCENARIO: manifest_build -> Should return 201 and manifest DTO.
    @TEST_CONTRACT: POST /api/v2/clean-release/candidates/{id}/manifests -> ManifestDTO
    """
    candidate_id = "rc-test-001-manifest"
    bootstrap_candidate = {
        "id": candidate_id,
        "version": "1.0.0",
        "source_snapshot_ref": "git:sha123",
        "created_by": "test-user"
    }
    create_response = client.post("/api/v2/clean-release/candidates", json=bootstrap_candidate)
    assert create_response.status_code == 201

    response = client.post(f"/api/v2/clean-release/candidates/{candidate_id}/manifests")
    assert response.status_code == 201
    data = response.json()
    assert "manifest_digest" in data
    assert data["candidate_id"] == candidate_id

# [/DEF:test_clean_release_v2_api:Module]
@@ -0,0 +1,107 @@
# [DEF:test_clean_release_v2_release_api:Module]
# @TIER: STANDARD
# @PURPOSE: API contract test scaffolding for clean release approval and publication endpoints.
# @LAYER: Domain
# @RELATION: IMPLEMENTS -> clean_release_v2_release_api_contracts

"""Contract tests for redesigned approval/publication API endpoints."""

from datetime import datetime, timezone
from uuid import uuid4

from fastapi import FastAPI
from fastapi.testclient import TestClient

from src.api.routes.clean_release_v2 import router as clean_release_v2_router
from src.dependencies import get_clean_release_repository
from src.models.clean_release import ComplianceReport, ReleaseCandidate
from src.services.clean_release.enums import CandidateStatus, ComplianceDecision


test_app = FastAPI()
test_app.include_router(clean_release_v2_router)
client = TestClient(test_app)


def _seed_candidate_and_passed_report() -> tuple[str, str]:
    repository = get_clean_release_repository()
    candidate_id = f"api-release-candidate-{uuid4()}"
    report_id = f"api-release-report-{uuid4()}"

    repository.save_candidate(
        ReleaseCandidate(
            id=candidate_id,
            version="1.0.0",
            source_snapshot_ref="git:sha-api-release",
            created_by="api-test",
            created_at=datetime.now(timezone.utc),
            status=CandidateStatus.CHECK_PASSED.value,
        )
    )
    repository.save_report(
        ComplianceReport(
            id=report_id,
            run_id=f"run-{uuid4()}",
            candidate_id=candidate_id,
            final_status=ComplianceDecision.PASSED.value,
            summary_json={"operator_summary": "ok", "violations_count": 0, "blocking_violations_count": 0},
            generated_at=datetime.now(timezone.utc),
            immutable=True,
        )
    )
    return candidate_id, report_id


def test_release_approve_and_publish_revoke_contract() -> None:
    """Contract for approve -> publish -> revoke lifecycle endpoints."""
    candidate_id, report_id = _seed_candidate_and_passed_report()

    approve_response = client.post(
        f"/api/v2/clean-release/candidates/{candidate_id}/approve",
        json={"report_id": report_id, "decided_by": "api-test", "comment": "approved"},
    )
    assert approve_response.status_code == 200
    approve_payload = approve_response.json()
    assert approve_payload["status"] == "ok"
    assert approve_payload["decision"] == "APPROVED"

    publish_response = client.post(
        f"/api/v2/clean-release/candidates/{candidate_id}/publish",
        json={
            "report_id": report_id,
            "published_by": "api-test",
            "target_channel": "stable",
            "publication_ref": "rel-api-001",
        },
    )
    assert publish_response.status_code == 200
    publish_payload = publish_response.json()
    assert publish_payload["status"] == "ok"
    assert publish_payload["publication"]["status"] == "ACTIVE"

    publication_id = publish_payload["publication"]["id"]
    revoke_response = client.post(
        f"/api/v2/clean-release/publications/{publication_id}/revoke",
        json={"revoked_by": "api-test", "comment": "rollback"},
    )
    assert revoke_response.status_code == 200
    revoke_payload = revoke_response.json()
    assert revoke_payload["status"] == "ok"
    assert revoke_payload["publication"]["status"] == "REVOKED"


def test_release_reject_contract() -> None:
    """Contract for reject endpoint."""
    candidate_id, report_id = _seed_candidate_and_passed_report()

    reject_response = client.post(
        f"/api/v2/clean-release/candidates/{candidate_id}/reject",
        json={"report_id": report_id, "decided_by": "api-test", "comment": "rejected"},
    )
    assert reject_response.status_code == 200
    payload = reject_response.json()
    assert payload["status"] == "ok"
    assert payload["decision"] == "REJECTED"


# [/DEF:test_clean_release_v2_release_api:Module]
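The approve, publish, and revoke contract tests walk a candidate through a small lifecycle. A minimal state-machine sketch of those transitions follows; the state and action names mirror the tests, but the transition table and `apply_action` helper are illustrative, not the service's actual implementation.

```python
# Illustrative transition table for the publication lifecycle exercised by the
# contract tests above (assumed, not taken from the service code).
TRANSITIONS = {
    ("CHECK_PASSED", "approve"): "APPROVED",
    ("CHECK_PASSED", "reject"): "REJECTED",
    ("APPROVED", "publish"): "ACTIVE",
    ("ACTIVE", "revoke"): "REVOKED",
}

def apply_action(state, action):
    # Reject any transition not whitelisted in the table.
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"illegal transition: {state} -> {action}")

# The happy path from test_release_approve_and_publish_revoke_contract:
state = apply_action("CHECK_PASSED", "approve")
state = apply_action(state, "publish")
state = apply_action(state, "revoke")
```

Modeling the lifecycle as an explicit table makes the "blocked unless CHECK_PASSED" invariant easy to see: there is simply no edge from `DRAFT` to `APPROVED`.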
@@ -11,9 +11,12 @@ from fastapi.testclient import TestClient
from src.app import app
from src.api.routes.dashboards import DashboardsResponse
from src.dependencies import get_current_user, has_permission, get_config_manager, get_task_manager, get_resource_service, get_mapping_service
from src.core.database import get_db
from src.services.profile_service import ProfileService as DomainProfileService

# Global mock user for get_current_user dependency overrides
mock_user = MagicMock()
mock_user.id = "u-1"
mock_user.username = "testuser"
mock_user.roles = []
admin_role = MagicMock()
@@ -27,11 +30,14 @@ def mock_deps():
    resource_service = MagicMock()
    mapping_service = MagicMock()

    db = MagicMock()

    app.dependency_overrides[get_config_manager] = lambda: config_manager
    app.dependency_overrides[get_task_manager] = lambda: task_manager
    app.dependency_overrides[get_resource_service] = lambda: resource_service
    app.dependency_overrides[get_mapping_service] = lambda: mapping_service
    app.dependency_overrides[get_current_user] = lambda: mock_user
    app.dependency_overrides[get_db] = lambda: db

    app.dependency_overrides[has_permission("plugin:migration", "READ")] = lambda: mock_user
    app.dependency_overrides[has_permission("plugin:migration", "EXECUTE")] = lambda: mock_user
@@ -42,7 +48,8 @@ def mock_deps():
        "config": config_manager,
        "task": task_manager,
        "resource": resource_service,
        "mapping": mapping_service
        "mapping": mapping_service,
        "db": db,
    }
    app.dependency_overrides.clear()
@@ -97,17 +104,17 @@ def test_get_dashboards_with_search(mock_deps):
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []

    async def mock_get_dashboards(env, tasks):
    async def mock_get_dashboards(env, tasks, include_git_status=False):
        return [
            {"id": 1, "title": "Sales Report", "slug": "sales"},
            {"id": 2, "title": "Marketing Dashboard", "slug": "marketing"}
            {"id": 1, "title": "Sales Report", "slug": "sales", "git_status": {"branch": "main", "sync_status": "OK"}, "last_task": None},
            {"id": 2, "title": "Marketing Dashboard", "slug": "marketing", "git_status": {"branch": "main", "sync_status": "OK"}, "last_task": None}
        ]
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(
        side_effect=mock_get_dashboards
    )

    response = client.get("/api/dashboards?env_id=prod&search=sales")

    assert response.status_code == 200
    data = response.json()
    # @POST: Filtered result count must match search
@@ -495,4 +502,376 @@ def test_get_dashboard_thumbnail_success(mock_deps):
# [/DEF:test_get_dashboard_thumbnail_success:Function]

# [DEF:_build_profile_preference_stub:Function]
# @PURPOSE: Creates profile preference payload stub for dashboards filter contract tests.
# @PRE: username can be empty; enabled indicates profile-default toggle state.
# @POST: Returns object compatible with ProfileService.get_my_preference contract.
def _build_profile_preference_stub(username: str, enabled: bool):
    preference = MagicMock()
    preference.superset_username = username
    preference.superset_username_normalized = str(username or "").strip().lower() or None
    preference.show_only_my_dashboards = bool(enabled)

    payload = MagicMock()
    payload.preference = preference
    return payload
# [/DEF:_build_profile_preference_stub:Function]


# [DEF:_matches_actor_case_insensitive:Function]
# @PURPOSE: Applies trim + case-insensitive owners OR modified_by matching used by route contract tests.
# @PRE: owners can be None or list-like values.
# @POST: Returns True when bound username matches any owner or modified_by.
def _matches_actor_case_insensitive(bound_username, owners, modified_by):
    normalized_bound = str(bound_username or "").strip().lower()
    if not normalized_bound:
        return False

    owner_tokens = []
    for owner in owners or []:
        token = str(owner or "").strip().lower()
        if token:
            owner_tokens.append(token)

    modified_token = str(modified_by or "").strip().lower()
    return normalized_bound in owner_tokens or bool(modified_token and modified_token == normalized_bound)
# [/DEF:_matches_actor_case_insensitive:Function]

# [DEF:test_get_dashboards_profile_filter_contract_owners_or_modified_by:Function]
# @TEST: GET /api/dashboards applies profile-default filter with owners OR modified_by trim+case-insensitive semantics.
# @PRE: Current user has enabled profile-default preference and bound username.
# @POST: Response includes only matching dashboards and effective_profile_filter metadata.
def test_get_dashboards_profile_filter_contract_owners_or_modified_by(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
        {
            "id": 1,
            "title": "Owner Match",
            "slug": "owner-match",
            "owners": [" John_Doe "],
            "modified_by": "someone_else",
        },
        {
            "id": 2,
            "title": "Modifier Match",
            "slug": "modifier-match",
            "owners": ["analytics-team"],
            "modified_by": " JOHN_DOE ",
        },
        {
            "id": 3,
            "title": "No Match",
            "slug": "no-match",
            "owners": ["another-user"],
            "modified_by": "nobody",
        },
    ])

    with patch("src.api.routes.dashboards.ProfileService") as profile_service_cls:
        profile_service = MagicMock()
        profile_service.get_my_preference.return_value = _build_profile_preference_stub(
            username=" JOHN_DOE ",
            enabled=True,
        )
        profile_service.matches_dashboard_actor.side_effect = _matches_actor_case_insensitive
        profile_service_cls.return_value = profile_service

        response = client.get(
            "/api/dashboards?env_id=prod&page_context=dashboards_main&apply_profile_default=true"
        )

        assert response.status_code == 200
        payload = response.json()

        assert payload["total"] == 2
        assert {item["id"] for item in payload["dashboards"]} == {1, 2}
        assert payload["effective_profile_filter"]["applied"] is True
        assert payload["effective_profile_filter"]["source_page"] == "dashboards_main"
        assert payload["effective_profile_filter"]["override_show_all"] is False
        assert payload["effective_profile_filter"]["username"] == "john_doe"
        assert payload["effective_profile_filter"]["match_logic"] == "owners_or_modified_by"
# [/DEF:test_get_dashboards_profile_filter_contract_owners_or_modified_by:Function]

# [DEF:test_get_dashboards_override_show_all_contract:Function]
# @TEST: GET /api/dashboards honors override_show_all and disables profile-default filter for current page.
# @PRE: Profile-default preference exists but override_show_all=true query is provided.
# @POST: Response remains unfiltered and effective_profile_filter.applied is false.
def test_get_dashboards_override_show_all_contract(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
        {"id": 1, "title": "Dash A", "slug": "dash-a", "owners": ["john_doe"], "modified_by": "john_doe"},
        {"id": 2, "title": "Dash B", "slug": "dash-b", "owners": ["other"], "modified_by": "other"},
    ])

    with patch("src.api.routes.dashboards.ProfileService") as profile_service_cls:
        profile_service = MagicMock()
        profile_service.get_my_preference.return_value = _build_profile_preference_stub(
            username="john_doe",
            enabled=True,
        )
        profile_service.matches_dashboard_actor.side_effect = _matches_actor_case_insensitive
        profile_service_cls.return_value = profile_service

        response = client.get(
            "/api/dashboards?env_id=prod&page_context=dashboards_main&apply_profile_default=true&override_show_all=true"
        )

        assert response.status_code == 200
        payload = response.json()

        assert payload["total"] == 2
        assert {item["id"] for item in payload["dashboards"]} == {1, 2}
        assert payload["effective_profile_filter"]["applied"] is False
        assert payload["effective_profile_filter"]["source_page"] == "dashboards_main"
        assert payload["effective_profile_filter"]["override_show_all"] is True
        assert payload["effective_profile_filter"]["username"] is None
        assert payload["effective_profile_filter"]["match_logic"] is None
        profile_service.matches_dashboard_actor.assert_not_called()
# [/DEF:test_get_dashboards_override_show_all_contract:Function]

# [DEF:test_get_dashboards_profile_filter_no_match_results_contract:Function]
# @TEST: GET /api/dashboards returns empty result set when profile-default filter is active and no dashboard actors match.
# @PRE: Profile-default preference is enabled with bound username and all dashboards are non-matching.
# @POST: Response total is 0 with deterministic pagination and active effective_profile_filter metadata.
def test_get_dashboards_profile_filter_no_match_results_contract(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
        {
            "id": 101,
            "title": "Team Dashboard",
            "slug": "team-dashboard",
            "owners": ["analytics-team"],
            "modified_by": "someone_else",
        },
        {
            "id": 102,
            "title": "Ops Dashboard",
            "slug": "ops-dashboard",
            "owners": ["ops-user"],
            "modified_by": "ops-user",
        },
    ])

    with patch("src.api.routes.dashboards.ProfileService") as profile_service_cls:
        profile_service = MagicMock()
        profile_service.get_my_preference.return_value = _build_profile_preference_stub(
            username="john_doe",
            enabled=True,
        )
        profile_service.matches_dashboard_actor.side_effect = _matches_actor_case_insensitive
        profile_service_cls.return_value = profile_service

        response = client.get(
            "/api/dashboards?env_id=prod&page_context=dashboards_main&apply_profile_default=true"
        )

        assert response.status_code == 200
        payload = response.json()

        assert payload["total"] == 0
        assert payload["dashboards"] == []
        assert payload["page"] == 1
        assert payload["page_size"] == 10
        assert payload["total_pages"] == 1
        assert payload["effective_profile_filter"]["applied"] is True
        assert payload["effective_profile_filter"]["source_page"] == "dashboards_main"
        assert payload["effective_profile_filter"]["override_show_all"] is False
        assert payload["effective_profile_filter"]["username"] == "john_doe"
        assert payload["effective_profile_filter"]["match_logic"] == "owners_or_modified_by"
# [/DEF:test_get_dashboards_profile_filter_no_match_results_contract:Function]

# [DEF:test_get_dashboards_page_context_other_disables_profile_default:Function]
# @TEST: GET /api/dashboards does not auto-apply profile-default filter outside dashboards_main page context.
# @PRE: Profile-default preference exists but page_context=other query is provided.
# @POST: Response remains unfiltered and metadata reflects source_page=other.
def test_get_dashboards_page_context_other_disables_profile_default(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
        {"id": 1, "title": "Dash A", "slug": "dash-a", "owners": ["john_doe"], "modified_by": "john_doe"},
        {"id": 2, "title": "Dash B", "slug": "dash-b", "owners": ["other"], "modified_by": "other"},
    ])

    with patch("src.api.routes.dashboards.ProfileService") as profile_service_cls:
        profile_service = MagicMock()
        profile_service.get_my_preference.return_value = _build_profile_preference_stub(
            username="john_doe",
            enabled=True,
        )
        profile_service.matches_dashboard_actor.side_effect = _matches_actor_case_insensitive
        profile_service_cls.return_value = profile_service

        response = client.get(
            "/api/dashboards?env_id=prod&page_context=other&apply_profile_default=true"
        )

        assert response.status_code == 200
        payload = response.json()

        assert payload["total"] == 2
        assert {item["id"] for item in payload["dashboards"]} == {1, 2}
        assert payload["effective_profile_filter"]["applied"] is False
        assert payload["effective_profile_filter"]["source_page"] == "other"
        assert payload["effective_profile_filter"]["override_show_all"] is False
        assert payload["effective_profile_filter"]["username"] is None
        assert payload["effective_profile_filter"]["match_logic"] is None
        profile_service.matches_dashboard_actor.assert_not_called()
# [/DEF:test_get_dashboards_page_context_other_disables_profile_default:Function]

# [DEF:test_get_dashboards_profile_filter_matches_display_alias_without_detail_fanout:Function]
# @TEST: GET /api/dashboards resolves Superset display-name alias once and filters without per-dashboard detail calls.
# @PRE: Profile-default filter is active, bound username is `admin`, dashboard actors contain display labels.
# @POST: Route matches by alias (`Superset Admin`) and does not call `SupersetClient.get_dashboard` in list filter path.
def test_get_dashboards_profile_filter_matches_display_alias_without_detail_fanout(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
        {
            "id": 5,
            "title": "Alias Match",
            "slug": "alias-match",
            "owners": [],
            "created_by": None,
            "modified_by": "Superset Admin",
        },
        {
            "id": 6,
            "title": "Alias No Match",
            "slug": "alias-no-match",
            "owners": [],
            "created_by": None,
            "modified_by": "Other User",
        },
    ])

    with patch("src.api.routes.dashboards.ProfileService") as profile_service_cls, patch(
        "src.api.routes.dashboards.SupersetClient"
    ) as superset_client_cls, patch(
        "src.api.routes.dashboards.SupersetAccountLookupAdapter"
    ) as lookup_adapter_cls:
        profile_service = MagicMock()
        profile_service.get_my_preference.return_value = _build_profile_preference_stub(
            username="admin",
            enabled=True,
        )
        profile_service.matches_dashboard_actor.side_effect = _matches_actor_case_insensitive
        profile_service_cls.return_value = profile_service

        superset_client = MagicMock()
        superset_client_cls.return_value = superset_client

        lookup_adapter = MagicMock()
        lookup_adapter.get_users_page.return_value = {
            "items": [
                {
                    "environment_id": "prod",
                    "username": "admin",
                    "display_name": "Superset Admin",
                    "email": "admin@example.com",
                    "is_active": True,
                }
            ],
            "total": 1,
        }
        lookup_adapter_cls.return_value = lookup_adapter

        response = client.get(
            "/api/dashboards?env_id=prod&page_context=dashboards_main&apply_profile_default=true"
        )

        assert response.status_code == 200
        payload = response.json()
        assert payload["total"] == 1
        assert {item["id"] for item in payload["dashboards"]} == {5}
        assert payload["effective_profile_filter"]["applied"] is True
        lookup_adapter.get_users_page.assert_called_once()
        superset_client.get_dashboard.assert_not_called()
# [/DEF:test_get_dashboards_profile_filter_matches_display_alias_without_detail_fanout:Function]

# [DEF:test_get_dashboards_profile_filter_matches_owner_object_payload_contract:Function]
# @TEST: GET /api/dashboards profile-default filter matches Superset owner object payloads.
# @PRE: Profile-default preference is enabled and owners list contains dict payloads.
# @POST: Response keeps dashboards where owner object resolves to bound username alias.
def test_get_dashboards_profile_filter_matches_owner_object_payload_contract(mock_deps):
    mock_env = MagicMock()
    mock_env.id = "prod"
    mock_deps["config"].get_environments.return_value = [mock_env]
    mock_deps["task"].get_all_tasks.return_value = []
    mock_deps["resource"].get_dashboards_with_status = AsyncMock(return_value=[
        {
            "id": 701,
            "title": "Featured Charts",
            "slug": "featured-charts",
            "owners": [
                {
                    "id": 11,
                    "first_name": "user",
                    "last_name": "1",
                    "username": None,
                    "email": "user_1@example.local",
                }
            ],
            "modified_by": "another_user",
        },
        {
            "id": 702,
            "title": "Other Dashboard",
            "slug": "other-dashboard",
            "owners": [
                {
                    "id": 12,
                    "first_name": "other",
                    "last_name": "user",
                    "username": None,
                    "email": "other@example.local",
                }
            ],
            "modified_by": "other_user",
        },
    ])

    with patch("src.api.routes.dashboards.ProfileService") as profile_service_cls, patch(
        "src.api.routes.dashboards._resolve_profile_actor_aliases",
        return_value=["user_1"],
    ):
        profile_service = DomainProfileService(db=MagicMock(), config_manager=MagicMock())
        profile_service.get_my_preference = MagicMock(
            return_value=_build_profile_preference_stub(
                username="user_1",
                enabled=True,
            )
        )
        profile_service_cls.return_value = profile_service

        response = client.get(
            "/api/dashboards?env_id=prod&page_context=dashboards_main&apply_profile_default=true"
        )

        assert response.status_code == 200
        payload = response.json()
        assert payload["total"] == 1
        assert {item["id"] for item in payload["dashboards"]} == {701}
        assert payload["dashboards"][0]["title"] == "Featured Charts"
# [/DEF:test_get_dashboards_profile_filter_matches_owner_object_payload_contract:Function]


# [/DEF:backend.src.api.routes.__tests__.test_dashboards:Module]
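The display-alias test above asserts a specific performance shape: the route resolves the bound username's display-name alias with one user-directory lookup and then filters the dashboard list in memory, with no per-dashboard detail calls. A minimal sketch of that shape follows; `USER_DIRECTORY`, `resolve_aliases`, and `filter_dashboards` are illustrative names, not the route's actual helpers.

```python
# Illustrative: resolve aliases once, then filter purely in memory.
USER_DIRECTORY = {"admin": {"display_name": "Superset Admin"}}

def resolve_aliases(username):
    # One lookup yields the normalized username plus its display-name alias.
    aliases = {username.strip().lower()}
    entry = USER_DIRECTORY.get(username)
    if entry:
        aliases.add(entry["display_name"].strip().lower())
    return aliases

def filter_dashboards(dashboards, username):
    aliases = resolve_aliases(username)
    return [
        d for d in dashboards
        if str(d.get("modified_by") or "").strip().lower() in aliases
    ]

dashboards = [
    {"id": 5, "modified_by": "Superset Admin"},
    {"id": 6, "modified_by": "Other User"},
]
kept = filter_dashboards(dashboards, "admin")
```

Because the alias set is computed once up front, filtering N dashboards costs one directory call instead of N detail calls, which is exactly what `get_users_page.assert_called_once()` and `get_dashboard.assert_not_called()` pin down.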
310 backend/src/api/routes/__tests__/test_git_api.py (Normal file)
@@ -0,0 +1,310 @@
|
||||
# [DEF:backend.src.api.routes.__tests__.test_git_api:Module]
|
||||
# @RELATION: VERIFIES -> src.api.routes.git
|
||||
# @PURPOSE: API tests for Git configurations and repository operations.
|
||||
|
||||
import pytest
|
||||
import asyncio
|
||||
from unittest.mock import MagicMock
|
||||
from fastapi import HTTPException
|
||||
from src.api.routes import git as git_routes
|
||||
from src.models.git import GitServerConfig, GitProvider, GitStatus, GitRepository
|
||||
|
||||
class DbMock:
|
||||
def __init__(self, data=None):
|
||||
self._data = data or []
|
||||
self._deleted = []
|
||||
self._added = []
|
||||
|
||||
def query(self, model):
|
||||
self._model = model
|
||||
return self
|
||||
|
||||
def filter(self, condition):
|
||||
# Simplistic mocking for tests, assuming equality checks
|
||||
for item in self._data:
|
||||
# We assume condition is an equality expression like GitServerConfig.id == "123"
|
||||
# It's hard to eval the condition exactly in a mock without complex parsing,
|
||||
# so we'll just return items where type matches.
|
||||
pass
|
||||
return self
|
||||
|
||||
def first(self):
|
||||
for item in self._data:
|
||||
if hasattr(self, "_model") and isinstance(item, self._model):
|
||||
return item
|
||||
return None
|
||||
|
||||
def all(self):
|
||||
return self._data
|
||||
|
||||
def add(self, item):
|
||||
self._added.append(item)
|
||||
if not hasattr(item, "id") or not item.id:
|
||||
item.id = "mocked-id"
|
||||
self._data.append(item)
|
||||
|
||||
def delete(self, item):
|
||||
self._deleted.append(item)
|
||||
if item in self._data:
|
||||
self._data.remove(item)
|
||||
|
||||
def commit(self):
|
||||
pass
|
||||
|
||||
def refresh(self, item):
|
||||
if not hasattr(item, "status"):
|
||||
item.status = GitStatus.CONNECTED
|
||||
if not hasattr(item, "last_validated"):
|
||||
item.last_validated = "2026-03-08T00:00:00Z"
|
||||
|
||||
def test_get_git_configs_masks_pat():
    """
    @PRE: Database session `db` is available.
    @POST: Returns a list of all GitServerConfig objects from the database with PAT masked.
    """
    db = DbMock([GitServerConfig(
        id="config-1", name="Test Server", provider=GitProvider.GITHUB,
        url="https://github.com", pat="secret-token",
        status=GitStatus.CONNECTED, last_validated="2026-03-08T00:00:00Z"
    )])

    result = asyncio.run(git_routes.get_git_configs(db=db))

    assert len(result) == 1
    assert result[0].pat == "********"
    assert result[0].name == "Test Server"


def test_create_git_config_persists_config():
    """
    @PRE: `config` contains valid GitServerConfigCreate data.
    @POST: A new GitServerConfig record is created in the database.
    """
    from src.api.routes.git_schemas import GitServerConfigCreate
    db = DbMock()
    config = GitServerConfigCreate(
        name="New Server", provider=GitProvider.GITLAB,
        url="https://gitlab.com", pat="new-token",
        default_branch="master"
    )

    result = asyncio.run(git_routes.create_git_config(config=config, db=db))

    assert len(db._added) == 1
    assert db._added[0].name == "New Server"
    assert db._added[0].pat == "new-token"
    assert result.name == "New Server"
    # The route returns the PAT unmasked; masking happens when FastAPI serializes
    # the response through the schema, which this unit test bypasses.
    assert result.pat == "new-token"


from src.api.routes.git_schemas import GitServerConfigUpdate


def test_update_git_config_modifies_record():
    """
    @PRE: `config_id` corresponds to an existing configuration.
    @POST: The configuration record is updated in the database, preserving PAT if masked is sent.
    """
    existing_config = GitServerConfig(
        id="config-1", name="Old Server", provider=GitProvider.GITHUB,
        url="https://github.com", pat="old-token",
        status=GitStatus.CONNECTED, last_validated="2026-03-08T00:00:00Z"
    )

    # The stubbed query always returns existing_config.
    class SingleConfigDbMock:
        def query(self, *args): return self
        def filter(self, *args): return self
        def first(self): return existing_config
        def commit(self): pass
        def refresh(self, config): pass

    db = SingleConfigDbMock()
    update_data = GitServerConfigUpdate(name="Updated Server", pat="********")

    result = asyncio.run(git_routes.update_git_config(config_id="config-1", config_update=update_data, db=db))

    assert existing_config.name == "Updated Server"
    assert existing_config.pat == "old-token"  # Ensure PAT is not overwritten with asterisks
    assert result.pat == "********"


def test_update_git_config_raises_404_if_not_found():
    """
    @PRE: `config_id` corresponds to a missing configuration.
    @THROW: HTTPException 404
    """
    db = DbMock([])  # Empty db
    update_data = GitServerConfigUpdate(name="Updated Server", pat="new-token")

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.update_git_config(config_id="config-1", config_update=update_data, db=db))

    assert exc_info.value.status_code == 404
    assert exc_info.value.detail == "Configuration not found"


def test_delete_git_config_removes_record():
    """
    @PRE: `config_id` corresponds to an existing configuration.
    @POST: The configuration record is removed from the database.
    """
    existing_config = GitServerConfig(id="config-1")

    class SingleConfigDbMock:
        def query(self, *args): return self
        def filter(self, *args): return self
        def first(self): return existing_config
        def delete(self, config): self.deleted = config
        def commit(self): pass

    db = SingleConfigDbMock()

    result = asyncio.run(git_routes.delete_git_config(config_id="config-1", db=db))

    assert db.deleted == existing_config
    assert result["status"] == "success"


def test_test_git_config_validates_connection_successfully(monkeypatch):
    """
    @PRE: `config` contains provider, url, and pat.
    @POST: Returns success if the connection is validated via GitService.
    """
    class MockGitService:
        async def test_connection(self, provider, url, pat):
            return True

    monkeypatch.setattr(git_routes, "git_service", MockGitService())
    from src.api.routes.git_schemas import GitServerConfigCreate

    config = GitServerConfigCreate(
        name="Test Server", provider=GitProvider.GITHUB,
        url="https://github.com", pat="test-pat"
    )
    db = DbMock([])

    result = asyncio.run(git_routes.test_git_config(config=config, db=db))

    assert result["status"] == "success"


def test_test_git_config_fails_validation(monkeypatch):
    """
    @PRE: `config` contains provider, url, and pat BUT connection fails.
    @THROW: HTTPException 400
    """
    class MockGitService:
        async def test_connection(self, provider, url, pat):
            return False

    monkeypatch.setattr(git_routes, "git_service", MockGitService())
    from src.api.routes.git_schemas import GitServerConfigCreate

    config = GitServerConfigCreate(
        name="Test Server", provider=GitProvider.GITHUB,
        url="https://github.com", pat="bad-pat"
    )
    db = DbMock([])

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.test_git_config(config=config, db=db))

    assert exc_info.value.status_code == 400
    assert exc_info.value.detail == "Connection failed"


def test_list_gitea_repositories_returns_payload(monkeypatch):
    """
    @PRE: config_id exists and provider is GITEA.
    @POST: Returns repositories visible to PAT user.
    """
    class MockGitService:
        async def list_gitea_repositories(self, url, pat):
            return [{"name": "test-repo", "full_name": "owner/test-repo", "private": True}]

    monkeypatch.setattr(git_routes, "git_service", MockGitService())
    existing_config = GitServerConfig(
        id="config-1", name="Gitea Server", provider=GitProvider.GITEA,
        url="https://gitea.local", pat="gitea-token"
    )
    db = DbMock([existing_config])

    result = asyncio.run(git_routes.list_gitea_repositories(config_id="config-1", db=db))

    assert len(result) == 1
    assert result[0].name == "test-repo"
    assert result[0].private is True


def test_list_gitea_repositories_rejects_non_gitea(monkeypatch):
    """
    @PRE: config_id exists and provider is NOT GITEA.
    @THROW: HTTPException 400
    """
    existing_config = GitServerConfig(
        id="config-1", name="GitHub Server", provider=GitProvider.GITHUB,
        url="https://github.com", pat="token"
    )
    db = DbMock([existing_config])

    with pytest.raises(HTTPException) as exc_info:
        asyncio.run(git_routes.list_gitea_repositories(config_id="config-1", db=db))

    assert exc_info.value.status_code == 400
    assert "GITEA provider only" in exc_info.value.detail


def test_create_remote_repository_creates_provider_repo(monkeypatch):
    """
    @PRE: config_id exists and PAT has creation permissions.
    @POST: Returns normalized remote repository payload.
    """
    class MockGitService:
        async def create_gitlab_repository(self, server_url, pat, name, private, description, auto_init, default_branch):
            return {
                "name": name,
                "full_name": f"user/{name}",
                "private": private,
                "clone_url": f"{server_url}/user/{name}.git"
            }

    monkeypatch.setattr(git_routes, "git_service", MockGitService())
    from src.api.routes.git_schemas import RemoteRepoCreateRequest

    existing_config = GitServerConfig(
        id="config-1", name="GitLab Server", provider=GitProvider.GITLAB,
        url="https://gitlab.com", pat="token"
    )
    db = DbMock([existing_config])

    request = RemoteRepoCreateRequest(name="new-repo", private=True, description="desc")
    result = asyncio.run(git_routes.create_remote_repository(config_id="config-1", request=request, db=db))

    assert result.provider == GitProvider.GITLAB
    assert result.name == "new-repo"
    assert result.full_name == "user/new-repo"


def test_init_repository_initializes_and_saves_binding(monkeypatch):
    """
    @PRE: `dashboard_ref` exists and `init_data` contains valid config_id and remote_url.
    @POST: Repository is initialized on disk and a GitRepository record is saved in DB.
    """
    from src.api.routes.git_schemas import RepoInitRequest

    class MockGitService:
        def init_repo(self, dashboard_id, remote_url, pat, repo_key, default_branch):
            self.init_called = True

        def _get_repo_path(self, dashboard_id, repo_key):
            return f"/tmp/repos/{repo_key}"

    git_service_mock = MockGitService()
    monkeypatch.setattr(git_routes, "git_service", git_service_mock)
    monkeypatch.setattr(git_routes, "_resolve_dashboard_id_from_ref", lambda *args, **kwargs: 123)
    monkeypatch.setattr(git_routes, "_resolve_repo_key_from_ref", lambda *args, **kwargs: "dashboard-123")

    existing_config = GitServerConfig(
        id="config-1", name="GitLab Server", provider=GitProvider.GITLAB,
        url="https://gitlab.com", pat="token", default_branch="main"
    )
    db = DbMock([existing_config])

    init_data = RepoInitRequest(config_id="config-1", remote_url="https://git.local/repo.git")

    result = asyncio.run(git_routes.init_repository(dashboard_ref="123", init_data=init_data, config_manager=MagicMock(), db=db))

    assert result["status"] == "success"
    assert git_service_mock.init_called is True
    assert len(db._added) == 1
    assert isinstance(db._added[0], GitRepository)
    assert db._added[0].dashboard_id == 123


# [/DEF:backend.src.api.routes.__tests__.test_git_api:Module]

@@ -8,6 +8,7 @@
from fastapi import HTTPException
import pytest
import asyncio
from unittest.mock import MagicMock

from src.api.routes import git as git_routes

@@ -195,4 +196,245 @@ def test_get_repository_status_batch_deduplicates_and_truncates_ids(monkeypatch)
    assert "1" in response.statuses
# [/DEF:test_get_repository_status_batch_deduplicates_and_truncates_ids:Function]

# [DEF:test_commit_changes_applies_profile_identity_before_commit:Function]
# @PURPOSE: Ensure commit route configures repository identity from profile preferences before commit call.
# @PRE: Profile preference contains git_username/git_email for current user.
# @POST: git_service.configure_identity receives resolved identity and commit proceeds.
def test_commit_changes_applies_profile_identity_before_commit(monkeypatch):
    class IdentityGitService:
        def __init__(self):
            self.configured_identity = None
            self.commit_payload = None

        def configure_identity(self, dashboard_id: int, git_username: str, git_email: str):
            self.configured_identity = (dashboard_id, git_username, git_email)

        def commit_changes(self, dashboard_id: int, message: str, files):
            self.commit_payload = (dashboard_id, message, files)

    class PreferenceRow:
        git_username = "user_1"
        git_email = "user1@mail.ru"

    class PreferenceQuery:
        def filter(self, *_args, **_kwargs):
            return self

        def first(self):
            return PreferenceRow()

    class DbStub:
        def query(self, _model):
            return PreferenceQuery()

    class UserStub:
        id = "u-1"

    class CommitPayload:
        message = "test"
        files = ["dashboards/a.yaml"]

    identity_service = IdentityGitService()
    monkeypatch.setattr(git_routes, "git_service", identity_service)
    monkeypatch.setattr(
        git_routes,
        "_resolve_dashboard_id_from_ref",
        lambda *_args, **_kwargs: 12,
    )

    asyncio.run(
        git_routes.commit_changes(
            "dashboard-12",
            CommitPayload(),
            config_manager=MagicMock(),
            db=DbStub(),
            current_user=UserStub(),
        )
    )

    assert identity_service.configured_identity == (12, "user_1", "user1@mail.ru")
    assert identity_service.commit_payload == (12, "test", ["dashboards/a.yaml"])
# [/DEF:test_commit_changes_applies_profile_identity_before_commit:Function]


# [DEF:test_pull_changes_applies_profile_identity_before_pull:Function]
# @PURPOSE: Ensure pull route configures repository identity from profile preferences before pull call.
# @PRE: Profile preference contains git_username/git_email for current user.
# @POST: git_service.configure_identity receives resolved identity and pull proceeds.
def test_pull_changes_applies_profile_identity_before_pull(monkeypatch):
    class IdentityGitService:
        def __init__(self):
            self.configured_identity = None
            self.pulled_dashboard_id = None

        def configure_identity(self, dashboard_id: int, git_username: str, git_email: str):
            self.configured_identity = (dashboard_id, git_username, git_email)

        def pull_changes(self, dashboard_id: int):
            self.pulled_dashboard_id = dashboard_id

    class PreferenceRow:
        git_username = "user_1"
        git_email = "user1@mail.ru"

    class PreferenceQuery:
        def filter(self, *_args, **_kwargs):
            return self

        def first(self):
            return PreferenceRow()

    class DbStub:
        def query(self, _model):
            return PreferenceQuery()

    class UserStub:
        id = "u-1"

    identity_service = IdentityGitService()
    monkeypatch.setattr(git_routes, "git_service", identity_service)
    monkeypatch.setattr(
        git_routes,
        "_resolve_dashboard_id_from_ref",
        lambda *_args, **_kwargs: 12,
    )

    asyncio.run(
        git_routes.pull_changes(
            "dashboard-12",
            config_manager=MagicMock(),
            db=DbStub(),
            current_user=UserStub(),
        )
    )

    assert identity_service.configured_identity == (12, "user_1", "user1@mail.ru")
    assert identity_service.pulled_dashboard_id == 12
# [/DEF:test_pull_changes_applies_profile_identity_before_pull:Function]


# [DEF:test_get_merge_status_returns_service_payload:Function]
# @PURPOSE: Ensure merge status route returns service payload as-is.
# @PRE: git_service.get_merge_status returns unfinished merge payload.
# @POST: Route response contains has_unfinished_merge=True.
def test_get_merge_status_returns_service_payload(monkeypatch):
    class MergeStatusGitService:
        def get_merge_status(self, dashboard_id: int) -> dict:
            return {
                "has_unfinished_merge": True,
                "repository_path": "/tmp/repo-12",
                "git_dir": "/tmp/repo-12/.git",
                "current_branch": "dev",
                "merge_head": "abc",
                "merge_message_preview": "merge msg",
                "conflicts_count": 2,
            }

    monkeypatch.setattr(git_routes, "git_service", MergeStatusGitService())
    monkeypatch.setattr(git_routes, "_resolve_dashboard_id_from_ref", lambda *_args, **_kwargs: 12)

    response = asyncio.run(
        git_routes.get_merge_status(
            "dashboard-12",
            config_manager=MagicMock(),
        )
    )

    assert response["has_unfinished_merge"] is True
    assert response["conflicts_count"] == 2
# [/DEF:test_get_merge_status_returns_service_payload:Function]


# [DEF:test_resolve_merge_conflicts_passes_resolution_items_to_service:Function]
# @PURPOSE: Ensure merge resolve route forwards parsed resolutions to service.
# @PRE: resolve_data has one file strategy.
# @POST: Service receives normalized list and route returns resolved files.
def test_resolve_merge_conflicts_passes_resolution_items_to_service(monkeypatch):
    captured = {}

    class MergeResolveGitService:
        def resolve_merge_conflicts(self, dashboard_id: int, resolutions):
            captured["dashboard_id"] = dashboard_id
            captured["resolutions"] = resolutions
            return ["dashboards/a.yaml"]

    class ResolveData:
        class _Resolution:
            def dict(self):
                return {"file_path": "dashboards/a.yaml", "resolution": "mine", "content": None}

        resolutions = [_Resolution()]

    monkeypatch.setattr(git_routes, "git_service", MergeResolveGitService())
    monkeypatch.setattr(git_routes, "_resolve_dashboard_id_from_ref", lambda *_args, **_kwargs: 12)

    response = asyncio.run(
        git_routes.resolve_merge_conflicts(
            "dashboard-12",
            ResolveData(),
            config_manager=MagicMock(),
        )
    )

    assert captured["dashboard_id"] == 12
    assert captured["resolutions"][0]["resolution"] == "mine"
    assert response["resolved_files"] == ["dashboards/a.yaml"]
# [/DEF:test_resolve_merge_conflicts_passes_resolution_items_to_service:Function]


# [DEF:test_abort_merge_calls_service_and_returns_result:Function]
# @PURPOSE: Ensure abort route delegates to service.
# @PRE: Service abort_merge returns aborted status.
# @POST: Route returns aborted status.
def test_abort_merge_calls_service_and_returns_result(monkeypatch):
    class AbortGitService:
        def abort_merge(self, dashboard_id: int):
            assert dashboard_id == 12
            return {"status": "aborted"}

    monkeypatch.setattr(git_routes, "git_service", AbortGitService())
    monkeypatch.setattr(git_routes, "_resolve_dashboard_id_from_ref", lambda *_args, **_kwargs: 12)

    response = asyncio.run(
        git_routes.abort_merge(
            "dashboard-12",
            config_manager=MagicMock(),
        )
    )

    assert response["status"] == "aborted"
# [/DEF:test_abort_merge_calls_service_and_returns_result:Function]


# [DEF:test_continue_merge_passes_message_and_returns_commit:Function]
# @PURPOSE: Ensure continue route passes commit message to service.
# @PRE: continue_data.message is provided.
# @POST: Route returns committed status and hash.
def test_continue_merge_passes_message_and_returns_commit(monkeypatch):
    class ContinueGitService:
        def continue_merge(self, dashboard_id: int, message: str):
            assert dashboard_id == 12
            assert message == "Resolve all conflicts"
            return {"status": "committed", "commit_hash": "abc123"}

    class ContinueData:
        message = "Resolve all conflicts"

    monkeypatch.setattr(git_routes, "git_service", ContinueGitService())
    monkeypatch.setattr(git_routes, "_resolve_dashboard_id_from_ref", lambda *_args, **_kwargs: 12)

    response = asyncio.run(
        git_routes.continue_merge(
            "dashboard-12",
            ContinueData(),
            config_manager=MagicMock(),
        )
    )

    assert response["status"] == "committed"
    assert response["commit_hash"] == "abc123"
# [/DEF:test_continue_merge_passes_message_and_returns_commit:Function]


# [/DEF:backend.src.api.routes.__tests__.test_git_status_route:Module]

293	backend/src/api/routes/__tests__/test_profile_api.py	Normal file
@@ -0,0 +1,293 @@
# [DEF:backend.src.api.routes.__tests__.test_profile_api:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, profile, api, preferences, lookup, contract
# @PURPOSE: Verifies profile API route contracts for preference read/update and Superset account lookup.
# @LAYER: API
# @RELATION: TESTS -> backend.src.api.routes.profile

# [SECTION: IMPORTS]
from datetime import datetime, timezone
from unittest.mock import MagicMock, patch

import pytest
from fastapi.testclient import TestClient

from src.app import app
from src.core.database import get_db
from src.dependencies import get_config_manager, get_current_user
from src.schemas.profile import (
    ProfilePermissionState,
    ProfilePreference,
    ProfilePreferenceResponse,
    ProfileSecuritySummary,
    SupersetAccountCandidate,
    SupersetAccountLookupResponse,
)
from src.services.profile_service import (
    EnvironmentNotFoundError,
    ProfileAuthorizationError,
    ProfileValidationError,
)
# [/SECTION]


client = TestClient(app)


# [DEF:mock_profile_route_dependencies:Function]
# @PURPOSE: Provides deterministic dependency overrides for profile route tests.
# @PRE: App instance is initialized.
# @POST: Dependencies are overridden for current test and restored afterward.
def mock_profile_route_dependencies():
    mock_user = MagicMock()
    mock_user.id = "u-1"
    mock_user.username = "test-user"

    mock_db = MagicMock()
    mock_config_manager = MagicMock()

    app.dependency_overrides[get_current_user] = lambda: mock_user
    app.dependency_overrides[get_db] = lambda: mock_db
    app.dependency_overrides[get_config_manager] = lambda: mock_config_manager

    return mock_user, mock_db, mock_config_manager
# [/DEF:mock_profile_route_dependencies:Function]


# [DEF:profile_route_deps_fixture:Function]
# @PURPOSE: Pytest fixture wrapper for profile route dependency overrides.
# @PRE: None.
# @POST: Yields overridden dependencies and clears overrides after test.
@pytest.fixture(autouse=True)
def profile_route_deps_fixture():
    yielded = mock_profile_route_dependencies()
    yield yielded
    app.dependency_overrides.clear()
# [/DEF:profile_route_deps_fixture:Function]


# [DEF:_build_preference_response:Function]
# @PURPOSE: Builds stable profile preference response payload for route tests.
# @PRE: user_id is provided.
# @POST: Returns ProfilePreferenceResponse object with deterministic timestamps.
def _build_preference_response(user_id: str = "u-1") -> ProfilePreferenceResponse:
    now = datetime.now(timezone.utc)
    return ProfilePreferenceResponse(
        status="success",
        message="Preference loaded",
        preference=ProfilePreference(
            user_id=user_id,
            superset_username="John_Doe",
            superset_username_normalized="john_doe",
            show_only_my_dashboards=True,
            git_username="ivan.ivanov",
            git_email="ivan@company.local",
            has_git_personal_access_token=True,
            git_personal_access_token_masked="iv***al",
            start_page="reports",
            auto_open_task_drawer=False,
            dashboards_table_density="compact",
            created_at=now,
            updated_at=now,
        ),
        security=ProfileSecuritySummary(
            read_only=True,
            auth_source="adfs",
            current_role="Data Engineer",
            role_source="adfs",
            roles=["Data Engineer"],
            permissions=[
                ProfilePermissionState(key="migration:run", allowed=True),
                ProfilePermissionState(key="admin:users", allowed=False),
            ],
        ),
    )
# [/DEF:_build_preference_response:Function]


# [DEF:test_get_profile_preferences_returns_self_payload:Function]
# @PURPOSE: Verifies GET /api/profile/preferences returns stable self-scoped payload.
# @PRE: Authenticated user context is available.
# @POST: Response status is 200 and payload contains current user preference.
def test_get_profile_preferences_returns_self_payload(profile_route_deps_fixture):
    mock_user, _, _ = profile_route_deps_fixture
    service = MagicMock()
    service.get_my_preference.return_value = _build_preference_response(user_id=mock_user.id)

    with patch("src.api.routes.profile._get_profile_service", return_value=service):
        response = client.get("/api/profile/preferences")

    assert response.status_code == 200
    payload = response.json()
    assert payload["status"] == "success"
    assert payload["preference"]["user_id"] == mock_user.id
    assert payload["preference"]["superset_username_normalized"] == "john_doe"
    assert payload["preference"]["git_username"] == "ivan.ivanov"
    assert payload["preference"]["git_email"] == "ivan@company.local"
    assert payload["preference"]["has_git_personal_access_token"] is True
    assert payload["preference"]["git_personal_access_token_masked"] == "iv***al"
    assert payload["preference"]["start_page"] == "reports"
    assert payload["preference"]["auto_open_task_drawer"] is False
    assert payload["preference"]["dashboards_table_density"] == "compact"
    assert payload["security"]["read_only"] is True
    assert payload["security"]["current_role"] == "Data Engineer"
    assert payload["security"]["permissions"][0]["key"] == "migration:run"
    service.get_my_preference.assert_called_once_with(mock_user)
# [/DEF:test_get_profile_preferences_returns_self_payload:Function]


# [DEF:test_patch_profile_preferences_success:Function]
# @PURPOSE: Verifies PATCH /api/profile/preferences persists valid payload through route mapping.
# @PRE: Valid request payload and authenticated user.
# @POST: Response status is 200 with saved preference payload.
def test_patch_profile_preferences_success(profile_route_deps_fixture):
    mock_user, _, _ = profile_route_deps_fixture
    service = MagicMock()
    service.update_my_preference.return_value = _build_preference_response(user_id=mock_user.id)

    with patch("src.api.routes.profile._get_profile_service", return_value=service):
        response = client.patch(
            "/api/profile/preferences",
            json={
                "superset_username": "John_Doe",
                "show_only_my_dashboards": True,
                "git_username": "ivan.ivanov",
                "git_email": "ivan@company.local",
                "git_personal_access_token": "ghp_1234567890",
                "start_page": "reports-logs",
                "auto_open_task_drawer": False,
                "dashboards_table_density": "free",
            },
        )

    assert response.status_code == 200
    payload = response.json()
    assert payload["status"] == "success"
    assert payload["preference"]["superset_username"] == "John_Doe"
    assert payload["preference"]["show_only_my_dashboards"] is True
    assert payload["preference"]["git_username"] == "ivan.ivanov"
    assert payload["preference"]["git_email"] == "ivan@company.local"
    assert payload["preference"]["start_page"] == "reports"
    assert payload["preference"]["auto_open_task_drawer"] is False
    assert payload["preference"]["dashboards_table_density"] == "compact"
    service.update_my_preference.assert_called_once()

    called_kwargs = service.update_my_preference.call_args.kwargs
    assert called_kwargs["current_user"] == mock_user
    assert called_kwargs["payload"].git_username == "ivan.ivanov"
    assert called_kwargs["payload"].git_email == "ivan@company.local"
    assert called_kwargs["payload"].git_personal_access_token == "ghp_1234567890"
    assert called_kwargs["payload"].start_page == "reports-logs"
    assert called_kwargs["payload"].auto_open_task_drawer is False
    assert called_kwargs["payload"].dashboards_table_density == "free"
# [/DEF:test_patch_profile_preferences_success:Function]


# [DEF:test_patch_profile_preferences_validation_error:Function]
# @PURPOSE: Verifies route maps domain validation failure to HTTP 422 with actionable details.
# @PRE: Service raises ProfileValidationError.
# @POST: Response status is 422 and includes validation messages.
def test_patch_profile_preferences_validation_error(profile_route_deps_fixture):
    service = MagicMock()
    service.update_my_preference.side_effect = ProfileValidationError(
        ["Superset username is required when default filter is enabled."]
    )

    with patch("src.api.routes.profile._get_profile_service", return_value=service):
        response = client.patch(
            "/api/profile/preferences",
            json={
                "superset_username": "",
                "show_only_my_dashboards": True,
            },
        )

    assert response.status_code == 422
    payload = response.json()
    assert "detail" in payload
    assert "Superset username is required when default filter is enabled." in payload["detail"]
# [/DEF:test_patch_profile_preferences_validation_error:Function]


# [DEF:test_patch_profile_preferences_cross_user_denied:Function]
# @PURPOSE: Verifies route maps domain authorization guard failure to HTTP 403.
# @PRE: Service raises ProfileAuthorizationError.
# @POST: Response status is 403 with denial message.
def test_patch_profile_preferences_cross_user_denied(profile_route_deps_fixture):
    service = MagicMock()
    service.update_my_preference.side_effect = ProfileAuthorizationError(
        "Cross-user preference mutation is forbidden"
    )

    with patch("src.api.routes.profile._get_profile_service", return_value=service):
        response = client.patch(
            "/api/profile/preferences",
            json={
                "superset_username": "john_doe",
                "show_only_my_dashboards": True,
            },
        )

    assert response.status_code == 403
    payload = response.json()
    assert payload["detail"] == "Cross-user preference mutation is forbidden"
# [/DEF:test_patch_profile_preferences_cross_user_denied:Function]


# [DEF:test_lookup_superset_accounts_success:Function]
# @PURPOSE: Verifies lookup route returns success payload with normalized candidates.
# @PRE: Valid environment_id and service success response.
# @POST: Response status is 200 and items list is returned.
def test_lookup_superset_accounts_success(profile_route_deps_fixture):
    service = MagicMock()
    service.lookup_superset_accounts.return_value = SupersetAccountLookupResponse(
        status="success",
        environment_id="dev",
        page_index=0,
        page_size=20,
        total=1,
        warning=None,
        items=[
            SupersetAccountCandidate(
                environment_id="dev",
                username="john_doe",
                display_name="John Doe",
                email="john@example.local",
                is_active=True,
            )
        ],
    )

    with patch("src.api.routes.profile._get_profile_service", return_value=service):
        response = client.get("/api/profile/superset-accounts?environment_id=dev")

    assert response.status_code == 200
    payload = response.json()
    assert payload["status"] == "success"
    assert payload["environment_id"] == "dev"
    assert payload["total"] == 1
    assert payload["items"][0]["username"] == "john_doe"
# [/DEF:test_lookup_superset_accounts_success:Function]


# [DEF:test_lookup_superset_accounts_env_not_found:Function]
# @PURPOSE: Verifies lookup route maps missing environment to HTTP 404.
# @PRE: Service raises EnvironmentNotFoundError.
# @POST: Response status is 404 with explicit message.
def test_lookup_superset_accounts_env_not_found(profile_route_deps_fixture):
    service = MagicMock()
    service.lookup_superset_accounts.side_effect = EnvironmentNotFoundError(
        "Environment 'missing-env' not found"
    )

    with patch("src.api.routes.profile._get_profile_service", return_value=service):
        response = client.get("/api/profile/superset-accounts?environment_id=missing-env")

    assert response.status_code == 404
    payload = response.json()
    assert payload["detail"] == "Environment 'missing-env' not found"
# [/DEF:test_lookup_superset_accounts_env_not_found:Function]

# [/DEF:backend.src.api.routes.__tests__.test_profile_api:Module]

73	backend/src/api/routes/__tests__/test_tasks_logs.py	Normal file
@@ -0,0 +1,73 @@
# [DEF:__tests__/test_tasks_logs:Module]
# @RELATION: VERIFIES -> ../tasks.py
# @PURPOSE: Contract testing for task logs API endpoints.
# [/DEF:__tests__/test_tasks_logs:Module]

import pytest
from fastapi import FastAPI
from fastapi.testclient import TestClient
from unittest.mock import MagicMock
from src.dependencies import get_task_manager, has_permission
from src.api.routes.tasks import router


# @TEST_FIXTURE: mock_app
@pytest.fixture
def client():
    app = FastAPI()
    app.include_router(router, prefix="/tasks")

    # Mock TaskManager
    mock_tm = MagicMock()
    app.dependency_overrides[get_task_manager] = lambda: mock_tm

    # Mock permissions (bypass for unit test)
    app.dependency_overrides[has_permission("tasks", "READ")] = lambda: True

    return TestClient(app), mock_tm


# @TEST_CONTRACT: get_task_logs_api -> Invariants
# @TEST_FIXTURE: valid_task_logs_request
def test_get_task_logs_success(client):
    tc, tm = client

    # Setup mock task
    mock_task = MagicMock()
    tm.get_task.return_value = mock_task
    tm.get_task_logs.return_value = [{"level": "INFO", "message": "msg1"}]

    response = tc.get("/tasks/task-1/logs?level=INFO")

    assert response.status_code == 200
    assert response.json() == [{"level": "INFO", "message": "msg1"}]
    tm.get_task.assert_called_with("task-1")
    # Verify filter construction inside route
    args = tm.get_task_logs.call_args
    assert args[0][0] == "task-1"
    assert args[0][1].level == "INFO"


# @TEST_EDGE: task_not_found
def test_get_task_logs_not_found(client):
    tc, tm = client
    tm.get_task.return_value = None

    response = tc.get("/tasks/missing/logs")
    assert response.status_code == 404
    assert response.json()["detail"] == "Task not found"


# @TEST_EDGE: invalid_limit
def test_get_task_logs_invalid_limit(client):
    tc, tm = client
    # limit=0 violates the ge=1 constraint in Query
    response = tc.get("/tasks/task-1/logs?limit=0")
    assert response.status_code == 422


# @TEST_INVARIANT: response_purity
def test_get_task_log_stats_success(client):
    tc, tm = client
    tm.get_task.return_value = MagicMock()
    tm.get_task_log_stats.return_value = {"INFO": 5, "ERROR": 1}

    response = tc.get("/tasks/task-1/logs/stats")
    assert response.status_code == 200
    # response_model=LogStats might wrap this, but let's check basic structure
    # assuming tm.get_task_log_stats returns something compatible with LogStats
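The filter object the success test inspects (`args[0][1].level == "INFO"`) can be sketched without FastAPI. This is an illustrative model only; the real route builds its filter from `Query` parameters, and the actual field names may differ:

```python
# Illustrative model of the log filter the route is assumed to construct
# from query parameters; not the actual backend class.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class LogFilter:
    level: Optional[str] = None
    limit: int = 100


def filter_logs(entries: List[dict], flt: LogFilter) -> List[dict]:
    # Keep entries matching the requested level (or all when level is None),
    # then cap the result at the requested limit.
    selected = [e for e in entries if flt.level is None or e["level"] == flt.level]
    return selected[: flt.limit]
```

With such a model, `level=INFO` selects only INFO entries, which is exactly the shape the mocked `get_task_logs` call is asserted against.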
@@ -22,8 +22,12 @@ from ...schemas.auth import (
    ADGroupMappingSchema, ADGroupMappingCreate
)
from ...models.auth import User, Role, ADGroupMapping
from ...dependencies import has_permission
from ...dependencies import has_permission, get_plugin_loader
from ...core.logger import logger, belief_scope
from ...services.rbac_permission_catalog import (
    discover_declared_permissions,
    sync_permission_catalog,
)
# [/SECTION]

# [DEF:router:Variable]
@@ -270,9 +274,18 @@ async def delete_role(
@router.get("/permissions", response_model=List[PermissionSchema])
async def list_permissions(
    db: Session = Depends(get_auth_db),
    plugin_loader = Depends(get_plugin_loader),
    _ = Depends(has_permission("admin:roles", "READ"))
):
    with belief_scope("api.admin.list_permissions"):
        declared_permissions = discover_declared_permissions(plugin_loader=plugin_loader)
        inserted_count = sync_permission_catalog(db=db, declared_permissions=declared_permissions)
        if inserted_count > 0:
            logger.info(
                "[api.admin.list_permissions][Action] Synchronized %s missing RBAC permissions into auth catalog",
                inserted_count,
            )

        repo = AuthRepository(db)
        return repo.list_permissions()
# [/DEF:list_permissions:Function]
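The discover-then-sync step added to `list_permissions` is idempotent by construction: only declared permissions missing from the catalog are inserted, so a second call reports zero insertions. A minimal sketch of that invariant, with sets standing in for the real DB-backed catalog (names here are illustrative, not the actual service signature):

```python
# Hedged sketch of the idempotent catalog sync: a set stands in for the
# auth DB catalog; declared permissions are (resource, action) pairs.

def sync_permission_catalog(catalog, declared):
    # Insert only the declared permissions that are missing; return the
    # number inserted, so repeated calls with the same input report 0.
    missing = declared - catalog
    catalog |= missing
    return len(missing)
```

This is why the route can safely log only when `inserted_count > 0`: after the first sync, subsequent reads leave the catalog untouched.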
445	backend/src/api/routes/clean_release.py	Normal file
@@ -0,0 +1,445 @@
# [DEF:backend.src.api.routes.clean_release:Module]
# @TIER: STANDARD
# @SEMANTICS: api, clean-release, candidate-preparation, compliance
# @PURPOSE: Expose clean release endpoints for candidate preparation and subsequent compliance flow.
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.dependencies.get_clean_release_repository
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.preparation_service
# @INVARIANT: API never reports prepared status if preparation errors are present.

from __future__ import annotations

from datetime import datetime, timezone
from typing import Any, Dict, List

from fastapi import APIRouter, Depends, HTTPException, status
from pydantic import BaseModel, Field

from ...core.logger import belief_scope, logger
from ...dependencies import get_clean_release_repository, get_config_manager
from ...services.clean_release.preparation_service import prepare_candidate
from ...services.clean_release.repository import CleanReleaseRepository
from ...services.clean_release.compliance_orchestrator import CleanComplianceOrchestrator
from ...services.clean_release.report_builder import ComplianceReportBuilder
from ...services.clean_release.compliance_execution_service import ComplianceExecutionService, ComplianceRunError
from ...services.clean_release.dto import CandidateDTO, ManifestDTO, CandidateOverviewDTO, ComplianceRunDTO
from ...services.clean_release.enums import (
    ComplianceDecision,
    ComplianceStageName,
    ViolationCategory,
    ViolationSeverity,
    RunStatus,
    CandidateStatus,
)
from ...models.clean_release import (
    ComplianceRun,
    ComplianceStageRun,
    ComplianceViolation,
    CandidateArtifact,
    ReleaseCandidate,
)

router = APIRouter(prefix="/api/clean-release", tags=["Clean Release"])


# [DEF:PrepareCandidateRequest:Class]
# @PURPOSE: Request schema for candidate preparation endpoint.
class PrepareCandidateRequest(BaseModel):
    candidate_id: str = Field(min_length=1)
    artifacts: List[Dict[str, Any]] = Field(default_factory=list)
    sources: List[str] = Field(default_factory=list)
    operator_id: str = Field(min_length=1)
# [/DEF:PrepareCandidateRequest:Class]


# [DEF:StartCheckRequest:Class]
# @PURPOSE: Request schema for clean compliance check run startup.
class StartCheckRequest(BaseModel):
    candidate_id: str = Field(min_length=1)
    profile: str = Field(default="enterprise-clean")
    execution_mode: str = Field(default="tui")
    triggered_by: str = Field(default="system")
# [/DEF:StartCheckRequest:Class]


# [DEF:RegisterCandidateRequest:Class]
# @PURPOSE: Request schema for candidate registration endpoint.
class RegisterCandidateRequest(BaseModel):
    id: str = Field(min_length=1)
    version: str = Field(min_length=1)
    source_snapshot_ref: str = Field(min_length=1)
    created_by: str = Field(min_length=1)
# [/DEF:RegisterCandidateRequest:Class]


# [DEF:ImportArtifactsRequest:Class]
# @PURPOSE: Request schema for candidate artifact import endpoint.
class ImportArtifactsRequest(BaseModel):
    artifacts: List[Dict[str, Any]] = Field(default_factory=list)
# [/DEF:ImportArtifactsRequest:Class]


# [DEF:BuildManifestRequest:Class]
# @PURPOSE: Request schema for manifest build endpoint.
class BuildManifestRequest(BaseModel):
    created_by: str = Field(default="system")
# [/DEF:BuildManifestRequest:Class]


# [DEF:CreateComplianceRunRequest:Class]
# @PURPOSE: Request schema for compliance run creation with optional manifest pinning.
class CreateComplianceRunRequest(BaseModel):
    requested_by: str = Field(min_length=1)
    manifest_id: str | None = None
# [/DEF:CreateComplianceRunRequest:Class]


# [DEF:register_candidate_v2_endpoint:Function]
# @PURPOSE: Register a clean-release candidate for headless lifecycle.
# @PRE: Candidate identifier is unique.
# @POST: Candidate is persisted in DRAFT status.
@router.post("/candidates", response_model=CandidateDTO, status_code=status.HTTP_201_CREATED)
async def register_candidate_v2_endpoint(
    payload: RegisterCandidateRequest,
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    existing = repository.get_candidate(payload.id)
    if existing is not None:
        raise HTTPException(status_code=409, detail={"message": "Candidate already exists", "code": "CANDIDATE_EXISTS"})

    candidate = ReleaseCandidate(
        id=payload.id,
        version=payload.version,
        source_snapshot_ref=payload.source_snapshot_ref,
        created_by=payload.created_by,
        created_at=datetime.now(timezone.utc),
        status=CandidateStatus.DRAFT.value,
    )
    repository.save_candidate(candidate)

    return CandidateDTO(
        id=candidate.id,
        version=candidate.version,
        source_snapshot_ref=candidate.source_snapshot_ref,
        created_at=candidate.created_at,
        created_by=candidate.created_by,
        status=CandidateStatus(candidate.status),
    )
# [/DEF:register_candidate_v2_endpoint:Function]


# [DEF:import_candidate_artifacts_v2_endpoint:Function]
# @PURPOSE: Import candidate artifacts in headless flow.
# @PRE: Candidate exists and artifacts array is non-empty.
# @POST: Artifacts are persisted and candidate advances to PREPARED if it was DRAFT.
@router.post("/candidates/{candidate_id}/artifacts")
async def import_candidate_artifacts_v2_endpoint(
    candidate_id: str,
    payload: ImportArtifactsRequest,
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    candidate = repository.get_candidate(candidate_id)
    if candidate is None:
        raise HTTPException(status_code=404, detail={"message": "Candidate not found", "code": "CANDIDATE_NOT_FOUND"})
    if not payload.artifacts:
        raise HTTPException(status_code=400, detail={"message": "Artifacts list is required", "code": "ARTIFACTS_EMPTY"})

    for artifact in payload.artifacts:
        required = ("id", "path", "sha256", "size")
        for field_name in required:
            if field_name not in artifact:
                raise HTTPException(
                    status_code=400,
                    detail={"message": f"Artifact missing field '{field_name}'", "code": "ARTIFACT_INVALID"},
                )

        artifact_model = CandidateArtifact(
            id=str(artifact["id"]),
            candidate_id=candidate_id,
            path=str(artifact["path"]),
            sha256=str(artifact["sha256"]),
            size=int(artifact["size"]),
            detected_category=artifact.get("detected_category"),
            declared_category=artifact.get("declared_category"),
            source_uri=artifact.get("source_uri"),
            source_host=artifact.get("source_host"),
            metadata_json=artifact.get("metadata_json", {}),
        )
        repository.save_artifact(artifact_model)

    if candidate.status == CandidateStatus.DRAFT.value:
        candidate.transition_to(CandidateStatus.PREPARED)
        repository.save_candidate(candidate)

    return {"status": "success"}
# [/DEF:import_candidate_artifacts_v2_endpoint:Function]


# [DEF:build_candidate_manifest_v2_endpoint:Function]
# @PURPOSE: Build immutable manifest snapshot for prepared candidate.
# @PRE: Candidate exists and has imported artifacts.
# @POST: Returns created ManifestDTO with incremented version.
@router.post("/candidates/{candidate_id}/manifests", response_model=ManifestDTO, status_code=status.HTTP_201_CREATED)
async def build_candidate_manifest_v2_endpoint(
    candidate_id: str,
    payload: BuildManifestRequest,
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    from ...services.clean_release.manifest_service import build_manifest_snapshot

    try:
        manifest = build_manifest_snapshot(
            repository=repository,
            candidate_id=candidate_id,
            created_by=payload.created_by,
        )
    except ValueError as exc:
        raise HTTPException(status_code=400, detail={"message": str(exc), "code": "MANIFEST_BUILD_ERROR"})

    return ManifestDTO(
        id=manifest.id,
        candidate_id=manifest.candidate_id,
        manifest_version=manifest.manifest_version,
        manifest_digest=manifest.manifest_digest,
        artifacts_digest=manifest.artifacts_digest,
        created_at=manifest.created_at,
        created_by=manifest.created_by,
        source_snapshot_ref=manifest.source_snapshot_ref,
        content_json=manifest.content_json,
    )
# [/DEF:build_candidate_manifest_v2_endpoint:Function]


# [DEF:get_candidate_overview_v2_endpoint:Function]
# @PURPOSE: Return expanded candidate overview DTO for headless lifecycle visibility.
# @PRE: Candidate exists.
# @POST: Returns CandidateOverviewDTO built from the same repository state used by headless US1 endpoints.
@router.get("/candidates/{candidate_id}/overview", response_model=CandidateOverviewDTO)
async def get_candidate_overview_v2_endpoint(
    candidate_id: str,
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    candidate = repository.get_candidate(candidate_id)
    if candidate is None:
        raise HTTPException(status_code=404, detail={"message": "Candidate not found", "code": "CANDIDATE_NOT_FOUND"})

    manifests = repository.get_manifests_by_candidate(candidate_id)
    latest_manifest = sorted(manifests, key=lambda m: m.manifest_version, reverse=True)[0] if manifests else None

    runs = [run for run in repository.check_runs.values() if run.candidate_id == candidate_id]
    latest_run = sorted(runs, key=lambda run: run.requested_at or datetime.min.replace(tzinfo=timezone.utc), reverse=True)[0] if runs else None

    latest_report = None
    if latest_run is not None:
        latest_report = next((r for r in repository.reports.values() if r.run_id == latest_run.id), None)

    latest_policy_snapshot = repository.get_policy(latest_run.policy_snapshot_id) if latest_run else None
    latest_registry_snapshot = repository.get_registry(latest_run.registry_snapshot_id) if latest_run else None

    approval_decisions = getattr(repository, "approval_decisions", [])
    latest_approval = (
        sorted(
            [item for item in approval_decisions if item.candidate_id == candidate_id],
            key=lambda item: item.decided_at or datetime.min.replace(tzinfo=timezone.utc),
            reverse=True,
        )[0]
        if approval_decisions
        and any(item.candidate_id == candidate_id for item in approval_decisions)
        else None
    )

    publication_records = getattr(repository, "publication_records", [])
    latest_publication = (
        sorted(
            [item for item in publication_records if item.candidate_id == candidate_id],
            key=lambda item: item.published_at or datetime.min.replace(tzinfo=timezone.utc),
            reverse=True,
        )[0]
        if publication_records
        and any(item.candidate_id == candidate_id for item in publication_records)
        else None
    )

    return CandidateOverviewDTO(
        candidate_id=candidate.id,
        version=candidate.version,
        source_snapshot_ref=candidate.source_snapshot_ref,
        status=CandidateStatus(candidate.status),
        latest_manifest_id=latest_manifest.id if latest_manifest else None,
        latest_manifest_digest=latest_manifest.manifest_digest if latest_manifest else None,
        latest_run_id=latest_run.id if latest_run else None,
        latest_run_status=RunStatus(latest_run.status) if latest_run else None,
        latest_report_id=latest_report.id if latest_report else None,
        latest_report_final_status=ComplianceDecision(latest_report.final_status) if latest_report else None,
        latest_policy_snapshot_id=latest_policy_snapshot.id if latest_policy_snapshot else None,
        latest_policy_version=latest_policy_snapshot.policy_version if latest_policy_snapshot else None,
        latest_registry_snapshot_id=latest_registry_snapshot.id if latest_registry_snapshot else None,
        latest_registry_version=latest_registry_snapshot.registry_version if latest_registry_snapshot else None,
        latest_approval_decision=latest_approval.decision if latest_approval else None,
        latest_publication_id=latest_publication.id if latest_publication else None,
        latest_publication_status=latest_publication.status if latest_publication else None,
    )
# [/DEF:get_candidate_overview_v2_endpoint:Function]


# [DEF:prepare_candidate_endpoint:Function]
# @PURPOSE: Prepare candidate with policy evaluation and deterministic manifest generation.
# @PRE: Candidate and active policy exist in repository.
# @POST: Returns preparation result including manifest reference and violations.
@router.post("/candidates/prepare")
async def prepare_candidate_endpoint(
    payload: PrepareCandidateRequest,
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    try:
        result = prepare_candidate(
            repository=repository,
            candidate_id=payload.candidate_id,
            artifacts=payload.artifacts,
            sources=payload.sources,
            operator_id=payload.operator_id,
        )
        return result
    except ValueError as exc:
        raise HTTPException(
            status_code=status.HTTP_400_BAD_REQUEST,
            detail={"message": str(exc), "code": "CLEAN_PREPARATION_ERROR"},
        )
# [/DEF:prepare_candidate_endpoint:Function]


# [DEF:start_check:Function]
# @PURPOSE: Start and finalize a clean compliance check run and persist report artifacts.
# @PRE: Active policy and candidate exist.
# @POST: Returns accepted payload with check_run_id and started_at.
@router.post("/checks", status_code=status.HTTP_202_ACCEPTED)
async def start_check(
    payload: StartCheckRequest,
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    with belief_scope("clean_release.start_check"):
        logger.reason("Starting clean-release compliance check run")
        policy = repository.get_active_policy()
        if policy is None:
            raise HTTPException(status_code=409, detail={"message": "Active policy not found", "code": "POLICY_NOT_FOUND"})

        candidate = repository.get_candidate(payload.candidate_id)
        if candidate is None:
            raise HTTPException(status_code=409, detail={"message": "Candidate not found", "code": "CANDIDATE_NOT_FOUND"})

        manifests = repository.get_manifests_by_candidate(payload.candidate_id)
        if not manifests:
            raise HTTPException(status_code=409, detail={"message": "No manifest found for candidate", "code": "MANIFEST_NOT_FOUND"})
        latest_manifest = sorted(manifests, key=lambda m: m.manifest_version, reverse=True)[0]

        orchestrator = CleanComplianceOrchestrator(repository)
        run = orchestrator.start_check_run(
            candidate_id=payload.candidate_id,
            policy_id=policy.id,
            requested_by=payload.triggered_by,
            manifest_id=latest_manifest.id,
        )

        forced = [
            ComplianceStageRun(
                id=f"stage-{run.id}-1",
                run_id=run.id,
                stage_name=ComplianceStageName.DATA_PURITY.value,
                status=RunStatus.SUCCEEDED.value,
                decision=ComplianceDecision.PASSED.value,
                details_json={"message": "ok"}
            ),
            ComplianceStageRun(
                id=f"stage-{run.id}-2",
                run_id=run.id,
                stage_name=ComplianceStageName.INTERNAL_SOURCES_ONLY.value,
                status=RunStatus.SUCCEEDED.value,
                decision=ComplianceDecision.PASSED.value,
                details_json={"message": "ok"}
            ),
            ComplianceStageRun(
                id=f"stage-{run.id}-3",
                run_id=run.id,
                stage_name=ComplianceStageName.NO_EXTERNAL_ENDPOINTS.value,
                status=RunStatus.SUCCEEDED.value,
                decision=ComplianceDecision.PASSED.value,
                details_json={"message": "ok"}
            ),
            ComplianceStageRun(
                id=f"stage-{run.id}-4",
                run_id=run.id,
                stage_name=ComplianceStageName.MANIFEST_CONSISTENCY.value,
                status=RunStatus.SUCCEEDED.value,
                decision=ComplianceDecision.PASSED.value,
                details_json={"message": "ok"}
            ),
        ]
        run = orchestrator.execute_stages(run, forced_results=forced)
        run = orchestrator.finalize_run(run)

        if run.final_status == ComplianceDecision.BLOCKED.value:
            logger.explore("Run ended as BLOCKED, persisting synthetic external-source violation")
            violation = ComplianceViolation(
                id=f"viol-{run.id}",
                run_id=run.id,
                stage_name=ComplianceStageName.NO_EXTERNAL_ENDPOINTS.value,
                code="EXTERNAL_SOURCE_DETECTED",
                severity=ViolationSeverity.CRITICAL.value,
                message="Replace with approved internal server",
                evidence_json={"location": "external.example.com"}
            )
            repository.save_violation(violation)

        builder = ComplianceReportBuilder(repository)
        report = builder.build_report_payload(run, repository.get_violations_by_run(run.id))
        builder.persist_report(report)
        logger.reflect(f"Compliance report persisted for run_id={run.id}")

        return {
            "check_run_id": run.id,
            "candidate_id": run.candidate_id,
            "status": "running",
            "started_at": run.started_at.isoformat() if run.started_at else None,
        }
# [/DEF:start_check:Function]


# [DEF:get_check_status:Function]
# @PURPOSE: Return terminal/intermediate status payload for a check run.
# @PRE: check_run_id references an existing run.
# @POST: Deterministic payload shape includes checks and violations arrays.
@router.get("/checks/{check_run_id}")
async def get_check_status(check_run_id: str, repository: CleanReleaseRepository = Depends(get_clean_release_repository)):
    with belief_scope("clean_release.get_check_status"):
        run = repository.get_check_run(check_run_id)
        if run is None:
            raise HTTPException(status_code=404, detail={"message": "Check run not found", "code": "CHECK_NOT_FOUND"})

        logger.reflect(f"Returning check status for check_run_id={check_run_id}")
        return {
            "check_run_id": run.id,
            "candidate_id": run.candidate_id,
            "final_status": run.final_status,
            "started_at": run.started_at.isoformat() if run.started_at else None,
            "finished_at": run.finished_at.isoformat() if run.finished_at else None,
            "checks": [],  # TODO: Map stages if needed
            "violations": [],  # TODO: Map violations if needed
        }
# [/DEF:get_check_status:Function]


# [DEF:get_report:Function]
# @PURPOSE: Return persisted compliance report by report_id.
# @PRE: report_id references an existing report.
# @POST: Returns serialized report object.
@router.get("/reports/{report_id}")
async def get_report(report_id: str, repository: CleanReleaseRepository = Depends(get_clean_release_repository)):
    with belief_scope("clean_release.get_report"):
        report = repository.get_report(report_id)
        if report is None:
            raise HTTPException(status_code=404, detail={"message": "Report not found", "code": "REPORT_NOT_FOUND"})

        logger.reflect(f"Returning compliance report report_id={report_id}")
        return report.model_dump()
# [/DEF:get_report:Function]
# [/DEF:backend.src.api.routes.clean_release:Module]
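The manifest endpoint's "deterministic manifest generation" hinges on digesting a canonical serialization of the artifact list, so the same artifacts always yield the same digest regardless of input order. The actual `manifest_service` implementation is not shown in this diff; the following is a hedged sketch of the general technique, with an assumed artifact shape of `{"path", "sha256"}` dicts:

```python
# Illustrative digest over a canonical artifact serialization; the real
# build_manifest_snapshot may use a different canonical form.
import hashlib
import json


def artifacts_digest(artifacts):
    # Canonical form: artifacts sorted by path, keys sorted, no whitespace,
    # so the digest is independent of the caller's ordering.
    canonical = json.dumps(
        sorted(artifacts, key=lambda a: a["path"]),
        sort_keys=True,
        separators=(",", ":"),
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Reordering the input list leaves the digest unchanged, while changing any artifact's content changes it, which is the property a manifest pin relies on.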
216	backend/src/api/routes/clean_release_v2.py	Normal file
@@ -0,0 +1,216 @@
# [DEF:backend.src.api.routes.clean_release_v2:Module]
|
||||
# @TIER: STANDARD
|
||||
# @SEMANTICS: api, clean-release, v2, headless
|
||||
# @PURPOSE: Redesigned clean release API for headless candidate lifecycle.
|
||||
# @LAYER: API
|
||||
|
||||
from fastapi import APIRouter, Depends, HTTPException, status
|
||||
from typing import List, Dict, Any
|
||||
from datetime import datetime, timezone
|
||||
from ...services.clean_release.approval_service import approve_candidate, reject_candidate
|
||||
from ...services.clean_release.publication_service import publish_candidate, revoke_publication
|
||||
from ...services.clean_release.repository import CleanReleaseRepository
|
||||
from ...dependencies import get_clean_release_repository
|
||||
from ...services.clean_release.enums import CandidateStatus
|
||||
from ...models.clean_release import ReleaseCandidate, CandidateArtifact, DistributionManifest
|
||||
from ...services.clean_release.dto import CandidateDTO, ManifestDTO
|
||||
|
||||
router = APIRouter(prefix="/api/v2/clean-release", tags=["Clean Release V2"])
|
||||
|
||||
|
||||
class ApprovalRequest(dict):
|
||||
pass
|
||||
|
||||
|
||||
class PublishRequest(dict):
|
||||
pass
|
||||
|
||||
|
||||
class RevokeRequest(dict):
|
||||
pass
|
||||
|
||||
@router.post("/candidates", response_model=CandidateDTO, status_code=status.HTTP_201_CREATED)
|
||||
async def register_candidate(
|
||||
payload: Dict[str, Any],
|
||||
repository: CleanReleaseRepository = Depends(get_clean_release_repository)
|
||||
):
|
||||
candidate = ReleaseCandidate(
|
||||
id=payload["id"],
|
||||
version=payload["version"],
|
||||
source_snapshot_ref=payload["source_snapshot_ref"],
|
||||
created_by=payload["created_by"],
|
||||
created_at=datetime.now(timezone.utc),
|
||||
status=CandidateStatus.DRAFT.value
|
||||
)
|
||||
repository.save_candidate(candidate)
|
||||
return CandidateDTO(
|
||||
id=candidate.id,
|
||||
version=candidate.version,
|
||||
source_snapshot_ref=candidate.source_snapshot_ref,
|
||||
created_at=candidate.created_at,
|
||||
created_by=candidate.created_by,
|
||||
status=CandidateStatus(candidate.status)
|
||||
)
|
||||
|
||||
@router.post("/candidates/{candidate_id}/artifacts")
|
||||
async def import_artifacts(
|
||||
candidate_id: str,
|
||||
payload: Dict[str, Any],
|
||||
repository: CleanReleaseRepository = Depends(get_clean_release_repository)
|
||||
):
|
||||
candidate = repository.get_candidate(candidate_id)
|
||||
if not candidate:
|
||||
raise HTTPException(status_code=404, detail="Candidate not found")
|
||||
|
||||
for art_data in payload.get("artifacts", []):
|
||||
artifact = CandidateArtifact(
|
||||
id=art_data["id"],
|
||||
candidate_id=candidate_id,
|
||||
path=art_data["path"],
|
||||
sha256=art_data["sha256"],
|
||||
size=art_data["size"]
|
||||
)
|
||||
# In a real repo we'd have save_artifact
|
||||
# repository.save_artifact(artifact)
|
||||
pass
|
||||
|
||||
return {"status": "success"}
|
||||
|
||||
@router.post("/candidates/{candidate_id}/manifests", response_model=ManifestDTO, status_code=status.HTTP_201_CREATED)
|
||||
async def build_manifest(
|
||||
candidate_id: str,
|
||||
repository: CleanReleaseRepository = Depends(get_clean_release_repository)
|
||||
):
|
||||
candidate = repository.get_candidate(candidate_id)
|
||||
if not candidate:
|
||||
raise HTTPException(status_code=404, detail="Candidate not found")
|
||||
|
||||
manifest = DistributionManifest(
|
||||
id=f"manifest-{candidate_id}",
|
||||
candidate_id=candidate_id,
|
||||
manifest_version=1,
|
||||
manifest_digest="hash-123",
|
||||
artifacts_digest="art-hash-123",
|
||||
created_by="system",
|
||||
created_at=datetime.now(timezone.utc),
|
||||
source_snapshot_ref=candidate.source_snapshot_ref,
|
||||
content_json={"items": [], "summary": {}}
|
||||
)
|
||||
repository.save_manifest(manifest)
|
||||
|
||||
return ManifestDTO(
|
||||
id=manifest.id,
|
||||
candidate_id=manifest.candidate_id,
|
||||
manifest_version=manifest.manifest_version,
|
||||
manifest_digest=manifest.manifest_digest,
|
||||
artifacts_digest=manifest.artifacts_digest,
|
||||
created_at=manifest.created_at,
|
||||
created_by=manifest.created_by,
|
||||
source_snapshot_ref=manifest.source_snapshot_ref,
|
||||
content_json=manifest.content_json
|
||||
)
|
||||
|
||||
@router.post("/candidates/{candidate_id}/approve")
|
||||
async def approve_candidate_endpoint(
|
||||
candidate_id: str,
|
||||
payload: Dict[str, Any],
|
||||
repository: CleanReleaseRepository = Depends(get_clean_release_repository),
|
||||
):
|
||||
try:
|
||||
decision = approve_candidate(
|
||||
repository=repository,
|
||||
candidate_id=candidate_id,
|
||||
report_id=str(payload["report_id"]),
|
||||
decided_by=str(payload["decided_by"]),
|
||||
comment=payload.get("comment"),
|
||||
)
|
||||
except Exception as exc: # noqa: BLE001
|
||||
raise HTTPException(status_code=409, detail={"message": str(exc), "code": "APPROVAL_GATE_ERROR"})
|
||||
|
||||
return {"status": "ok", "decision": decision.decision, "decision_id": decision.id}
|
||||
|
||||
|
@router.post("/candidates/{candidate_id}/reject")
async def reject_candidate_endpoint(
    candidate_id: str,
    payload: Dict[str, Any],
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    try:
        decision = reject_candidate(
            repository=repository,
            candidate_id=candidate_id,
            report_id=str(payload["report_id"]),
            decided_by=str(payload["decided_by"]),
            comment=payload.get("comment"),
        )
    except Exception as exc:  # noqa: BLE001
        raise HTTPException(status_code=409, detail={"message": str(exc), "code": "APPROVAL_GATE_ERROR"})

    return {"status": "ok", "decision": decision.decision, "decision_id": decision.id}


@router.post("/candidates/{candidate_id}/publish")
async def publish_candidate_endpoint(
    candidate_id: str,
    payload: Dict[str, Any],
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    try:
        publication = publish_candidate(
            repository=repository,
            candidate_id=candidate_id,
            report_id=str(payload["report_id"]),
            published_by=str(payload["published_by"]),
            target_channel=str(payload["target_channel"]),
            publication_ref=payload.get("publication_ref"),
        )
    except Exception as exc:  # noqa: BLE001
        raise HTTPException(status_code=409, detail={"message": str(exc), "code": "PUBLICATION_GATE_ERROR"})

    return {
        "status": "ok",
        "publication": {
            "id": publication.id,
            "candidate_id": publication.candidate_id,
            "report_id": publication.report_id,
            "published_by": publication.published_by,
            "published_at": publication.published_at.isoformat() if publication.published_at else None,
            "target_channel": publication.target_channel,
            "publication_ref": publication.publication_ref,
            "status": publication.status,
        },
    }


@router.post("/publications/{publication_id}/revoke")
async def revoke_publication_endpoint(
    publication_id: str,
    payload: Dict[str, Any],
    repository: CleanReleaseRepository = Depends(get_clean_release_repository),
):
    try:
        publication = revoke_publication(
            repository=repository,
            publication_id=publication_id,
            revoked_by=str(payload["revoked_by"]),
            comment=payload.get("comment"),
        )
    except Exception as exc:  # noqa: BLE001
        raise HTTPException(status_code=409, detail={"message": str(exc), "code": "PUBLICATION_GATE_ERROR"})

    return {
        "status": "ok",
        "publication": {
            "id": publication.id,
            "candidate_id": publication.candidate_id,
            "report_id": publication.report_id,
            "published_by": publication.published_by,
            "published_at": publication.published_at.isoformat() if publication.published_at else None,
            "target_channel": publication.target_channel,
            "publication_ref": publication.publication_ref,
            "status": publication.status,
        },
    }

# [/DEF:backend.src.api.routes.clean_release_v2:Module]
@@ -34,14 +34,27 @@
# [SECTION: IMPORTS]
from fastapi import APIRouter, Depends, HTTPException, Query, Response
from fastapi.responses import JSONResponse
from typing import List, Optional, Dict, Any
from typing import List, Optional, Dict, Any, Literal
import re
from urllib.parse import urlparse
from pydantic import BaseModel, Field
from ...dependencies import get_config_manager, get_task_manager, get_resource_service, get_mapping_service, has_permission
from sqlalchemy.orm import Session
from ...dependencies import (
    get_config_manager,
    get_task_manager,
    get_resource_service,
    get_mapping_service,
    get_current_user,
    has_permission,
)
from ...core.database import get_db
from ...core.async_superset_client import AsyncSupersetClient
from ...core.logger import logger, belief_scope
from ...core.superset_client import SupersetClient
from ...core.superset_profile_lookup import SupersetAccountLookupAdapter
from ...core.utils.network import DashboardNotFoundError
from ...models.auth import User
from ...services.profile_service import ProfileService
from ...services.resource_service import ResourceService
# [/SECTION]
@@ -79,6 +92,15 @@ class DashboardItem(BaseModel):
    last_task: Optional[LastTask] = None
# [/DEF:DashboardItem:DataClass]

# [DEF:EffectiveProfileFilter:DataClass]
class EffectiveProfileFilter(BaseModel):
    applied: bool
    source_page: Literal["dashboards_main", "other"] = "dashboards_main"
    override_show_all: bool = False
    username: Optional[str] = None
    match_logic: Optional[Literal["owners_or_modified_by"]] = None
# [/DEF:EffectiveProfileFilter:DataClass]

# [DEF:DashboardsResponse:DataClass]
class DashboardsResponse(BaseModel):
    dashboards: List[DashboardItem]
@@ -86,6 +108,7 @@ class DashboardsResponse(BaseModel):
    page: int
    page_size: int
    total_pages: int
    effective_profile_filter: Optional[EffectiveProfileFilter] = None
# [/DEF:DashboardsResponse:DataClass]

# [DEF:DashboardChartItem:DataClass]
@@ -206,6 +229,253 @@ def _resolve_dashboard_id_from_ref(
    raise HTTPException(status_code=404, detail="Dashboard not found")
# [/DEF:_resolve_dashboard_id_from_ref:Function]


# [DEF:_find_dashboard_id_by_slug_async:Function]
# @PURPOSE: Resolve dashboard numeric ID by slug using async Superset list endpoint.
# @PRE: dashboard_slug is non-empty.
# @POST: Returns dashboard ID when found, otherwise None.
async def _find_dashboard_id_by_slug_async(
    client: AsyncSupersetClient,
    dashboard_slug: str,
) -> Optional[int]:
    query_variants = [
        {"filters": [{"col": "slug", "opr": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
        {"filters": [{"col": "slug", "op": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
    ]

    for query in query_variants:
        try:
            _count, dashboards = await client.get_dashboards_page_async(query=query)
            if dashboards:
                resolved_id = dashboards[0].get("id")
                if resolved_id is not None:
                    return int(resolved_id)
        except Exception:
            continue

    return None
# [/DEF:_find_dashboard_id_by_slug_async:Function]


# [DEF:_resolve_dashboard_id_from_ref_async:Function]
# @PURPOSE: Resolve dashboard ID from slug-first reference using async Superset client.
# @PRE: dashboard_ref is provided in route path.
# @POST: Returns valid dashboard ID or raises HTTPException(404).
async def _resolve_dashboard_id_from_ref_async(
    dashboard_ref: str,
    client: AsyncSupersetClient,
) -> int:
    normalized_ref = str(dashboard_ref or "").strip()
    if not normalized_ref:
        raise HTTPException(status_code=404, detail="Dashboard not found")

    slug_match_id = await _find_dashboard_id_by_slug_async(client, normalized_ref)
    if slug_match_id is not None:
        return slug_match_id

    if normalized_ref.isdigit():
        return int(normalized_ref)

    raise HTTPException(status_code=404, detail="Dashboard not found")
# [/DEF:_resolve_dashboard_id_from_ref_async:Function]


# [DEF:_normalize_filter_values:Function]
# @PURPOSE: Normalize query filter values to lower-cased non-empty tokens.
# @PRE: values may be None or list of strings.
# @POST: Returns trimmed normalized list preserving input order.
def _normalize_filter_values(values: Optional[List[str]]) -> List[str]:
    if not values:
        return []
    normalized: List[str] = []
    for value in values:
        token = str(value or "").strip().lower()
        if token:
            normalized.append(token)
    return normalized
# [/DEF:_normalize_filter_values:Function]


# [DEF:_dashboard_git_filter_value:Function]
# @PURPOSE: Build comparable git status token for dashboards filtering.
# @PRE: dashboard payload may contain git_status or None.
# @POST: Returns one of ok|diff|no_repo|error|pending.
def _dashboard_git_filter_value(dashboard: Dict[str, Any]) -> str:
    git_status = dashboard.get("git_status") or {}
    sync_status = str(git_status.get("sync_status") or "").strip().upper()
    has_repo = git_status.get("has_repo")
    if has_repo is False or sync_status == "NO_REPO":
        return "no_repo"
    if sync_status == "DIFF":
        return "diff"
    if sync_status == "OK":
        return "ok"
    if sync_status == "ERROR":
        return "error"
    return "pending"
# [/DEF:_dashboard_git_filter_value:Function]

# [DEF:_normalize_actor_alias_token:Function]
# @PURPOSE: Normalize actor alias token to comparable trim+lower text.
# @PRE: value can be scalar/None.
# @POST: Returns normalized token or None.
def _normalize_actor_alias_token(value: Any) -> Optional[str]:
    token = str(value or "").strip().lower()
    return token or None
# [/DEF:_normalize_actor_alias_token:Function]
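The git-status precedence contract stated in `@POST` (one of `ok|diff|no_repo|error|pending`, with `has_repo is False` winning over any sync status) can be checked with a standalone copy of the same logic; `git_filter_value` below is a local mirror for illustration, not the module's function:

```python
from typing import Any, Dict


def git_filter_value(dashboard: Dict[str, Any]) -> str:
    """Standalone mirror of the precedence in _dashboard_git_filter_value."""
    git_status = dashboard.get("git_status") or {}
    sync_status = str(git_status.get("sync_status") or "").strip().upper()
    # Missing repo dominates: even sync_status == "OK" yields "no_repo".
    if git_status.get("has_repo") is False or sync_status == "NO_REPO":
        return "no_repo"
    return {"DIFF": "diff", "OK": "ok", "ERROR": "error"}.get(sync_status, "pending")
```

Note that an empty or unknown `sync_status` intentionally maps to `"pending"` rather than an error, so dashboards whose git check has not run yet stay filterable.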


# [DEF:_normalize_owner_display_token:Function]
# @PURPOSE: Project owner payload value into stable display string for API response contracts.
# @PRE: owner can be scalar, dict or None.
# @POST: Returns trimmed non-empty owner display token or None.
def _normalize_owner_display_token(owner: Any) -> Optional[str]:
    if owner is None:
        return None

    if isinstance(owner, dict):
        username = str(owner.get("username") or owner.get("user_name") or owner.get("name") or "").strip()
        full_name = str(owner.get("full_name") or "").strip()
        first_name = str(owner.get("first_name") or "").strip()
        last_name = str(owner.get("last_name") or "").strip()
        combined = " ".join(part for part in [first_name, last_name] if part).strip()
        email = str(owner.get("email") or "").strip()

        for candidate in [username, full_name, combined, email]:
            if candidate:
                return candidate
        return None

    normalized = str(owner).strip()
    return normalized or None
# [/DEF:_normalize_owner_display_token:Function]


# [DEF:_normalize_dashboard_owner_values:Function]
# @PURPOSE: Normalize dashboard owners payload to optional list of display strings.
# @PRE: owners payload can be None, scalar, or list with mixed values.
# @POST: Returns deduplicated owner labels preserving order, or None when absent.
def _normalize_dashboard_owner_values(owners: Any) -> Optional[List[str]]:
    if owners is None:
        return None

    raw_items: List[Any]
    if isinstance(owners, list):
        raw_items = owners
    else:
        raw_items = [owners]

    normalized: List[str] = []
    for owner in raw_items:
        token = _normalize_owner_display_token(owner)
        if token and token not in normalized:
            normalized.append(token)

    return normalized
# [/DEF:_normalize_dashboard_owner_values:Function]


# [DEF:_project_dashboard_response_items:Function]
# @PURPOSE: Project dashboard payloads to response-contract-safe shape.
# @PRE: dashboards is a list of dict-like dashboard payloads.
# @POST: Returned items satisfy DashboardItem owners=list[str]|None contract.
def _project_dashboard_response_items(dashboards: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    projected: List[Dict[str, Any]] = []
    for dashboard in dashboards:
        projected_dashboard = dict(dashboard)
        projected_dashboard["owners"] = _normalize_dashboard_owner_values(
            projected_dashboard.get("owners")
        )
        projected.append(projected_dashboard)
    return projected
# [/DEF:_project_dashboard_response_items:Function]
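The owner-projection pair above reduces a mixed Superset `owners` payload (dicts, scalars, duplicates) to an ordered, deduplicated list of display strings. A simplified sketch of the same behavior — `owner_token` keeps only the username/combined-name/email fallbacks for brevity, so it is an approximation of `_normalize_owner_display_token`, not a copy:

```python
from typing import Any, List, Optional


def owner_token(owner: Any) -> Optional[str]:
    """Simplified mirror: prefer username, then first+last name, then email."""
    if isinstance(owner, dict):
        username = str(owner.get("username") or "").strip()
        combined = " ".join(
            part
            for part in [
                str(owner.get("first_name") or "").strip(),
                str(owner.get("last_name") or "").strip(),
            ]
            if part
        )
        email = str(owner.get("email") or "").strip()
        for candidate in (username, combined, email):
            if candidate:
                return candidate
        return None
    token = str(owner or "").strip()
    return token or None


def owner_values(owners: Any) -> Optional[List[str]]:
    """Dedupe while preserving first-seen order; None stays None (field absent)."""
    if owners is None:
        return None
    items = owners if isinstance(owners, list) else [owners]
    out: List[str] = []
    for owner in items:
        token = owner_token(owner)
        if token and token not in out:
            out.append(token)
    return out
```

The `None`-passthrough matters for the response contract: `owners=None` means "Superset did not report owners", which is distinct from an empty list.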


# [DEF:_resolve_profile_actor_aliases:Function]
# @PURPOSE: Resolve stable actor aliases for profile filtering without per-dashboard detail fan-out.
# @PRE: bound username is available and env is valid.
# @POST: Returns at least normalized username; may include Superset display-name alias.
# @SIDE_EFFECT: Performs at most one Superset users-lookup request.
def _resolve_profile_actor_aliases(env: Any, bound_username: str) -> List[str]:
    normalized_bound = _normalize_actor_alias_token(bound_username)
    if not normalized_bound:
        return []

    aliases: List[str] = [normalized_bound]
    try:
        client = SupersetClient(env)
        adapter = SupersetAccountLookupAdapter(
            network_client=client.network,
            environment_id=str(getattr(env, "id", "")),
        )
        lookup_payload = adapter.get_users_page(
            search=normalized_bound,
            page_index=0,
            page_size=20,
            sort_column="username",
            sort_order="asc",
        )
        lookup_items = (
            lookup_payload.get("items", [])
            if isinstance(lookup_payload, dict)
            else []
        )

        matched_item: Optional[Dict[str, Any]] = None
        for item in lookup_items:
            if not isinstance(item, dict):
                continue
            if _normalize_actor_alias_token(item.get("username")) == normalized_bound:
                matched_item = item
                break

        if matched_item is None:
            for item in lookup_items:
                if isinstance(item, dict):
                    matched_item = item
                    break

        display_alias = _normalize_actor_alias_token(
            (matched_item or {}).get("display_name")
        )
        if display_alias and display_alias not in aliases:
            aliases.append(display_alias)

        logger.reflect(
            "[REFLECT] Resolved profile actor aliases "
            f"(env={getattr(env, 'id', None)}, bound_username={normalized_bound!r}, "
            f"lookup_items={len(lookup_items)}, aliases={aliases!r})"
        )
    except Exception as alias_error:
        logger.explore(
            "[EXPLORE] Failed to resolve profile actor aliases via Superset users lookup "
            f"(env={getattr(env, 'id', None)}, bound_username={normalized_bound!r}): {alias_error}"
        )
    return aliases
# [/DEF:_resolve_profile_actor_aliases:Function]


# [DEF:_matches_dashboard_actor_aliases:Function]
# @PURPOSE: Apply profile actor matching against multiple aliases (username + optional display name).
# @PRE: actor_aliases contains normalized non-empty tokens.
# @POST: Returns True when any alias matches owners OR modified_by.
def _matches_dashboard_actor_aliases(
    profile_service: ProfileService,
    actor_aliases: List[str],
    owners: Optional[Any],
    modified_by: Optional[str],
) -> bool:
    for actor_alias in actor_aliases:
        if profile_service.matches_dashboard_actor(
            bound_username=actor_alias,
            owners=owners,
            modified_by=modified_by,
        ):
            return True
    return False
# [/DEF:_matches_dashboard_actor_aliases:Function]


# [DEF:get_dashboards:Function]
# @PURPOSE: Fetch list of dashboards from a specific environment with Git status and last task status
# @PRE: env_id must be a valid environment ID
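`_matches_dashboard_actor_aliases` is an OR-fold over aliases: each alias is handed to `ProfileService.matches_dashboard_actor`, whose internals are not shown in this diff. The sketch below therefore substitutes an assumed matcher (case-insensitive membership in `owners`, or equality with `modified_by`) purely to illustrate the fold; the real matching semantics live in `ProfileService`:

```python
from typing import Any, List, Optional


def matches_actor(
    actor_aliases: List[str],
    owners: Optional[Any],
    modified_by: Optional[str],
) -> bool:
    """Assumed stand-in for the ProfileService check: any alias hits owners OR modified_by."""
    owner_tokens = {str(o).strip().lower() for o in (owners or [])}
    modified_token = str(modified_by or "").strip().lower()
    # OR-fold: the first alias that matches short-circuits, like the loop in the route module.
    return any(alias in owner_tokens or alias == modified_token for alias in actor_aliases)
```

This is why `_resolve_profile_actor_aliases` can safely append a display-name alias: a false alias only widens the OR, it never excludes a dashboard the plain username would have matched.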
@@ -213,6 +483,7 @@ def _resolve_dashboard_id_from_ref(
# @PRE: page_size must be between 1 and 100 if provided
# @POST: Returns a list of dashboards with enhanced metadata and pagination info
# @POST: Response includes pagination metadata (page, page_size, total, total_pages)
# @POST: Response includes effective profile filter metadata for main dashboards page context
# @PARAM: env_id (str) - The environment ID to fetch dashboards from
# @PARAM: search (Optional[str]) - Filter by title/slug
# @PARAM: page (Optional[int]) - Page number (default: 1)
@@ -225,33 +496,102 @@ async def get_dashboards(
    search: Optional[str] = None,
    page: int = 1,
    page_size: int = 10,
    page_context: Literal["dashboards_main", "other"] = Query(default="dashboards_main"),
    apply_profile_default: bool = Query(default=True),
    override_show_all: bool = Query(default=False),
    filter_title: Optional[List[str]] = Query(default=None),
    filter_git_status: Optional[List[str]] = Query(default=None),
    filter_llm_status: Optional[List[str]] = Query(default=None),
    filter_changed_on: Optional[List[str]] = Query(default=None),
    filter_actor: Optional[List[str]] = Query(default=None),
    config_manager=Depends(get_config_manager),
    task_manager=Depends(get_task_manager),
    resource_service=Depends(get_resource_service),
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
    _ = Depends(has_permission("plugin:migration", "READ"))
):
    with belief_scope("get_dashboards", f"env_id={env_id}, search={search}, page={page}, page_size={page_size}"):
        # Validate pagination parameters
    with belief_scope(
        "get_dashboards",
        (
            f"env_id={env_id}, search={search}, page={page}, page_size={page_size}, "
            f"page_context={page_context}, apply_profile_default={apply_profile_default}, "
            f"override_show_all={override_show_all}"
        ),
    ):
        if page < 1:
            logger.error(f"[get_dashboards][Coherence:Failed] Invalid page: {page}")
            raise HTTPException(status_code=400, detail="Page must be >= 1")
        if page_size < 1 or page_size > 100:
            logger.error(f"[get_dashboards][Coherence:Failed] Invalid page_size: {page_size}")
            raise HTTPException(status_code=400, detail="Page size must be between 1 and 100")

        # Validate environment exists

        environments = config_manager.get_environments()
        env = next((e for e in environments if e.id == env_id), None)
        if not env:
            logger.error(f"[get_dashboards][Coherence:Failed] Environment not found: {env_id}")
            raise HTTPException(status_code=404, detail="Environment not found")

        try:
            # Get all tasks for status lookup
            all_tasks = task_manager.get_all_tasks()

            # Fast path: real ResourceService -> one Superset page call per API request.
            if isinstance(resource_service, ResourceService):
                profile_service = ProfileService(db=db, config_manager=config_manager)
                bound_username: Optional[str] = None
                can_apply_profile_filter = False
                effective_profile_filter = EffectiveProfileFilter(
                    applied=False,
                    source_page=page_context,
                    override_show_all=bool(override_show_all),
                    username=None,
                    match_logic=None,
                )

                try:
                    profile_preference = profile_service.get_my_preference(current_user).preference
                    normalized_username = str(
                        getattr(profile_preference, "superset_username_normalized", None) or ""
                    ).strip().lower()
                    raw_username = str(
                        getattr(profile_preference, "superset_username", None) or ""
                    ).strip().lower()
                    bound_username = normalized_username or raw_username or None

                    can_apply_profile_filter = (
                        page_context == "dashboards_main"
                        and bool(apply_profile_default)
                        and not bool(override_show_all)
                        and bool(getattr(profile_preference, "show_only_my_dashboards", False))
                        and bool(bound_username)
                    )

                    effective_profile_filter = EffectiveProfileFilter(
                        applied=bool(can_apply_profile_filter),
                        source_page=page_context,
                        override_show_all=bool(override_show_all),
                        username=bound_username if can_apply_profile_filter else None,
                        match_logic="owners_or_modified_by" if can_apply_profile_filter else None,
                    )
                except Exception as profile_error:
                    logger.explore(
                        f"[EXPLORE] Profile preference unavailable; continuing without profile-default filter: {profile_error}"
                    )

            try:
                all_tasks = task_manager.get_all_tasks()
                title_filters = _normalize_filter_values(filter_title)
                git_filters = _normalize_filter_values(filter_git_status)
                llm_filters = _normalize_filter_values(filter_llm_status)
                changed_on_filters = _normalize_filter_values(filter_changed_on)
                actor_filters = _normalize_filter_values(filter_actor)
                has_column_filters = any(
                    (
                        title_filters,
                        git_filters,
                        llm_filters,
                        changed_on_filters,
                        actor_filters,
                    )
                )
                needs_full_scan = has_column_filters or bool(can_apply_profile_filter)

                if isinstance(resource_service, ResourceService) and not needs_full_scan:
                    try:
                        page_payload = await resource_service.get_dashboards_page_with_status(
                            env,
@@ -278,9 +618,9 @@ async def get_dashboards(
            if search:
                search_lower = search.lower()
                dashboards = [
                    d for d in dashboards
                    if search_lower in d.get('title', '').lower()
                    or search_lower in d.get('slug', '').lower()
                    d for d in dashboards
                    if search_lower in d.get("title", "").lower()
                    or search_lower in d.get("slug", "").lower()
                ]

            total = len(dashboards)
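The pagination arithmetic repeated in both branches of `get_dashboards` is plain ceil-division over the filtered list. A standalone sketch of the same formula (`page_slice` is an illustrative name, not a module function):

```python
from typing import Tuple


def page_slice(total_items: int, page: int, page_size: int) -> Tuple[int, int, int]:
    """Ceil-division pagination as used in get_dashboards; empty lists still report 1 page."""
    total_pages = (total_items + page_size - 1) // page_size if total_items > 0 else 1
    start_idx = (page - 1) * page_size
    end_idx = start_idx + page_size  # list slicing clamps past-the-end indices safely
    return total_pages, start_idx, end_idx
```

Because Python slicing clamps out-of-range bounds, a `page` beyond `total_pages` yields an empty page rather than an IndexError, which matches the route's behavior.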
@@ -289,37 +629,125 @@ async def get_dashboards(
                        end_idx = start_idx + page_size
                        paginated_dashboards = dashboards[start_idx:end_idx]
                else:
                    # Compatibility path for mocked services in route tests.
                    dashboards = await resource_service.get_dashboards_with_status(
                        env,
                        all_tasks,
                        include_git_status=False,
                        include_git_status=bool(git_filters),
                    )

                    if can_apply_profile_filter and bound_username:
                        actor_aliases = _resolve_profile_actor_aliases(env, bound_username)
                        if not actor_aliases:
                            actor_aliases = [bound_username]
                        logger.reason(
                            "[REASON] Applying profile actor filter "
                            f"(env={env_id}, bound_username={bound_username}, actor_aliases={actor_aliases!r}, "
                            f"dashboards_before={len(dashboards)})"
                        )
                        filtered_dashboards: List[Dict[str, Any]] = []
                        max_actor_samples = 15
                        for index, dashboard in enumerate(dashboards):
                            owners_value = dashboard.get("owners")
                            created_by_value = dashboard.get("created_by")
                            modified_by_value = dashboard.get("modified_by")
                            matches_actor = _matches_dashboard_actor_aliases(
                                profile_service=profile_service,
                                actor_aliases=actor_aliases,
                                owners=owners_value,
                                modified_by=modified_by_value,
                            )
                            if index < max_actor_samples:
                                logger.reflect(
                                    "[REFLECT] Profile actor filter sample "
                                    f"(env={env_id}, dashboard_id={dashboard.get('id')}, "
                                    f"bound_username={bound_username!r}, actor_aliases={actor_aliases!r}, "
                                    f"owners={owners_value!r}, created_by={created_by_value!r}, "
                                    f"modified_by={modified_by_value!r}, matches={matches_actor})"
                                )
                            if matches_actor:
                                filtered_dashboards.append(dashboard)

                        logger.reflect(
                            "[REFLECT] Profile actor filter summary "
                            f"(env={env_id}, bound_username={bound_username!r}, "
                            f"dashboards_before={len(dashboards)}, dashboards_after={len(filtered_dashboards)})"
                        )
                        dashboards = filtered_dashboards

                    if search:
                        search_lower = search.lower()
                        dashboards = [
                            d for d in dashboards
                            if search_lower in d.get('title', '').lower()
                            or search_lower in d.get('slug', '').lower()
                            d for d in dashboards
                            if search_lower in d.get("title", "").lower()
                            or search_lower in d.get("slug", "").lower()
                        ]

                    def _matches_dashboard_filters(dashboard: Dict[str, Any]) -> bool:
                        title_value = str(dashboard.get("title") or "").strip().lower()
                        if title_filters and title_value not in title_filters:
                            return False

                        if git_filters:
                            git_value = _dashboard_git_filter_value(dashboard)
                            if git_value not in git_filters:
                                return False

                        llm_value = str(
                            ((dashboard.get("last_task") or {}).get("validation_status"))
                            or "UNKNOWN"
                        ).strip().lower()
                        if llm_filters and llm_value not in llm_filters:
                            return False

                        changed_on_raw = str(dashboard.get("last_modified") or "").strip().lower()
                        changed_on_prefix = (
                            changed_on_raw[:10] if len(changed_on_raw) >= 10 else changed_on_raw
                        )
                        if (
                            changed_on_filters
                            and changed_on_raw not in changed_on_filters
                            and changed_on_prefix not in changed_on_filters
                        ):
                            return False

                        owners = dashboard.get("owners") or []
                        if isinstance(owners, list):
                            actor_value = ", ".join(
                                str(item).strip() for item in owners if str(item).strip()
                            ).lower()
                        else:
                            actor_value = str(owners).strip().lower()
                        if not actor_value:
                            actor_value = "-"
                        if actor_filters and actor_value not in actor_filters:
                            return False
                        return True

                    if has_column_filters:
                        dashboards = [d for d in dashboards if _matches_dashboard_filters(d)]

                    total = len(dashboards)
                    total_pages = (total + page_size - 1) // page_size if total > 0 else 1
                    start_idx = (page - 1) * page_size
                    end_idx = start_idx + page_size
                    paginated_dashboards = dashboards[start_idx:end_idx]

                logger.info(f"[get_dashboards][Coherence:OK] Returning {len(paginated_dashboards)} dashboards (page {page}/{total_pages}, total: {total})")


                logger.info(
                    f"[get_dashboards][Coherence:OK] Returning {len(paginated_dashboards)} dashboards "
                    f"(page {page}/{total_pages}, total: {total}, profile_filter_applied={effective_profile_filter.applied})"
                )

                response_dashboards = _project_dashboard_response_items(paginated_dashboards)

                return DashboardsResponse(
                    dashboards=paginated_dashboards,
                    dashboards=response_dashboards,
                    total=total,
                    page=page,
                    page_size=page_size,
                    total_pages=total_pages
                    total_pages=total_pages,
                    effective_profile_filter=effective_profile_filter,
                )


            except Exception as e:
                logger.error(f"[get_dashboards][Coherence:Failed] Failed to fetch dashboards: {e}")
                raise HTTPException(status_code=503, detail=f"Failed to fetch dashboards: {str(e)}")
@@ -399,10 +827,10 @@ async def get_dashboard_detail(
            logger.error(f"[get_dashboard_detail][Coherence:Failed] Environment not found: {env_id}")
            raise HTTPException(status_code=404, detail="Environment not found")

        client = AsyncSupersetClient(env)
        try:
            client = SupersetClient(env)
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, client)
            detail = client.get_dashboard_detail(dashboard_id)
            dashboard_id = await _resolve_dashboard_id_from_ref_async(dashboard_ref, client)
            detail = await client.get_dashboard_detail_async(dashboard_id)
            logger.info(
                f"[get_dashboard_detail][Coherence:OK] Dashboard ref={dashboard_ref} resolved_id={dashboard_id}: {detail.get('chart_count', 0)} charts, {detail.get('dataset_count', 0)} datasets"
            )
@@ -412,6 +840,8 @@ async def get_dashboard_detail(
        except Exception as e:
            logger.error(f"[get_dashboard_detail][Coherence:Failed] Failed to fetch dashboard detail: {e}")
            raise HTTPException(status_code=503, detail=f"Failed to fetch dashboard detail: {str(e)}")
        finally:
            await client.aclose()
# [/DEF:get_dashboard_detail:Function]


@@ -463,69 +893,74 @@ async def get_dashboard_tasks_history(
|
||||
):
|
||||
with belief_scope("get_dashboard_tasks_history", f"dashboard_ref={dashboard_ref}, env_id={env_id}, limit={limit}"):
|
||||
dashboard_id: Optional[int] = None
|
||||
if dashboard_ref.isdigit():
|
||||
dashboard_id = int(dashboard_ref)
|
||||
elif env_id:
|
||||
environments = config_manager.get_environments()
|
||||
env = next((e for e in environments if e.id == env_id), None)
|
||||
if not env:
|
||||
logger.error(f"[get_dashboard_tasks_history][Coherence:Failed] Environment not found: {env_id}")
|
||||
raise HTTPException(status_code=404, detail="Environment not found")
|
||||
client = SupersetClient(env)
|
||||
dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, client)
|
||||
else:
|
||||
logger.error(
|
||||
"[get_dashboard_tasks_history][Coherence:Failed] Non-numeric dashboard ref requires env_id"
|
||||
)
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail="env_id is required when dashboard reference is a slug",
|
||||
)
|
||||
|
||||
matching_tasks = []
|
||||
for task in task_manager.get_all_tasks():
|
||||
if _task_matches_dashboard(task, dashboard_id, env_id):
|
||||
matching_tasks.append(task)
|
||||
|
||||
def _sort_key(task_obj: Any) -> str:
|
||||
return (
|
||||
str(getattr(task_obj, "started_at", "") or "")
|
||||
or str(getattr(task_obj, "finished_at", "") or "")
|
||||
)
|
||||
|
||||
matching_tasks.sort(key=_sort_key, reverse=True)
|
||||
selected = matching_tasks[:limit]
|
||||
|
||||
items = []
|
||||
for task in selected:
|
||||
result = getattr(task, "result", None)
|
||||
summary = None
|
||||
validation_status = None
|
||||
if isinstance(result, dict):
|
||||
raw_validation_status = result.get("status")
|
||||
if raw_validation_status is not None:
|
||||
validation_status = str(raw_validation_status)
|
||||
summary = (
|
||||
result.get("summary")
|
||||
or result.get("status")
|
||||
or result.get("message")
|
||||
client: Optional[AsyncSupersetClient] = None
|
||||
try:
|
||||
if dashboard_ref.isdigit():
|
||||
dashboard_id = int(dashboard_ref)
|
||||
elif env_id:
|
||||
environments = config_manager.get_environments()
|
||||
env = next((e for e in environments if e.id == env_id), None)
|
||||
if not env:
|
||||
logger.error(f"[get_dashboard_tasks_history][Coherence:Failed] Environment not found: {env_id}")
|
||||
raise HTTPException(status_code=404, detail="Environment not found")
|
||||
client = AsyncSupersetClient(env)
|
||||
dashboard_id = await _resolve_dashboard_id_from_ref_async(dashboard_ref, client)
|
||||
else:
|
||||
logger.error(
|
||||
"[get_dashboard_tasks_history][Coherence:Failed] Non-numeric dashboard ref requires env_id"
|
||||
)
|
||||
params = getattr(task, "params", {}) or {}
|
||||
items.append(
|
||||
DashboardTaskHistoryItem(
|
||||
id=str(getattr(task, "id", "")),
|
||||
plugin_id=str(getattr(task, "plugin_id", "")),
|
||||
status=str(getattr(task, "status", "")),
|
||||
validation_status=validation_status,
|
||||
started_at=getattr(task, "started_at", None).isoformat() if getattr(task, "started_at", None) else None,
|
||||
finished_at=getattr(task, "finished_at", None).isoformat() if getattr(task, "finished_at", None) else None,
|
||||
env_id=str(params.get("environment_id") or params.get("env")) if (params.get("environment_id") or params.get("env")) else None,
|
||||
summary=summary,
|
||||
raise HTTPException(
|
||||
status_code=400,
|
||||
detail="env_id is required when dashboard reference is a slug",
|
||||
)
|
||||
)
|
||||
|
||||
            logger.info(f"[get_dashboard_tasks_history][Coherence:OK] Found {len(items)} tasks for dashboard_ref={dashboard_ref}, dashboard_id={dashboard_id}")
            return DashboardTaskHistoryResponse(dashboard_id=dashboard_id, items=items)
        matching_tasks = []
        for task in task_manager.get_all_tasks():
            if _task_matches_dashboard(task, dashboard_id, env_id):
                matching_tasks.append(task)

        def _sort_key(task_obj: Any) -> str:
            return (
                str(getattr(task_obj, "started_at", "") or "")
                or str(getattr(task_obj, "finished_at", "") or "")
            )

        matching_tasks.sort(key=_sort_key, reverse=True)
        selected = matching_tasks[:limit]

        items = []
        for task in selected:
            result = getattr(task, "result", None)
            summary = None
            validation_status = None
            if isinstance(result, dict):
                raw_validation_status = result.get("status")
                if raw_validation_status is not None:
                    validation_status = str(raw_validation_status)
                summary = (
                    result.get("summary")
                    or result.get("status")
                    or result.get("message")
                )
            params = getattr(task, "params", {}) or {}
            items.append(
                DashboardTaskHistoryItem(
                    id=str(getattr(task, "id", "")),
                    plugin_id=str(getattr(task, "plugin_id", "")),
                    status=str(getattr(task, "status", "")),
                    validation_status=validation_status,
                    started_at=getattr(task, "started_at", None).isoformat() if getattr(task, "started_at", None) else None,
                    finished_at=getattr(task, "finished_at", None).isoformat() if getattr(task, "finished_at", None) else None,
                    env_id=str(params.get("environment_id") or params.get("env")) if (params.get("environment_id") or params.get("env")) else None,
                    summary=summary,
                )
            )

        logger.info(f"[get_dashboard_tasks_history][Coherence:OK] Found {len(items)} tasks for dashboard_ref={dashboard_ref}, dashboard_id={dashboard_id}")
        return DashboardTaskHistoryResponse(dashboard_id=dashboard_id, items=items)
    finally:
        if client is not None:
            await client.aclose()
# [/DEF:get_dashboard_tasks_history:Function]
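The history items above are ordered reverse-chronologically by a string sort key that falls back from `started_at` to `finished_at`. A minimal standalone sketch of that fallback sort (the dicts are hypothetical stand-ins for the real task objects):

```python
# Sketch: reverse-chronological sort with started_at -> finished_at fallback.
def sort_key(task: dict) -> str:
    # Empty strings are falsy, so a missing started_at falls back to finished_at.
    return str(task.get("started_at") or "") or str(task.get("finished_at") or "")

tasks = [
    {"id": "a", "started_at": "2025-01-02T10:00:00"},
    {"id": "b", "started_at": "", "finished_at": "2025-01-03T09:00:00"},
    {"id": "c", "started_at": "2025-01-01T08:00:00"},
]
tasks.sort(key=sort_key, reverse=True)
print([t["id"] for t in tasks])  # → ['b', 'a', 'c']
```

This works because ISO-8601 timestamps sort lexicographically in chronological order.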


@@ -548,15 +983,15 @@ async def get_dashboard_thumbnail(
        logger.error(f"[get_dashboard_thumbnail][Coherence:Failed] Environment not found: {env_id}")
        raise HTTPException(status_code=404, detail="Environment not found")

+   client = AsyncSupersetClient(env)
    try:
-       client = SupersetClient(env)
-       dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, client)
+       dashboard_id = await _resolve_dashboard_id_from_ref_async(dashboard_ref, client)
        digest = None
        thumb_endpoint = None

        # Preferred flow (newer Superset): ask server to cache screenshot and return digest/image_url.
        try:
-           screenshot_payload = client.network.request(
+           screenshot_payload = await client.network.request(
                method="POST",
                endpoint=f"/dashboard/{dashboard_id}/cache_dashboard_screenshot/",
                json={"force": force},
@@ -574,7 +1009,7 @@ async def get_dashboard_thumbnail(

        # Fallback flow (older Superset): read thumbnail_url from dashboard payload.
        if not digest:
-           dashboard_payload = client.network.request(
+           dashboard_payload = await client.network.request(
                method="GET",
                endpoint=f"/dashboard/{dashboard_id}",
            )
@@ -593,7 +1028,7 @@ async def get_dashboard_thumbnail(
        if not thumb_endpoint:
            thumb_endpoint = f"/dashboard/{dashboard_id}/thumbnail/{digest or 'latest'}/"

-       thumb_response = client.network.request(
+       thumb_response = await client.network.request(
            method="GET",
            endpoint=thumb_endpoint,
            raw_response=True,
@@ -618,6 +1053,8 @@ async def get_dashboard_thumbnail(
    except Exception as e:
        logger.error(f"[get_dashboard_thumbnail][Coherence:Failed] Failed to fetch dashboard thumbnail: {e}")
        raise HTTPException(status_code=503, detail=f"Failed to fetch dashboard thumbnail: {str(e)}")
+   finally:
+       await client.aclose()
# [/DEF:get_dashboard_thumbnail:Function]

# [DEF:MigrateRequest:DataClass]

@@ -20,6 +20,18 @@ from ...core.logger import belief_scope

router = APIRouter(prefix="/api/environments", tags=["Environments"])


# [DEF:_normalize_superset_env_url:Function]
# @PURPOSE: Canonicalize Superset environment URL to base host/path without trailing /api/v1.
# @PRE: raw_url can be empty.
# @POST: Returns normalized base URL.
def _normalize_superset_env_url(raw_url: str) -> str:
    normalized = str(raw_url or "").strip().rstrip("/")
    if normalized.lower().endswith("/api/v1"):
        normalized = normalized[:-len("/api/v1")]
    return normalized.rstrip("/")
# [/DEF:_normalize_superset_env_url:Function]
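The normalization above is pure string handling, so it can be exercised in isolation; a sketch with illustrative URLs (hosts are made up):

```python
# Sketch of the normalization: trim whitespace and trailing slashes,
# then drop a trailing /api/v1 segment (matched case-insensitively).
def normalize_superset_env_url(raw_url: str) -> str:
    normalized = str(raw_url or "").strip().rstrip("/")
    if normalized.lower().endswith("/api/v1"):
        normalized = normalized[: -len("/api/v1")]
    return normalized.rstrip("/")

print(normalize_superset_env_url("https://bi.example.com/api/v1/"))  # → https://bi.example.com
print(normalize_superset_env_url("https://bi.example.com"))          # → https://bi.example.com
print(normalize_superset_env_url(" http://host/API/V1 "))            # → http://host
```

Note the match is case-insensitive but the slice is taken from the original string, so the preserved prefix keeps its original casing.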

# [DEF:ScheduleSchema:DataClass]
class ScheduleSchema(BaseModel):
    enabled: bool = False
@@ -70,7 +82,7 @@ async def get_environments(
        EnvironmentResponse(
            id=e.id,
            name=e.name,
-           url=e.url,
+           url=_normalize_superset_env_url(e.url),
            stage=resolved_stage,
            is_production=(resolved_stage == "PROD"),
            backup_schedule=ScheduleSchema(

@@ -15,20 +15,25 @@ from sqlalchemy.orm import Session
from typing import List, Optional
import typing
import os
-from src.dependencies import get_config_manager, has_permission
+from src.dependencies import get_config_manager, get_current_user, has_permission
from src.core.database import get_db
from src.models.auth import User
from src.models.git import GitServerConfig, GitRepository, GitProvider
from src.models.profile import UserDashboardPreference
from src.api.routes.git_schemas import (
-    GitServerConfigSchema, GitServerConfigCreate,
+    GitServerConfigSchema, GitServerConfigCreate, GitServerConfigUpdate,
    BranchSchema, BranchCreate,
    BranchCheckout, CommitSchema, CommitCreate,
    DeploymentEnvironmentSchema, DeployRequest, RepoInitRequest,
    RepositoryBindingSchema,
    RepoStatusBatchRequest, RepoStatusBatchResponse,
    GiteaRepoCreateRequest, GiteaRepoSchema,
    RemoteRepoCreateRequest, RemoteRepoSchema,
    PromoteRequest, PromoteResponse,
    MergeStatusSchema, MergeConflictFileSchema, MergeResolveRequest, MergeContinueRequest,
)
from src.services.git_service import GitService
from src.core.async_superset_client import AsyncSupersetClient
from src.core.superset_client import SupersetClient
from src.core.logger import logger, belief_scope
from ...services.llm_prompt_templates import (
@@ -175,6 +180,180 @@ def _resolve_dashboard_id_from_ref(
    return dashboard_id
# [/DEF:_resolve_dashboard_id_from_ref:Function]


# [DEF:_find_dashboard_id_by_slug_async:Function]
# @PURPOSE: Resolve dashboard numeric ID by slug asynchronously for hot-path Git routes.
# @PRE: dashboard_slug is non-empty.
# @POST: Returns dashboard ID or None when not found.
async def _find_dashboard_id_by_slug_async(
    client: AsyncSupersetClient,
    dashboard_slug: str,
) -> Optional[int]:
    query_variants = [
        {"filters": [{"col": "slug", "opr": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
        {"filters": [{"col": "slug", "op": "eq", "value": dashboard_slug}], "page": 0, "page_size": 1},
    ]

    for query in query_variants:
        try:
            _count, dashboards = await client.get_dashboards_page_async(query=query)
            if dashboards:
                resolved_id = dashboards[0].get("id")
                if resolved_id is not None:
                    return int(resolved_id)
        except Exception:
            continue
    return None
# [/DEF:_find_dashboard_id_by_slug_async:Function]


# [DEF:_resolve_dashboard_id_from_ref_async:Function]
# @PURPOSE: Resolve dashboard ID asynchronously from slug-or-id reference for hot Git routes.
# @PRE: dashboard_ref is provided; env_id is required for slug values.
# @POST: Returns numeric dashboard ID or raises HTTPException.
async def _resolve_dashboard_id_from_ref_async(
    dashboard_ref: str,
    config_manager,
    env_id: Optional[str] = None,
) -> int:
    normalized_ref = str(dashboard_ref or "").strip()
    if not normalized_ref:
        raise HTTPException(status_code=400, detail="dashboard_ref is required")

    if normalized_ref.isdigit():
        return int(normalized_ref)

    if not env_id:
        raise HTTPException(
            status_code=400,
            detail="env_id is required for slug-based Git operations",
        )

    environments = config_manager.get_environments()
    env = next((e for e in environments if e.id == env_id), None)
    if not env:
        raise HTTPException(status_code=404, detail="Environment not found")

    client = AsyncSupersetClient(env)
    try:
        dashboard_id = await _find_dashboard_id_by_slug_async(client, normalized_ref)
        if dashboard_id is None:
            raise HTTPException(status_code=404, detail=f"Dashboard slug '{normalized_ref}' not found")
        return dashboard_id
    finally:
        await client.aclose()
# [/DEF:_resolve_dashboard_id_from_ref_async:Function]
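The resolver treats a purely numeric reference as the dashboard ID and only takes the slug path (which requires a live environment) otherwise. A hedged sketch of that dispatch rule without the HTTP and Superset dependencies; `lookup_slug` is a hypothetical stand-in for the async slug query:

```python
# Sketch of the digit-vs-slug dispatch used by the ref resolvers.
def resolve_ref(dashboard_ref: str, env_id=None, lookup_slug=None) -> int:
    ref = str(dashboard_ref or "").strip()
    if not ref:
        raise ValueError("dashboard_ref is required")
    if ref.isdigit():
        return int(ref)                 # numeric ref: no environment needed
    if not env_id:
        raise ValueError("env_id is required for slug-based operations")
    resolved = lookup_slug(ref)         # slug path needs a live environment
    if resolved is None:
        raise LookupError(f"Dashboard slug '{ref}' not found")
    return resolved

print(resolve_ref("42"))  # → 42
print(resolve_ref("sales-kpi", env_id="prod", lookup_slug={"sales-kpi": 7}.get))  # → 7
```

The real implementation raises `HTTPException` with 400/404 status codes at the same decision points.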


# [DEF:_resolve_repo_key_from_ref:Function]
# @PURPOSE: Resolve repository folder key with slug-first strategy and deterministic fallback.
# @PRE: dashboard_id is resolved and valid.
# @POST: Returns safe key to be used in local repository path.
# @RETURN: str
def _resolve_repo_key_from_ref(
    dashboard_ref: str,
    dashboard_id: int,
    config_manager,
    env_id: Optional[str] = None,
) -> str:
    normalized_ref = str(dashboard_ref or "").strip()
    if normalized_ref and not normalized_ref.isdigit():
        return normalized_ref

    if env_id:
        try:
            environments = config_manager.get_environments()
            env = next((e for e in environments if e.id == env_id), None)
            if env:
                payload = SupersetClient(env).get_dashboard(dashboard_id)
                dashboard_data = payload.get("result", payload) if isinstance(payload, dict) else {}
                dashboard_slug = dashboard_data.get("slug")
                if dashboard_slug:
                    return str(dashboard_slug)
        except Exception:
            pass

    return f"dashboard-{dashboard_id}"
# [/DEF:_resolve_repo_key_from_ref:Function]


# [DEF:_sanitize_optional_identity_value:Function]
# @PURPOSE: Normalize optional identity value into trimmed string or None.
# @PRE: value may be None or blank.
# @POST: Returns sanitized value suitable for git identity configuration.
# @RETURN: Optional[str]
def _sanitize_optional_identity_value(value: Optional[str]) -> Optional[str]:
    normalized = str(value or "").strip()
    if not normalized:
        return None
    return normalized
# [/DEF:_sanitize_optional_identity_value:Function]
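The repo-key strategy above is slug-first with a deterministic fallback: use the ref itself when it is already a slug, otherwise try to fetch the slug from Superset, and finally fall back to `dashboard-{id}`. A sketch under stated assumptions; `fetch_slug` is a hypothetical stand-in for the Superset dashboard lookup:

```python
# Sketch: slug-first repository key with deterministic fallback.
def resolve_repo_key(dashboard_ref: str, dashboard_id: int, fetch_slug=None) -> str:
    ref = str(dashboard_ref or "").strip()
    if ref and not ref.isdigit():
        return ref                          # caller already passed a slug
    if fetch_slug is not None:
        try:
            slug = fetch_slug(dashboard_id)
            if slug:
                return str(slug)
        except Exception:
            pass                            # lookup failures fall through
    return f"dashboard-{dashboard_id}"      # deterministic fallback key

print(resolve_repo_key("sales-kpi", 7))                       # → sales-kpi
print(resolve_repo_key("7", 7, fetch_slug=lambda _id: None))  # → dashboard-7
```

Swallowing the lookup error keeps repository initialization working even when the slug query fails, at the cost of a less readable folder name.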


# [DEF:_resolve_current_user_git_identity:Function]
# @PURPOSE: Resolve configured Git username/email from current user's profile preferences.
# @PRE: `db` may be stubbed in tests; `current_user` may be absent for direct handler invocations.
# @POST: Returns tuple(username, email) only when both values are configured.
# @RETURN: Optional[tuple[str, str]]
def _resolve_current_user_git_identity(
    db: Session,
    current_user: Optional[User],
) -> Optional[tuple[str, str]]:
    if db is None or not hasattr(db, "query"):
        return None

    user_id = _sanitize_optional_identity_value(getattr(current_user, "id", None))
    if not user_id:
        return None

    try:
        preference = (
            db.query(UserDashboardPreference)
            .filter(UserDashboardPreference.user_id == user_id)
            .first()
        )
    except Exception as resolve_error:
        logger.warning(
            "[_resolve_current_user_git_identity][Action] Failed to load profile preference for user %s: %s",
            user_id,
            resolve_error,
        )
        return None

    if not preference:
        return None

    git_username = _sanitize_optional_identity_value(getattr(preference, "git_username", None))
    git_email = _sanitize_optional_identity_value(getattr(preference, "git_email", None))
    if not git_username or not git_email:
        return None
    return git_username, git_email
# [/DEF:_resolve_current_user_git_identity:Function]


# [DEF:_apply_git_identity_from_profile:Function]
# @PURPOSE: Apply user-scoped Git identity to repository-local config before write/pull operations.
# @PRE: dashboard_id is resolved; db/current_user may be missing in direct test invocation context.
# @POST: git_service.configure_identity is called only when identity and method are available.
# @RETURN: None
def _apply_git_identity_from_profile(
    dashboard_id: int,
    db: Session,
    current_user: Optional[User],
) -> None:
    identity = _resolve_current_user_git_identity(db, current_user)
    if not identity:
        return

    configure_identity = getattr(git_service, "configure_identity", None)
    if not callable(configure_identity):
        return

    git_username, git_email = identity
    configure_identity(dashboard_id, git_username, git_email)
# [/DEF:_apply_git_identity_from_profile:Function]


# [DEF:get_git_configs:Function]
# @PURPOSE: List all configured Git servers.
# @PRE: Database session `db` is available.
@@ -183,10 +362,16 @@ def _resolve_dashboard_id_from_ref(
@router.get("/config", response_model=List[GitServerConfigSchema])
async def get_git_configs(
    db: Session = Depends(get_db),
-   _ = Depends(has_permission("admin:settings", "READ"))
+   _ = Depends(has_permission("git_config", "READ"))
):
    with belief_scope("get_git_configs"):
-       return db.query(GitServerConfig).all()
+       configs = db.query(GitServerConfig).all()
+       result = []
+       for config in configs:
+           schema = GitServerConfigSchema.from_orm(config)
+           schema.pat = "********"
+           result.append(schema)
+       return result
# [/DEF:get_git_configs:Function]

# [DEF:create_git_config:Function]
@@ -202,13 +387,48 @@ async def create_git_config(
    _ = Depends(has_permission("admin:settings", "WRITE"))
):
    with belief_scope("create_git_config"):
-       db_config = GitServerConfig(**config.dict())
+       config_dict = config.dict(exclude={"config_id"})
+       db_config = GitServerConfig(**config_dict)
        db.add(db_config)
        db.commit()
        db.refresh(db_config)
        return db_config
# [/DEF:create_git_config:Function]

# [DEF:update_git_config:Function]
# @PURPOSE: Update an existing Git server configuration.
# @PRE: `config_id` corresponds to an existing configuration.
# @POST: The configuration record is updated in the database.
# @PARAM: config_id (str)
# @PARAM: config_update (GitServerConfigUpdate)
# @RETURN: GitServerConfigSchema
@router.put("/config/{config_id}", response_model=GitServerConfigSchema)
async def update_git_config(
    config_id: str,
    config_update: GitServerConfigUpdate,
    db: Session = Depends(get_db),
    _ = Depends(has_permission("admin:settings", "WRITE"))
):
    with belief_scope("update_git_config"):
        db_config = db.query(GitServerConfig).filter(GitServerConfig.id == config_id).first()
        if not db_config:
            raise HTTPException(status_code=404, detail="Configuration not found")

        update_data = config_update.dict(exclude_unset=True)
        if update_data.get("pat") == "********":
            update_data.pop("pat")

        for key, value in update_data.items():
            setattr(db_config, key, value)

        db.commit()
        db.refresh(db_config)

        result_schema = GitServerConfigSchema.from_orm(db_config)
        result_schema.pat = "********"
        return result_schema
# [/DEF:update_git_config:Function]
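The read and update handlers above never expose the stored PAT: reads substitute the `"********"` sentinel, and an update whose `pat` still equals the sentinel drops that field so the stored secret survives. A minimal sketch of that round trip with plain dicts standing in for the ORM model and schema:

```python
MASK = "********"

def mask_config(config: dict) -> dict:
    # What the read endpoints return: everything except the real PAT.
    return {**config, "pat": MASK}

def apply_update(stored: dict, update: dict) -> dict:
    # Drop an unchanged (still-masked) PAT so the stored secret is kept.
    update = dict(update)
    if update.get("pat") == MASK:
        update.pop("pat")
    return {**stored, **update}

stored = {"name": "gitea", "pat": "real-secret"}
echoed = mask_config(stored)                              # client sees pat="********"
stored = apply_update(stored, {**echoed, "name": "gitea-prod"})
print(stored)  # → {'name': 'gitea-prod', 'pat': 'real-secret'}
```

The same sentinel convention is what lets `test_git_config` below look up the real PAT from the database when a client echoes the mask back.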

# [DEF:delete_git_config:Function]
# @PURPOSE: Remove a Git server configuration.
# @PRE: `config_id` corresponds to an existing configuration.
@@ -238,10 +458,22 @@ async def delete_git_config(
@router.post("/config/test")
async def test_git_config(
    config: GitServerConfigCreate,
-   _ = Depends(has_permission("admin:settings", "READ"))
+   db: Session = Depends(get_db),
+   _ = Depends(has_permission("git_config", "READ"))
):
    with belief_scope("test_git_config"):
-       success = await git_service.test_connection(config.provider, config.url, config.pat)
+       pat_to_use = config.pat
+       if pat_to_use == "********":
+           if config.config_id:
+               db_config = db.query(GitServerConfig).filter(GitServerConfig.id == config.config_id).first()
+               if db_config:
+                   pat_to_use = db_config.pat
+           else:
+               db_config = db.query(GitServerConfig).filter(GitServerConfig.url == config.url).first()
+               if db_config:
+                   pat_to_use = db_config.pat
+
+       success = await git_service.test_connection(config.provider, config.url, pat_to_use)
        if success:
            return {"status": "success", "message": "Connection successful"}
        else:
@@ -257,7 +489,7 @@ async def test_git_config(
async def list_gitea_repositories(
    config_id: str,
    db: Session = Depends(get_db),
-   _ = Depends(has_permission("admin:settings", "READ"))
+   _ = Depends(has_permission("git_config", "READ"))
):
    with belief_scope("list_gitea_repositories"):
        config = _get_git_config_or_404(db, config_id)

@@ -417,6 +649,7 @@ async def init_repository(
):
    with belief_scope("init_repository"):
        dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
+       repo_key = _resolve_repo_key_from_ref(dashboard_ref, dashboard_id, config_manager, env_id)
        # 1. Get config
        config = db.query(GitServerConfig).filter(GitServerConfig.id == init_data.config_id).first()
        if not config:
@@ -425,23 +658,25 @@ async def init_repository(
        try:
            # 2. Perform Git clone/init
            logger.info(f"[init_repository][Action] Initializing repo for dashboard {dashboard_id}")
-           git_service.init_repo(dashboard_id, init_data.remote_url, config.pat)
+           git_service.init_repo(dashboard_id, init_data.remote_url, config.pat, repo_key=repo_key, default_branch=config.default_branch)

            # 3. Save to DB
-           repo_path = git_service._get_repo_path(dashboard_id)
+           repo_path = git_service._get_repo_path(dashboard_id, repo_key=repo_key)
            db_repo = db.query(GitRepository).filter(GitRepository.dashboard_id == dashboard_id).first()
            if not db_repo:
                db_repo = GitRepository(
                    dashboard_id=dashboard_id,
                    config_id=config.id,
                    remote_url=init_data.remote_url,
-                   local_path=repo_path
+                   local_path=repo_path,
+                   current_branch="dev",
                )
                db.add(db_repo)
            else:
                db_repo.config_id = config.id
                db_repo.remote_url = init_data.remote_url
                db_repo.local_path = repo_path
+               db_repo.current_branch = "dev"

            db.commit()
            logger.info(f"[init_repository][Coherence:OK] Repository initialized for dashboard {dashboard_id}")
@@ -454,6 +689,64 @@ async def init_repository(
        _handle_unexpected_git_route_error("init_repository", e)
# [/DEF:init_repository:Function]

# [DEF:get_repository_binding:Function]
# @PURPOSE: Return repository binding with provider metadata for selected dashboard.
# @PRE: `dashboard_ref` resolves to a valid dashboard and repository is initialized.
# @POST: Returns dashboard repository binding and linked provider.
# @PARAM: dashboard_ref (str)
# @RETURN: RepositoryBindingSchema
@router.get("/repositories/{dashboard_ref}", response_model=RepositoryBindingSchema)
async def get_repository_binding(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    db: Session = Depends(get_db),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_repository_binding"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            db_repo = db.query(GitRepository).filter(GitRepository.dashboard_id == dashboard_id).first()
            if not db_repo:
                raise HTTPException(status_code=404, detail="Repository not initialized")
            config = _get_git_config_or_404(db, db_repo.config_id)
            return RepositoryBindingSchema(
                dashboard_id=db_repo.dashboard_id,
                config_id=db_repo.config_id,
                provider=config.provider,
                remote_url=db_repo.remote_url,
                local_path=db_repo.local_path,
            )
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_repository_binding", e)
# [/DEF:get_repository_binding:Function]

# [DEF:delete_repository:Function]
# @PURPOSE: Delete local repository workspace and DB binding for selected dashboard.
# @PRE: `dashboard_ref` resolves to a valid dashboard.
# @POST: Repository files and binding record are removed when present.
# @PARAM: dashboard_ref (str)
# @RETURN: dict
@router.delete("/repositories/{dashboard_ref}")
async def delete_repository(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("delete_repository"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            git_service.delete_repo(dashboard_id)
            return {"status": "success"}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("delete_repository", e)
# [/DEF:delete_repository:Function]

# [DEF:get_branches:Function]
# @PURPOSE: List all branches for a dashboard's repository.
# @PRE: Repository for `dashboard_ref` is initialized.
@@ -489,11 +782,14 @@ async def create_branch(
    branch_data: BranchCreate,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
+   db: Session = Depends(get_db),
+   current_user: User = Depends(get_current_user),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("create_branch"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
+           _apply_git_identity_from_profile(dashboard_id, db, current_user)
            git_service.create_branch(dashboard_id, branch_data.name, branch_data.from_branch)
            return {"status": "success"}
        except HTTPException:
@@ -539,11 +835,14 @@ async def commit_changes(
    commit_data: CommitCreate,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
+   db: Session = Depends(get_db),
+   current_user: User = Depends(get_current_user),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("commit_changes"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
+           _apply_git_identity_from_profile(dashboard_id, db, current_user)
            git_service.commit_changes(dashboard_id, commit_data.message, commit_data.files)
            return {"status": "success"}
        except HTTPException:
@@ -585,11 +884,35 @@ async def pull_changes(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
+   db: Session = Depends(get_db),
+   current_user: User = Depends(get_current_user),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("pull_changes"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
+           db_repo = db.query(GitRepository).filter(GitRepository.dashboard_id == dashboard_id).first()
+           config_url = None
+           config_provider = None
+           if db_repo:
+               config_row = db.query(GitServerConfig).filter(GitServerConfig.id == db_repo.config_id).first()
+               if config_row:
+                   config_url = config_row.url
+                   config_provider = config_row.provider
+           logger.info(
+               "[pull_changes][Action] Route diagnostics dashboard_ref=%s env_id=%s resolved_dashboard_id=%s "
+               "binding_exists=%s binding_local_path=%s binding_remote_url=%s binding_config_id=%s config_provider=%s config_url=%s",
+               dashboard_ref,
+               env_id,
+               dashboard_id,
+               bool(db_repo),
+               (db_repo.local_path if db_repo else None),
+               (db_repo.remote_url if db_repo else None),
+               (db_repo.config_id if db_repo else None),
+               config_provider,
+               config_url,
+           )
+           _apply_git_identity_from_profile(dashboard_id, db, current_user)
            git_service.pull_changes(dashboard_id)
            return {"status": "success"}
        except HTTPException:
@@ -598,6 +921,122 @@ async def pull_changes(
        _handle_unexpected_git_route_error("pull_changes", e)
# [/DEF:pull_changes:Function]

# [DEF:get_merge_status:Function]
# @PURPOSE: Return unfinished-merge status for repository (web-only recovery support).
# @PRE: `dashboard_ref` resolves to a valid dashboard repository.
# @POST: Returns merge status payload.
@router.get("/repositories/{dashboard_ref}/merge/status", response_model=MergeStatusSchema)
async def get_merge_status(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_merge_status"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return git_service.get_merge_status(dashboard_id)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_merge_status", e)
# [/DEF:get_merge_status:Function]


# [DEF:get_merge_conflicts:Function]
# @PURPOSE: Return conflicted files with mine/theirs previews for web conflict resolver.
# @PRE: `dashboard_ref` resolves to a valid dashboard repository.
# @POST: Returns conflict file list.
@router.get("/repositories/{dashboard_ref}/merge/conflicts", response_model=List[MergeConflictFileSchema])
async def get_merge_conflicts(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("get_merge_conflicts"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return git_service.get_merge_conflicts(dashboard_id)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("get_merge_conflicts", e)
# [/DEF:get_merge_conflicts:Function]


# [DEF:resolve_merge_conflicts:Function]
# @PURPOSE: Apply mine/theirs/manual conflict resolutions from WebUI and stage files.
# @PRE: `dashboard_ref` resolves; request contains at least one resolution item.
# @POST: Resolved files are staged in index.
@router.post("/repositories/{dashboard_ref}/merge/resolve")
async def resolve_merge_conflicts(
    dashboard_ref: str,
    resolve_data: MergeResolveRequest,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("resolve_merge_conflicts"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            resolved_files = git_service.resolve_merge_conflicts(
                dashboard_id,
                [item.dict() for item in resolve_data.resolutions],
            )
            return {"status": "success", "resolved_files": resolved_files}
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("resolve_merge_conflicts", e)
# [/DEF:resolve_merge_conflicts:Function]
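Each resolution item picks a side ("mine"/"theirs") or supplies manual content for one conflicted file. A sketch of applying such items to in-memory conflict snapshots; the `file_path`/`strategy` field names are assumptions for illustration (only `content` is visible in the `ConflictResolution` schema below), and the real service writes files and stages them in git instead:

```python
# Sketch: apply "mine"/"theirs"/"manual" resolutions to conflict snapshots.
def apply_resolutions(conflicts: dict, resolutions: list) -> dict:
    resolved = {}
    for item in resolutions:
        path, strategy = item["file_path"], item["strategy"]
        sides = conflicts[path]                # {"mine": ..., "theirs": ...}
        if strategy == "manual":
            resolved[path] = item["content"]   # caller-provided merge result
        else:
            resolved[path] = sides[strategy]   # pick one side verbatim
    return resolved

conflicts = {"dash.yaml": {"mine": "a: 1\n", "theirs": "a: 2\n"}}
print(apply_resolutions(conflicts, [{"file_path": "dash.yaml", "strategy": "theirs", "content": None}]))
# → {'dash.yaml': 'a: 2\n'}
```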


# [DEF:abort_merge:Function]
# @PURPOSE: Abort unfinished merge from WebUI flow.
# @PRE: `dashboard_ref` resolves to repository.
# @POST: Merge operation is aborted or reports no active merge.
@router.post("/repositories/{dashboard_ref}/merge/abort")
async def abort_merge(
    dashboard_ref: str,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("abort_merge"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return git_service.abort_merge(dashboard_id)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("abort_merge", e)
# [/DEF:abort_merge:Function]


# [DEF:continue_merge:Function]
# @PURPOSE: Finalize unfinished merge from WebUI flow.
# @PRE: All conflicts are resolved and staged.
# @POST: Merge commit is created.
@router.post("/repositories/{dashboard_ref}/merge/continue")
async def continue_merge(
    dashboard_ref: str,
    continue_data: MergeContinueRequest,
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("continue_merge"):
        try:
            dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
            return git_service.continue_merge(dashboard_id, continue_data.message)
        except HTTPException:
            raise
        except Exception as e:
            _handle_unexpected_git_route_error("continue_merge", e)
# [/DEF:continue_merge:Function]


# [DEF:sync_dashboard:Function]
# @PURPOSE: Sync dashboard state from Superset to Git using the GitPlugin.
# @PRE: `dashboard_ref` is valid; GitPlugin is available.
@@ -640,6 +1079,7 @@ async def promote_dashboard(
    env_id: Optional[str] = None,
    config_manager=Depends(get_config_manager),
    db: Session = Depends(get_db),
+   current_user: User = Depends(get_current_user),
    _ = Depends(has_permission("plugin:git", "EXECUTE"))
):
    with belief_scope("promote_dashboard"):
@@ -668,6 +1108,7 @@ async def promote_dashboard(
            to_branch,
            reason,
        )
+       _apply_git_identity_from_profile(dashboard_id, db, current_user)
        result = git_service.promote_direct_merge(
            dashboard_id=dashboard_id,
            from_branch=from_branch,
@@ -821,7 +1262,7 @@ async def get_repository_status(
):
    with belief_scope("get_repository_status"):
        try:
-           dashboard_id = _resolve_dashboard_id_from_ref(dashboard_ref, config_manager, env_id)
+           dashboard_id = await _resolve_dashboard_id_from_ref_async(dashboard_ref, config_manager, env_id)
            return _resolve_repository_status(dashboard_id)
        except HTTPException:
            raise

@@ -21,14 +21,27 @@ class GitServerConfigBase(BaseModel):
     provider: GitProvider = Field(..., description="Git provider (GITHUB, GITLAB, GITEA)")
     url: str = Field(..., description="Server base URL")
     pat: str = Field(..., description="Personal Access Token")
+    default_repository: Optional[str] = Field(None, description="Default repository path (org/repo)")
+    default_branch: Optional[str] = Field("main", description="Default branch logic/name")
 # [/DEF:GitServerConfigBase:Class]
 
 
+# [DEF:GitServerConfigUpdate:Class]
+# @PURPOSE: Schema for updating an existing Git server configuration.
+class GitServerConfigUpdate(BaseModel):
+    name: Optional[str] = Field(None, description="Display name for the Git server")
+    provider: Optional[GitProvider] = Field(None, description="Git provider (GITHUB, GITLAB, GITEA)")
+    url: Optional[str] = Field(None, description="Server base URL")
+    pat: Optional[str] = Field(None, description="Personal Access Token")
+    default_repository: Optional[str] = Field(None, description="Default repository path (org/repo)")
+    default_branch: Optional[str] = Field(None, description="Default branch logic/name")
+# [/DEF:GitServerConfigUpdate:Class]
+
 # [DEF:GitServerConfigCreate:Class]
 # @PURPOSE: Schema for creating a new Git server configuration.
 class GitServerConfigCreate(GitServerConfigBase):
     """Schema for creating a new Git server configuration."""
-    pass
+    config_id: Optional[str] = Field(None, description="Optional config ID, useful for testing an existing config without sending its full PAT")
 # [/DEF:GitServerConfigCreate:Class]
 
 # [DEF:GitServerConfigSchema:Class]
@@ -113,6 +126,42 @@ class ConflictResolution(BaseModel):
     content: Optional[str] = None
 # [/DEF:ConflictResolution:Class]
 
 
+# [DEF:MergeStatusSchema:Class]
+# @PURPOSE: Schema representing unfinished merge status for repository.
+class MergeStatusSchema(BaseModel):
+    has_unfinished_merge: bool
+    repository_path: str
+    git_dir: str
+    current_branch: str
+    merge_head: Optional[str] = None
+    merge_message_preview: Optional[str] = None
+    conflicts_count: int = 0
+# [/DEF:MergeStatusSchema:Class]
+
+
+# [DEF:MergeConflictFileSchema:Class]
+# @PURPOSE: Schema describing one conflicted file with optional side snapshots.
+class MergeConflictFileSchema(BaseModel):
+    file_path: str
+    mine: Optional[str] = None
+    theirs: Optional[str] = None
+# [/DEF:MergeConflictFileSchema:Class]
+
+
+# [DEF:MergeResolveRequest:Class]
+# @PURPOSE: Request schema for resolving one or multiple merge conflicts.
+class MergeResolveRequest(BaseModel):
+    resolutions: List[ConflictResolution] = Field(default_factory=list)
+# [/DEF:MergeResolveRequest:Class]
+
+
+# [DEF:MergeContinueRequest:Class]
+# @PURPOSE: Request schema for finishing merge with optional explicit commit message.
+class MergeContinueRequest(BaseModel):
+    message: Optional[str] = None
+# [/DEF:MergeContinueRequest:Class]
+
 # [DEF:DeploymentEnvironmentSchema:Class]
 # @PURPOSE: Schema for representing a target deployment environment.
 class DeploymentEnvironmentSchema(BaseModel):
@@ -141,6 +190,17 @@ class RepoInitRequest(BaseModel):
     remote_url: str
 # [/DEF:RepoInitRequest:Class]
 
 
+# [DEF:RepositoryBindingSchema:Class]
+# @PURPOSE: Schema describing repository-to-config binding and provider metadata.
+class RepositoryBindingSchema(BaseModel):
+    dashboard_id: int
+    config_id: str
+    provider: GitProvider
+    remote_url: str
+    local_path: str
+# [/DEF:RepositoryBindingSchema:Class]
+
 # [DEF:RepoStatusBatchRequest:Class]
 # @PURPOSE: Schema for requesting repository statuses for multiple dashboards in a single call.
 class RepoStatusBatchRequest(BaseModel):
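The merge schemas above carry optional `mine`/`theirs` snapshots per conflicted file. As an illustration only (a hypothetical helper, not part of this diff), the split can be derived from standard Git conflict markers:

```python
def split_conflict(text: str) -> tuple:
    """Extract the 'mine' and 'theirs' sides of one Git conflict block."""
    mine_lines, theirs_lines = [], []
    side = None  # None until we enter a conflict block
    for line in text.splitlines():
        if line.startswith("<<<<<<<"):
            side = "mine"
        elif line.startswith("======="):
            side = "theirs"
        elif line.startswith(">>>>>>>"):
            side = None
        elif side == "mine":
            mine_lines.append(line)
        elif side == "theirs":
            theirs_lines.append(line)
    return "\n".join(mine_lines), "\n".join(theirs_lines)

conflict = "<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> feature"
print(split_conflict(conflict))  # ('ours', 'theirs')
```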
147
backend/src/api/routes/profile.py
Normal file
@@ -0,0 +1,147 @@
# [DEF:backend.src.api.routes.profile:Module]
#
# @TIER: CRITICAL
# @SEMANTICS: api, profile, preferences, self-service, account-lookup
# @PURPOSE: Exposes self-scoped profile preference endpoints and environment-based Superset account lookup.
# @LAYER: API
# @RELATION: DEPENDS_ON -> backend.src.services.profile_service
# @RELATION: DEPENDS_ON -> backend.src.dependencies.get_current_user
# @RELATION: DEPENDS_ON -> backend.src.core.database.get_db
#
# @INVARIANT: Endpoints are self-scoped and never mutate another user preference.
# @UX_STATE: ProfileLoad -> Returns stable ProfilePreferenceResponse for authenticated user.
# @UX_STATE: Saving -> Validation errors map to actionable 422 details.
# @UX_STATE: LookupLoading -> Returns success/degraded Superset lookup payload.
# @UX_FEEDBACK: Stable status/message/warning payloads support profile page feedback.
# @UX_RECOVERY: Lookup degradation keeps manual username save path available.

# [SECTION: IMPORTS]
from typing import Optional

from fastapi import APIRouter, Depends, HTTPException, Query
from sqlalchemy.orm import Session

from ...core.database import get_db
from ...core.logger import logger, belief_scope
from ...dependencies import (
    get_config_manager,
    get_current_user,
    get_plugin_loader,
)
from ...models.auth import User
from ...schemas.profile import (
    ProfilePreferenceResponse,
    ProfilePreferenceUpdateRequest,
    SupersetAccountLookupRequest,
    SupersetAccountLookupResponse,
)
from ...services.profile_service import (
    EnvironmentNotFoundError,
    ProfileAuthorizationError,
    ProfileService,
    ProfileValidationError,
)
# [/SECTION]

router = APIRouter(prefix="/api/profile", tags=["profile"])


# [DEF:_get_profile_service:Function]
# @PURPOSE: Build profile service for current request scope.
# @PRE: db session and config manager are available.
# @POST: Returns a ready ProfileService instance.
def _get_profile_service(db: Session, config_manager, plugin_loader=None) -> ProfileService:
    return ProfileService(
        db=db,
        config_manager=config_manager,
        plugin_loader=plugin_loader,
    )
# [/DEF:_get_profile_service:Function]


# [DEF:get_preferences:Function]
# @PURPOSE: Get authenticated user's dashboard filter preference.
# @PRE: Valid JWT and authenticated user context.
# @POST: Returns preference payload for current user only.
@router.get("/preferences", response_model=ProfilePreferenceResponse)
async def get_preferences(
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
    config_manager=Depends(get_config_manager),
    plugin_loader=Depends(get_plugin_loader),
):
    with belief_scope("profile.get_preferences", f"user_id={current_user.id}"):
        logger.reason("[REASON] Resolving current user preference")
        service = _get_profile_service(db, config_manager, plugin_loader)
        return service.get_my_preference(current_user)
# [/DEF:get_preferences:Function]


# [DEF:update_preferences:Function]
# @PURPOSE: Update authenticated user's dashboard filter preference.
# @PRE: Valid JWT and valid request payload.
# @POST: Persists normalized preference for current user or raises validation/authorization errors.
@router.patch("/preferences", response_model=ProfilePreferenceResponse)
async def update_preferences(
    payload: ProfilePreferenceUpdateRequest,
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
    config_manager=Depends(get_config_manager),
    plugin_loader=Depends(get_plugin_loader),
):
    with belief_scope("profile.update_preferences", f"user_id={current_user.id}"):
        service = _get_profile_service(db, config_manager, plugin_loader)
        try:
            logger.reason("[REASON] Attempting preference save")
            return service.update_my_preference(current_user=current_user, payload=payload)
        except ProfileValidationError as exc:
            logger.reflect("[REFLECT] Preference validation failed")
            raise HTTPException(status_code=422, detail=exc.errors) from exc
        except ProfileAuthorizationError as exc:
            logger.explore("[EXPLORE] Cross-user mutation guard blocked request")
            raise HTTPException(status_code=403, detail=str(exc)) from exc
# [/DEF:update_preferences:Function]


# [DEF:lookup_superset_accounts:Function]
# @PURPOSE: Lookup Superset account candidates in selected environment.
# @PRE: Valid JWT, authenticated context, and environment_id query parameter.
# @POST: Returns success or degraded lookup payload with stable shape.
@router.get("/superset-accounts", response_model=SupersetAccountLookupResponse)
async def lookup_superset_accounts(
    environment_id: str = Query(...),
    search: Optional[str] = Query(default=None),
    page_index: int = Query(default=0, ge=0),
    page_size: int = Query(default=20, ge=1, le=100),
    sort_column: str = Query(default="username"),
    sort_order: str = Query(default="desc"),
    current_user: User = Depends(get_current_user),
    db: Session = Depends(get_db),
    config_manager=Depends(get_config_manager),
    plugin_loader=Depends(get_plugin_loader),
):
    with belief_scope(
        "profile.lookup_superset_accounts",
        f"user_id={current_user.id}, environment_id={environment_id}",
    ):
        service = _get_profile_service(db, config_manager, plugin_loader)
        lookup_request = SupersetAccountLookupRequest(
            environment_id=environment_id,
            search=search,
            page_index=page_index,
            page_size=page_size,
            sort_column=sort_column,
            sort_order=sort_order,
        )
        try:
            logger.reason("[REASON] Executing Superset account lookup")
            return service.lookup_superset_accounts(
                current_user=current_user,
                request=lookup_request,
            )
        except EnvironmentNotFoundError as exc:
            logger.explore("[EXPLORE] Lookup request references unknown environment")
            raise HTTPException(status_code=404, detail=str(exc)) from exc
# [/DEF:lookup_superset_accounts:Function]

# [/DEF:backend.src.api.routes.profile:Module]
@@ -13,10 +13,11 @@ from typing import List, Optional
 
 from fastapi import APIRouter, Depends, HTTPException, Query, status
 
-from ...dependencies import get_task_manager, has_permission
+from ...dependencies import get_task_manager, has_permission, get_clean_release_repository
 from ...core.task_manager import TaskManager
 from ...core.logger import belief_scope
 from ...models.report import ReportCollection, ReportDetailView, ReportQuery, ReportStatus, TaskType
+from ...services.clean_release.repository import CleanReleaseRepository
 from ...services.reports.report_service import ReportsService
 # [/SECTION]
 
@@ -88,6 +89,7 @@ async def list_reports(
     sort_by: str = Query("updated_at"),
     sort_order: str = Query("desc"),
     task_manager: TaskManager = Depends(get_task_manager),
+    clean_release_repository: CleanReleaseRepository = Depends(get_clean_release_repository),
     _=Depends(has_permission("tasks", "READ")),
 ):
     with belief_scope("list_reports"):
@@ -117,7 +119,7 @@ async def list_reports(
             },
         )
 
-        service = ReportsService(task_manager)
+        service = ReportsService(task_manager, clean_release_repository=clean_release_repository)
         return service.list_reports(query)
 # [/DEF:list_reports:Function]
 
@@ -130,10 +132,11 @@ async def list_reports(
 async def get_report_detail(
     report_id: str,
     task_manager: TaskManager = Depends(get_task_manager),
+    clean_release_repository: CleanReleaseRepository = Depends(get_clean_release_repository),
     _=Depends(has_permission("tasks", "READ")),
 ):
     with belief_scope("get_report_detail", f"report_id={report_id}"):
-        service = ReportsService(task_manager)
+        service = ReportsService(task_manager, clean_release_repository=clean_release_repository)
         detail = service.get_report_detail(report_id)
         if not detail:
             raise HTTPException(
@@ -31,7 +31,38 @@ class LoggingConfigResponse(BaseModel):
     enable_belief_state: bool
 # [/DEF:LoggingConfigResponse:Class]
 
 router = APIRouter()
 
 
+# [DEF:_normalize_superset_env_url:Function]
+# @PURPOSE: Canonicalize Superset environment URL to base host/path without trailing /api/v1.
+# @PRE: raw_url can be empty.
+# @POST: Returns normalized base URL.
+def _normalize_superset_env_url(raw_url: str) -> str:
+    normalized = str(raw_url or "").strip().rstrip("/")
+    if normalized.lower().endswith("/api/v1"):
+        normalized = normalized[:-len("/api/v1")]
+    return normalized.rstrip("/")
+# [/DEF:_normalize_superset_env_url:Function]
+
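The normalization rule above (trim, drop trailing slashes, then drop a trailing `/api/v1` suffix) can be exercised standalone; this is a mirror of the function for illustration, outside the route module:

```python
def normalize_superset_env_url(raw_url: str) -> str:
    # Mirror of _normalize_superset_env_url: trim whitespace, drop trailing
    # slashes, then drop a trailing /api/v1 suffix (case-insensitive).
    normalized = str(raw_url or "").strip().rstrip("/")
    if normalized.lower().endswith("/api/v1"):
        normalized = normalized[:-len("/api/v1")]
    return normalized.rstrip("/")

print(normalize_superset_env_url("https://superset.local/api/v1/"))  # https://superset.local
print(normalize_superset_env_url("http://host/"))                    # http://host
```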
+# [DEF:_validate_superset_connection_fast:Function]
+# @PURPOSE: Run lightweight Superset connectivity validation without full pagination scan.
+# @PRE: env contains valid URL and credentials.
+# @POST: Raises on auth/API failures; returns None on success.
+def _validate_superset_connection_fast(env: Environment) -> None:
+    client = SupersetClient(env)
+    # 1) Explicit auth check
+    client.authenticate()
+    # 2) Single lightweight API call to ensure read access
+    client.get_dashboards_page(
+        query={
+            "page": 0,
+            "page_size": 1,
+            "columns": ["id"],
+        }
+    )
+# [/DEF:_validate_superset_connection_fast:Function]
 
 # [DEF:get_settings:Function]
 # @PURPOSE: Retrieves all application settings.
@@ -112,14 +143,18 @@ async def update_storage_settings(
 # @PRE: Config manager is available.
 # @POST: Returns list of environments.
 # @RETURN: List[Environment] - List of environments.
 @router.get("/environments", response_model=List[Environment])
 async def get_environments(
     config_manager: ConfigManager = Depends(get_config_manager),
     _ = Depends(has_permission("admin:settings", "READ"))
 ):
     with belief_scope("get_environments"):
         logger.info("[get_environments][Entry] Fetching environments")
-        return config_manager.get_environments()
+        environments = config_manager.get_environments()
+        return [
+            env.copy(update={"url": _normalize_superset_env_url(env.url)})
+            for env in environments
+        ]
 # [/DEF:get_environments:Function]
 
 # [DEF:add_environment:Function]
@@ -129,21 +164,21 @@ async def get_environments(
 # @PARAM: env (Environment) - The environment to add.
 # @RETURN: Environment - The added environment.
 @router.post("/environments", response_model=Environment)
 async def add_environment(
     env: Environment,
     config_manager: ConfigManager = Depends(get_config_manager),
     _ = Depends(has_permission("admin:settings", "WRITE"))
 ):
     with belief_scope("add_environment"):
         logger.info(f"[add_environment][Entry] Adding environment {env.id}")
+        env = env.copy(update={"url": _normalize_superset_env_url(env.url)})
 
-        # Validate connection before adding
+        # Validate connection before adding (fast path)
         try:
-            client = SupersetClient(env)
-            client.get_dashboards(query={"page_size": 1})
+            _validate_superset_connection_fast(env)
         except Exception as e:
             logger.error(f"[add_environment][Coherence:Failed] Connection validation failed: {e}")
             raise HTTPException(status_code=400, detail=f"Connection validation failed: {e}")
 
         config_manager.add_environment(env)
         return env
@@ -157,28 +192,29 @@ async def add_environment(
 # @PARAM: env (Environment) - The updated environment data.
 # @RETURN: Environment - The updated environment.
 @router.put("/environments/{id}", response_model=Environment)
 async def update_environment(
     id: str,
     env: Environment,
     config_manager: ConfigManager = Depends(get_config_manager)
 ):
     with belief_scope("update_environment"):
         logger.info(f"[update_environment][Entry] Updating environment {id}")
+        env = env.copy(update={"url": _normalize_superset_env_url(env.url)})
 
         # If password is masked, we need the real one for validation
         env_to_validate = env.copy(deep=True)
         if env_to_validate.password == "********":
             old_env = next((e for e in config_manager.get_environments() if e.id == id), None)
             if old_env:
                 env_to_validate.password = old_env.password
 
-        # Validate connection before updating
+        # Validate connection before updating (fast path)
         try:
-            client = SupersetClient(env_to_validate)
-            client.get_dashboards(query={"page_size": 1})
+            _validate_superset_connection_fast(env_to_validate)
         except Exception as e:
             logger.error(f"[update_environment][Coherence:Failed] Connection validation failed: {e}")
             raise HTTPException(status_code=400, detail=f"Connection validation failed: {e}")
 
         if config_manager.update_environment(id, env):
             return env
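The masked-password handling above can be sketched in isolation: when the client echoes back the `********` placeholder, the previously stored secret is substituted before validation. Names here are illustrative, not the module's API:

```python
MASK = "********"

def resolve_password(submitted: str, stored: str) -> str:
    # If the UI sent back the mask, fall back to the stored secret;
    # otherwise the user supplied a new password.
    return stored if submitted == MASK else submitted

print(resolve_password("********", "s3cret"))  # s3cret
print(resolve_password("new-pass", "s3cret"))  # new-pass
```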
@@ -208,7 +244,7 @@ async def delete_environment(
 # @PARAM: id (str) - The ID of the environment to test.
 # @RETURN: dict - Success message or error.
 @router.post("/environments/{id}/test")
 async def test_environment_connection(
     id: str,
     config_manager: ConfigManager = Depends(get_config_manager)
 ):
@@ -220,15 +256,11 @@ async def test_environment_connection(
         if not env:
             raise HTTPException(status_code=404, detail=f"Environment {id} not found")
 
         try:
-            # Initialize client (this will trigger authentication)
-            client = SupersetClient(env)
-
-            # Try a simple request to verify
-            client.get_dashboards(query={"page_size": 1})
+            _validate_superset_connection_fast(env)
 
             logger.info(f"[test_environment_connection][Coherence:OK] Connection successful for {id}")
             return {"status": "success", "message": "Connection successful"}
         except Exception as e:
             logger.error(f"[test_environment_connection][Coherence:Failed] Connection failed for {id}: {e}")
             return {"status": "error", "message": str(e)}
@@ -4,30 +4,30 @@
# @PURPOSE: Defines the FastAPI router for task-related endpoints, allowing clients to create, list, and get the status of tasks.
# @LAYER: UI (API)
# @RELATION: Depends on the TaskManager. It is included by the main app.
from typing import List, Dict, Any, Optional
from fastapi import APIRouter, Depends, HTTPException, status, Query
from pydantic import BaseModel
from ...core.logger import belief_scope

from ...core.task_manager import TaskManager, Task, TaskStatus, LogEntry
from ...core.task_manager.models import LogFilter, LogStats
from ...dependencies import get_task_manager, has_permission, get_current_user, get_config_manager
from ...core.config_manager import ConfigManager
from ...services.llm_prompt_templates import (
    is_multimodal_model,
    normalize_llm_settings,
    resolve_bound_provider_id,
)

router = APIRouter()

TASK_TYPE_PLUGIN_MAP = {
    "llm_validation": ["llm_dashboard_validation"],
    "backup": ["superset-backup"],
    "migration": ["superset-migration"],
}

class CreateTaskRequest(BaseModel):
    plugin_id: str
    params: Dict[str, Any]

@@ -45,54 +45,54 @@ class ResumeTaskRequest(BaseModel):
# @PRE: plugin_id must exist and params must be valid for that plugin.
# @POST: A new task is created and started.
# @RETURN: Task - The created task instance.
async def create_task(
    request: CreateTaskRequest,
    task_manager: TaskManager = Depends(get_task_manager),
    current_user = Depends(get_current_user),
    config_manager: ConfigManager = Depends(get_config_manager),
):
    # Dynamic permission check based on plugin_id
    has_permission(f"plugin:{request.plugin_id}", "EXECUTE")(current_user)
    """
    Create and start a new task for a given plugin.
    """
    with belief_scope("create_task"):
        try:
            # Special handling for LLM tasks to resolve provider config by task binding.
            if request.plugin_id in {"llm_dashboard_validation", "llm_documentation"}:
                from ...core.database import SessionLocal
                from ...services.llm_provider import LLMProviderService
                db = SessionLocal()
                try:
                    llm_service = LLMProviderService(db)
                    provider_id = request.params.get("provider_id")
                    if not provider_id:
                        llm_settings = normalize_llm_settings(config_manager.get_config().settings.llm)
                        binding_key = "dashboard_validation" if request.plugin_id == "llm_dashboard_validation" else "documentation"
                        provider_id = resolve_bound_provider_id(llm_settings, binding_key)
                        if provider_id:
                            request.params["provider_id"] = provider_id
                    if not provider_id:
                        providers = llm_service.get_all_providers()
                        active_provider = next((p for p in providers if p.is_active), None)
                        if active_provider:
                            provider_id = active_provider.id
                            request.params["provider_id"] = provider_id

                    if provider_id:
                        db_provider = llm_service.get_provider(provider_id)
                        if not db_provider:
                            raise ValueError(f"LLM Provider {provider_id} not found")
                        if request.plugin_id == "llm_dashboard_validation" and not is_multimodal_model(
                            db_provider.default_model,
                            db_provider.provider_type,
                        ):
                            raise HTTPException(
                                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                                detail="Selected provider model is not multimodal for dashboard validation",
                            )
                finally:
                    db.close()

            task = await task_manager.create_task(
                plugin_id=request.plugin_id,
@@ -113,36 +113,36 @@ async def create_task(
# @PRE: task_manager must be available.
# @POST: Returns a list of tasks.
# @RETURN: List[Task] - List of tasks.
async def list_tasks(
    limit: int = 10,
    offset: int = 0,
    status_filter: Optional[TaskStatus] = Query(None, alias="status"),
    task_type: Optional[str] = Query(None, description="Task category: llm_validation, backup, migration"),
    plugin_id: Optional[List[str]] = Query(None, description="Filter by plugin_id (repeatable query param)"),
    completed_only: bool = Query(False, description="Return only completed tasks (SUCCESS/FAILED)"),
    task_manager: TaskManager = Depends(get_task_manager),
    _ = Depends(has_permission("tasks", "READ"))
):
    """
    Retrieve a list of tasks with pagination and optional status filter.
    """
    with belief_scope("list_tasks"):
        plugin_filters = list(plugin_id) if plugin_id else []
        if task_type:
            if task_type not in TASK_TYPE_PLUGIN_MAP:
                raise HTTPException(
                    status_code=status.HTTP_400_BAD_REQUEST,
                    detail=f"Unsupported task_type '{task_type}'. Allowed: {', '.join(TASK_TYPE_PLUGIN_MAP.keys())}"
                )
            plugin_filters.extend(TASK_TYPE_PLUGIN_MAP[task_type])

        return task_manager.get_tasks(
            limit=limit,
            offset=offset,
            status=status_filter,
            plugin_ids=plugin_filters or None,
            completed_only=completed_only
        )
# [/DEF:list_tasks:Function]

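The `task_type` filter in `list_tasks` expands a category into concrete plugin ids via `TASK_TYPE_PLUGIN_MAP`, rejecting unknown categories. A self-contained sketch of that lookup logic (using `ValueError` in place of the route's `HTTPException`):

```python
TASK_TYPE_PLUGIN_MAP = {
    "llm_validation": ["llm_dashboard_validation"],
    "backup": ["superset-backup"],
    "migration": ["superset-migration"],
}

def expand_plugin_filters(plugin_ids, task_type):
    # Combine explicit plugin ids with the plugins implied by task_type;
    # an empty result collapses to None (meaning "no filter").
    filters = list(plugin_ids or [])
    if task_type is not None:
        if task_type not in TASK_TYPE_PLUGIN_MAP:
            raise ValueError(f"Unsupported task_type '{task_type}'")
        filters.extend(TASK_TYPE_PLUGIN_MAP[task_type])
    return filters or None

print(expand_plugin_filters(None, "backup"))  # ['superset-backup']
print(expand_plugin_filters([], None))        # None
```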
@router.get("/{task_id}", response_model=Task)
@@ -182,6 +182,23 @@ async def get_task(
 # @POST: Returns a list of log entries or raises 404.
 # @RETURN: List[LogEntry] - List of log entries.
 # @TIER: CRITICAL
+# @TEST_CONTRACT get_task_logs_api ->
+# {
+#   required_params: {task_id: str},
+#   optional_params: {level: str, source: str, search: str},
+#   invariants: ["returns 404 for non-existent task", "applies filters correctly"]
+# }
+# @TEST_FIXTURE valid_task_logs_request -> {"task_id": "test_1", "level": "INFO"}
+# @TEST_EDGE task_not_found -> raises 404
+# @TEST_EDGE invalid_limit -> Query(limit=0) returns 422
+# @TEST_INVARIANT response_purity -> verifies: [valid_task_logs_request]
+# @TEST_CONTRACT: TaskLogQueryInput -> List[LogEntry]
+# @TEST_SCENARIO: existing_task_logs_filtered -> Returns filtered logs by level/source/search with pagination.
+# @TEST_FIXTURE: valid_task_with_mixed_logs -> backend/tests/fixtures/task_logs/valid_task_with_mixed_logs.json
+# @TEST_EDGE: missing_task -> Unknown task_id returns 404 Task not found.
+# @TEST_EDGE: invalid_level_type -> Non-string/invalid level query rejected by validation or yields empty result.
+# @TEST_EDGE: pagination_bounds -> offset=0 and limit=1000 remain within API bounds and do not overflow.
+# @TEST_INVARIANT: logs_only_for_existing_task -> VERIFIED_BY: [existing_task_logs_filtered, missing_task]
 async def get_task_logs(
     task_id: str,
     level: Optional[str] = Query(None, description="Filter by log level (DEBUG, INFO, WARNING, ERROR)"),
@@ -328,4 +345,4 @@ async def clear_tasks(
         task_manager.clear_tasks(status)
         return
 # [/DEF:clear_tasks:Function]
 # [/DEF:TasksRouter:Module]
@@ -21,7 +21,7 @@ import asyncio
 from .dependencies import get_task_manager, get_scheduler_service
 from .core.utils.network import NetworkError
 from .core.logger import logger, belief_scope
-from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm, dashboards, datasets, reports, assistant
+from .api.routes import plugins, tasks, settings, environments, mappings, migration, connections, git, storage, admin, llm, dashboards, datasets, reports, assistant, clean_release, clean_release_v2, profile
 from .api import auth
 
 # [DEF:App:Global]
@@ -133,6 +133,9 @@ app.include_router(dashboards.router)
 app.include_router(datasets.router)
 app.include_router(reports.router)
 app.include_router(assistant.router, prefix="/api/assistant", tags=["Assistant"])
+app.include_router(clean_release.router)
+app.include_router(clean_release_v2.router)
+app.include_router(profile.router)
 
 
 # [DEF:api.include_routers:Action]
128
backend/src/core/__tests__/test_superset_profile_lookup.py
Normal file
@@ -0,0 +1,128 @@
|
||||
# [DEF:backend.src.core.__tests__.test_superset_profile_lookup:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, superset, profile, lookup, fallback, sorting
# @PURPOSE: Verifies Superset profile lookup adapter payload normalization and fallback error precedence.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.core.superset_profile_lookup

# [SECTION: IMPORTS]
import json
import sys
from pathlib import Path
from typing import Any, Dict, List, Optional

import pytest

backend_dir = str(Path(__file__).parent.parent.parent.parent.resolve())
if backend_dir not in sys.path:
    sys.path.insert(0, backend_dir)

from src.core.superset_profile_lookup import SupersetAccountLookupAdapter
from src.core.utils.network import AuthenticationError, SupersetAPIError
# [/SECTION]


# [DEF:_RecordingNetworkClient:Class]
# @PURPOSE: Records request payloads and returns scripted responses for deterministic adapter tests.
class _RecordingNetworkClient:
    # [DEF:__init__:Function]
    # @PURPOSE: Initializes scripted network responses.
    # @PRE: scripted_responses is ordered per expected request sequence.
    # @POST: Instance stores response script and captures subsequent request calls.
    def __init__(self, scripted_responses: List[Any]):
        self._scripted_responses = scripted_responses
        self.calls: List[Dict[str, Any]] = []
    # [/DEF:__init__:Function]

    # [DEF:request:Function]
    # @PURPOSE: Mimics APIClient.request while capturing call arguments.
    # @PRE: method and endpoint are provided.
    # @POST: Returns scripted response or raises scripted exception.
    def request(
        self,
        method: str,
        endpoint: str,
        params: Optional[Dict[str, Any]] = None,
        **kwargs,
    ) -> Dict[str, Any]:
        self.calls.append(
            {
                "method": method,
                "endpoint": endpoint,
                "params": params or {},
            }
        )
        index = len(self.calls) - 1
        response = self._scripted_responses[index]
        if isinstance(response, Exception):
            raise response
        return response
    # [/DEF:request:Function]
# [/DEF:_RecordingNetworkClient:Class]


# [DEF:test_get_users_page_sends_lowercase_order_direction:Function]
# @PURPOSE: Ensures adapter sends lowercase order_direction compatible with Superset rison schema.
# @PRE: Adapter is initialized with recording network client.
# @POST: First request query payload contains order_direction='asc' for asc sort.
def test_get_users_page_sends_lowercase_order_direction():
    client = _RecordingNetworkClient(
        scripted_responses=[{"result": [{"username": "admin"}], "count": 1}]
    )
    adapter = SupersetAccountLookupAdapter(network_client=client, environment_id="ss-dev")

    adapter.get_users_page(
        search="admin",
        page_index=0,
        page_size=20,
        sort_column="username",
        sort_order="asc",
    )

    sent_query = json.loads(client.calls[0]["params"]["q"])
    assert sent_query["order_direction"] == "asc"
# [/DEF:test_get_users_page_sends_lowercase_order_direction:Function]


# [DEF:test_get_users_page_preserves_primary_schema_error_over_fallback_auth_error:Function]
# @PURPOSE: Ensures fallback auth error does not mask primary schema/query failure.
# @PRE: Primary endpoint fails with SupersetAPIError and fallback fails with AuthenticationError.
# @POST: Raised exception remains primary SupersetAPIError (non-auth) to preserve root cause.
def test_get_users_page_preserves_primary_schema_error_over_fallback_auth_error():
    client = _RecordingNetworkClient(
        scripted_responses=[
            SupersetAPIError("API Error 400: bad rison schema"),
            AuthenticationError(),
        ]
    )
    adapter = SupersetAccountLookupAdapter(network_client=client, environment_id="ss-dev")

    with pytest.raises(SupersetAPIError) as exc_info:
        adapter.get_users_page(sort_order="asc")

    assert "API Error 400" in str(exc_info.value)
    assert not isinstance(exc_info.value, AuthenticationError)
# [/DEF:test_get_users_page_preserves_primary_schema_error_over_fallback_auth_error:Function]


# [DEF:test_get_users_page_uses_fallback_endpoint_when_primary_fails:Function]
# @PURPOSE: Verifies adapter retries second users endpoint and succeeds when fallback is healthy.
# @PRE: Primary endpoint fails; fallback returns valid users payload.
# @POST: Result status is success and both endpoints were attempted in order.
def test_get_users_page_uses_fallback_endpoint_when_primary_fails():
    client = _RecordingNetworkClient(
        scripted_responses=[
            SupersetAPIError("Primary endpoint failed"),
            {"result": [{"username": "admin"}], "count": 1},
        ]
    )
    adapter = SupersetAccountLookupAdapter(network_client=client, environment_id="ss-dev")

    result = adapter.get_users_page()

    assert result["status"] == "success"
    assert [call["endpoint"] for call in client.calls] == ["/security/users/", "/security/users"]
# [/DEF:test_get_users_page_uses_fallback_endpoint_when_primary_fails:Function]


# [/DEF:backend.src.core.__tests__.test_superset_profile_lookup:Module]
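The `_RecordingNetworkClient` test double above is an instance of a general scripted-response pattern: each call returns (or raises) the next scripted item while capturing the arguments for later assertions. A minimal standalone sketch of that pattern, with hypothetical names rather than the project's actual classes:

```python
from typing import Any, Dict, List, Optional


class ScriptedClient:
    """Replays one scripted response (or raises a scripted exception) per call."""

    def __init__(self, scripted_responses: List[Any]):
        self._scripted_responses = scripted_responses
        self.calls: List[Dict[str, Any]] = []

    def request(self, method: str, endpoint: str, params: Optional[Dict[str, Any]] = None) -> Any:
        # Capture the call, then replay the next scripted item.
        self.calls.append({"method": method, "endpoint": endpoint, "params": params or {}})
        response = self._scripted_responses[len(self.calls) - 1]
        if isinstance(response, Exception):
            raise response
        return response


client = ScriptedClient([{"count": 1}, RuntimeError("second call fails")])
first = client.request("GET", "/users/")  # -> {"count": 1}
try:
    client.request("GET", "/users/")
except RuntimeError as exc:
    error_text = str(exc)
```

Because responses are positional, the test fully controls request ordering, which is what makes the fallback-endpoint assertions above deterministic.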
298	backend/src/core/async_superset_client.py	Normal file
@@ -0,0 +1,298 @@
# [DEF:backend.src.core.async_superset_client:Module]
#
# @TIER: CRITICAL
# @SEMANTICS: superset, async, client, httpx, dashboards, datasets
# @PURPOSE: Async Superset client for dashboard hot-path requests without blocking FastAPI event loop.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> backend.src.core.superset_client
# @RELATION: DEPENDS_ON -> backend.src.core.utils.async_network.AsyncAPIClient
# @INVARIANT: Async dashboard operations reuse shared auth cache and avoid sync requests in async routes.

# [SECTION: IMPORTS]
import asyncio
import json
import re
from typing import Any, Dict, List, Optional, Tuple, cast

from .config_models import Environment
from .logger import logger as app_logger, belief_scope
from .superset_client import SupersetClient
from .utils.async_network import AsyncAPIClient
# [/SECTION]


# [DEF:AsyncSupersetClient:Class]
# @PURPOSE: Async sibling of SupersetClient for dashboard read paths.
class AsyncSupersetClient(SupersetClient):
    # [DEF:__init__:Function]
    # @PURPOSE: Initialize async Superset client with AsyncAPIClient transport.
    # @PRE: env is valid.
    # @POST: Client uses async network transport and inherited projection helpers.
    def __init__(self, env: Environment):
        self.env = env
        auth_payload = {
            "username": env.username,
            "password": env.password,
            "provider": "db",
            "refresh": "true",
        }
        self.network = AsyncAPIClient(
            config={"base_url": env.url, "auth": auth_payload},
            verify_ssl=env.verify_ssl,
            timeout=env.timeout,
        )
        self.delete_before_reimport = False
    # [/DEF:__init__:Function]

    # [DEF:aclose:Function]
    # @PURPOSE: Close async transport resources.
    # @POST: Underlying AsyncAPIClient is closed.
    async def aclose(self) -> None:
        await self.network.aclose()
    # [/DEF:aclose:Function]

    # [DEF:get_dashboards_page_async:Function]
    # @PURPOSE: Fetch one dashboards page asynchronously.
    # @POST: Returns total count and page result list.
    async def get_dashboards_page_async(self, query: Optional[Dict] = None) -> Tuple[int, List[Dict]]:
        with belief_scope("AsyncSupersetClient.get_dashboards_page_async"):
            validated_query = self._validate_query_params(query or {})
            if "columns" not in validated_query:
                validated_query["columns"] = [
                    "slug",
                    "id",
                    "url",
                    "changed_on_utc",
                    "dashboard_title",
                    "published",
                    "created_by",
                    "changed_by",
                    "changed_by_name",
                    "owners",
                ]

            response_json = cast(
                Dict[str, Any],
                await self.network.request(
                    method="GET",
                    endpoint="/dashboard/",
                    params={"q": json.dumps(validated_query)},
                ),
            )
            result = response_json.get("result", [])
            total_count = response_json.get("count", len(result))
            return total_count, result
    # [/DEF:get_dashboards_page_async:Function]

    # [DEF:get_dashboard_async:Function]
    # @PURPOSE: Fetch one dashboard payload asynchronously.
    # @POST: Returns raw dashboard payload from Superset API.
    async def get_dashboard_async(self, dashboard_id: int) -> Dict:
        with belief_scope("AsyncSupersetClient.get_dashboard_async", f"id={dashboard_id}"):
            response = await self.network.request(method="GET", endpoint=f"/dashboard/{dashboard_id}")
            return cast(Dict, response)
    # [/DEF:get_dashboard_async:Function]

    # [DEF:get_chart_async:Function]
    # @PURPOSE: Fetch one chart payload asynchronously.
    # @POST: Returns raw chart payload from Superset API.
    async def get_chart_async(self, chart_id: int) -> Dict:
        with belief_scope("AsyncSupersetClient.get_chart_async", f"id={chart_id}"):
            response = await self.network.request(method="GET", endpoint=f"/chart/{chart_id}")
            return cast(Dict, response)
    # [/DEF:get_chart_async:Function]

    # [DEF:get_dashboard_detail_async:Function]
    # @PURPOSE: Fetch dashboard detail asynchronously with concurrent charts/datasets requests.
    # @POST: Returns dashboard detail payload for overview page.
    async def get_dashboard_detail_async(self, dashboard_id: int) -> Dict:
        with belief_scope("AsyncSupersetClient.get_dashboard_detail_async", f"id={dashboard_id}"):
            dashboard_response = await self.get_dashboard_async(dashboard_id)
            dashboard_data = dashboard_response.get("result", dashboard_response)

            charts: List[Dict] = []
            datasets: List[Dict] = []

            def extract_dataset_id_from_form_data(form_data: Optional[Dict]) -> Optional[int]:
                if not isinstance(form_data, dict):
                    return None
                datasource = form_data.get("datasource")
                if isinstance(datasource, str):
                    matched = re.match(r"^(\d+)__", datasource)
                    if matched:
                        try:
                            return int(matched.group(1))
                        except ValueError:
                            return None
                if isinstance(datasource, dict):
                    ds_id = datasource.get("id")
                    try:
                        return int(ds_id) if ds_id is not None else None
                    except (TypeError, ValueError):
                        return None
                ds_id = form_data.get("datasource_id")
                try:
                    return int(ds_id) if ds_id is not None else None
                except (TypeError, ValueError):
                    return None

            chart_task = self.network.request(
                method="GET",
                endpoint=f"/dashboard/{dashboard_id}/charts",
            )
            dataset_task = self.network.request(
                method="GET",
                endpoint=f"/dashboard/{dashboard_id}/datasets",
            )
            charts_response, datasets_response = await asyncio.gather(
                chart_task,
                dataset_task,
                return_exceptions=True,
            )

            if not isinstance(charts_response, Exception):
                charts_payload = charts_response.get("result", []) if isinstance(charts_response, dict) else []
                for chart_obj in charts_payload:
                    if not isinstance(chart_obj, dict):
                        continue
                    chart_id = chart_obj.get("id")
                    if chart_id is None:
                        continue
                    form_data = chart_obj.get("form_data")
                    if isinstance(form_data, str):
                        try:
                            form_data = json.loads(form_data)
                        except Exception:
                            form_data = {}
                    dataset_id = extract_dataset_id_from_form_data(form_data) or chart_obj.get("datasource_id")
                    charts.append({
                        "id": int(chart_id),
                        "title": chart_obj.get("slice_name") or chart_obj.get("name") or f"Chart {chart_id}",
                        "viz_type": (form_data.get("viz_type") if isinstance(form_data, dict) else None),
                        "dataset_id": int(dataset_id) if dataset_id is not None else None,
                        "last_modified": chart_obj.get("changed_on"),
                        "overview": chart_obj.get("description") or (form_data.get("viz_type") if isinstance(form_data, dict) else None) or "Chart",
                    })
            else:
                app_logger.warning("[get_dashboard_detail_async][Warning] Failed to fetch dashboard charts: %s", charts_response)

            if not isinstance(datasets_response, Exception):
                datasets_payload = datasets_response.get("result", []) if isinstance(datasets_response, dict) else []
                for dataset_obj in datasets_payload:
                    if not isinstance(dataset_obj, dict):
                        continue
                    dataset_id = dataset_obj.get("id")
                    if dataset_id is None:
                        continue
                    db_payload = dataset_obj.get("database")
                    db_name = db_payload.get("database_name") if isinstance(db_payload, dict) else None
                    table_name = dataset_obj.get("table_name") or dataset_obj.get("datasource_name") or dataset_obj.get("name") or f"Dataset {dataset_id}"
                    schema = dataset_obj.get("schema")
                    fq_name = f"{schema}.{table_name}" if schema else table_name
                    datasets.append({
                        "id": int(dataset_id),
                        "table_name": table_name,
                        "schema": schema,
                        "database": db_name or dataset_obj.get("database_name") or "Unknown",
                        "last_modified": dataset_obj.get("changed_on"),
                        "overview": fq_name,
                    })
            else:
                app_logger.warning("[get_dashboard_detail_async][Warning] Failed to fetch dashboard datasets: %s", datasets_response)

            if not charts:
                raw_position_json = dashboard_data.get("position_json")
                chart_ids_from_position = set()
                if isinstance(raw_position_json, str) and raw_position_json:
                    try:
                        parsed_position = json.loads(raw_position_json)
                        chart_ids_from_position.update(self._extract_chart_ids_from_layout(parsed_position))
                    except Exception:
                        pass
                elif isinstance(raw_position_json, dict):
                    chart_ids_from_position.update(self._extract_chart_ids_from_layout(raw_position_json))

                raw_json_metadata = dashboard_data.get("json_metadata")
                if isinstance(raw_json_metadata, str) and raw_json_metadata:
                    try:
                        parsed_metadata = json.loads(raw_json_metadata)
                        chart_ids_from_position.update(self._extract_chart_ids_from_layout(parsed_metadata))
                    except Exception:
                        pass
                elif isinstance(raw_json_metadata, dict):
                    chart_ids_from_position.update(self._extract_chart_ids_from_layout(raw_json_metadata))

                fallback_chart_tasks = [
                    self.get_chart_async(int(chart_id))
                    for chart_id in sorted(chart_ids_from_position)
                ]
                fallback_chart_responses = await asyncio.gather(
                    *fallback_chart_tasks,
                    return_exceptions=True,
                )
                for chart_id, chart_response in zip(sorted(chart_ids_from_position), fallback_chart_responses):
                    if isinstance(chart_response, Exception):
                        app_logger.warning("[get_dashboard_detail_async][Warning] Failed to resolve fallback chart %s: %s", chart_id, chart_response)
                        continue
                    chart_data = chart_response.get("result", chart_response)
                    charts.append({
                        "id": int(chart_id),
                        "title": chart_data.get("slice_name") or chart_data.get("name") or f"Chart {chart_id}",
                        "viz_type": chart_data.get("viz_type"),
                        "dataset_id": chart_data.get("datasource_id"),
                        "last_modified": chart_data.get("changed_on"),
                        "overview": chart_data.get("description") or chart_data.get("viz_type") or "Chart",
                    })

            dataset_ids_from_charts = {
                c.get("dataset_id")
                for c in charts
                if c.get("dataset_id") is not None
            }
            known_dataset_ids = {d.get("id") for d in datasets if d.get("id") is not None}
            missing_dataset_ids = sorted(int(item) for item in dataset_ids_from_charts if item not in known_dataset_ids)
            if missing_dataset_ids:
                dataset_fetch_tasks = [
                    self.network.request(method="GET", endpoint=f"/dataset/{dataset_id}")
                    for dataset_id in missing_dataset_ids
                ]
                dataset_fetch_responses = await asyncio.gather(
                    *dataset_fetch_tasks,
                    return_exceptions=True,
                )
                for dataset_id, dataset_response in zip(missing_dataset_ids, dataset_fetch_responses):
                    if isinstance(dataset_response, Exception):
                        app_logger.warning("[get_dashboard_detail_async][Warning] Failed to backfill dataset %s: %s", dataset_id, dataset_response)
                        continue
                    dataset_data = dataset_response.get("result", dataset_response) if isinstance(dataset_response, dict) else {}
                    db_payload = dataset_data.get("database")
                    db_name = db_payload.get("database_name") if isinstance(db_payload, dict) else None
                    table_name = dataset_data.get("table_name") or dataset_data.get("datasource_name") or dataset_data.get("name") or f"Dataset {dataset_id}"
                    schema = dataset_data.get("schema")
                    fq_name = f"{schema}.{table_name}" if schema else table_name
                    datasets.append({
                        "id": int(dataset_id),
                        "table_name": table_name,
                        "schema": schema,
                        "database": db_name or dataset_data.get("database_name") or "Unknown",
                        "last_modified": dataset_data.get("changed_on"),
                        "overview": fq_name,
                    })

            return {
                "id": int(dashboard_data.get("id") or dashboard_id),
                "title": dashboard_data.get("dashboard_title") or dashboard_data.get("title") or f"Dashboard {dashboard_id}",
                "slug": dashboard_data.get("slug"),
                "url": dashboard_data.get("url"),
                "description": dashboard_data.get("description"),
                "last_modified": dashboard_data.get("changed_on_utc") or dashboard_data.get("changed_on"),
                "published": dashboard_data.get("published"),
                "charts": charts,
                "datasets": datasets,
                "chart_count": len(charts),
                "dataset_count": len(datasets),
            }
    # [/DEF:get_dashboard_detail_async:Function]
# [/DEF:AsyncSupersetClient:Class]

# [/DEF:backend.src.core.async_superset_client:Module]
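`get_dashboard_detail_async` leans on `asyncio.gather(..., return_exceptions=True)` so a failing sub-request degrades to a warning instead of failing the whole detail payload. A self-contained sketch of that pattern with generic coroutines (not the Superset endpoints themselves):

```python
import asyncio
from typing import Any, List


async def fetch_ok() -> dict:
    return {"result": ["chart-1"]}


async def fetch_broken() -> dict:
    raise ConnectionError("datasets endpoint down")


async def load_detail() -> List[Any]:
    # Both requests run concurrently; with return_exceptions=True the
    # exception comes back as a value instead of propagating.
    charts, datasets = await asyncio.gather(fetch_ok(), fetch_broken(), return_exceptions=True)
    payload: List[Any] = []
    if not isinstance(charts, Exception):
        payload.extend(charts["result"])
    if not isinstance(datasets, Exception):
        payload.extend(datasets["result"])  # skipped here: the datasets fetch failed
    return payload


detail = asyncio.run(load_detail())  # -> ["chart-1"]
```

The `isinstance(..., Exception)` check is the price of `return_exceptions=True`: each result must be inspected before use, exactly as the diff does for `charts_response` and `datasets_response`.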
@@ -12,6 +12,7 @@
 from typing import Optional, List
 from sqlalchemy.orm import Session
 from ...models.auth import User, Role, Permission
+from ...models.profile import UserDashboardPreference
 from ..logger import belief_scope
 # [/SECTION]

@@ -109,6 +110,38 @@ class AuthRepository:
        ).first()
    # [/DEF:get_permission_by_resource_action:Function]

    # [DEF:get_user_dashboard_preference:Function]
    # @PURPOSE: Retrieves dashboard preference by owner user ID.
    # @PRE: user_id is a string.
    # @POST: Returns UserDashboardPreference if found, else None.
    # @PARAM: user_id (str) - Preference owner identifier.
    # @RETURN: Optional[UserDashboardPreference] - Found preference or None.
    def get_user_dashboard_preference(self, user_id: str) -> Optional[UserDashboardPreference]:
        with belief_scope("AuthRepository.get_user_dashboard_preference"):
            return (
                self.db.query(UserDashboardPreference)
                .filter(UserDashboardPreference.user_id == user_id)
                .first()
            )
    # [/DEF:get_user_dashboard_preference:Function]

    # [DEF:save_user_dashboard_preference:Function]
    # @PURPOSE: Persists dashboard preference entity and returns refreshed row.
    # @PRE: preference is a valid UserDashboardPreference entity.
    # @POST: Preference is committed and refreshed in database.
    # @PARAM: preference (UserDashboardPreference) - Preference entity to persist.
    # @RETURN: UserDashboardPreference - Persisted preference row.
    def save_user_dashboard_preference(
        self,
        preference: UserDashboardPreference,
    ) -> UserDashboardPreference:
        with belief_scope("AuthRepository.save_user_dashboard_preference"):
            self.db.add(preference)
            self.db.commit()
            self.db.refresh(preference)
            return preference
    # [/DEF:save_user_dashboard_preference:Function]

    # [DEF:list_permissions:Function]
    # @PURPOSE: Lists all available permissions.
    # @POST: Returns a list of all Permission objects.

@@ -24,19 +24,19 @@ class Schedule(BaseModel):

# [DEF:Environment:DataClass]
# @PURPOSE: Represents a Superset environment configuration.
class Environment(BaseModel):
    id: str
    name: str
    url: str
    username: str
    password: str  # Will be masked in UI
    stage: str = Field(default="DEV", pattern="^(DEV|PREPROD|PROD)$")
    verify_ssl: bool = True
    timeout: int = 30
    is_default: bool = False
    is_production: bool = False
    backup_schedule: Schedule = Field(default_factory=Schedule)
# [/DEF:Environment:DataClass]

# [DEF:LoggingConfig:DataClass]
# @PURPOSE: Defines the configuration for the application's logging system.
@@ -49,10 +49,18 @@ class LoggingConfig(BaseModel):
    enable_belief_state: bool = True
# [/DEF:LoggingConfig:DataClass]

# [DEF:CleanReleaseConfig:DataClass]
# @PURPOSE: Configuration for clean release compliance subsystem.
class CleanReleaseConfig(BaseModel):
    active_policy_id: Optional[str] = None
    active_registry_id: Optional[str] = None
# [/DEF:CleanReleaseConfig:DataClass]

# [DEF:GlobalSettings:DataClass]
# @PURPOSE: Represents global application settings.
class GlobalSettings(BaseModel):
    storage: StorageConfig = Field(default_factory=StorageConfig)
    clean_release: CleanReleaseConfig = Field(default_factory=CleanReleaseConfig)
    default_environment_id: Optional[str] = None
    logging: LoggingConfig = Field(default_factory=LoggingConfig)
    connections: List[dict] = []

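`GlobalSettings` nests `CleanReleaseConfig` via `Field(default_factory=...)` so every settings instance gets its own fresh sub-config rather than a shared mutable default. The same idea expressed with stdlib `dataclasses` (an analogous sketch, not the project's pydantic models):

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class CleanReleaseConfig:
    active_policy_id: Optional[str] = None
    active_registry_id: Optional[str] = None


@dataclass
class GlobalSettings:
    # default_factory builds a fresh sub-config per instance,
    # so mutating one settings object never leaks into another.
    clean_release: CleanReleaseConfig = field(default_factory=CleanReleaseConfig)


a, b = GlobalSettings(), GlobalSettings()
a.clean_release.active_policy_id = "policy-1"
# b.clean_release is untouched: a.clean_release and b.clean_release are distinct objects
```

Pydantic's `Field(default_factory=...)` serves the same purpose; a plain `clean_release: CleanReleaseConfig = CleanReleaseConfig()` default would be evaluated once and shared.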
@@ -11,7 +11,7 @@
 # @INVARIANT: A single engine instance is used for the entire application.

 # [SECTION: IMPORTS]
-from sqlalchemy import create_engine
+from sqlalchemy import create_engine, inspect, text
 from sqlalchemy.orm import sessionmaker
 from ..models.mapping import Base
 # Import models to ensure they're registered with Base
@@ -20,7 +20,9 @@ from ..models import auth as _auth_models  # noqa: F401
 from ..models import config as _config_models  # noqa: F401
 from ..models import llm as _llm_models  # noqa: F401
 from ..models import assistant as _assistant_models  # noqa: F401
-from .logger import belief_scope
+from ..models import profile as _profile_models  # noqa: F401
+from ..models import clean_release as _clean_release_models  # noqa: F401
+from .logger import belief_scope, logger
 from .auth.config import auth_config
 import os
 from pathlib import Path
@@ -94,6 +96,104 @@ TasksSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=tasks_e
AuthSessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=auth_engine)
# [/DEF:AuthSessionLocal:Class]

# [DEF:_ensure_user_dashboard_preferences_columns:Function]
# @PURPOSE: Applies additive schema upgrades for user_dashboard_preferences table.
# @PRE: bind_engine points to application database where profile table is stored.
# @POST: Missing columns are added without data loss.
def _ensure_user_dashboard_preferences_columns(bind_engine):
    with belief_scope("_ensure_user_dashboard_preferences_columns"):
        table_name = "user_dashboard_preferences"
        inspector = inspect(bind_engine)
        if table_name not in inspector.get_table_names():
            return

        existing_columns = {
            str(column.get("name") or "").strip()
            for column in inspector.get_columns(table_name)
        }

        alter_statements = []
        if "git_username" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE user_dashboard_preferences ADD COLUMN git_username VARCHAR"
            )
        if "git_email" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE user_dashboard_preferences ADD COLUMN git_email VARCHAR"
            )
        if "git_personal_access_token_encrypted" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE user_dashboard_preferences "
                "ADD COLUMN git_personal_access_token_encrypted VARCHAR"
            )
        if "start_page" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE user_dashboard_preferences "
                "ADD COLUMN start_page VARCHAR NOT NULL DEFAULT 'dashboards'"
            )
        if "auto_open_task_drawer" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE user_dashboard_preferences "
                "ADD COLUMN auto_open_task_drawer BOOLEAN NOT NULL DEFAULT TRUE"
            )
        if "dashboards_table_density" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE user_dashboard_preferences "
                "ADD COLUMN dashboards_table_density VARCHAR NOT NULL DEFAULT 'comfortable'"
            )

        if not alter_statements:
            return

        try:
            with bind_engine.begin() as connection:
                for statement in alter_statements:
                    connection.execute(text(statement))
        except Exception as migration_error:
            logger.warning(
                "[database][EXPLORE] Profile preference additive migration failed: %s",
                migration_error,
            )
# [/DEF:_ensure_user_dashboard_preferences_columns:Function]


# [DEF:_ensure_git_server_configs_columns:Function]
# @PURPOSE: Applies additive schema upgrades for git_server_configs table.
# @PRE: bind_engine points to application database.
# @POST: Missing columns are added without data loss.
def _ensure_git_server_configs_columns(bind_engine):
    with belief_scope("_ensure_git_server_configs_columns"):
        table_name = "git_server_configs"
        inspector = inspect(bind_engine)
        if table_name not in inspector.get_table_names():
            return

        existing_columns = {
            str(column.get("name") or "").strip()
            for column in inspector.get_columns(table_name)
        }

        alter_statements = []
        if "default_branch" not in existing_columns:
            alter_statements.append(
                "ALTER TABLE git_server_configs ADD COLUMN default_branch VARCHAR NOT NULL DEFAULT 'main'"
            )

        if not alter_statements:
            return

        try:
            with bind_engine.begin() as connection:
                for statement in alter_statements:
                    connection.execute(text(statement))
        except Exception as migration_error:
            logger.warning(
                "[database][EXPLORE] GitServerConfig preference additive migration failed: %s",
                migration_error,
            )
# [/DEF:_ensure_git_server_configs_columns:Function]


# [DEF:init_db:Function]
# @PURPOSE: Initializes the database by creating all tables.
# @PRE: engine, tasks_engine and auth_engine are initialized.
@@ -104,6 +204,8 @@ def init_db():
    Base.metadata.create_all(bind=engine)
    Base.metadata.create_all(bind=tasks_engine)
    Base.metadata.create_all(bind=auth_engine)
    _ensure_user_dashboard_preferences_columns(engine)
    _ensure_git_server_configs_columns(engine)
# [/DEF:init_db:Function]

# [DEF:get_db:Function]

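The two `_ensure_*_columns` helpers implement an idempotent additive migration: inspect the live schema, then issue `ALTER TABLE ... ADD COLUMN` only for columns that are missing, so reruns are harmless. The same inspect-then-alter pattern using stdlib `sqlite3` (a sketch of the technique, not the project's SQLAlchemy code):

```python
import sqlite3
from typing import Dict, List


def ensure_columns(conn: sqlite3.Connection, table: str, wanted: Dict[str, str]) -> List[str]:
    """Add any missing columns to `table`; safe to run repeatedly."""
    # PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk).
    existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    applied = []
    for name, ddl_type in wanted.items():
        if name not in existing:
            conn.execute(f"ALTER TABLE {table} ADD COLUMN {name} {ddl_type}")
            applied.append(name)
    return applied


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prefs (user_id TEXT)")
wanted = {"git_username": "VARCHAR", "start_page": "VARCHAR NOT NULL DEFAULT 'dashboards'"}
first_run = ensure_columns(conn, "prefs", wanted)   # adds both columns
second_run = ensure_columns(conn, "prefs", wanted)  # no-op on rerun
```

Additive-only changes (new nullable or defaulted columns) are what makes this safe without a full migration framework; dropping or retyping columns would need real migrations.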
@@ -159,14 +159,37 @@ class SupersetClient:

        # Map fields to DashboardMetadata schema
        result = []
-        for dash in dashboards:
-            owners = self._extract_owner_labels(dash.get("owners"))
+        max_debug_samples = 12
+        for index, dash in enumerate(dashboards):
+            raw_owners = dash.get("owners")
+            raw_created_by = dash.get("created_by")
+            raw_changed_by = dash.get("changed_by")
+            raw_changed_by_name = dash.get("changed_by_name")
+
+            owners = self._extract_owner_labels(raw_owners)
+            # No per-dashboard detail requests here: keep list endpoint O(1).
            if not owners:
                owners = self._extract_owner_labels(
-                    [dash.get("created_by"), dash.get("changed_by")],
+                    [raw_created_by, raw_changed_by],
                )
+
+            projected_created_by = self._extract_user_display(
+                None,
+                raw_created_by,
+            )
+            projected_modified_by = self._extract_user_display(
+                raw_changed_by_name,
+                raw_changed_by,
+            )
+
+            raw_owner_usernames: List[str] = []
+            if isinstance(raw_owners, list):
+                for owner_payload in raw_owners:
+                    if isinstance(owner_payload, dict):
+                        owner_username = self._sanitize_user_text(owner_payload.get("username"))
+                        if owner_username:
+                            raw_owner_usernames.append(owner_username)
+
            result.append({
                "id": dash.get("id"),
                "slug": dash.get("slug"),
@@ -174,16 +197,26 @@ class SupersetClient:
                "url": dash.get("url"),
                "last_modified": dash.get("changed_on_utc"),
                "status": "published" if dash.get("published") else "draft",
-                "created_by": self._extract_user_display(
-                    None,
-                    dash.get("created_by"),
-                ),
-                "modified_by": self._extract_user_display(
-                    dash.get("changed_by_name"),
-                    dash.get("changed_by"),
-                ),
+                "created_by": projected_created_by,
+                "modified_by": projected_modified_by,
                "owners": owners,
            })

+            if index < max_debug_samples:
+                app_logger.reflect(
+                    "[REFLECT] Dashboard actor projection sample "
+                    f"(env={getattr(self.env, 'id', None)}, dashboard_id={dash.get('id')}, "
+                    f"raw_owners={raw_owners!r}, raw_owner_usernames={raw_owner_usernames!r}, "
+                    f"raw_created_by={raw_created_by!r}, raw_changed_by={raw_changed_by!r}, "
+                    f"raw_changed_by_name={raw_changed_by_name!r}, projected_owners={owners!r}, "
+                    f"projected_created_by={projected_created_by!r}, projected_modified_by={projected_modified_by!r})"
+                )
+
+        app_logger.reflect(
+            "[REFLECT] Dashboard actor projection summary "
+            f"(env={getattr(self.env, 'id', None)}, dashboards={len(result)}, "
+            f"sampled={min(len(result), max_debug_samples)})"
+        )
        return result
    # [/DEF:get_dashboards_summary:Function]

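The hunk above changes `get_dashboards_summary` to project owner labels with a fallback: when the `owners` list yields no labels, derive them from the row's `created_by`/`changed_by` actors, deliberately without extra per-dashboard requests. A standalone sketch of that fallback logic (the `label` helper is a hypothetical stand-in for the real `_extract_owner_labels`/`_extract_user_display` projection):

```python
from typing import Any, List, Optional


def label(user: Any) -> Optional[str]:
    # Hypothetical stand-in: prefer full name, fall back to username.
    if isinstance(user, dict):
        first = (user.get("first_name") or "").strip()
        last = (user.get("last_name") or "").strip()
        full = f"{first} {last}".strip()
        return full or user.get("username")
    return None


def project_owners(raw_owners: Any, created_by: Any, changed_by: Any) -> List[str]:
    owners = [name for name in map(label, raw_owners or []) if name]
    if not owners:
        # Fallback reuses actors already present on the list row,
        # keeping the list endpoint at one request total.
        owners = [name for name in (label(created_by), label(changed_by)) if name]
    # De-duplicate while preserving order.
    return list(dict.fromkeys(owners))


names = project_owners([], {"username": "admin"}, {"first_name": "Ada", "last_name": "L"})
# -> ["admin", "Ada L"]
```

The point of the diff's `raw_*` captures is the same separation shown here: compute the projection once per row, then reuse it for both the payload and the `[REFLECT]` debug samples.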
238	backend/src/core/superset_profile_lookup.py	Normal file
@@ -0,0 +1,238 @@
# [DEF:backend.src.core.superset_profile_lookup:Module]
#
# @TIER: STANDARD
# @SEMANTICS: superset, users, lookup, profile, pagination, normalization
# @PURPOSE: Provides environment-scoped Superset account lookup adapter with stable normalized output.
# @LAYER: Core
# @RELATION: DEPENDS_ON -> backend.src.core.utils.network.APIClient
# @RELATION: DEPENDS_ON -> backend.src.core.logger
#
# @INVARIANT: Adapter never leaks raw upstream payload shape to API consumers.

# [SECTION: IMPORTS]
import json
from typing import Any, Dict, List, Optional

from .logger import logger, belief_scope
from .utils.network import APIClient, AuthenticationError, SupersetAPIError
# [/SECTION]


# [DEF:SupersetAccountLookupAdapter:Class]
# @TIER: STANDARD
# @PURPOSE: Lookup Superset users and normalize candidates for profile binding.
class SupersetAccountLookupAdapter:
    # [DEF:__init__:Function]
    # @PURPOSE: Initializes lookup adapter with authenticated API client and environment context.
    # @PRE: network_client supports request(method, endpoint, params=...).
    # @POST: Adapter is ready to perform users lookup requests.
    def __init__(self, network_client: APIClient, environment_id: str):
        self.network_client = network_client
        self.environment_id = str(environment_id or "")
    # [/DEF:__init__:Function]

    # [DEF:get_users_page:Function]
    # @PURPOSE: Fetch one users page from Superset with passthrough search/sort parameters.
    # @PRE: page_index >= 0 and page_size >= 1.
    # @POST: Returns deterministic payload with normalized items and total count.
    # @RETURN: Dict[str, Any]
    def get_users_page(
        self,
        search: Optional[str] = None,
        page_index: int = 0,
        page_size: int = 20,
        sort_column: str = "username",
        sort_order: str = "desc",
    ) -> Dict[str, Any]:
        with belief_scope("SupersetAccountLookupAdapter.get_users_page"):
            normalized_page_index = max(int(page_index), 0)
            normalized_page_size = max(int(page_size), 1)

            normalized_sort_column = str(sort_column or "username").strip().lower() or "username"
            normalized_sort_order = str(sort_order or "desc").strip().lower()
            if normalized_sort_order not in {"asc", "desc"}:
                normalized_sort_order = "desc"

            query: Dict[str, Any] = {
                "page": normalized_page_index,
                "page_size": normalized_page_size,
                "order_column": normalized_sort_column,
                "order_direction": normalized_sort_order,
            }

            normalized_search = str(search or "").strip()
            if normalized_search:
                query["filters"] = [{"col": "username", "opr": "ct", "value": normalized_search}]

            logger.reason(
                "[REASON] Lookup Superset users "
                f"(env={self.environment_id}, page={normalized_page_index}, page_size={normalized_page_size})"
            )
            logger.reflect(
                "[REFLECT] Prepared Superset users lookup query "
                f"(env={self.environment_id}, order_column={normalized_sort_column}, "
                f"normalized_sort_order={normalized_sort_order}, "
                f"payload_order_direction={query.get('order_direction')})"
            )

            primary_error: Optional[Exception] = None
            last_error: Optional[Exception] = None
            for attempt_index, endpoint in enumerate(("/security/users/", "/security/users"), start=1):
                try:
                    logger.reason(
                        "[REASON] Users lookup request attempt "
                        f"(env={self.environment_id}, attempt={attempt_index}, endpoint={endpoint})"
                    )
                    response = self.network_client.request(
                        method="GET",
                        endpoint=endpoint,
                        params={"q": json.dumps(query)},
                    )
                    logger.reflect(
                        "[REFLECT] Users lookup endpoint succeeded "
                        f"(env={self.environment_id}, attempt={attempt_index}, endpoint={endpoint})"
                    )
                    return self._normalize_lookup_payload(
                        response=response,
                        page_index=normalized_page_index,
                        page_size=normalized_page_size,
                    )
                except Exception as exc:
                    if primary_error is None:
                        primary_error = exc
                    last_error = exc
                    cause = getattr(exc, "__cause__", None)
                    cause_response = getattr(cause, "response", None)
                    status_code = getattr(cause_response, "status_code", None)
                    logger.explore(
                        "[EXPLORE] Users lookup endpoint failed "
                        f"(env={self.environment_id}, attempt={attempt_index}, endpoint={endpoint}, "
                        f"error_type={type(exc).__name__}, status_code={status_code}, "
                        f"payload_order_direction={query.get('order_direction')}): {exc}"
                    )

            if last_error is not None:
                selected_error: Exception = last_error
                if (
                    primary_error is not None
                    and primary_error is not last_error
                    and isinstance(last_error, AuthenticationError)
                    and not isinstance(primary_error, AuthenticationError)
                ):
                    selected_error = primary_error
                    logger.reflect(
                        "[REFLECT] Preserving primary lookup failure over fallback auth error "
                        f"(env={self.environment_id}, primary_error_type={type(primary_error).__name__}, "
                        f"fallback_error_type={type(last_error).__name__})"
                    )

                logger.explore(
                    "[EXPLORE] All Superset users lookup endpoints failed "
                    f"(env={self.environment_id}, payload_order_direction={query.get('order_direction')}, "
                    f"selected_error_type={type(selected_error).__name__})"
                )
                raise selected_error
            raise SupersetAPIError("Superset users lookup failed without explicit error")
    # [/DEF:get_users_page:Function]

    # [DEF:_normalize_lookup_payload:Function]
    # @PURPOSE: Convert Superset users response variants into stable candidates payload.
    # @PRE: response can be dict/list in any supported upstream shape.
    # @POST: Output contains canonical keys: status, environment_id, page_index, page_size, total, items.
    # @RETURN: Dict[str, Any]
    def _normalize_lookup_payload(
        self,
        response: Any,
        page_index: int,
        page_size: int,
    ) -> Dict[str, Any]:
        with belief_scope("SupersetAccountLookupAdapter._normalize_lookup_payload"):
            payload = response
            if isinstance(payload, dict) and isinstance(payload.get("result"), dict):
                payload = payload.get("result")

            raw_items: List[Any] = []
            total = 0

            if isinstance(payload, dict):
                if isinstance(payload.get("result"), list):
                    raw_items = payload.get("result") or []
                    total = int(payload.get("count", len(raw_items)) or 0)
                elif isinstance(payload.get("users"), list):
                    raw_items = payload.get("users") or []
                    total = int(payload.get("total", len(raw_items)) or 0)
                elif isinstance(payload.get("items"), list):
                    raw_items = payload.get("items") or []
                    total = int(payload.get("total", len(raw_items)) or 0)
            elif isinstance(payload, list):
                raw_items = payload
                total = len(raw_items)

            normalized_items: List[Dict[str, Any]] = []
            seen_usernames = set()

            for raw_user in raw_items:
                candidate = self.normalize_user_payload(raw_user)
                username_key = str(candidate.get("username") or "").strip().lower()
                if not username_key:
                    continue
                if username_key in seen_usernames:
                    continue
                seen_usernames.add(username_key)
                normalized_items.append(candidate)

            logger.reflect(
                "[REFLECT] Normalized lookup payload "
                f"(env={self.environment_id}, items={len(normalized_items)}, total={max(total, len(normalized_items))})"
            )

            return {
                "status": "success",
                "environment_id": self.environment_id,
                "page_index": max(int(page_index), 0),
                "page_size": max(int(page_size), 1),
                "total": max(int(total), len(normalized_items)),
                "items": normalized_items,
            }
    # [/DEF:_normalize_lookup_payload:Function]

    # [DEF:normalize_user_payload:Function]
    # @PURPOSE: Project raw Superset user object to canonical candidate shape.
    # @PRE: raw_user may have heterogenous key names between Superset versions.
    # @POST: Returns normalized candidate keys (environment_id, username, display_name, email, is_active).
    # @RETURN: Dict[str, Any]
    def normalize_user_payload(self, raw_user: Any) -> Dict[str, Any]:
        if not isinstance(raw_user, dict):
            raw_user = {}

        username = str(
            raw_user.get("username")
            or raw_user.get("userName")
            or raw_user.get("name")
            or ""
        ).strip()

        full_name = str(raw_user.get("full_name") or "").strip()
        first_name = str(raw_user.get("first_name") or "").strip()
        last_name = str(raw_user.get("last_name") or "").strip()
        display_name = full_name or " ".join(
            part for part in [first_name, last_name] if part
        ).strip()
        if not display_name:
            display_name = username or None

        email = str(raw_user.get("email") or "").strip() or None
        is_active_raw = raw_user.get("is_active")
        is_active = bool(is_active_raw) if is_active_raw is not None else None

        return {
            "environment_id": self.environment_id,
            "username": username,
            "display_name": display_name,
            "email": email,
            "is_active": is_active,
        }
    # [/DEF:normalize_user_payload:Function]
# [/DEF:SupersetAccountLookupAdapter:Class]

# [/DEF:backend.src.core.superset_profile_lookup:Module]
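The normalization contract above can be sketched as a standalone function (re-implemented here without the project's imports; the sample payloads and the `normalize_user` name are illustrative, not the project's API):

```python
def normalize_user(raw_user, environment_id=""):
    """Project a raw Superset user dict onto the canonical candidate shape."""
    if not isinstance(raw_user, dict):
        raw_user = {}
    # Username may arrive under several keys depending on the Superset version.
    username = str(
        raw_user.get("username") or raw_user.get("userName") or raw_user.get("name") or ""
    ).strip()
    full_name = str(raw_user.get("full_name") or "").strip()
    first_name = str(raw_user.get("first_name") or "").strip()
    last_name = str(raw_user.get("last_name") or "").strip()
    display_name = full_name or " ".join(p for p in [first_name, last_name] if p).strip()
    if not display_name:
        display_name = username or None
    email = str(raw_user.get("email") or "").strip() or None
    is_active_raw = raw_user.get("is_active")
    is_active = bool(is_active_raw) if is_active_raw is not None else None
    return {
        "environment_id": environment_id,
        "username": username,
        "display_name": display_name,
        "email": email,
        "is_active": is_active,
    }

# Heterogeneous upstream keys collapse to one stable shape.
result = normalize_user({"userName": "jdoe", "first_name": "Jane", "last_name": "Doe"})
assert result["username"] == "jdoe"
assert result["display_name"] == "Jane Doe"
assert normalize_user(None)["username"] == ""
```

Downstream consumers can then rely on the canonical keys regardless of which Superset version produced the payload.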
backend/src/core/task_manager/__tests__/test_task_logger.py (new file, 102 lines)
@@ -0,0 +1,102 @@
# [DEF:__tests__/test_task_logger:Module]
# @RELATION: VERIFIES -> ../task_logger.py
# @PURPOSE: Contract testing for TaskLogger
# [/DEF:__tests__/test_task_logger:Module]

import pytest
from unittest.mock import MagicMock
from src.core.task_manager.task_logger import TaskLogger


# @TEST_FIXTURE: valid_task_logger -> {"task_id": "test_123", "add_log_fn": lambda *args: None, "source": "test_plugin"}
@pytest.fixture
def mock_add_log():
    return MagicMock()


@pytest.fixture
def task_logger(mock_add_log):
    return TaskLogger(task_id="test_123", add_log_fn=mock_add_log, source="test_plugin")


# @TEST_CONTRACT: TaskLoggerModel -> Invariants
def test_task_logger_initialization(task_logger):
    """Verify TaskLogger is bound to a specific task_id and source."""
    assert task_logger._task_id == "test_123"
    assert task_logger._default_source == "test_plugin"


# @TEST_CONTRACT: invariants -> "All specific log methods (info, error) delegate to _log"
def test_log_methods_delegation(task_logger, mock_add_log):
    """Verify info, error, warning, debug delegate to the internal _log."""
    task_logger.info("info message", metadata={"k": "v"})
    mock_add_log.assert_called_with(
        task_id="test_123",
        level="INFO",
        message="info message",
        source="test_plugin",
        metadata={"k": "v"}
    )

    task_logger.error("error message", source="override")
    mock_add_log.assert_called_with(
        task_id="test_123",
        level="ERROR",
        message="error message",
        source="override",
        metadata=None
    )

    task_logger.warning("warning message")
    mock_add_log.assert_called_with(
        task_id="test_123",
        level="WARNING",
        message="warning message",
        source="test_plugin",
        metadata=None
    )

    task_logger.debug("debug message")
    mock_add_log.assert_called_with(
        task_id="test_123",
        level="DEBUG",
        message="debug message",
        source="test_plugin",
        metadata=None
    )


# @TEST_CONTRACT: invariants -> "with_source creates a new logger with the same task_id"
def test_with_source(task_logger):
    """Verify with_source returns a new instance with an updated default source."""
    new_logger = task_logger.with_source("new_source")
    assert isinstance(new_logger, TaskLogger)
    assert new_logger._task_id == "test_123"
    assert new_logger._default_source == "new_source"
    assert new_logger is not task_logger


# @TEST_EDGE: missing_task_id -> raises TypeError
def test_missing_task_id():
    with pytest.raises(TypeError):
        TaskLogger(add_log_fn=lambda x: x)


# @TEST_EDGE: invalid_add_log_fn -> raises TypeError
# (Python doesn't strictly enforce this at init, but verify it fails on call if not callable)
def test_invalid_add_log_fn():
    logger = TaskLogger(task_id="msg", add_log_fn=None)
    with pytest.raises(TypeError):
        logger.info("test")


# @TEST_INVARIANT: consistent_delegation
def test_progress_log(task_logger, mock_add_log):
    """Verify the progress method correctly formats metadata."""
    task_logger.progress("Step 1", 45.5)
    mock_add_log.assert_called_with(
        task_id="test_123",
        level="INFO",
        message="Step 1",
        source="test_plugin",
        metadata={"progress": 45.5}
    )

    # Boundary checks: progress is clamped to [0, 100]
    task_logger.progress("Step high", 150)
    assert mock_add_log.call_args[1]["metadata"]["progress"] == 100

    task_logger.progress("Step low", -10)
    assert mock_add_log.call_args[1]["metadata"]["progress"] == 0
@@ -10,6 +10,7 @@
 from datetime import datetime
 from typing import List, Optional
 import json
+import re

 from sqlalchemy.orm import Session
 from ...models.task import TaskRecord, TaskLogRecord
@@ -80,18 +81,40 @@ class TaskPersistenceService:

     # [DEF:_resolve_environment_id:Function]
     # @TIER: STANDARD
-    # @PURPOSE: Resolve environment id based on provided value or fallback to default
+    # @PURPOSE: Resolve environment id into existing environments.id value to satisfy FK constraints.
     # @PRE: Session is active
-    # @POST: Environment ID is returned
+    # @POST: Returns existing environments.id or None when unresolved.
     @staticmethod
-    def _resolve_environment_id(session: Session, env_id: Optional[str]) -> str:
+    def _resolve_environment_id(session: Session, env_id: Optional[str]) -> Optional[str]:
         with belief_scope("_resolve_environment_id"):
-            if env_id:
-                return env_id
-            repo_env = session.query(Environment).filter_by(name="default").first()
-            if repo_env:
-                return str(repo_env.id)
-            return "default"
+            raw_value = str(env_id or "").strip()
+            if not raw_value:
+                return None
+
+            # 1) Direct match by primary key.
+            by_id = session.query(Environment).filter(Environment.id == raw_value).first()
+            if by_id:
+                return str(by_id.id)
+
+            # 2) Exact match by name.
+            by_name = session.query(Environment).filter(Environment.name == raw_value).first()
+            if by_name:
+                return str(by_name.id)
+
+            # 3) Slug-like match (e.g. "ss-dev" -> "SS DEV").
+            def normalize_token(value: str) -> str:
+                lowered = str(value or "").strip().lower()
+                return re.sub(r"[^a-z0-9]+", "-", lowered).strip("-")
+
+            target_token = normalize_token(raw_value)
+            if not target_token:
+                return None
+
+            for env in session.query(Environment).all():
+                if normalize_token(env.id) == target_token or normalize_token(env.name) == target_token:
+                    return str(env.id)
+
+            return None
     # [/DEF:_resolve_environment_id:Function]

     # [DEF:__init__:Function]
||||
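The slug-like matching in step 3 above can be shown in isolation; this is a minimal sketch of the same `normalize_token` idea, runnable without the project:

```python
import re

def normalize_token(value):
    """Collapse runs of non-alphanumerics to '-' so ids and display names compare equal."""
    lowered = str(value or "").strip().lower()
    return re.sub(r"[^a-z0-9]+", "-", lowered).strip("-")

# A slug ("ss-dev") and a display name ("SS DEV") normalize to the same token,
# which is what lets the resolver map either form onto the stored environment row.
assert normalize_token("ss-dev") == "ss-dev"
assert normalize_token("SS DEV") == "ss-dev"
assert normalize_token("  ") == ""
```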
backend/src/core/utils/async_network.py (new file, 237 lines)
@@ -0,0 +1,237 @@
# [DEF:backend.src.core.utils.async_network:Module]
#
# @TIER: CRITICAL
# @SEMANTICS: network, httpx, async, superset, authentication, cache
# @PURPOSE: Provides async Superset API client with shared auth-token cache to avoid per-request re-login.
# @LAYER: Infra
# @RELATION: DEPENDS_ON -> backend.src.core.utils.network.SupersetAuthCache
# @INVARIANT: Async client reuses cached auth tokens per environment credentials and invalidates on 401.

# [SECTION: IMPORTS]
from typing import Optional, Dict, Any, Union
import asyncio

import httpx

from ..logger import logger as app_logger, belief_scope
from .network import (
    AuthenticationError,
    DashboardNotFoundError,
    NetworkError,
    PermissionDeniedError,
    SupersetAPIError,
    SupersetAuthCache,
)
# [/SECTION]


# [DEF:AsyncAPIClient:Class]
# @PURPOSE: Async Superset API client backed by httpx.AsyncClient with shared auth cache.
class AsyncAPIClient:
    DEFAULT_TIMEOUT = 30
    _auth_locks: Dict[tuple[str, str, bool], asyncio.Lock] = {}

    # [DEF:__init__:Function]
    # @PURPOSE: Initialize async API client for one environment.
    # @PRE: config contains base_url and auth payload.
    # @POST: Client is ready for async request/authentication flow.
    def __init__(self, config: Dict[str, Any], verify_ssl: bool = True, timeout: int = DEFAULT_TIMEOUT):
        self.base_url: str = self._normalize_base_url(config.get("base_url", ""))
        self.api_base_url: str = f"{self.base_url}/api/v1"
        self.auth = config.get("auth")
        self.request_settings = {"verify_ssl": verify_ssl, "timeout": timeout}
        self._client = httpx.AsyncClient(
            verify=verify_ssl,
            timeout=httpx.Timeout(timeout),
            follow_redirects=True,
        )
        self._tokens: Dict[str, str] = {}
        self._authenticated = False
        self._auth_cache_key = SupersetAuthCache.build_key(
            self.base_url,
            self.auth,
            verify_ssl,
        )
    # [/DEF:__init__:Function]

    # [DEF:_normalize_base_url:Function]
    # @PURPOSE: Normalize base URL for Superset API root construction.
    # @POST: Returns canonical base URL without trailing slash and duplicate /api/v1 suffix.
    def _normalize_base_url(self, raw_url: str) -> str:
        normalized = str(raw_url or "").strip().rstrip("/")
        if normalized.lower().endswith("/api/v1"):
            normalized = normalized[:-len("/api/v1")]
        return normalized.rstrip("/")
    # [/DEF:_normalize_base_url:Function]

    # [DEF:_build_api_url:Function]
    # @PURPOSE: Build full API URL from relative Superset endpoint.
    # @POST: Returns absolute URL for upstream request.
    def _build_api_url(self, endpoint: str) -> str:
        normalized_endpoint = str(endpoint or "").strip()
        if normalized_endpoint.startswith("http://") or normalized_endpoint.startswith("https://"):
            return normalized_endpoint
        if not normalized_endpoint.startswith("/"):
            normalized_endpoint = f"/{normalized_endpoint}"
        if normalized_endpoint.startswith("/api/v1/") or normalized_endpoint == "/api/v1":
            return f"{self.base_url}{normalized_endpoint}"
        return f"{self.api_base_url}{normalized_endpoint}"
    # [/DEF:_build_api_url:Function]

    # [DEF:_get_auth_lock:Function]
    # @PURPOSE: Return per-cache-key async lock to serialize fresh login attempts.
    # @POST: Returns stable asyncio.Lock instance.
    @classmethod
    def _get_auth_lock(cls, cache_key: tuple[str, str, bool]) -> asyncio.Lock:
        existing_lock = cls._auth_locks.get(cache_key)
        if existing_lock is not None:
            return existing_lock
        created_lock = asyncio.Lock()
        cls._auth_locks[cache_key] = created_lock
        return created_lock
    # [/DEF:_get_auth_lock:Function]

    # [DEF:authenticate:Function]
    # @PURPOSE: Authenticate against Superset and cache access/csrf tokens.
    # @POST: Client tokens are populated and reusable across requests.
    async def authenticate(self) -> Dict[str, str]:
        cached_tokens = SupersetAuthCache.get(self._auth_cache_key)
        if cached_tokens and cached_tokens.get("access_token") and cached_tokens.get("csrf_token"):
            self._tokens = cached_tokens
            self._authenticated = True
            app_logger.info("[async_authenticate][CacheHit] Reusing cached Superset auth tokens for %s", self.base_url)
            return self._tokens

        auth_lock = self._get_auth_lock(self._auth_cache_key)
        async with auth_lock:
            cached_tokens = SupersetAuthCache.get(self._auth_cache_key)
            if cached_tokens and cached_tokens.get("access_token") and cached_tokens.get("csrf_token"):
                self._tokens = cached_tokens
                self._authenticated = True
                app_logger.info("[async_authenticate][CacheHitAfterWait] Reusing cached Superset auth tokens for %s", self.base_url)
                return self._tokens

            with belief_scope("AsyncAPIClient.authenticate"):
                app_logger.info("[async_authenticate][Enter] Authenticating to %s", self.base_url)
                try:
                    login_url = f"{self.api_base_url}/security/login"
                    response = await self._client.post(login_url, json=self.auth)
                    response.raise_for_status()
                    access_token = response.json()["access_token"]

                    csrf_url = f"{self.api_base_url}/security/csrf_token/"
                    csrf_response = await self._client.get(
                        csrf_url,
                        headers={"Authorization": f"Bearer {access_token}"},
                    )
                    csrf_response.raise_for_status()

                    self._tokens = {
                        "access_token": access_token,
                        "csrf_token": csrf_response.json()["result"],
                    }
                    self._authenticated = True
                    SupersetAuthCache.set(self._auth_cache_key, self._tokens)
                    app_logger.info("[async_authenticate][Exit] Authenticated successfully.")
                    return self._tokens
                except httpx.HTTPStatusError as exc:
                    SupersetAuthCache.invalidate(self._auth_cache_key)
                    status_code = exc.response.status_code if exc.response is not None else None
                    if status_code in [502, 503, 504]:
                        raise NetworkError(
                            f"Environment unavailable during authentication (Status {status_code})",
                            status_code=status_code,
                        ) from exc
                    raise AuthenticationError(f"Authentication failed: {exc}") from exc
                except (httpx.HTTPError, KeyError) as exc:
                    SupersetAuthCache.invalidate(self._auth_cache_key)
                    raise NetworkError(f"Network or parsing error during authentication: {exc}") from exc
    # [/DEF:authenticate:Function]

    # [DEF:get_headers:Function]
    # @PURPOSE: Return authenticated Superset headers for async requests.
    # @POST: Headers include Authorization and CSRF tokens.
    async def get_headers(self) -> Dict[str, str]:
        if not self._authenticated:
            await self.authenticate()
        return {
            "Authorization": f"Bearer {self._tokens['access_token']}",
            "X-CSRFToken": self._tokens.get("csrf_token", ""),
            "Referer": self.base_url,
            "Content-Type": "application/json",
        }
    # [/DEF:get_headers:Function]

    # [DEF:request:Function]
    # @PURPOSE: Perform one authenticated async Superset API request.
    # @POST: Returns JSON payload or raw httpx.Response when raw_response=true.
    async def request(
        self,
        method: str,
        endpoint: str,
        headers: Optional[Dict[str, str]] = None,
        raw_response: bool = False,
        **kwargs,
    ) -> Union[httpx.Response, Dict[str, Any]]:
        full_url = self._build_api_url(endpoint)
        request_headers = await self.get_headers()
        if headers:
            request_headers.update(headers)
        if "allow_redirects" in kwargs and "follow_redirects" not in kwargs:
            kwargs["follow_redirects"] = bool(kwargs.pop("allow_redirects"))

        try:
            response = await self._client.request(method, full_url, headers=request_headers, **kwargs)
            response.raise_for_status()
            return response if raw_response else response.json()
        except httpx.HTTPStatusError as exc:
            if exc.response is not None and exc.response.status_code == 401:
                self._authenticated = False
                self._tokens = {}
                SupersetAuthCache.invalidate(self._auth_cache_key)
            self._handle_http_error(exc, endpoint)
        except httpx.HTTPError as exc:
            self._handle_network_error(exc, full_url)
    # [/DEF:request:Function]

    # [DEF:_handle_http_error:Function]
    # @PURPOSE: Translate upstream HTTP errors into stable domain exceptions.
    # @POST: Raises domain-specific exception for caller flow control.
    def _handle_http_error(self, exc: httpx.HTTPStatusError, endpoint: str) -> None:
        with belief_scope("AsyncAPIClient._handle_http_error"):
            status_code = exc.response.status_code
            if status_code in [502, 503, 504]:
                raise NetworkError(f"Environment unavailable (Status {status_code})", status_code=status_code) from exc
            if status_code == 404:
                raise DashboardNotFoundError(endpoint) from exc
            if status_code == 403:
                raise PermissionDeniedError() from exc
            if status_code == 401:
                raise AuthenticationError() from exc
            raise SupersetAPIError(f"API Error {status_code}: {exc.response.text}") from exc
    # [/DEF:_handle_http_error:Function]

    # [DEF:_handle_network_error:Function]
    # @PURPOSE: Translate generic httpx errors into NetworkError.
    # @POST: Raises NetworkError with URL context.
    def _handle_network_error(self, exc: httpx.HTTPError, url: str) -> None:
        with belief_scope("AsyncAPIClient._handle_network_error"):
            if isinstance(exc, httpx.TimeoutException):
                message = "Request timeout"
            elif isinstance(exc, httpx.ConnectError):
                message = "Connection error"
            else:
                message = f"Unknown network error: {exc}"
            raise NetworkError(message, url=url) from exc
    # [/DEF:_handle_network_error:Function]

    # [DEF:aclose:Function]
    # @PURPOSE: Close underlying httpx client.
    # @POST: Client resources are released.
    async def aclose(self) -> None:
        await self._client.aclose()
    # [/DEF:aclose:Function]
# [/DEF:AsyncAPIClient:Class]

# [/DEF:backend.src.core.utils.async_network:Module]
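The per-key lock plus cache re-check in `authenticate` above is a double-checked locking pattern: only one coroutine pays for the login round-trip even when many requests arrive concurrently. A minimal standalone sketch (all names here are illustrative, not the project's API):

```python
import asyncio

async def main():
    calls = {"login": 0}
    cache: dict = {}
    lock = asyncio.Lock()

    async def fake_login():
        # Stand-in for the real Superset login round-trip.
        calls["login"] += 1
        await asyncio.sleep(0.01)
        return {"access_token": "tok", "csrf_token": "csrf"}

    async def get_tokens(key):
        if key in cache:           # fast path: no lock needed on a cache hit
            return cache[key]
        async with lock:
            if key in cache:       # re-check: another coroutine may have logged in while we waited
                return cache[key]
            cache[key] = await fake_login()
            return cache[key]

    results = await asyncio.gather(*(get_tokens("env-1") for _ in range(10)))
    assert all(r["access_token"] == "tok" for r in results)
    return calls["login"]

login_count = asyncio.run(main())
assert login_count == 1  # only one coroutine performed the login
```

Without the re-check inside the lock, every waiting coroutine would log in again as soon as it acquired the lock.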
@@ -8,10 +8,12 @@
 # @PUBLIC_API: APIClient

 # [SECTION: IMPORTS]
-from typing import Optional, Dict, Any, List, Union, cast
+from typing import Optional, Dict, Any, List, Union, cast, Tuple
 import json
 import io
 from pathlib import Path
+import threading
+import time
 import requests
 from requests.adapters import HTTPAdapter
 import urllib3
@@ -86,6 +88,62 @@ class NetworkError(Exception):
     # [/DEF:__init__:Function]
 # [/DEF:NetworkError:Class]


+# [DEF:SupersetAuthCache:Class]
+# @PURPOSE: Process-local cache for Superset access/csrf tokens keyed by environment credentials.
+# @PRE: base_url and username are stable strings.
+# @POST: Cached entries expire automatically by TTL and can be reused across requests.
+class SupersetAuthCache:
+    TTL_SECONDS = 300
+
+    _lock = threading.Lock()
+    _entries: Dict[Tuple[str, str, bool], Dict[str, Any]] = {}
+
+    @classmethod
+    def build_key(cls, base_url: str, auth: Optional[Dict[str, Any]], verify_ssl: bool) -> Tuple[str, str, bool]:
+        username = ""
+        if isinstance(auth, dict):
+            username = str(auth.get("username") or "").strip()
+        return (str(base_url or "").strip(), username, bool(verify_ssl))
+
+    @classmethod
+    def get(cls, key: Tuple[str, str, bool]) -> Optional[Dict[str, str]]:
+        now = time.time()
+        with cls._lock:
+            payload = cls._entries.get(key)
+            if not payload:
+                return None
+            expires_at = float(payload.get("expires_at") or 0)
+            if expires_at <= now:
+                cls._entries.pop(key, None)
+                return None
+            tokens = payload.get("tokens")
+            if not isinstance(tokens, dict):
+                cls._entries.pop(key, None)
+                return None
+            return {
+                "access_token": str(tokens.get("access_token") or ""),
+                "csrf_token": str(tokens.get("csrf_token") or ""),
+            }
+
+    @classmethod
+    def set(cls, key: Tuple[str, str, bool], tokens: Dict[str, str], ttl_seconds: Optional[int] = None) -> None:
+        normalized_ttl = max(int(ttl_seconds or cls.TTL_SECONDS), 1)
+        with cls._lock:
+            cls._entries[key] = {
+                "tokens": {
+                    "access_token": str(tokens.get("access_token") or ""),
+                    "csrf_token": str(tokens.get("csrf_token") or ""),
+                },
+                "expires_at": time.time() + normalized_ttl,
+            }
+
+    @classmethod
+    def invalidate(cls, key: Tuple[str, str, bool]) -> None:
+        with cls._lock:
+            cls._entries.pop(key, None)
+# [/DEF:SupersetAuthCache:Class]
+
 # [DEF:APIClient:Class]
 # @PURPOSE: Encapsulates HTTP logic for API interaction, including sessions, authentication, and request handling.
 class APIClient:
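The TTL semantics of the cache added above can be sketched independently. This toy `TokenCache` (a hypothetical name, not the project's class) accepts an injected clock so expiry is testable without sleeping:

```python
import threading

class TokenCache:
    """Minimal process-local TTL cache in the spirit of SupersetAuthCache (illustrative only)."""

    def __init__(self, ttl_seconds=300):
        self._ttl = ttl_seconds
        self._lock = threading.Lock()
        self._entries = {}

    def set(self, key, tokens, now):
        # Store a copy of the tokens together with their absolute expiry time.
        with self._lock:
            self._entries[key] = {"tokens": dict(tokens), "expires_at": now + self._ttl}

    def get(self, key, now):
        with self._lock:
            payload = self._entries.get(key)
            if not payload or payload["expires_at"] <= now:
                self._entries.pop(key, None)  # lazily evict expired entries on read
                return None
            return payload["tokens"]

cache = TokenCache(ttl_seconds=300)
cache.set(("https://ss", "admin", True), {"access_token": "a", "csrf_token": "c"}, now=0)
assert cache.get(("https://ss", "admin", True), now=299) == {"access_token": "a", "csrf_token": "c"}
assert cache.get(("https://ss", "admin", True), now=300) is None  # expired at exactly TTL
```

The real class uses `time.time()` internally; the injected `now` here only makes the expiry boundary easy to demonstrate.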
@@ -101,11 +159,17 @@ class APIClient:
     def __init__(self, config: Dict[str, Any], verify_ssl: bool = True, timeout: int = DEFAULT_TIMEOUT):
         with belief_scope("__init__"):
             app_logger.info("[APIClient.__init__][Entry] Initializing APIClient.")
-            self.base_url: str = config.get("base_url", "")
+            self.base_url: str = self._normalize_base_url(config.get("base_url", ""))
             self.api_base_url: str = f"{self.base_url}/api/v1"
             self.auth = config.get("auth")
             self.request_settings = {"verify_ssl": verify_ssl, "timeout": timeout}
             self.session = self._init_session()
+            self._tokens: Dict[str, str] = {}
+            self._auth_cache_key = SupersetAuthCache.build_key(
+                self.base_url,
+                self.auth,
+                verify_ssl,
+            )
+            self._authenticated = False
             app_logger.info("[APIClient.__init__][Exit] APIClient initialized.")
     # [/DEF:__init__:Function]
@@ -156,6 +220,34 @@ class APIClient:
             return session
     # [/DEF:_init_session:Function]

+    # [DEF:_normalize_base_url:Function]
+    # @PURPOSE: Normalize Superset environment URL to base host/path without trailing slash and /api/v1 suffix.
+    # @PRE: raw_url can be empty.
+    # @POST: Returns canonical base URL suitable for building API endpoints.
+    # @RETURN: str
+    def _normalize_base_url(self, raw_url: str) -> str:
+        normalized = str(raw_url or "").strip().rstrip("/")
+        if normalized.lower().endswith("/api/v1"):
+            normalized = normalized[:-len("/api/v1")]
+        return normalized.rstrip("/")
+    # [/DEF:_normalize_base_url:Function]
+
+    # [DEF:_build_api_url:Function]
+    # @PURPOSE: Build absolute Superset API URL for endpoint using canonical /api/v1 base.
+    # @PRE: endpoint is relative path or absolute URL.
+    # @POST: Returns full URL without accidental duplicate slashes.
+    # @RETURN: str
+    def _build_api_url(self, endpoint: str) -> str:
+        normalized_endpoint = str(endpoint or "").strip()
+        if normalized_endpoint.startswith("http://") or normalized_endpoint.startswith("https://"):
+            return normalized_endpoint
+        if not normalized_endpoint.startswith("/"):
+            normalized_endpoint = f"/{normalized_endpoint}"
+        if normalized_endpoint.startswith("/api/v1/") or normalized_endpoint == "/api/v1":
+            return f"{self.base_url}{normalized_endpoint}"
+        return f"{self.api_base_url}{normalized_endpoint}"
+    # [/DEF:_build_api_url:Function]
+
     # [DEF:authenticate:Function]
     # @PURPOSE: Performs authentication against the Superset API and obtains access and CSRF tokens.
     # @PRE: self.auth and self.base_url must be valid.
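The two URL helpers added above can be exercised in isolation; this sketch re-implements their logic as free functions (names are illustrative) to show how a configured URL that already ends in `/api/v1` is canonicalized and how endpoints avoid a duplicated prefix:

```python
def normalize_base_url(raw_url):
    """Strip trailing slashes and a duplicate /api/v1 suffix from a configured environment URL."""
    normalized = str(raw_url or "").strip().rstrip("/")
    if normalized.lower().endswith("/api/v1"):
        normalized = normalized[:-len("/api/v1")]
    return normalized.rstrip("/")

def build_api_url(base_url, endpoint):
    """Resolve an endpoint against the canonical /api/v1 base without duplicating the prefix."""
    endpoint = str(endpoint or "").strip()
    if endpoint.startswith(("http://", "https://")):
        return endpoint                      # absolute URLs pass through untouched
    if not endpoint.startswith("/"):
        endpoint = "/" + endpoint
    if endpoint.startswith("/api/v1/") or endpoint == "/api/v1":
        return base_url + endpoint           # endpoint already carries the API prefix
    return base_url + "/api/v1" + endpoint

base = normalize_base_url("https://superset.local/api/v1/")
assert base == "https://superset.local"
assert build_api_url(base, "security/login") == "https://superset.local/api/v1/security/login"
assert build_api_url(base, "/api/v1/security/users/") == "https://superset.local/api/v1/security/users/"
```

This is why a misconfigured `base_url` like `https://host/api/v1/` no longer produces login URLs with a doubled `/api/v1/api/v1/` segment.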
@@ -165,8 +257,14 @@ class APIClient:
     def authenticate(self) -> Dict[str, str]:
         with belief_scope("authenticate"):
             app_logger.info("[authenticate][Enter] Authenticating to %s", self.base_url)
+            cached_tokens = SupersetAuthCache.get(self._auth_cache_key)
+            if cached_tokens and cached_tokens.get("access_token") and cached_tokens.get("csrf_token"):
+                self._tokens = cached_tokens
+                self._authenticated = True
+                app_logger.info("[authenticate][CacheHit] Reusing cached Superset auth tokens for %s", self.base_url)
+                return self._tokens
             try:
-                login_url = f"{self.base_url}/security/login"
+                login_url = f"{self.api_base_url}/security/login"
                 # Log the payload keys and values (masking password)
                 masked_auth = {k: ("******" if k == "password" else v) for k, v in self.auth.items()}
                 app_logger.info(f"[authenticate][Debug] Login URL: {login_url}")
@@ -180,20 +278,23 @@ class APIClient:
|
||||
response.raise_for_status()
|
||||
access_token = response.json()["access_token"]
|
||||
|
||||
csrf_url = f"{self.base_url}/security/csrf_token/"
|
||||
csrf_url = f"{self.api_base_url}/security/csrf_token/"
|
||||
csrf_response = self.session.get(csrf_url, headers={"Authorization": f"Bearer {access_token}"}, timeout=self.request_settings["timeout"])
|
||||
csrf_response.raise_for_status()
|
||||
|
||||
self._tokens = {"access_token": access_token, "csrf_token": csrf_response.json()["result"]}
|
||||
self._authenticated = True
|
||||
SupersetAuthCache.set(self._auth_cache_key, self._tokens)
|
||||
app_logger.info("[authenticate][Exit] Authenticated successfully.")
|
||||
return self._tokens
|
||||
except requests.exceptions.HTTPError as e:
|
||||
SupersetAuthCache.invalidate(self._auth_cache_key)
|
||||
status_code = e.response.status_code if e.response is not None else None
|
||||
if status_code in [502, 503, 504]:
|
||||
raise NetworkError(f"Environment unavailable during authentication (Status {status_code})", status_code=status_code) from e
|
||||
raise AuthenticationError(f"Authentication failed: {e}") from e
|
||||
except (requests.exceptions.RequestException, KeyError) as e:
|
||||
SupersetAuthCache.invalidate(self._auth_cache_key)
|
||||
raise NetworkError(f"Network or parsing error during authentication: {e}") from e
|
||||
# [/DEF:authenticate:Function]
|
||||
|
||||
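The cache-hit path above relies on `SupersetAuthCache` exposing `get`/`set`/`invalidate` keyed by environment. The interface below is a hypothetical stand-in reconstructed only from those three calls, sketching the process-wide, thread-safe token reuse the diff introduces:

```python
import threading

class AuthTokenCache:
    # Process-wide token store; simplified stand-in for SupersetAuthCache
    # (real class may add TTLs or persistence, which are not shown in the diff).
    _lock = threading.Lock()
    _store = {}

    @classmethod
    def get(cls, key):
        with cls._lock:
            return cls._store.get(key)

    @classmethod
    def set(cls, key, tokens):
        with cls._lock:
            cls._store[key] = dict(tokens)

    @classmethod
    def invalidate(cls, key):
        # Called on auth failure or 401 so the next call re-authenticates.
        with cls._lock:
            cls._store.pop(key, None)

AuthTokenCache.set("env1", {"access_token": "a", "csrf_token": "c"})
print(AuthTokenCache.get("env1"))  # {'access_token': 'a', 'csrf_token': 'c'}
AuthTokenCache.invalidate("env1")
print(AuthTokenCache.get("env1"))  # None
```

The key design point mirrored from the diff: every failure path invalidates the cache before raising, so stale tokens never survive an error.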
@@ -224,7 +325,7 @@ class APIClient:
# @RETURN: `requests.Response` if `raw_response=True`, otherwise `dict`.
# @THROW: SupersetAPIError, NetworkError and their subclasses.
def request(self, method: str, endpoint: str, headers: Optional[Dict] = None, raw_response: bool = False, **kwargs) -> Union[requests.Response, Dict[str, Any]]:
    full_url = f"{self.base_url}{endpoint}"
    full_url = self._build_api_url(endpoint)
    _headers = self.headers.copy()
    if headers:
        _headers.update(headers)
@@ -234,6 +335,10 @@ class APIClient:
        response.raise_for_status()
        return response if raw_response else response.json()
    except requests.exceptions.HTTPError as e:
        if e.response is not None and e.response.status_code == 401:
            self._authenticated = False
            self._tokens = {}
            SupersetAuthCache.invalidate(self._auth_cache_key)
        self._handle_http_error(e, endpoint)
    except requests.exceptions.RequestException as e:
        self._handle_network_error(e, full_url)
@@ -288,7 +393,7 @@ class APIClient:
# @THROW: SupersetAPIError, NetworkError, TypeError.
def upload_file(self, endpoint: str, file_info: Dict[str, Any], extra_data: Optional[Dict] = None, timeout: Optional[int] = None) -> Dict:
    with belief_scope("upload_file"):
        full_url = f"{self.base_url}{endpoint}"
        full_url = self._build_api_url(endpoint)
        _headers = self.headers.copy()
        _headers.pop('Content-Type', None)

@@ -14,20 +14,29 @@ from .core.config_manager import ConfigManager
from .core.scheduler import SchedulerService
from .services.resource_service import ResourceService
from .services.mapping_service import MappingService
from .core.database import init_db, get_auth_db
from .services.clean_release.repositories import (
    CandidateRepository, ArtifactRepository, ManifestRepository,
    PolicyRepository, ComplianceRepository, ReportRepository,
    ApprovalRepository, PublicationRepository, AuditRepository,
    CleanReleaseAuditLog
)
from .services.clean_release.repository import CleanReleaseRepository
from .services.clean_release.facade import CleanReleaseFacade
from .services.reports.report_service import ReportsService
from .core.database import init_db, get_auth_db, get_db
from .core.logger import logger
from .core.auth.jwt import decode_token
from .core.auth.repository import AuthRepository
from .models.auth import User

# Initialize singletons
# Use absolute path relative to this file to ensure plugins are found regardless of CWD
project_root = Path(__file__).parent.parent.parent
config_path = project_root / "config.json"

# Initialize database before services that use persisted configuration.
init_db()
config_manager = ConfigManager(config_path=str(config_path))

# [DEF:get_config_manager:Function]
# @PURPOSE: Dependency injector for ConfigManager.
@@ -54,6 +63,11 @@ logger.info("SchedulerService initialized")
resource_service = ResourceService()
logger.info("ResourceService initialized")

# Clean Release Redesign Singletons
# Note: These use get_db() which is a generator, so we need a way to provide a session.
# For singletons in dependencies.py, we might need a different approach or
# initialize them inside the dependency functions.

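The note above is about generator dependencies: `get_db()` yields a session per request, so it cannot back a module-level singleton created at import time. A minimal framework-free sketch of that pattern (the dict is a stand-in for a SQLAlchemy Session; `resolve_dependency` is a rough, hypothetical equivalent of what FastAPI does around a request handler):

```python
def get_db():
    # Generator dependency: yields a session, runs cleanup after the request.
    session = {"open": True}  # stand-in for a SQLAlchemy Session
    try:
        yield session
    finally:
        session["open"] = False  # session.close() in the real code

def resolve_dependency(gen_func):
    # Advance the generator to obtain the value, and hand back a finalizer
    # that the framework would invoke once the handler returns.
    gen = gen_func()
    value = next(gen)
    def finalize():
        gen.close()  # raises GeneratorExit inside get_db, running its finally
    return value, finalize

db, finalize = resolve_dependency(get_db)
print(db["open"])   # True while the "request" is in flight
finalize()
print(db["open"])   # False after cleanup
```

This is why the facade below is built inside `get_clean_release_facade(db=Depends(get_db))` rather than at module scope: the session only exists between `next(gen)` and `finalize()`.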
# [DEF:get_plugin_loader:Function]
# @PURPOSE: Dependency injector for PluginLoader.
# @PRE: Global plugin_loader must be initialized.
@@ -104,6 +118,46 @@ def get_mapping_service() -> MappingService:
    return MappingService(config_manager)
# [/DEF:get_mapping_service:Function]


_clean_release_repository = CleanReleaseRepository()

# [DEF:get_clean_release_repository:Function]
# @PURPOSE: Legacy compatibility shim for CleanReleaseRepository.
# @POST: Returns a shared CleanReleaseRepository instance.
def get_clean_release_repository() -> CleanReleaseRepository:
    """Legacy compatibility shim for CleanReleaseRepository."""
    return _clean_release_repository
# [/DEF:get_clean_release_repository:Function]


# [DEF:get_clean_release_facade:Function]
# @PURPOSE: Dependency injector for CleanReleaseFacade.
# @POST: Returns a facade instance with a fresh DB session.
def get_clean_release_facade(db = Depends(get_db)) -> CleanReleaseFacade:
    candidate_repo = CandidateRepository(db)
    artifact_repo = ArtifactRepository(db)
    manifest_repo = ManifestRepository(db)
    policy_repo = PolicyRepository(db)
    compliance_repo = ComplianceRepository(db)
    report_repo = ReportRepository(db)
    approval_repo = ApprovalRepository(db)
    publication_repo = PublicationRepository(db)
    audit_repo = AuditRepository(db)

    return CleanReleaseFacade(
        candidate_repo=candidate_repo,
        artifact_repo=artifact_repo,
        manifest_repo=manifest_repo,
        policy_repo=policy_repo,
        compliance_repo=compliance_repo,
        report_repo=report_repo,
        approval_repo=approval_repo,
        publication_repo=publication_repo,
        audit_repo=audit_repo,
        config_manager=config_manager
    )
# [/DEF:get_clean_release_facade:Function]

# [DEF:oauth2_scheme:Variable]
# @PURPOSE: OAuth2 password bearer scheme for token extraction.
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="/api/auth/login")

149
backend/src/models/__tests__/test_clean_release.py
Normal file
@@ -0,0 +1,149 @@
# [DEF:__tests__/test_clean_release:Module]
# @RELATION: VERIFIES -> ../clean_release.py
# @PURPOSE: Contract testing for Clean Release models
# [/DEF:__tests__/test_clean_release:Module]

import pytest
from datetime import datetime
from pydantic import ValidationError
from src.models.clean_release import (
    ReleaseCandidate,
    ReleaseCandidateStatus,
    ProfileType,
    CleanProfilePolicy,
    DistributionManifest,
    ManifestItem,
    ManifestSummary,
    ClassificationType,
    ComplianceCheckRun,
    CheckFinalStatus,
    CheckStageResult,
    CheckStageName,
    CheckStageStatus,
    ComplianceReport,
    ExecutionMode
)

# @TEST_FIXTURE: valid_enterprise_candidate
@pytest.fixture
def valid_candidate_data():
    return {
        "candidate_id": "RC-001",
        "version": "1.0.0",
        "profile": ProfileType.ENTERPRISE_CLEAN,
        "created_at": datetime.now(),
        "created_by": "admin",
        "source_snapshot_ref": "v1.0.0-snapshot"
    }

def test_release_candidate_valid(valid_candidate_data):
    rc = ReleaseCandidate(**valid_candidate_data)
    assert rc.candidate_id == "RC-001"
    assert rc.status == ReleaseCandidateStatus.DRAFT

def test_release_candidate_empty_id(valid_candidate_data):
    valid_candidate_data["candidate_id"] = " "
    with pytest.raises(ValueError, match="candidate_id must be non-empty"):
        ReleaseCandidate(**valid_candidate_data)

# @TEST_FIXTURE: valid_enterprise_policy
@pytest.fixture
def valid_policy_data():
    return {
        "policy_id": "POL-001",
        "policy_version": "1",
        "active": True,
        "prohibited_artifact_categories": ["test-data"],
        "required_system_categories": ["core"],
        "internal_source_registry_ref": "REG-1",
        "effective_from": datetime.now(),
        "profile": ProfileType.ENTERPRISE_CLEAN
    }

# @TEST_INVARIANT: policy_purity
def test_enterprise_policy_valid(valid_policy_data):
    policy = CleanProfilePolicy(**valid_policy_data)
    assert policy.external_source_forbidden is True

# @TEST_EDGE: enterprise_policy_missing_prohibited
def test_enterprise_policy_missing_prohibited(valid_policy_data):
    valid_policy_data["prohibited_artifact_categories"] = []
    with pytest.raises(ValueError, match="enterprise-clean policy requires prohibited_artifact_categories"):
        CleanProfilePolicy(**valid_policy_data)

# @TEST_EDGE: enterprise_policy_external_allowed
def test_enterprise_policy_external_allowed(valid_policy_data):
    valid_policy_data["external_source_forbidden"] = False
    with pytest.raises(ValueError, match="enterprise-clean policy requires external_source_forbidden=true"):
        CleanProfilePolicy(**valid_policy_data)

# @TEST_INVARIANT: manifest_consistency
# @TEST_EDGE: manifest_count_mismatch
def test_manifest_count_mismatch():
    summary = ManifestSummary(included_count=1, excluded_count=0, prohibited_detected_count=0)
    item = ManifestItem(path="p", category="c", classification=ClassificationType.ALLOWED, reason="r")

    # Valid
    DistributionManifest(
        manifest_id="m1", candidate_id="rc1", policy_id="p1",
        generated_at=datetime.now(), generated_by="u", items=[item],
        summary=summary, deterministic_hash="h"
    )

    # Invalid count
    summary.included_count = 2
    with pytest.raises(ValueError, match="manifest summary counts must match items size"):
        DistributionManifest(
            manifest_id="m1", candidate_id="rc1", policy_id="p1",
            generated_at=datetime.now(), generated_by="u", items=[item],
            summary=summary, deterministic_hash="h"
        )

# @TEST_INVARIANT: run_integrity
# @TEST_EDGE: compliant_run_stage_fail
def test_compliant_run_validation():
    base_run = {
        "check_run_id": "run1",
        "candidate_id": "rc1",
        "policy_id": "p1",
        "started_at": datetime.now(),
        "triggered_by": "u",
        "execution_mode": ExecutionMode.TUI,
        "final_status": CheckFinalStatus.COMPLIANT,
        "checks": [
            CheckStageResult(stage=CheckStageName.DATA_PURITY, status=CheckStageStatus.PASS),
            CheckStageResult(stage=CheckStageName.INTERNAL_SOURCES_ONLY, status=CheckStageStatus.PASS),
            CheckStageResult(stage=CheckStageName.NO_EXTERNAL_ENDPOINTS, status=CheckStageStatus.PASS),
            CheckStageResult(stage=CheckStageName.MANIFEST_CONSISTENCY, status=CheckStageStatus.PASS),
        ]
    }
    # Valid
    ComplianceCheckRun(**base_run)

    # One stage fails -> cannot be COMPLIANT
    base_run["checks"][0].status = CheckStageStatus.FAIL
    with pytest.raises(ValueError, match="compliant run requires PASS on all mandatory stages"):
        ComplianceCheckRun(**base_run)

    # Missing stage -> cannot be COMPLIANT
    base_run["checks"] = base_run["checks"][1:]
    with pytest.raises(ValueError, match="compliant run requires all mandatory stages"):
        ComplianceCheckRun(**base_run)

def test_report_validation():
    # Valid blocked report
    ComplianceReport(
        report_id="rep1", check_run_id="run1", candidate_id="rc1",
        generated_at=datetime.now(), final_status=CheckFinalStatus.BLOCKED,
        operator_summary="Blocked", structured_payload_ref="ref",
        violations_count=2, blocking_violations_count=2
    )

    # BLOCKED with 0 blocking violations
    with pytest.raises(ValueError, match="blocked report requires blocking violations"):
        ComplianceReport(
            report_id="rep1", check_run_id="run1", candidate_id="rc1",
            generated_at=datetime.now(), final_status=CheckFinalStatus.BLOCKED,
            operator_summary="Blocked", structured_payload_ref="ref",
            violations_count=2, blocking_violations_count=0
        )
436
backend/src/models/clean_release.py
Normal file
@@ -0,0 +1,436 @@
# [DEF:backend.src.models.clean_release:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, models, lifecycle, compliance, evidence, immutability
# @PURPOSE: Define canonical clean release domain entities and lifecycle guards.
# @LAYER: Domain
# @INVARIANT: Immutable snapshots are never mutated; forbidden lifecycle transitions are rejected.

from datetime import datetime
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Dict, Any
from sqlalchemy import Column, String, DateTime, JSON, ForeignKey, Integer, Boolean
from sqlalchemy.orm import relationship
from .mapping import Base
from ..services.clean_release.enums import (
    CandidateStatus, RunStatus, ComplianceDecision,
    ApprovalDecisionType, PublicationStatus, ClassificationType
)
from ..services.clean_release.exceptions import IllegalTransitionError

# [DEF:CheckFinalStatus:Class]
# @PURPOSE: Backward-compatible final status enum for legacy TUI/orchestrator tests.
class CheckFinalStatus(str, Enum):
    COMPLIANT = "COMPLIANT"
    BLOCKED = "BLOCKED"
    FAILED = "FAILED"
# [/DEF:CheckFinalStatus:Class]

# [DEF:CheckStageName:Class]
# @PURPOSE: Backward-compatible stage name enum for legacy TUI/orchestrator tests.
class CheckStageName(str, Enum):
    DATA_PURITY = "DATA_PURITY"
    INTERNAL_SOURCES_ONLY = "INTERNAL_SOURCES_ONLY"
    NO_EXTERNAL_ENDPOINTS = "NO_EXTERNAL_ENDPOINTS"
    MANIFEST_CONSISTENCY = "MANIFEST_CONSISTENCY"
# [/DEF:CheckStageName:Class]

# [DEF:CheckStageStatus:Class]
# @PURPOSE: Backward-compatible stage status enum for legacy TUI/orchestrator tests.
class CheckStageStatus(str, Enum):
    PASS = "PASS"
    FAIL = "FAIL"
    SKIPPED = "SKIPPED"
    RUNNING = "RUNNING"
# [/DEF:CheckStageStatus:Class]

# [DEF:CheckStageResult:Class]
# @PURPOSE: Backward-compatible stage result container for legacy TUI/orchestrator tests.
@dataclass
class CheckStageResult:
    stage: CheckStageName
    status: CheckStageStatus
    details: str = ""
# [/DEF:CheckStageResult:Class]

# [DEF:ProfileType:Class]
# @PURPOSE: Backward-compatible profile enum for legacy TUI bootstrap logic.
class ProfileType(str, Enum):
    ENTERPRISE_CLEAN = "enterprise-clean"
# [/DEF:ProfileType:Class]

# [DEF:RegistryStatus:Class]
# @PURPOSE: Backward-compatible registry status enum for legacy TUI bootstrap logic.
class RegistryStatus(str, Enum):
    ACTIVE = "ACTIVE"
    INACTIVE = "INACTIVE"
# [/DEF:RegistryStatus:Class]

# [DEF:ReleaseCandidateStatus:Class]
# @PURPOSE: Backward-compatible release candidate status enum for legacy TUI.
class ReleaseCandidateStatus(str, Enum):
    DRAFT = CandidateStatus.DRAFT.value
    PREPARED = CandidateStatus.PREPARED.value
    MANIFEST_BUILT = CandidateStatus.MANIFEST_BUILT.value
    CHECK_PENDING = CandidateStatus.CHECK_PENDING.value
    CHECK_RUNNING = CandidateStatus.CHECK_RUNNING.value
    CHECK_PASSED = CandidateStatus.CHECK_PASSED.value
    CHECK_BLOCKED = CandidateStatus.CHECK_BLOCKED.value
    CHECK_ERROR = CandidateStatus.CHECK_ERROR.value
    APPROVED = CandidateStatus.APPROVED.value
    PUBLISHED = CandidateStatus.PUBLISHED.value
    REVOKED = CandidateStatus.REVOKED.value
# [/DEF:ReleaseCandidateStatus:Class]

# [DEF:ResourceSourceEntry:Class]
# @PURPOSE: Backward-compatible source entry model for legacy TUI bootstrap logic.
@dataclass
class ResourceSourceEntry:
    source_id: str
    host: str
    protocol: str
    purpose: str
    enabled: bool = True
# [/DEF:ResourceSourceEntry:Class]

# [DEF:ResourceSourceRegistry:Class]
# @PURPOSE: Backward-compatible source registry model for legacy TUI bootstrap logic.
@dataclass
class ResourceSourceRegistry:
    registry_id: str
    name: str
    entries: List[ResourceSourceEntry]
    updated_at: datetime
    updated_by: str
    status: str = "ACTIVE"

    @property
    def id(self) -> str:
        return self.registry_id
# [/DEF:ResourceSourceRegistry:Class]

# [DEF:CleanProfilePolicy:Class]
# @PURPOSE: Backward-compatible policy model for legacy TUI bootstrap logic.
@dataclass
class CleanProfilePolicy:
    policy_id: str
    policy_version: str
    profile: str
    active: bool
    internal_source_registry_ref: str
    prohibited_artifact_categories: List[str]
    effective_from: datetime
    required_system_categories: Optional[List[str]] = None

    @property
    def id(self) -> str:
        return self.policy_id

    @property
    def registry_snapshot_id(self) -> str:
        return self.internal_source_registry_ref
# [/DEF:CleanProfilePolicy:Class]

# [DEF:ComplianceCheckRun:Class]
# @PURPOSE: Backward-compatible run model for legacy TUI typing/import compatibility.
@dataclass
class ComplianceCheckRun:
    check_run_id: str
    candidate_id: str
    policy_id: str
    requested_by: str
    execution_mode: str
    checks: List[CheckStageResult]
    final_status: CheckFinalStatus
# [/DEF:ComplianceCheckRun:Class]

# [DEF:ReleaseCandidate:Class]
# @PURPOSE: Represents the release unit being prepared and governed.
# @PRE: id, version, source_snapshot_ref are non-empty.
# @POST: status advances only through legal transitions.
class ReleaseCandidate(Base):
    __tablename__ = "clean_release_candidates"

    id = Column(String, primary_key=True)
    name = Column(String, nullable=True)  # Added back for backward compatibility with some legacy DTOs
    version = Column(String, nullable=False)
    source_snapshot_ref = Column(String, nullable=False)
    build_id = Column(String, nullable=True)
    created_at = Column(DateTime, default=datetime.utcnow)
    created_by = Column(String, nullable=False)
    status = Column(String, default=CandidateStatus.DRAFT)

    @property
    def candidate_id(self) -> str:
        return self.id

    def transition_to(self, new_status: CandidateStatus):
        """
        @PURPOSE: Enforce legal state transitions.
        @PRE: Transition must be allowed by lifecycle rules.
        """
        allowed = {
            CandidateStatus.DRAFT: [CandidateStatus.PREPARED],
            CandidateStatus.PREPARED: [CandidateStatus.MANIFEST_BUILT],
            CandidateStatus.MANIFEST_BUILT: [CandidateStatus.CHECK_PENDING],
            CandidateStatus.CHECK_PENDING: [CandidateStatus.CHECK_RUNNING],
            CandidateStatus.CHECK_RUNNING: [
                CandidateStatus.CHECK_PASSED,
                CandidateStatus.CHECK_BLOCKED,
                CandidateStatus.CHECK_ERROR
            ],
            CandidateStatus.CHECK_PASSED: [CandidateStatus.APPROVED, CandidateStatus.CHECK_PENDING],
            CandidateStatus.CHECK_BLOCKED: [CandidateStatus.CHECK_PENDING],
            CandidateStatus.CHECK_ERROR: [CandidateStatus.CHECK_PENDING],
            CandidateStatus.APPROVED: [CandidateStatus.PUBLISHED],
            CandidateStatus.PUBLISHED: [CandidateStatus.REVOKED],
            CandidateStatus.REVOKED: []
        }
        current_status = CandidateStatus(self.status)
        if new_status not in allowed.get(current_status, []):
            raise IllegalTransitionError(f"Forbidden transition from {current_status} to {new_status}")
        self.status = new_status.value
# [/DEF:ReleaseCandidate:Class]

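The lifecycle guard in `transition_to` is an adjacency map lookup: every status lists its legal successors, and anything else raises. A self-contained sketch with an abbreviated stand-in enum (only three of the real statuses, and `ValueError` in place of `IllegalTransitionError`):

```python
from enum import Enum

class CandidateStatus(str, Enum):
    # Abbreviated stand-in for the full lifecycle enum in the diff.
    DRAFT = "DRAFT"
    PREPARED = "PREPARED"
    PUBLISHED = "PUBLISHED"

ALLOWED = {
    CandidateStatus.DRAFT: [CandidateStatus.PREPARED],
    CandidateStatus.PREPARED: [],   # real map continues to MANIFEST_BUILT etc.
    CandidateStatus.PUBLISHED: [],
}

def transition(current: CandidateStatus, new: CandidateStatus) -> CandidateStatus:
    # Reject anything not listed as a legal successor of the current state.
    if new not in ALLOWED.get(current, []):
        raise ValueError(f"Forbidden transition from {current} to {new}")
    return new

state = transition(CandidateStatus.DRAFT, CandidateStatus.PREPARED)
print(state.value)  # PREPARED
try:
    transition(CandidateStatus.PREPARED, CandidateStatus.PUBLISHED)
except ValueError:
    print("rejected")  # not a legal successor in this abbreviated map
```

The design choice mirrored here: encoding transitions as data rather than branching logic keeps the invariant auditable in one place, which matters for the compliance audit trail below.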
# [DEF:CandidateArtifact:Class]
|
||||
# @PURPOSE: Represents one artifact associated with a release candidate.
|
||||
class CandidateArtifact(Base):
|
||||
__tablename__ = "clean_release_artifacts"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
candidate_id = Column(String, ForeignKey("clean_release_candidates.id"), nullable=False)
|
||||
path = Column(String, nullable=False)
|
||||
sha256 = Column(String, nullable=False)
|
||||
size = Column(Integer, nullable=False)
|
||||
detected_category = Column(String, nullable=True)
|
||||
declared_category = Column(String, nullable=True)
|
||||
source_uri = Column(String, nullable=True)
|
||||
source_host = Column(String, nullable=True)
|
||||
metadata_json = Column(JSON, default=dict)
|
||||
# [/DEF:CandidateArtifact:Class]
|
||||
|
||||
# [DEF:ManifestItem:Class]
|
||||
@dataclass
|
||||
class ManifestItem:
|
||||
path: str
|
||||
category: str
|
||||
classification: ClassificationType
|
||||
reason: str
|
||||
checksum: Optional[str] = None
|
||||
# [/DEF:ManifestItem:Class]
|
||||
|
||||
# [DEF:ManifestSummary:Class]
|
||||
@dataclass
|
||||
class ManifestSummary:
|
||||
included_count: int
|
||||
excluded_count: int
|
||||
prohibited_detected_count: int
|
||||
# [/DEF:ManifestSummary:Class]
|
||||
|
||||
# [DEF:DistributionManifest:Class]
|
||||
# @PURPOSE: Immutable snapshot of the candidate payload.
|
||||
# @INVARIANT: Immutable after creation.
|
||||
class DistributionManifest(Base):
|
||||
__tablename__ = "clean_release_manifests"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
candidate_id = Column(String, ForeignKey("clean_release_candidates.id"), nullable=False)
|
||||
manifest_version = Column(Integer, nullable=False)
|
||||
manifest_digest = Column(String, nullable=False)
|
||||
artifacts_digest = Column(String, nullable=False)
|
||||
created_at = Column(DateTime, default=datetime.utcnow)
|
||||
created_by = Column(String, nullable=False)
|
||||
source_snapshot_ref = Column(String, nullable=False)
|
||||
content_json = Column(JSON, nullable=False)
|
||||
immutable = Column(Boolean, default=True)
|
||||
|
||||
# Redesign compatibility fields (not persisted directly but used by builder/facade)
|
||||
def __init__(self, **kwargs):
|
||||
# Handle fields from manifest_builder.py
|
||||
if "manifest_id" in kwargs:
|
||||
kwargs["id"] = kwargs.pop("manifest_id")
|
||||
if "generated_at" in kwargs:
|
||||
kwargs["created_at"] = kwargs.pop("generated_at")
|
||||
if "generated_by" in kwargs:
|
||||
kwargs["created_by"] = kwargs.pop("generated_by")
|
||||
if "deterministic_hash" in kwargs:
|
||||
kwargs["manifest_digest"] = kwargs.pop("deterministic_hash")
|
||||
|
||||
# Ensure required DB fields have defaults if missing
|
||||
if "manifest_version" not in kwargs:
|
||||
kwargs["manifest_version"] = 1
|
||||
if "artifacts_digest" not in kwargs:
|
||||
kwargs["artifacts_digest"] = kwargs.get("manifest_digest", "pending")
|
||||
if "source_snapshot_ref" not in kwargs:
|
||||
kwargs["source_snapshot_ref"] = "pending"
|
||||
|
||||
# Pack items and summary into content_json if provided
|
||||
if "items" in kwargs or "summary" in kwargs:
|
||||
content = kwargs.get("content_json", {})
|
||||
if "items" in kwargs:
|
||||
items = kwargs.pop("items")
|
||||
content["items"] = [
|
||||
{
|
||||
"path": i.path,
|
||||
"category": i.category,
|
||||
"classification": i.classification.value,
|
||||
"reason": i.reason,
|
||||
"checksum": i.checksum
|
||||
} for i in items
|
||||
]
|
||||
if "summary" in kwargs:
|
||||
summary = kwargs.pop("summary")
|
||||
content["summary"] = {
|
||||
"included_count": summary.included_count,
|
||||
"excluded_count": summary.excluded_count,
|
||||
"prohibited_detected_count": summary.prohibited_detected_count
|
||||
}
|
||||
kwargs["content_json"] = content
|
||||
|
||||
super().__init__(**kwargs)
|
||||
# [/DEF:DistributionManifest:Class]
|
||||
|
||||
# [DEF:SourceRegistrySnapshot:Class]
|
||||
# @PURPOSE: Immutable registry snapshot for allowed sources.
|
||||
class SourceRegistrySnapshot(Base):
|
||||
__tablename__ = "clean_release_registry_snapshots"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
registry_id = Column(String, nullable=False)
|
||||
registry_version = Column(String, nullable=False)
|
||||
created_at = Column(DateTime, default=datetime.utcnow)
|
||||
allowed_hosts = Column(JSON, nullable=False) # List[str]
|
||||
allowed_schemes = Column(JSON, nullable=False) # List[str]
|
||||
allowed_source_types = Column(JSON, nullable=False) # List[str]
|
||||
immutable = Column(Boolean, default=True)
|
||||
# [/DEF:SourceRegistrySnapshot:Class]
|
||||
|
||||
# [DEF:CleanPolicySnapshot:Class]
|
||||
# @PURPOSE: Immutable policy snapshot used to evaluate a run.
|
||||
class CleanPolicySnapshot(Base):
|
||||
__tablename__ = "clean_release_policy_snapshots"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
policy_id = Column(String, nullable=False)
|
||||
policy_version = Column(String, nullable=False)
|
||||
created_at = Column(DateTime, default=datetime.utcnow)
|
||||
content_json = Column(JSON, nullable=False)
|
||||
registry_snapshot_id = Column(String, ForeignKey("clean_release_registry_snapshots.id"), nullable=False)
|
||||
immutable = Column(Boolean, default=True)
|
||||
# [/DEF:CleanPolicySnapshot:Class]
|
||||
|
||||
# [DEF:ComplianceRun:Class]
|
||||
# @PURPOSE: Operational record for one compliance execution.
|
||||
class ComplianceRun(Base):
|
||||
__tablename__ = "clean_release_compliance_runs"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
candidate_id = Column(String, ForeignKey("clean_release_candidates.id"), nullable=False)
|
||||
manifest_id = Column(String, ForeignKey("clean_release_manifests.id"), nullable=False)
|
||||
manifest_digest = Column(String, nullable=False)
|
||||
policy_snapshot_id = Column(String, ForeignKey("clean_release_policy_snapshots.id"), nullable=False)
|
||||
registry_snapshot_id = Column(String, ForeignKey("clean_release_registry_snapshots.id"), nullable=False)
|
||||
requested_by = Column(String, nullable=False)
|
||||
requested_at = Column(DateTime, default=datetime.utcnow)
|
||||
started_at = Column(DateTime, nullable=True)
|
||||
finished_at = Column(DateTime, nullable=True)
|
||||
status = Column(String, default=RunStatus.PENDING)
|
||||
final_status = Column(String, nullable=True) # ComplianceDecision
|
||||
failure_reason = Column(String, nullable=True)
|
||||
task_id = Column(String, nullable=True)
|
||||
|
||||
@property
|
||||
def check_run_id(self) -> str:
|
||||
return self.id
|
||||
# [/DEF:ComplianceRun:Class]
|
||||
|
||||
# [DEF:ComplianceStageRun:Class]
|
||||
# @PURPOSE: Stage-level execution record inside a run.
|
||||
class ComplianceStageRun(Base):
|
||||
__tablename__ = "clean_release_compliance_stage_runs"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
run_id = Column(String, ForeignKey("clean_release_compliance_runs.id"), nullable=False)
|
||||
stage_name = Column(String, nullable=False)
|
||||
status = Column(String, nullable=False)
|
||||
started_at = Column(DateTime, nullable=True)
|
||||
finished_at = Column(DateTime, nullable=True)
|
||||
decision = Column(String, nullable=True) # ComplianceDecision
|
||||
details_json = Column(JSON, default=dict)
|
||||
# [/DEF:ComplianceStageRun:Class]
|
||||
|
||||
# [DEF:ComplianceViolation:Class]
|
||||
# @PURPOSE: Violation produced by a stage.
|
||||
class ComplianceViolation(Base):
|
||||
__tablename__ = "clean_release_compliance_violations"
|
||||
|
||||
id = Column(String, primary_key=True)
|
||||
    run_id = Column(String, ForeignKey("clean_release_compliance_runs.id"), nullable=False)
    stage_name = Column(String, nullable=False)
    code = Column(String, nullable=False)
    severity = Column(String, nullable=False)
    artifact_path = Column(String, nullable=True)
    artifact_sha256 = Column(String, nullable=True)
    message = Column(String, nullable=False)
    evidence_json = Column(JSON, default=dict)
# [/DEF:ComplianceViolation:Class]


# [DEF:ComplianceReport:Class]
# @PURPOSE: Immutable result derived from a completed run.
# @INVARIANT: Immutable after creation.
class ComplianceReport(Base):
    __tablename__ = "clean_release_compliance_reports"

    id = Column(String, primary_key=True)
    run_id = Column(String, ForeignKey("clean_release_compliance_runs.id"), nullable=False)
    candidate_id = Column(String, ForeignKey("clean_release_candidates.id"), nullable=False)
    final_status = Column(String, nullable=False)  # ComplianceDecision
    summary_json = Column(JSON, nullable=False)
    generated_at = Column(DateTime, default=datetime.utcnow)
    immutable = Column(Boolean, default=True)
# [/DEF:ComplianceReport:Class]


# [DEF:ApprovalDecision:Class]
# @PURPOSE: Approval or rejection bound to a candidate and report.
class ApprovalDecision(Base):
    __tablename__ = "clean_release_approval_decisions"

    id = Column(String, primary_key=True)
    candidate_id = Column(String, ForeignKey("clean_release_candidates.id"), nullable=False)
    report_id = Column(String, ForeignKey("clean_release_compliance_reports.id"), nullable=False)
    decision = Column(String, nullable=False)  # ApprovalDecisionType
    decided_by = Column(String, nullable=False)
    decided_at = Column(DateTime, default=datetime.utcnow)
    comment = Column(String, nullable=True)
# [/DEF:ApprovalDecision:Class]


# [DEF:PublicationRecord:Class]
# @PURPOSE: Publication or revocation record.
class PublicationRecord(Base):
    __tablename__ = "clean_release_publication_records"

    id = Column(String, primary_key=True)
    candidate_id = Column(String, ForeignKey("clean_release_candidates.id"), nullable=False)
    report_id = Column(String, ForeignKey("clean_release_compliance_reports.id"), nullable=False)
    published_by = Column(String, nullable=False)
    published_at = Column(DateTime, default=datetime.utcnow)
    target_channel = Column(String, nullable=False)
    publication_ref = Column(String, nullable=True)
    status = Column(String, default=PublicationStatus.ACTIVE)
# [/DEF:PublicationRecord:Class]


# [DEF:CleanReleaseAuditLog:Class]
# @PURPOSE: Represents a persistent audit log entry for clean release actions.
import uuid

class CleanReleaseAuditLog(Base):
    __tablename__ = "clean_release_audit_logs"

    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    candidate_id = Column(String, index=True, nullable=True)
    action = Column(String, nullable=False)  # e.g. "TRANSITION", "APPROVE", "PUBLISH"
    actor = Column(String, nullable=False)
    timestamp = Column(DateTime, default=datetime.utcnow)
    details_json = Column(JSON, default=dict)
# [/DEF:CleanReleaseAuditLog:Class]

# [/DEF:backend.src.models.clean_release:Module]
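A detail worth noting in these models is the `default=lambda: str(uuid.uuid4())` idiom on `CleanReleaseAuditLog.id`: passing a callable defers ID generation to insert time, whereas `default=str(uuid.uuid4())` would be evaluated once at import and hand every row the same value. A minimal stdlib sketch of the difference (no SQLAlchemy involved):

```python
import uuid

# Wrong: computed once, when the "column" is declared.
FROZEN_DEFAULT = str(uuid.uuid4())

def make_row_frozen():
    return {"id": FROZEN_DEFAULT}

# Right: a callable default is re-evaluated for every new row.
def make_row(default=lambda: str(uuid.uuid4())):
    return {"id": default()}

a, b = make_row(), make_row()
assert a["id"] != b["id"]  # fresh UUID per row
assert make_row_frozen()["id"] == make_row_frozen()["id"]  # frozen value repeats
```

SQLAlchemy applies the same rule: a callable `default` is invoked per INSERT, a plain value is shared by all rows.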
@@ -38,6 +38,7 @@ class GitServerConfig(Base):
     url = Column(String(255), nullable=False)
     pat = Column(String(255), nullable=False)  # PERSONAL ACCESS TOKEN
     default_repository = Column(String(255), nullable=True)
     default_branch = Column(String(255), default="main")
     status = Column(Enum(GitStatus), default=GitStatus.UNKNOWN)
     last_validated = Column(DateTime, default=datetime.utcnow)
 # [/DEF:GitServerConfig:Class]
@@ -53,7 +54,7 @@ class GitRepository(Base):
     config_id = Column(String(36), ForeignKey("git_server_configs.id"), nullable=False)
     remote_url = Column(String(255), nullable=False)
     local_path = Column(String(255), nullable=False)
-    current_branch = Column(String(255), default="main")
+    current_branch = Column(String(255), default="dev")
     sync_status = Column(Enum(SyncStatus), default=SyncStatus.CLEAN)
 # [/DEF:GitRepository:Class]
55  backend/src/models/profile.py  (new file)
@@ -0,0 +1,55 @@
# [DEF:backend.src.models.profile:Module]
#
# @TIER: STANDARD
# @SEMANTICS: profile, preferences, persistence, user, dashboard-filter, git, ui-preferences, sqlalchemy
# @PURPOSE: Defines persistent per-user profile settings for dashboard filter, Git identity/token, and UX preferences.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.auth
# @RELATION: INHERITS_FROM -> backend.src.models.mapping.Base
#
# @INVARIANT: Exactly one preference row exists per user_id.
# @INVARIANT: Sensitive Git token is stored encrypted and never returned in plaintext.

# [SECTION: IMPORTS]
import uuid
from datetime import datetime
from sqlalchemy import Column, String, Boolean, DateTime, ForeignKey
from sqlalchemy.orm import relationship
from .mapping import Base
# [/SECTION]


# [DEF:UserDashboardPreference:Class]
# @TIER: STANDARD
# @PURPOSE: Stores Superset username binding and default "my dashboards" toggle for one authenticated user.
class UserDashboardPreference(Base):
    __tablename__ = "user_dashboard_preferences"

    id = Column(String, primary_key=True, default=lambda: str(uuid.uuid4()))
    user_id = Column(String, ForeignKey("users.id"), nullable=False, unique=True, index=True)

    superset_username = Column(String, nullable=True)
    superset_username_normalized = Column(String, nullable=True, index=True)

    show_only_my_dashboards = Column(Boolean, nullable=False, default=False)

    git_username = Column(String, nullable=True)
    git_email = Column(String, nullable=True)
    git_personal_access_token_encrypted = Column(String, nullable=True)

    start_page = Column(String, nullable=False, default="dashboards")
    auto_open_task_drawer = Column(Boolean, nullable=False, default=True)
    dashboards_table_density = Column(String, nullable=False, default="comfortable")

    created_at = Column(DateTime, nullable=False, default=datetime.utcnow)
    updated_at = Column(
        DateTime,
        nullable=False,
        default=datetime.utcnow,
        onupdate=datetime.utcnow,
    )

    user = relationship("User")
# [/DEF:UserDashboardPreference:Class]

# [/DEF:backend.src.models.profile:Module]
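The model stores both `superset_username` and an indexed `superset_username_normalized`. The actual normalization rule lives elsewhere in the backend, but an indexed lookup column of this kind typically holds a trimmed, case-folded value; a hedged stdlib sketch under that assumption (the helper name is hypothetical):

```python
from typing import Optional

def normalize_superset_username(raw: Optional[str]) -> Optional[str]:
    # Hypothetical helper: the backend defines the real rule; this sketch
    # assumes trim + case-fold, matching the indexed lookup column's purpose.
    if raw is None:
        return None
    normalized = raw.strip().casefold()
    return normalized or None

assert normalize_superset_username("  Admin.User ") == "admin.user"
assert normalize_superset_username("   ") is None  # whitespace-only collapses to NULL
```

Storing the normalized form alongside the raw one lets the dashboard filter match case-insensitively without a function index.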
@@ -25,6 +25,7 @@ class TaskType(str, Enum):
     BACKUP = "backup"
     MIGRATION = "migration"
     DOCUMENTATION = "documentation"
+    CLEAN_RELEASE = "clean_release"
     UNKNOWN = "unknown"
 # [/DEF:TaskType:Class]
@@ -228,6 +228,25 @@ class StoragePlugin(PluginBase):
             f"[StoragePlugin][Action] Listing files in root: {root}, category: {category}, subpath: {subpath}, recursive: {recursive}"
         )
         files = []
+
+        # Root view contract: show category directories only.
+        if category is None and not subpath:
+            for cat in FileCategory:
+                base_dir = root / cat.value
+                if not base_dir.exists():
+                    continue
+                stat = base_dir.stat()
+                files.append(
+                    StoredFile(
+                        name=cat.value,
+                        path=cat.value,
+                        size=0,
+                        created_at=datetime.fromtimestamp(stat.st_ctime),
+                        category=cat,
+                        mime_type="directory",
+                    )
+                )
+            return sorted(files, key=lambda x: x.name)
 
         categories = [category] if category else list(FileCategory)
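The hunk above makes the storage root list category directories instead of recursing into files. The same contract can be exercised with only the standard library (the `FileCategory` values and `list_root` helper here are illustrative, not the plugin's real ones):

```python
import tempfile
from enum import Enum
from pathlib import Path

class FileCategory(str, Enum):
    # Illustrative categories; the real plugin defines its own set.
    BACKUPS = "backups"
    REPORTS = "reports"

def list_root(root: Path) -> list:
    # Root view contract: show only category directories that actually exist,
    # sorted by name, and never descend into their contents.
    entries = []
    for cat in FileCategory:
        if (root / cat.value).exists():
            entries.append(cat.value)
    return sorted(entries)

root = Path(tempfile.mkdtemp())
(root / "reports").mkdir()
assert list_root(root) == ["reports"]  # missing "backups" dir is skipped
```

Returning early for the root view keeps the expensive recursive walk out of the most frequently hit request.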
159  backend/src/schemas/profile.py  (new file)
@@ -0,0 +1,159 @@
# [DEF:backend.src.schemas.profile:Module]
#
# @TIER: STANDARD
# @SEMANTICS: profile, schemas, pydantic, preferences, superset, lookup, security, git, ux
# @PURPOSE: Defines API schemas for profile preference persistence, security read-only snapshot, and Superset account lookup.
# @LAYER: API
# @RELATION: DEPENDS_ON -> pydantic
#
# @INVARIANT: Schema shapes stay stable for profile UI states and backend preference contracts.

# [SECTION: IMPORTS]
from datetime import datetime
from typing import List, Literal, Optional
from pydantic import BaseModel, Field
# [/SECTION]


# [DEF:ProfilePermissionState:Class]
# @TIER: STANDARD
# @PURPOSE: Represents one permission badge state for profile read-only security view.
class ProfilePermissionState(BaseModel):
    key: str
    allowed: bool
# [/DEF:ProfilePermissionState:Class]


# [DEF:ProfileSecuritySummary:Class]
# @TIER: STANDARD
# @PURPOSE: Read-only security and access snapshot for current user.
class ProfileSecuritySummary(BaseModel):
    read_only: bool = True
    auth_source: Optional[str] = None
    current_role: Optional[str] = None
    role_source: Optional[str] = None
    roles: List[str] = Field(default_factory=list)
    permissions: List[ProfilePermissionState] = Field(default_factory=list)
# [/DEF:ProfileSecuritySummary:Class]


# [DEF:ProfilePreference:Class]
# @TIER: STANDARD
# @PURPOSE: Represents persisted profile preference for a single authenticated user.
class ProfilePreference(BaseModel):
    user_id: str
    superset_username: Optional[str] = None
    superset_username_normalized: Optional[str] = None
    show_only_my_dashboards: bool = False

    git_username: Optional[str] = None
    git_email: Optional[str] = None
    has_git_personal_access_token: bool = False
    git_personal_access_token_masked: Optional[str] = None

    start_page: Literal["dashboards", "datasets", "reports"] = "dashboards"
    auto_open_task_drawer: bool = True
    dashboards_table_density: Literal["compact", "comfortable"] = "comfortable"

    created_at: datetime
    updated_at: datetime

    class Config:
        from_attributes = True
# [/DEF:ProfilePreference:Class]


# [DEF:ProfilePreferenceUpdateRequest:Class]
# @TIER: STANDARD
# @PURPOSE: Request payload for updating current user's profile settings.
class ProfilePreferenceUpdateRequest(BaseModel):
    superset_username: Optional[str] = Field(
        default=None,
        description="Apache Superset username bound to current user profile.",
    )
    show_only_my_dashboards: Optional[bool] = Field(
        default=None,
        description='When true, "/dashboards" can auto-apply profile filter in main context.',
    )
    git_username: Optional[str] = Field(
        default=None,
        description="Git author username used for commit signature.",
    )
    git_email: Optional[str] = Field(
        default=None,
        description="Git author email used for commit signature.",
    )
    git_personal_access_token: Optional[str] = Field(
        default=None,
        description="Personal Access Token value. Empty string clears existing token.",
    )
    start_page: Optional[
        Literal["dashboards", "datasets", "reports", "reports-logs"]
    ] = Field(
        default=None,
        description="Preferred start page after login.",
    )
    auto_open_task_drawer: Optional[bool] = Field(
        default=None,
        description="Auto-open task drawer when long-running tasks start.",
    )
    dashboards_table_density: Optional[
        Literal["compact", "comfortable", "free"]
    ] = Field(
        default=None,
        description="Preferred table density for dashboard listings.",
    )
# [/DEF:ProfilePreferenceUpdateRequest:Class]


# [DEF:ProfilePreferenceResponse:Class]
# @TIER: STANDARD
# @PURPOSE: Response envelope for profile preference read/update endpoints.
class ProfilePreferenceResponse(BaseModel):
    status: Literal["success", "error"] = "success"
    message: Optional[str] = None
    validation_errors: List[str] = Field(default_factory=list)
    preference: ProfilePreference
    security: ProfileSecuritySummary = Field(default_factory=ProfileSecuritySummary)
# [/DEF:ProfilePreferenceResponse:Class]


# [DEF:SupersetAccountLookupRequest:Class]
# @TIER: STANDARD
# @PURPOSE: Query contract for Superset account lookup by selected environment.
class SupersetAccountLookupRequest(BaseModel):
    environment_id: str
    search: Optional[str] = None
    page_index: int = Field(default=0, ge=0)
    page_size: int = Field(default=20, ge=1, le=100)
    sort_column: str = Field(default="username")
    sort_order: str = Field(default="desc")
# [/DEF:SupersetAccountLookupRequest:Class]


# [DEF:SupersetAccountCandidate:Class]
# @TIER: STANDARD
# @PURPOSE: Canonical account candidate projected from Superset users payload.
class SupersetAccountCandidate(BaseModel):
    environment_id: str
    username: str
    display_name: Optional[str] = None
    email: Optional[str] = None
    is_active: Optional[bool] = None
# [/DEF:SupersetAccountCandidate:Class]


# [DEF:SupersetAccountLookupResponse:Class]
# @TIER: STANDARD
# @PURPOSE: Response envelope for Superset account lookup (success or degraded mode).
class SupersetAccountLookupResponse(BaseModel):
    status: Literal["success", "degraded"]
    environment_id: str
    page_index: int = Field(ge=0)
    page_size: int = Field(ge=1, le=100)
    total: int = Field(ge=0)
    warning: Optional[str] = None
    items: List[SupersetAccountCandidate] = Field(default_factory=list)
# [/DEF:SupersetAccountLookupResponse:Class]

# [/DEF:backend.src.schemas.profile:Module]
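`ProfilePreference` deliberately exposes `has_git_personal_access_token` and `git_personal_access_token_masked` instead of the token itself, honoring the module invariant that the token never leaves the backend in plaintext. The masking rule is not part of this diff; a hedged sketch of one common convention (keep only the last four characters; the helper name is hypothetical) is:

```python
from typing import Optional

def mask_token(token: Optional[str]) -> Optional[str]:
    # Hypothetical masking rule: the backend's actual format may differ.
    # Short tokens are fully masked so nothing useful leaks.
    if not token:
        return None
    visible = token[-4:] if len(token) > 4 else ""
    return "****" + visible

assert mask_token("ghp_abcd1234") == "****1234"
assert mask_token(None) is None
```

Whatever the concrete rule, the schema-level split (boolean flag plus masked string) is what guarantees the UI can render token state without ever receiving the secret.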
444  backend/src/scripts/clean_release_cli.py  (new file)
@@ -0,0 +1,444 @@
# [DEF:backend.src.scripts.clean_release_cli:Module]
# @TIER: STANDARD
# @SEMANTICS: cli, clean-release, candidate, artifacts, manifest
# @PURPOSE: Provide headless CLI commands for candidate registration, artifact import and manifest build.
# @LAYER: Scripts

from __future__ import annotations

import argparse
import json
from datetime import date, datetime, timezone
from typing import Any, Dict, List, Optional

from ..models.clean_release import CandidateArtifact, ReleaseCandidate
from ..services.clean_release.approval_service import approve_candidate, reject_candidate
from ..services.clean_release.compliance_execution_service import ComplianceExecutionService
from ..services.clean_release.enums import CandidateStatus
from ..services.clean_release.publication_service import publish_candidate, revoke_publication


# [DEF:build_parser:Function]
# @PURPOSE: Build argparse parser for clean release CLI.
def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="clean-release-cli")
    subparsers = parser.add_subparsers(dest="command", required=True)

    register = subparsers.add_parser("candidate-register")
    register.add_argument("--candidate-id", required=True)
    register.add_argument("--version", required=True)
    register.add_argument("--source-snapshot-ref", required=True)
    register.add_argument("--created-by", default="cli-operator")

    artifact_import = subparsers.add_parser("artifact-import")
    artifact_import.add_argument("--candidate-id", required=True)
    artifact_import.add_argument("--artifact-id", required=True)
    artifact_import.add_argument("--path", required=True)
    artifact_import.add_argument("--sha256", required=True)
    artifact_import.add_argument("--size", type=int, required=True)

    manifest_build = subparsers.add_parser("manifest-build")
    manifest_build.add_argument("--candidate-id", required=True)
    manifest_build.add_argument("--created-by", default="cli-operator")

    compliance_run = subparsers.add_parser("compliance-run")
    compliance_run.add_argument("--candidate-id", required=True)
    compliance_run.add_argument("--manifest-id", required=False, default=None)
    compliance_run.add_argument("--actor", default="cli-operator")
    compliance_run.add_argument("--json", action="store_true")

    compliance_status = subparsers.add_parser("compliance-status")
    compliance_status.add_argument("--run-id", required=True)
    compliance_status.add_argument("--json", action="store_true")

    compliance_report = subparsers.add_parser("compliance-report")
    compliance_report.add_argument("--run-id", required=True)
    compliance_report.add_argument("--json", action="store_true")

    compliance_violations = subparsers.add_parser("compliance-violations")
    compliance_violations.add_argument("--run-id", required=True)
    compliance_violations.add_argument("--json", action="store_true")

    approve = subparsers.add_parser("approve")
    approve.add_argument("--candidate-id", required=True)
    approve.add_argument("--report-id", required=True)
    approve.add_argument("--actor", default="cli-operator")
    approve.add_argument("--comment", required=False, default=None)
    approve.add_argument("--json", action="store_true")

    reject = subparsers.add_parser("reject")
    reject.add_argument("--candidate-id", required=True)
    reject.add_argument("--report-id", required=True)
    reject.add_argument("--actor", default="cli-operator")
    reject.add_argument("--comment", required=False, default=None)
    reject.add_argument("--json", action="store_true")

    publish = subparsers.add_parser("publish")
    publish.add_argument("--candidate-id", required=True)
    publish.add_argument("--report-id", required=True)
    publish.add_argument("--actor", default="cli-operator")
    publish.add_argument("--target-channel", required=True)
    publish.add_argument("--publication-ref", required=False, default=None)
    publish.add_argument("--json", action="store_true")

    revoke = subparsers.add_parser("revoke")
    revoke.add_argument("--publication-id", required=True)
    revoke.add_argument("--actor", default="cli-operator")
    revoke.add_argument("--comment", required=False, default=None)
    revoke.add_argument("--json", action="store_true")

    return parser
# [/DEF:build_parser:Function]
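`build_parser` leans on `add_subparsers(dest="command", required=True)`: the chosen sub-command lands in `args.command`, a missing sub-command becomes a parse error, and dashed option names are exposed as underscored attributes. A self-contained miniature of the same pattern:

```python
import argparse

parser = argparse.ArgumentParser(prog="demo-cli")
# dest="command" records which sub-command was chosen;
# required=True rejects an invocation with no sub-command at all.
subparsers = parser.add_subparsers(dest="command", required=True)

register = subparsers.add_parser("candidate-register")
register.add_argument("--candidate-id", required=True)

args = parser.parse_args(["candidate-register", "--candidate-id", "rc-1"])
assert args.command == "candidate-register"
assert args.candidate_id == "rc-1"  # "--candidate-id" becomes attribute candidate_id
```

This is what lets `main` dispatch on `args.command` with plain string comparisons instead of tracking which parser matched.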
# [DEF:run_candidate_register:Function]
# @PURPOSE: Register candidate in repository via CLI command.
# @PRE: Candidate ID must be unique.
# @POST: Candidate is persisted in DRAFT status.
def run_candidate_register(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    existing = repository.get_candidate(args.candidate_id)
    if existing is not None:
        print(json.dumps({"status": "error", "message": "candidate already exists"}))
        return 1

    candidate = ReleaseCandidate(
        id=args.candidate_id,
        version=args.version,
        source_snapshot_ref=args.source_snapshot_ref,
        created_by=args.created_by,
        created_at=datetime.now(timezone.utc),
        status=CandidateStatus.DRAFT.value,
    )
    repository.save_candidate(candidate)
    print(json.dumps({"status": "ok", "candidate_id": candidate.id}))
    return 0
# [/DEF:run_candidate_register:Function]


# [DEF:run_artifact_import:Function]
# @PURPOSE: Import single artifact for existing candidate.
# @PRE: Candidate must exist.
# @POST: Artifact is persisted for candidate.
def run_artifact_import(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    candidate = repository.get_candidate(args.candidate_id)
    if candidate is None:
        print(json.dumps({"status": "error", "message": "candidate not found"}))
        return 1

    artifact = CandidateArtifact(
        id=args.artifact_id,
        candidate_id=args.candidate_id,
        path=args.path,
        sha256=args.sha256,
        size=args.size,
    )
    repository.save_artifact(artifact)

    if candidate.status == CandidateStatus.DRAFT.value:
        candidate.transition_to(CandidateStatus.PREPARED)
        repository.save_candidate(candidate)

    print(json.dumps({"status": "ok", "artifact_id": artifact.id}))
    return 0
# [/DEF:run_artifact_import:Function]


# [DEF:run_manifest_build:Function]
# @PURPOSE: Build immutable manifest snapshot for candidate.
# @PRE: Candidate must exist.
# @POST: New manifest version is persisted.
def run_manifest_build(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository
    from ..services.clean_release.manifest_service import build_manifest_snapshot

    repository = get_clean_release_repository()
    try:
        manifest = build_manifest_snapshot(
            repository=repository,
            candidate_id=args.candidate_id,
            created_by=args.created_by,
        )
    except ValueError as exc:
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 1

    print(json.dumps({"status": "ok", "manifest_id": manifest.id, "version": manifest.manifest_version}))
    return 0
# [/DEF:run_manifest_build:Function]
# [DEF:run_compliance_run:Function]
# @PURPOSE: Execute compliance run for candidate with optional manifest fallback.
# @PRE: Candidate exists and trusted snapshots are configured.
# @POST: Returns run payload and exit code 0 on success.
def run_compliance_run(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository, get_config_manager

    repository = get_clean_release_repository()
    config_manager = get_config_manager()
    service = ComplianceExecutionService(repository=repository, config_manager=config_manager)

    try:
        result = service.execute_run(
            candidate_id=args.candidate_id,
            requested_by=args.actor,
            manifest_id=args.manifest_id,
        )
    except Exception as exc:  # noqa: BLE001
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 2

    payload = {
        "status": "ok",
        "run_id": result.run.id,
        "candidate_id": result.run.candidate_id,
        "run_status": result.run.status,
        "final_status": result.run.final_status,
        "task_id": getattr(result.run, "task_id", None),
        "report_id": getattr(result.run, "report_id", None),
    }
    print(json.dumps(payload))
    return 0
# [/DEF:run_compliance_run:Function]


# [DEF:run_compliance_status:Function]
# @PURPOSE: Read run status by run id.
# @PRE: Run exists.
# @POST: Returns run status payload.
def run_compliance_status(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    run = repository.get_check_run(args.run_id)
    if run is None:
        print(json.dumps({"status": "error", "message": "run not found"}))
        return 2

    report = next((item for item in repository.reports.values() if item.run_id == run.id), None)
    payload = {
        "status": "ok",
        "run_id": run.id,
        "candidate_id": run.candidate_id,
        "run_status": run.status,
        "final_status": run.final_status,
        "task_id": getattr(run, "task_id", None),
        "report_id": getattr(run, "report_id", None) or (report.id if report else None),
    }
    print(json.dumps(payload))
    return 0
# [/DEF:run_compliance_status:Function]
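Both status functions read optional fields with `getattr(run, "task_id", None)` rather than direct attribute access, so the JSON payload stays stable even when an older run object predates the task-tracking columns. The pattern in isolation:

```python
class LegacyRun:
    # Stand-in for an older run shape without task-tracking fields;
    # illustrative only, not the project's real model.
    def __init__(self, run_id: str):
        self.id = run_id

run = LegacyRun("run-1")
payload = {
    "run_id": run.id,
    # getattr with a default keeps the payload shape stable across model variants.
    "task_id": getattr(run, "task_id", None),
}
assert payload == {"run_id": "run-1", "task_id": None}
```

Direct access (`run.task_id`) would instead raise `AttributeError` and turn a schema drift into a crashed CLI call.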
# [DEF:_to_payload:Function]
# @PURPOSE: Serialize domain models for CLI JSON output across SQLAlchemy/Pydantic variants.
# @PRE: value is serializable model or primitive object.
# @POST: Returns dictionary payload without mutating value.
def _to_payload(value: Any) -> Dict[str, Any]:
    def _normalize(raw: Any) -> Any:
        if isinstance(raw, datetime):
            return raw.isoformat()
        if isinstance(raw, date):
            return raw.isoformat()
        if isinstance(raw, dict):
            return {str(key): _normalize(item) for key, item in raw.items()}
        if isinstance(raw, list):
            return [_normalize(item) for item in raw]
        if isinstance(raw, tuple):
            return [_normalize(item) for item in raw]
        return raw

    if hasattr(value, "model_dump"):
        return _normalize(value.model_dump())
    table = getattr(value, "__table__", None)
    if table is not None:
        row = {column.name: getattr(value, column.name) for column in table.columns}
        return _normalize(row)
    raise TypeError(f"unsupported payload type: {type(value)!r}")
# [/DEF:_to_payload:Function]
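The inner `_normalize` exists because `json.dumps` cannot serialize `datetime`/`date` objects directly, and SQLAlchemy rows carry them in almost every column. A stripped-down, runnable version of the same recursive normalization:

```python
import json
from datetime import date, datetime

def normalize(raw):
    # Recursively convert dates to ISO-8601 strings so json.dumps accepts the payload.
    if isinstance(raw, (datetime, date)):
        return raw.isoformat()
    if isinstance(raw, dict):
        return {str(k): normalize(v) for k, v in raw.items()}
    if isinstance(raw, (list, tuple)):
        return [normalize(v) for v in raw]
    return raw

row = {"id": "r-1", "generated_at": datetime(2025, 12, 19, 12, 0)}
encoded = json.dumps(normalize(row))
assert '"2025-12-19T12:00:00"' in encoded
```

An alternative is `json.dumps(..., default=str)`, but an explicit normalizer keeps the output format deterministic across model variants, which matters for a CLI whose JSON is consumed by scripts.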
# [DEF:run_compliance_report:Function]
# @PURPOSE: Read immutable report by run id.
# @PRE: Run and report exist.
# @POST: Returns report payload.
def run_compliance_report(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    run = repository.get_check_run(args.run_id)
    if run is None:
        print(json.dumps({"status": "error", "message": "run not found"}))
        return 2

    report = next((item for item in repository.reports.values() if item.run_id == run.id), None)
    if report is None:
        print(json.dumps({"status": "error", "message": "report not found"}))
        return 2

    print(json.dumps({"status": "ok", "report": _to_payload(report)}))
    return 0
# [/DEF:run_compliance_report:Function]


# [DEF:run_compliance_violations:Function]
# @PURPOSE: Read run violations by run id.
# @PRE: Run exists.
# @POST: Returns violations payload.
def run_compliance_violations(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    run = repository.get_check_run(args.run_id)
    if run is None:
        print(json.dumps({"status": "error", "message": "run not found"}))
        return 2

    violations = repository.get_violations_by_run(args.run_id)
    print(json.dumps({"status": "ok", "items": [_to_payload(item) for item in violations]}))
    return 0
# [/DEF:run_compliance_violations:Function]
# [DEF:run_approve:Function]
# @PURPOSE: Approve candidate based on immutable PASSED report.
# @PRE: Candidate and report exist; report is PASSED.
# @POST: Persists APPROVED decision and returns success payload.
def run_approve(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    try:
        decision = approve_candidate(
            repository=repository,
            candidate_id=args.candidate_id,
            report_id=args.report_id,
            decided_by=args.actor,
            comment=args.comment,
        )
    except Exception as exc:  # noqa: BLE001
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 2

    print(json.dumps({"status": "ok", "decision": decision.decision, "decision_id": decision.id}))
    return 0
# [/DEF:run_approve:Function]


# [DEF:run_reject:Function]
# @PURPOSE: Reject candidate without mutating compliance evidence.
# @PRE: Candidate and report exist.
# @POST: Persists REJECTED decision and returns success payload.
def run_reject(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    try:
        decision = reject_candidate(
            repository=repository,
            candidate_id=args.candidate_id,
            report_id=args.report_id,
            decided_by=args.actor,
            comment=args.comment,
        )
    except Exception as exc:  # noqa: BLE001
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 2

    print(json.dumps({"status": "ok", "decision": decision.decision, "decision_id": decision.id}))
    return 0
# [/DEF:run_reject:Function]
# [DEF:run_publish:Function]
# @PURPOSE: Publish approved candidate to target channel.
# @PRE: Candidate is approved and report belongs to candidate.
# @POST: Appends ACTIVE publication record and returns payload.
def run_publish(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    try:
        publication = publish_candidate(
            repository=repository,
            candidate_id=args.candidate_id,
            report_id=args.report_id,
            published_by=args.actor,
            target_channel=args.target_channel,
            publication_ref=args.publication_ref,
        )
    except Exception as exc:  # noqa: BLE001
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 2

    print(json.dumps({"status": "ok", "publication": _to_payload(publication)}))
    return 0
# [/DEF:run_publish:Function]


# [DEF:run_revoke:Function]
# @PURPOSE: Revoke active publication record.
# @PRE: Publication id exists and is ACTIVE.
# @POST: Publication record status becomes REVOKED.
def run_revoke(args: argparse.Namespace) -> int:
    from ..dependencies import get_clean_release_repository

    repository = get_clean_release_repository()
    try:
        publication = revoke_publication(
            repository=repository,
            publication_id=args.publication_id,
            revoked_by=args.actor,
            comment=args.comment,
        )
    except Exception as exc:  # noqa: BLE001
        print(json.dumps({"status": "error", "message": str(exc)}))
        return 2

    print(json.dumps({"status": "ok", "publication": _to_payload(publication)}))
    return 0
# [/DEF:run_revoke:Function]
# [DEF:main:Function]
# @PURPOSE: CLI entrypoint for clean release commands.
def main(argv: Optional[List[str]] = None) -> int:
    parser = build_parser()
    args = parser.parse_args(argv)

    if args.command == "candidate-register":
        return run_candidate_register(args)
    if args.command == "artifact-import":
        return run_artifact_import(args)
    if args.command == "manifest-build":
        return run_manifest_build(args)
    if args.command == "compliance-run":
        return run_compliance_run(args)
    if args.command == "compliance-status":
        return run_compliance_status(args)
    if args.command == "compliance-report":
        return run_compliance_report(args)
    if args.command == "compliance-violations":
        return run_compliance_violations(args)
    if args.command == "approve":
        return run_approve(args)
    if args.command == "reject":
        return run_reject(args)
    if args.command == "publish":
        return run_publish(args)
    if args.command == "revoke":
        return run_revoke(args)

    print(json.dumps({"status": "error", "message": "unknown command"}))
    return 2
# [/DEF:main:Function]


if __name__ == "__main__":
    raise SystemExit(main())

# [/DEF:backend.src.scripts.clean_release_cli:Module]
589  backend/src/scripts/clean_release_tui.py  (new file)
@@ -0,0 +1,589 @@
|
||||
# [DEF:backend.src.scripts.clean_release_tui:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, tui, ncurses, interactive-validator
# @PURPOSE: Interactive terminal interface for Enterprise Clean Release compliance validation.
# @LAYER: UI
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.compliance_orchestrator
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @INVARIANT: TUI refuses startup in non-TTY environments; headless flow is CLI/API only.

import curses
import json
import os
import sys
from datetime import datetime, timezone
from types import SimpleNamespace
from typing import List, Optional, Any, Dict

# Standardize sys.path for direct execution from project root or scripts dir
SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
PROJECT_ROOT = os.path.abspath(os.path.join(SCRIPT_DIR, "..", "..", ".."))
if PROJECT_ROOT not in sys.path:
    sys.path.insert(0, PROJECT_ROOT)

from backend.src.models.clean_release import (
    CandidateArtifact,
    CheckFinalStatus,
    CheckStageName,
    CheckStageStatus,
    CleanProfilePolicy,
    ComplianceViolation,
    ProfileType,
    ReleaseCandidate,
    ResourceSourceEntry,
    ResourceSourceRegistry,
    RegistryStatus,
    ReleaseCandidateStatus,
)
from backend.src.services.clean_release.approval_service import approve_candidate
from backend.src.services.clean_release.compliance_execution_service import ComplianceExecutionService
from backend.src.services.clean_release.enums import CandidateStatus
from backend.src.services.clean_release.manifest_service import build_manifest_snapshot
from backend.src.services.clean_release.publication_service import publish_candidate
from backend.src.services.clean_release.repository import CleanReleaseRepository

# [DEF:TuiFacadeAdapter:Class]
# @PURPOSE: Thin TUI adapter that routes business mutations through application services.
# @PRE: repository contains candidate and trusted policy/registry snapshots for execution.
# @POST: Business actions return service results/errors without direct TUI-owned mutations.
class TuiFacadeAdapter:
    def __init__(self, repository: CleanReleaseRepository):
        self.repository = repository

    def _build_config_manager(self):
        policy = self.repository.get_active_policy()
        if policy is None:
            raise ValueError("Active policy not found")
        clean_release = SimpleNamespace(
            active_policy_id=policy.id,
            active_registry_id=policy.registry_snapshot_id,
        )
        settings = SimpleNamespace(clean_release=clean_release)
        config = SimpleNamespace(settings=settings)
        return SimpleNamespace(get_config=lambda: config)

    def run_compliance(self, *, candidate_id: str, actor: str):
        manifests = self.repository.get_manifests_by_candidate(candidate_id)
        if not manifests:
            raise ValueError("Manifest required before compliance run")
        latest_manifest = sorted(manifests, key=lambda item: item.manifest_version, reverse=True)[0]
        service = ComplianceExecutionService(
            repository=self.repository,
            config_manager=self._build_config_manager(),
        )
        return service.execute_run(candidate_id=candidate_id, requested_by=actor, manifest_id=latest_manifest.id)

    def approve_latest(self, *, candidate_id: str, actor: str):
        reports = [item for item in self.repository.reports.values() if item.candidate_id == candidate_id]
        if not reports:
            raise ValueError("No compliance report available for approval")
        report = sorted(reports, key=lambda item: item.generated_at, reverse=True)[0]
        return approve_candidate(
            repository=self.repository,
            candidate_id=candidate_id,
            report_id=report.id,
            decided_by=actor,
            comment="Approved from TUI",
        )

    def publish_latest(self, *, candidate_id: str, actor: str):
        reports = [item for item in self.repository.reports.values() if item.candidate_id == candidate_id]
        if not reports:
            raise ValueError("No compliance report available for publication")
        report = sorted(reports, key=lambda item: item.generated_at, reverse=True)[0]
        return publish_candidate(
            repository=self.repository,
            candidate_id=candidate_id,
            report_id=report.id,
            published_by=actor,
            target_channel="stable",
            publication_ref=None,
        )

    def build_manifest(self, *, candidate_id: str, actor: str):
        return build_manifest_snapshot(
            repository=self.repository,
            candidate_id=candidate_id,
            created_by=actor,
        )

    def get_overview(self, *, candidate_id: str) -> Dict[str, Any]:
        candidate = self.repository.get_candidate(candidate_id)
        manifests = self.repository.get_manifests_by_candidate(candidate_id)
        latest_manifest = sorted(manifests, key=lambda item: item.manifest_version, reverse=True)[0] if manifests else None
        runs = [item for item in self.repository.check_runs.values() if item.candidate_id == candidate_id]
        latest_run = sorted(runs, key=lambda item: item.requested_at, reverse=True)[0] if runs else None
        latest_report = next((item for item in self.repository.reports.values() if latest_run and item.run_id == latest_run.id), None)
        approvals = getattr(self.repository, "approval_decisions", [])
        latest_approval = sorted(
            [item for item in approvals if item.candidate_id == candidate_id],
            key=lambda item: item.decided_at,
            reverse=True,
        )[0] if any(item.candidate_id == candidate_id for item in approvals) else None
        publications = getattr(self.repository, "publication_records", [])
        latest_publication = sorted(
            [item for item in publications if item.candidate_id == candidate_id],
            key=lambda item: item.published_at,
            reverse=True,
        )[0] if any(item.candidate_id == candidate_id for item in publications) else None
        policy = self.repository.get_active_policy()
        registry = self.repository.get_registry(policy.internal_source_registry_ref) if policy else None
        return {
            "candidate": candidate,
            "manifest": latest_manifest,
            "run": latest_run,
            "report": latest_report,
            "approval": latest_approval,
            "publication": latest_publication,
            "policy": policy,
            "registry": registry,
        }
# [/DEF:TuiFacadeAdapter:Class]

# [DEF:CleanReleaseTUI:Class]
# @PURPOSE: Curses-based application for compliance monitoring.
# @UX_STATE: READY -> Waiting for operator to start checks (F5).
# @UX_STATE: RUNNING -> Executing compliance stages with progress feedback.
# @UX_STATE: COMPLIANT -> Release candidate passed all checks.
# @UX_STATE: BLOCKED -> Violations detected, release forbidden.
# @UX_FEEDBACK: Red alerts for BLOCKED status, Green for COMPLIANT.
class CleanReleaseTUI:
    def __init__(self, stdscr: curses.window):
        self.stdscr = stdscr
        self.mode = os.getenv("CLEAN_TUI_MODE", "demo").strip().lower()
        self.repo = self._build_repository(self.mode)
        self.facade = TuiFacadeAdapter(self.repo)
        self.candidate_id = self._resolve_candidate_id()
        self.status: Any = "READY"
        self.checks_progress: List[Dict[str, Any]] = []
        self.violations_list: List[ComplianceViolation] = []
        self.report_id: Optional[str] = None
        self.last_error: Optional[str] = None
        self.overview: Dict[str, Any] = {}
        self.refresh_overview()

        curses.start_color()
        curses.use_default_colors()
        curses.init_pair(1, curses.COLOR_WHITE, curses.COLOR_BLUE)  # Header/Footer
        curses.init_pair(2, curses.COLOR_GREEN, -1)   # PASS
        curses.init_pair(3, curses.COLOR_RED, -1)     # FAIL/BLOCKED
        curses.init_pair(4, curses.COLOR_YELLOW, -1)  # RUNNING
        curses.init_pair(5, curses.COLOR_CYAN, -1)    # Text

    def _build_repository(self, mode: str) -> CleanReleaseRepository:
        repo = CleanReleaseRepository()
        if mode == "demo":
            self._bootstrap_demo_repository(repo)
        else:
            self._bootstrap_real_repository(repo)
        return repo

    def _bootstrap_demo_repository(self, repository: CleanReleaseRepository) -> None:
        now = datetime.now(timezone.utc)
        policy = CleanProfilePolicy(
            policy_id="POL-ENT-CLEAN",
            policy_version="1",
            profile=ProfileType.ENTERPRISE_CLEAN,
            active=True,
            internal_source_registry_ref="REG-1",
            prohibited_artifact_categories=["test-data"],
            effective_from=now,
        )
        setattr(policy, "immutable", True)
        repository.save_policy(policy)

        registry = ResourceSourceRegistry(
            registry_id="REG-1",
            name="Default Internal Registry",
            entries=[
                ResourceSourceEntry(
                    source_id="S1",
                    host="internal-repo.company.com",
                    protocol="https",
                    purpose="artifactory",
                )
            ],
            updated_at=now,
            updated_by="system",
        )
        setattr(registry, "immutable", True)
        setattr(registry, "allowed_hosts", ["internal-repo.company.com"])
        setattr(registry, "allowed_schemes", ["https"])
        setattr(registry, "allowed_source_types", ["artifactory"])
        repository.save_registry(registry)
        candidate = ReleaseCandidate(
            id="2026.03.03-rc1",
            version="1.0.0",
            source_snapshot_ref="v1.0.0-rc1",
            created_at=now,
            created_by="system",
            status=CandidateStatus.DRAFT.value,
        )
        candidate.transition_to(CandidateStatus.PREPARED)
        repository.save_candidate(candidate)
        repository.save_artifact(
            CandidateArtifact(
                id="demo-art-1",
                candidate_id=candidate.id,
                path="src/main.py",
                sha256="sha256-demo-core",
                size=128,
                detected_category="core",
            )
        )
        repository.save_artifact(
            CandidateArtifact(
                id="demo-art-2",
                candidate_id=candidate.id,
                path="test/data.csv",
                sha256="sha256-demo-test",
                size=64,
                detected_category="test-data",
            )
        )
        manifest = build_manifest_snapshot(
            repository=repository,
            candidate_id=candidate.id,
            created_by="system",
            policy_id="POL-ENT-CLEAN",
        )
        summary = dict(manifest.content_json.get("summary", {}))
        summary["prohibited_detected_count"] = 1
        manifest.content_json["summary"] = summary

    def _bootstrap_real_repository(self, repository: CleanReleaseRepository) -> None:
        bootstrap_path = os.getenv("CLEAN_TUI_BOOTSTRAP_JSON", "").strip()
        if not bootstrap_path:
            return

        with open(bootstrap_path, "r", encoding="utf-8") as bootstrap_file:
            payload = json.load(bootstrap_file)

        now = datetime.now(timezone.utc)
        candidate = ReleaseCandidate(
            id=payload.get("candidate_id", "candidate-1"),
            version=payload.get("version", "1.0.0"),
            source_snapshot_ref=payload.get("source_snapshot_ref", "snapshot-ref"),
            created_at=now,
            created_by=payload.get("created_by", "operator"),
            status=ReleaseCandidateStatus.DRAFT,
        )
        repository.save_candidate(candidate)

        registry_id = payload.get("registry_id", "REG-1")
        entries = [
            ResourceSourceEntry(
                source_id=f"S-{index + 1}",
                host=host,
                protocol="https",
                purpose="bootstrap",
                enabled=True,
            )
            for index, host in enumerate(payload.get("allowed_hosts", []))
            if str(host).strip()
        ]
        if entries:
            repository.save_registry(
                ResourceSourceRegistry(
                    registry_id=registry_id,
                    name=payload.get("registry_name", "Bootstrap Internal Registry"),
                    entries=entries,
                    updated_at=now,
                    updated_by=payload.get("created_by", "operator"),
                    status=RegistryStatus.ACTIVE,
                )
            )

        if entries:
            repository.save_policy(
                CleanProfilePolicy(
                    policy_id=payload.get("policy_id", "POL-ENT-CLEAN"),
                    policy_version=payload.get("policy_version", "1"),
                    profile=ProfileType.ENTERPRISE_CLEAN,
                    active=True,
                    internal_source_registry_ref=registry_id,
                    prohibited_artifact_categories=payload.get(
                        "prohibited_artifact_categories",
                        ["test-data", "demo", "load-test"],
                    ),
                    required_system_categories=payload.get("required_system_categories", ["core"]),
                    effective_from=now,
                )
            )

    def _resolve_candidate_id(self) -> str:
        env_candidate = os.getenv("CLEAN_TUI_CANDIDATE_ID", "").strip()
        if env_candidate:
            return env_candidate

        candidate_ids = list(self.repo.candidates.keys())
        if candidate_ids:
            return candidate_ids[0]
        return ""

    def draw_header(self, max_y: int, max_x: int):
        header_text = " Enterprise Clean Release Validator (TUI) "
        self.stdscr.attron(curses.color_pair(1) | curses.A_BOLD)
        # Avoid slicing if possible to satisfy Pyre, or use explicit int
        centered = header_text.center(max_x)
        self.stdscr.addstr(0, 0, centered[:max_x])
        self.stdscr.attroff(curses.color_pair(1) | curses.A_BOLD)

        candidate = self.overview.get("candidate")
        candidate_text = self.candidate_id or "not-set"
        profile_text = "enterprise-clean"
        lifecycle = getattr(candidate, "status", "UNKNOWN")
        info_line_text = (
            f" │ Candidate: [{candidate_text}] Profile: [{profile_text}] "
            f"Lifecycle: [{lifecycle}] Mode: [{self.mode}]"
        ).ljust(max_x)
        self.stdscr.addstr(2, 0, info_line_text[:max_x])

    def draw_checks(self):
        self.stdscr.addstr(4, 3, "Checks:")
        check_defs = [
            (CheckStageName.DATA_PURITY, "Data Purity (no test/demo payloads)"),
            (CheckStageName.INTERNAL_SOURCES_ONLY, "Internal Sources Only (company servers)"),
            (CheckStageName.NO_EXTERNAL_ENDPOINTS, "No External Internet Endpoints"),
            (CheckStageName.MANIFEST_CONSISTENCY, "Release Manifest Consistency"),
        ]

        row = 5
        drawn_checks = {c["stage"]: c for c in self.checks_progress}

        for stage, desc in check_defs:
            status_text = " "
            color = curses.color_pair(5)

            if stage in drawn_checks:
                c = drawn_checks[stage]
                if c["status"] == "RUNNING":
                    status_text = "..."
                    color = curses.color_pair(4)
                elif c["status"] == CheckStageStatus.PASS:
                    status_text = "PASS"
                    color = curses.color_pair(2)
                elif c["status"] == CheckStageStatus.FAIL:
                    status_text = "FAIL"
                    color = curses.color_pair(3)

            self.stdscr.addstr(row, 4, f"[{status_text:^4}] {desc}")
            if status_text != " ":
                self.stdscr.addstr(row, 50, f"{status_text:>10}", color | curses.A_BOLD)
            row += 1

    def draw_sources(self):
        self.stdscr.addstr(12, 3, "Allowed Internal Sources:", curses.A_BOLD)
        reg = self.overview.get("registry")
        row = 13
        if reg:
            for entry in reg.entries:
                self.stdscr.addstr(row, 3, f" - {entry.host}")
                row += 1
        else:
            self.stdscr.addstr(row, 3, " - (none)")

    def draw_status(self):
        color = curses.color_pair(5)
        if self.status == CheckFinalStatus.COMPLIANT:
            color = curses.color_pair(2)
        elif self.status == CheckFinalStatus.BLOCKED:
            color = curses.color_pair(3)

        stat_str = str(self.status.value if hasattr(self.status, "value") else self.status)
        self.stdscr.addstr(18, 3, f"FINAL STATUS: {stat_str.upper()}", color | curses.A_BOLD)

        if self.report_id:
            self.stdscr.addstr(19, 3, f"Report ID: {self.report_id}")

        approval = self.overview.get("approval")
        publication = self.overview.get("publication")
        if approval:
            self.stdscr.addstr(20, 3, f"Approval: {approval.decision}")
        if publication:
            self.stdscr.addstr(20, 32, f"Publication: {publication.status}")

        if self.violations_list:
            self.stdscr.addstr(21, 3, f"Violations Details ({len(self.violations_list)} total):", curses.color_pair(3) | curses.A_BOLD)
            row = 22
            for i, v in enumerate(self.violations_list[:5]):
                v_cat = str(getattr(v, "code", "VIOLATION"))
                msg = str(getattr(v, "message", "Violation detected"))
                location = str(
                    getattr(v, "artifact_path", "")
                    or getattr(getattr(v, "evidence_json", {}), "get", lambda *_: "")("location", "")
                )
                msg_text = f"[{v_cat}] {msg} (Loc: {location})"
                self.stdscr.addstr(row + i, 5, msg_text[:70], curses.color_pair(3))
        if self.last_error:
            self.stdscr.addstr(27, 3, f"Error: {self.last_error}"[:100], curses.color_pair(3) | curses.A_BOLD)

    def draw_footer(self, max_y: int, max_x: int):
        footer_text = " F5 Run F6 Manifest F7 Refresh F8 Approve F9 Publish F10 Exit ".center(max_x)
        self.stdscr.attron(curses.color_pair(1))
        self.stdscr.addstr(max_y - 1, 0, footer_text[:max_x])
        self.stdscr.attroff(curses.color_pair(1))

    # [DEF:run_checks:Function]
    # @PURPOSE: Execute compliance run via facade adapter and update UI state.
    # @PRE: Candidate and policy snapshots are present in repository.
    # @POST: UI reflects final run/report/violation state from service result.
    def run_checks(self):
        self.status = "RUNNING"
        self.report_id = None
        self.violations_list = []
        self.checks_progress = []
        self.last_error = None
        self.refresh_screen()

        try:
            result = self.facade.run_compliance(candidate_id=self.candidate_id, actor="operator")
        except Exception as exc:  # noqa: BLE001
            self.status = CheckFinalStatus.FAILED
            self.last_error = str(exc)
            self.refresh_screen()
            return

        self.checks_progress = [
            {
                "stage": stage.stage_name,
                "status": CheckStageStatus.PASS if str(stage.decision).upper() == "PASSED" else CheckStageStatus.FAIL,
            }
            for stage in result.stage_runs
        ]
        self.violations_list = result.violations
        self.report_id = result.report.id if result.report is not None else None

        final_status = str(result.run.final_status or "").upper()
        if final_status in {"BLOCKED", CheckFinalStatus.BLOCKED.value}:
            self.status = CheckFinalStatus.BLOCKED
        elif final_status in {"COMPLIANT", "PASSED", CheckFinalStatus.COMPLIANT.value}:
            self.status = CheckFinalStatus.COMPLIANT
        else:
            self.status = CheckFinalStatus.FAILED
        self.refresh_overview()
        self.refresh_screen()

    def build_manifest(self):
        try:
            manifest = self.facade.build_manifest(candidate_id=self.candidate_id, actor="operator")
            self.status = "READY"
            self.report_id = None
            self.violations_list = []
            self.checks_progress = []
            self.last_error = f"Manifest built: {manifest.id}"
        except Exception as exc:  # noqa: BLE001
            self.last_error = str(exc)
        self.refresh_overview()
        self.refresh_screen()

    def clear_history(self):
        self.status = "READY"
        self.report_id = None
        self.violations_list = []
        self.checks_progress = []
        self.last_error = None
        self.refresh_overview()
        self.refresh_screen()

    def approve_latest(self):
        if not self.report_id:
            self.last_error = "F8 disabled: no compliance report available"
            self.refresh_screen()
            return
        try:
            self.facade.approve_latest(candidate_id=self.candidate_id, actor="operator")
            self.last_error = None
        except Exception as exc:  # noqa: BLE001
            self.last_error = str(exc)
        self.refresh_overview()
        self.refresh_screen()

    def publish_latest(self):
        if not self.report_id:
            self.last_error = "F9 disabled: no compliance report available"
            self.refresh_screen()
            return
        try:
            self.facade.publish_latest(candidate_id=self.candidate_id, actor="operator")
            self.last_error = None
        except Exception as exc:  # noqa: BLE001
            self.last_error = str(exc)
        self.refresh_overview()
        self.refresh_screen()

    def refresh_overview(self):
        if not self.candidate_id:
            self.overview = {}
            return
        self.overview = self.facade.get_overview(candidate_id=self.candidate_id)

    def refresh_screen(self):
        max_y, max_x = self.stdscr.getmaxyx()
        self.stdscr.clear()
        try:
            self.draw_header(max_y, max_x)
            self.draw_checks()
            self.draw_sources()
            self.draw_status()
            self.draw_footer(max_y, max_x)
        except Exception:
            pass
        self.stdscr.refresh()

    def loop(self):
        self.refresh_screen()
        while True:
            char = self.stdscr.getch()
            if char == curses.KEY_F10:
                break
            elif char == curses.KEY_F5:
                self.run_checks()
            elif char == curses.KEY_F6:
                self.build_manifest()
            elif char == curses.KEY_F7:
                self.clear_history()
            elif char == curses.KEY_F8:
                self.approve_latest()
            elif char == curses.KEY_F9:
                self.publish_latest()
# [/DEF:CleanReleaseTUI:Class]


def tui_main(stdscr: curses.window):
    curses.curs_set(0)  # Hide cursor
    app = CleanReleaseTUI(stdscr)
    app.loop()


def main() -> int:
    # TUI requires interactive terminal; headless mode must use CLI/API flow.
    if not sys.stdout.isatty():
        print(
            "TTY is required for TUI mode. Use CLI/API workflow instead.",
            file=sys.stderr,
        )
        return 2
    try:
        curses.wrapper(tui_main)
        return 0
    except Exception as e:
        print(f"Error starting TUI: {e}", file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(main())
# [/DEF:backend.src.scripts.clean_release_tui:Module]
@@ -9,8 +9,8 @@
 "last_name": "Admin"
 },
 "changed_by_name": "Superset Admin",
-"changed_on": "2026-02-10T13:39:35.945662",
-"changed_on_delta_humanized": "16 days ago",
+"changed_on": "2026-02-24T19:24:01.850617",
+"changed_on_delta_humanized": "7 days ago",
 "charts": [
 "TA-0001-001 test_chart"
 ],
@@ -19,12 +19,12 @@
 "id": 1,
 "last_name": "Admin"
 },
-"created_on_delta_humanized": "16 days ago",
+"created_on_delta_humanized": "13 days ago",
 "css": null,
 "dashboard_title": "TA-0001 Test dashboard",
 "id": 13,
 "is_managed_externally": false,
-"json_metadata": "{\"color_scheme_domain\": [], \"shared_label_colors\": [], \"map_label_colors\": {}, \"label_colors\": {}}",
+"json_metadata": "{\"color_scheme_domain\": [], \"shared_label_colors\": [], \"map_label_colors\": {}, \"label_colors\": {}, \"native_filter_configuration\": []}",
 "owners": [
 {
 "first_name": "Superset",
@@ -32,13 +32,13 @@
 "last_name": "Admin"
 }
 ],
-"position_json": null,
+"position_json": "{\"DASHBOARD_VERSION_KEY\": \"v2\", \"ROOT_ID\": {\"children\": [\"GRID_ID\"], \"id\": \"ROOT_ID\", \"type\": \"ROOT\"}, \"GRID_ID\": {\"children\": [\"ROW-N-LH8TG1XX\"], \"id\": \"GRID_ID\", \"parents\": [\"ROOT_ID\"], \"type\": \"GRID\"}, \"HEADER_ID\": {\"id\": \"HEADER_ID\", \"meta\": {\"text\": \"TA-0001 Test dashboard\"}, \"type\": \"HEADER\"}, \"ROW-N-LH8TG1XX\": {\"children\": [\"CHART-1EKC8H7C\"], \"id\": \"ROW-N-LH8TG1XX\", \"meta\": {\"0\": \"ROOT_ID\", \"background\": \"BACKGROUND_TRANSPARENT\"}, \"type\": \"ROW\", \"parents\": [\"ROOT_ID\", \"GRID_ID\"]}, \"CHART-1EKC8H7C\": {\"children\": [], \"id\": \"CHART-1EKC8H7C\", \"meta\": {\"chartId\": 162, \"height\": 50, \"sliceName\": \"TA-0001-001 test_chart\", \"uuid\": \"008cdaa7-21b3-4042-9f55-f15653609ebd\", \"width\": 4}, \"type\": \"CHART\", \"parents\": [\"ROOT_ID\", \"GRID_ID\", \"ROW-N-LH8TG1XX\"]}}",
 "published": true,
 "roles": [],
 "slug": null,
 "tags": [],
 "theme": null,
-"thumbnail_url": "/api/v1/dashboard/13/thumbnail/3cfc57e6aea7188b139f94fb437a1426/",
+"thumbnail_url": "/api/v1/dashboard/13/thumbnail/97dfd5d8d24f7cf01de45671c9a0699d/",
 "url": "/superset/dashboard/13/",
 "uuid": "124b28d4-d54a-4ade-ade7-2d0473b90686"
 }
@@ -53,15 +53,15 @@
 "first_name": "Superset",
 "last_name": "Admin"
 },
-"changed_on": "2026-02-10T13:38:26.175551",
-"changed_on_humanized": "16 days ago",
+"changed_on": "2026-02-18T14:56:04.863722",
+"changed_on_humanized": "13 days ago",
 "column_formats": {},
 "columns": [
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158196",
-"column_name": "color",
-"created_on": "2026-02-10T13:38:26.158189",
+"changed_on": "2026-02-18T14:56:05.382289",
+"column_name": "has_2fa",
+"created_on": "2026-02-18T14:56:05.382138",
 "description": null,
 "expression": null,
 "extra": null,
@@ -71,16 +71,16 @@
 "is_active": true,
 "is_dttm": false,
 "python_date_format": null,
-"type": "STRING",
-"type_generic": 1,
-"uuid": "4fa810ee-99cc-4d1f-8c0d-0f289c3b01f4",
+"type": "BOOLEAN",
+"type_generic": 3,
+"uuid": "fe374f2a-9e06-4708-89fd-c3926e3e5faa",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158249",
-"column_name": "deleted",
-"created_on": "2026-02-10T13:38:26.158245",
+"changed_on": "2026-02-18T14:56:05.545701",
+"column_name": "is_ultra_restricted",
+"created_on": "2026-02-18T14:56:05.545465",
 "description": null,
 "expression": null,
 "extra": null,
@@ -92,14 +92,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "ebc07e82-7250-4eef-8d13-ea61561fa52c",
+"uuid": "eac7ecce-d472-4933-9652-d4f2811074fd",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158289",
-"column_name": "has_2fa",
-"created_on": "2026-02-10T13:38:26.158285",
+"changed_on": "2026-02-18T14:56:05.683578",
+"column_name": "is_primary_owner",
+"created_on": "2026-02-18T14:56:05.683257",
 "description": null,
 "expression": null,
 "extra": null,
@@ -111,14 +111,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "08e72f4d-3ced-4d9a-9f7d-2f85291ce88b",
+"uuid": "94a15acd-ef98-425b-8f0d-1ce038ca95c5",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158328",
-"column_name": "id",
-"created_on": "2026-02-10T13:38:26.158324",
+"changed_on": "2026-02-18T14:56:05.758231",
+"column_name": "is_app_user",
+"created_on": "2026-02-18T14:56:05.758142",
 "description": null,
 "expression": null,
 "extra": null,
@@ -128,16 +128,16 @@
 "is_active": true,
 "is_dttm": false,
 "python_date_format": null,
-"type": "STRING",
-"type_generic": 1,
-"uuid": "fd11955c-0130-4ea1-b3c0-d8b159971789",
+"type": "BOOLEAN",
+"type_generic": 3,
+"uuid": "d3fcd712-dc96-4bba-a026-aa82022eccf5",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158366",
+"changed_on": "2026-02-18T14:56:05.799597",
 "column_name": "is_admin",
-"created_on": "2026-02-10T13:38:26.158362",
+"created_on": "2026-02-18T14:56:05.799519",
 "description": null,
 "expression": null,
 "extra": null,
@@ -149,14 +149,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "13a6c8e1-c9f8-4f08-aa62-05bca7be547b",
+"uuid": "5a1c9de5-80f1-4fe8-a91b-e6e530688aae",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158404",
-"column_name": "is_app_user",
-"created_on": "2026-02-10T13:38:26.158400",
+"changed_on": "2026-02-18T14:56:05.819443",
+"column_name": "is_bot",
+"created_on": "2026-02-18T14:56:05.819382",
 "description": null,
 "expression": null,
 "extra": null,
@@ -168,14 +168,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "6321ba8a-28d7-4d68-a6b3-5cef6cd681a2",
+"uuid": "6c93e5de-e0d7-430c-88d7-87158905d60a",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158442",
-"column_name": "is_bot",
-"created_on": "2026-02-10T13:38:26.158438",
+"changed_on": "2026-02-18T14:56:05.827568",
+"column_name": "is_restricted",
+"created_on": "2026-02-18T14:56:05.827556",
 "description": null,
 "expression": null,
 "extra": null,
@@ -187,14 +187,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "f3ded50e-b1a2-4a88-b805-781d5923e062",
+"uuid": "2e8e6d32-0124-4e3a-a53f-6f200f852439",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158480",
+"changed_on": "2026-02-18T14:56:05.835380",
 "column_name": "is_owner",
-"created_on": "2026-02-10T13:38:26.158477",
+"created_on": "2026-02-18T14:56:05.835366",
 "description": null,
 "expression": null,
 "extra": null,
@@ -206,14 +206,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "8a1408eb-050d-4455-878c-22342df5da3d",
+"uuid": "510d651b-a595-4261-98e4-278af0a06594",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158532",
-"column_name": "is_primary_owner",
-"created_on": "2026-02-10T13:38:26.158528",
+"changed_on": "2026-02-18T14:56:05.843802",
+"column_name": "deleted",
+"created_on": "2026-02-18T14:56:05.843784",
 "description": null,
 "expression": null,
 "extra": null,
@@ -225,14 +225,14 @@
 "python_date_format": null,
 "type": "BOOLEAN",
 "type_generic": 3,
-"uuid": "054b8c16-82fd-480c-82e0-a0975229673a",
+"uuid": "2653fd2f-c0ce-484e-a5df-d2515b1e822d",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158583",
-"column_name": "is_restricted",
-"created_on": "2026-02-10T13:38:26.158579",
+"changed_on": "2026-02-18T14:56:05.851074",
+"column_name": "updated",
+"created_on": "2026-02-18T14:56:05.851063",
 "description": null,
 "expression": null,
 "extra": null,
@@ -240,18 +240,18 @@
 "groupby": true,
 "id": 781,
 "is_active": true,
-"is_dttm": false,
+"is_dttm": true,
 "python_date_format": null,
-"type": "BOOLEAN",
-"type_generic": 3,
-"uuid": "6932c25f-0273-4595-85c1-29422a801ded",
+"type": "DATETIME",
+"type_generic": 2,
+"uuid": "1b1f90c8-2567-49b8-9398-e7246396461e",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158621",
-"column_name": "is_ultra_restricted",
-"created_on": "2026-02-10T13:38:26.158618",
+"changed_on": "2026-02-18T14:56:05.857578",
+"column_name": "tz_offset",
+"created_on": "2026-02-18T14:56:05.857571",
 "description": null,
 "expression": null,
 "extra": null,
@@ -261,16 +261,16 @@
 "is_active": true,
 "is_dttm": false,
 "python_date_format": null,
-"type": "BOOLEAN",
-"type_generic": 3,
-"uuid": "9b14e5f9-3ab4-498e-b1e3-bbf49e9d61fe",
+"type": "LONGINTEGER",
+"type_generic": 0,
+"uuid": "e6d19b74-7f5d-447b-8071-951961dc2295",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158660",
-"column_name": "name",
-"created_on": "2026-02-10T13:38:26.158656",
+"changed_on": "2026-02-18T14:56:05.863101",
+"column_name": "channel_name",
+"created_on": "2026-02-18T14:56:05.863094",
 "description": null,
 "expression": null,
 "extra": null,
@@ -282,14 +282,14 @@
 "python_date_format": null,
 "type": "STRING",
 "type_generic": 1,
-"uuid": "ebee8249-0e10-4157-8a8e-96ae107887a3",
+"uuid": "e1f34628-ebc1-4e0c-8eea-54c3c9efba1b",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158697",
+"changed_on": "2026-02-18T14:56:05.877136",
 "column_name": "real_name",
-"created_on": "2026-02-10T13:38:26.158694",
+"created_on": "2026-02-18T14:56:05.877083",
 "description": null,
 "expression": null,
 "extra": null,
@@ -301,14 +301,14 @@
 "python_date_format": null,
 "type": "STRING",
 "type_generic": 1,
-"uuid": "553517a0-fe05-4ff5-a4eb-e9d2165d6f64",
+"uuid": "6cc5ab57-9431-428a-a331-0a5b10e4b074",
 "verbose_name": null
 },
 {
 "advanced_data_type": null,
-"changed_on": "2026-02-10T13:38:26.158735",
|
||||
"column_name": "team_id",
|
||||
"created_on": "2026-02-10T13:38:26.158731",
|
||||
"changed_on": "2026-02-18T14:56:05.893859",
|
||||
"column_name": "tz_label",
|
||||
"created_on": "2026-02-18T14:56:05.893834",
|
||||
"description": null,
|
||||
"expression": null,
|
||||
"extra": null,
|
||||
@@ -320,14 +320,14 @@
|
||||
"python_date_format": null,
|
||||
"type": "STRING",
|
||||
"type_generic": 1,
|
||||
"uuid": "6c207fac-424d-465c-b80a-306b42b55ce8",
|
||||
"uuid": "8e6dbd8e-b880-4517-a5f6-64e429bd1bea",
|
||||
"verbose_name": null
|
||||
},
|
||||
{
|
||||
"advanced_data_type": null,
|
||||
"changed_on": "2026-02-10T13:38:26.158773",
|
||||
"column_name": "tz",
|
||||
"created_on": "2026-02-10T13:38:26.158769",
|
||||
"changed_on": "2026-02-18T14:56:05.902363",
|
||||
"column_name": "team_id",
|
||||
"created_on": "2026-02-18T14:56:05.902352",
|
||||
"description": null,
|
||||
"expression": null,
|
||||
"extra": null,
|
||||
@@ -339,14 +339,14 @@
|
||||
"python_date_format": null,
|
||||
"type": "STRING",
|
||||
"type_generic": 1,
|
||||
"uuid": "6efcc042-0b78-4362-9373-2f684077d574",
|
||||
"uuid": "ba8e225d-221b-4275-aadb-e79557756f89",
|
||||
"verbose_name": null
|
||||
},
|
||||
{
|
||||
"advanced_data_type": null,
|
||||
"changed_on": "2026-02-10T13:38:26.158824",
|
||||
"column_name": "tz_label",
|
||||
"created_on": "2026-02-10T13:38:26.158820",
|
||||
"changed_on": "2026-02-18T14:56:05.910169",
|
||||
"column_name": "name",
|
||||
"created_on": "2026-02-18T14:56:05.910151",
|
||||
"description": null,
|
||||
"expression": null,
|
||||
"extra": null,
|
||||
@@ -358,14 +358,14 @@
|
||||
"python_date_format": null,
|
||||
"type": "STRING",
|
||||
"type_generic": 1,
|
||||
"uuid": "c6a6ac40-5c60-472d-a878-4b65b8460ccc",
|
||||
"uuid": "02a7a026-d9f3-49e9-9586-534ebccdd867",
|
||||
"verbose_name": null
|
||||
},
|
||||
{
|
||||
"advanced_data_type": null,
|
||||
"changed_on": "2026-02-10T13:38:26.158861",
|
||||
"column_name": "tz_offset",
|
||||
"created_on": "2026-02-10T13:38:26.158857",
|
||||
"changed_on": "2026-02-18T14:56:05.915366",
|
||||
"column_name": "color",
|
||||
"created_on": "2026-02-18T14:56:05.915357",
|
||||
"description": null,
|
||||
"expression": null,
|
||||
"extra": null,
|
||||
@@ -375,16 +375,16 @@
|
||||
"is_active": true,
|
||||
"is_dttm": false,
|
||||
"python_date_format": null,
|
||||
"type": "LONGINTEGER",
|
||||
"type_generic": 0,
|
||||
"uuid": "cf6da93a-bba9-47df-9154-6cfd0c9922fc",
|
||||
"type": "STRING",
|
||||
"type_generic": 1,
|
||||
"uuid": "0702fcdf-2d03-45db-8496-697d47b300d6",
|
||||
"verbose_name": null
|
||||
},
|
||||
{
|
||||
"advanced_data_type": null,
|
||||
"changed_on": "2026-02-10T13:38:26.158913",
|
||||
"column_name": "updated",
|
||||
"created_on": "2026-02-10T13:38:26.158909",
|
||||
"changed_on": "2026-02-18T14:56:05.919466",
|
||||
"column_name": "id",
|
||||
"created_on": "2026-02-18T14:56:05.919460",
|
||||
"description": null,
|
||||
"expression": null,
|
||||
"extra": null,
|
||||
@@ -392,18 +392,18 @@
|
||||
"groupby": true,
|
||||
"id": 789,
|
||||
"is_active": true,
|
||||
"is_dttm": true,
|
||||
"is_dttm": false,
|
||||
"python_date_format": null,
|
||||
"type": "DATETIME",
|
||||
"type_generic": 2,
|
||||
"uuid": "2aa0a72a-5602-4799-b5ab-f22000108d62",
|
||||
"type": "STRING",
|
||||
"type_generic": 1,
|
||||
"uuid": "a4b58528-fcbf-45e9-af39-fe9d737ba380",
|
||||
"verbose_name": null
|
||||
},
|
||||
{
|
||||
"advanced_data_type": null,
|
||||
"changed_on": "2026-02-10T13:38:26.158967",
|
||||
"column_name": "channel_name",
|
||||
"created_on": "2026-02-10T13:38:26.158963",
|
||||
"changed_on": "2026-02-18T14:56:05.932553",
|
||||
"column_name": "tz",
|
||||
"created_on": "2026-02-18T14:56:05.932530",
|
||||
"description": null,
|
||||
"expression": null,
|
||||
"extra": null,
|
||||
@@ -415,7 +415,7 @@
|
||||
"python_date_format": null,
|
||||
"type": "STRING",
|
||||
"type_generic": 1,
|
||||
"uuid": "a84bd658-c83c-4e7f-9e1b-192595092d9b",
|
||||
"uuid": "bc872357-1920-42f3-aeda-b596122bcdb8",
|
||||
"verbose_name": null
|
||||
}
|
||||
],
|
||||
@@ -423,8 +423,8 @@
|
||||
"first_name": "Superset",
|
||||
"last_name": "Admin"
|
||||
},
|
||||
"created_on": "2026-02-10T13:38:26.050436",
|
||||
"created_on_humanized": "16 days ago",
|
||||
"created_on": "2026-02-18T14:56:04.317950",
|
||||
"created_on_humanized": "13 days ago",
|
||||
"database": {
|
||||
"allow_multi_catalog": false,
|
||||
"backend": "postgresql",
|
||||
@@ -452,8 +452,8 @@
|
||||
"main_dttm_col": "updated",
|
||||
"metrics": [
|
||||
{
|
||||
"changed_on": "2026-02-10T13:38:26.182269",
|
||||
"created_on": "2026-02-10T13:38:26.182264",
|
||||
"changed_on": "2026-02-18T14:56:05.085244",
|
||||
"created_on": "2026-02-18T14:56:05.085166",
|
||||
"currency": null,
|
||||
"d3format": null,
|
||||
"description": null,
|
||||
@@ -462,7 +462,7 @@
|
||||
"id": 33,
|
||||
"metric_name": "count",
|
||||
"metric_type": "count",
|
||||
"uuid": "7510f8ca-05ee-4a37-bec1-4a5d7bf2ac50",
|
||||
"uuid": "10c8b8cf-b697-4512-9e9e-2996721f829e",
|
||||
"verbose_name": "COUNT(*)",
|
||||
"warning_text": null
|
||||
}
|
||||
|
||||
@@ -45,6 +45,7 @@ INITIAL_PERMISSIONS = [
    {"resource": "plugin:storage", "action": "READ"},
    {"resource": "plugin:storage", "action": "WRITE"},
    {"resource": "plugin:debug", "action": "EXECUTE"},
    {"resource": "git_config", "action": "READ"},
]
# [/DEF:INITIAL_PERMISSIONS:Constant]

@@ -93,6 +94,7 @@ def seed_permissions():
        ("plugins", "READ"),
        ("tasks", "READ"),
        ("tasks", "WRITE"),
        ("git_config", "READ"),
    ]

    for res, act in user_permissions:

@@ -100,7 +100,10 @@ def test_dashboard_dataset_relations():
        logger.info(f"   Found {len(dashboards)} dashboards using this dataset:")

        for dash in dashboards:
            logger.info(f"     - Dashboard ID {dash.get('id')}: {dash.get('dashboard_title', dash.get('title', 'Unknown'))}")
            if isinstance(dash, dict):
                logger.info(f"     - Dashboard ID {dash.get('id')}: {dash.get('dashboard_title', dash.get('title', 'Unknown'))}")
            else:
                logger.info(f"     - Dashboard: {dash}")
    elif 'result' in related_objects:
        # Some Superset versions use 'result' wrapper
        result = related_objects['result']

81
backend/src/services/__tests__/test_llm_provider.py
Normal file
@@ -0,0 +1,81 @@
# [DEF:__tests__/test_llm_provider:Module]
# @RELATION: VERIFIES -> ../llm_provider.py
# @PURPOSE: Contract testing for LLMProviderService and EncryptionManager
# [/DEF:__tests__/test_llm_provider:Module]

import pytest
import os
from unittest.mock import MagicMock
from sqlalchemy.orm import Session
from src.services.llm_provider import EncryptionManager, LLMProviderService
from src.models.llm import LLMProvider
from src.plugins.llm_analysis.models import LLMProviderConfig, ProviderType

# @TEST_CONTRACT: EncryptionManagerModel -> Invariants
# @TEST_INVARIANT: symmetric_encryption
def test_encryption_cycle():
    """Verify encrypted data can be decrypted back to original string."""
    manager = EncryptionManager()
    original = "secret_api_key_123"
    encrypted = manager.encrypt(original)
    assert encrypted != original
    assert manager.decrypt(encrypted) == original

# @TEST_EDGE: empty_string_encryption
def test_empty_string_encryption():
    manager = EncryptionManager()
    original = ""
    encrypted = manager.encrypt(original)
    assert manager.decrypt(encrypted) == ""

# @TEST_EDGE: decrypt_invalid_data
def test_decrypt_invalid_data():
    manager = EncryptionManager()
    with pytest.raises(Exception):
        manager.decrypt("not-encrypted-string")

# @TEST_FIXTURE: mock_db_session
@pytest.fixture
def mock_db():
    return MagicMock(spec=Session)

@pytest.fixture
def service(mock_db):
    return LLMProviderService(db=mock_db)

def test_get_all_providers(service, mock_db):
    service.get_all_providers()
    mock_db.query.assert_called()
    mock_db.query().all.assert_called()

def test_create_provider(service, mock_db):
    config = LLMProviderConfig(
        provider_type=ProviderType.OPENAI,
        name="Test OpenAI",
        base_url="https://api.openai.com",
        api_key="sk-test",
        default_model="gpt-4",
        is_active=True
    )

    provider = service.create_provider(config)

    mock_db.add.assert_called()
    mock_db.commit.assert_called()
    # Verify API key was encrypted
    assert provider.api_key != "sk-test"
    # Decrypt to verify it matches
    assert EncryptionManager().decrypt(provider.api_key) == "sk-test"

def test_get_decrypted_api_key(service, mock_db):
    # Setup mock provider
    encrypted_key = EncryptionManager().encrypt("secret-value")
    mock_provider = LLMProvider(id="p1", api_key=encrypted_key)
    mock_db.query().filter().first.return_value = mock_provider

    key = service.get_decrypted_api_key("p1")
    assert key == "secret-value"

def test_get_decrypted_api_key_not_found(service, mock_db):
    mock_db.query().filter().first.return_value = None
    assert service.get_decrypted_api_key("missing") is None
140
backend/src/services/__tests__/test_rbac_permission_catalog.py
Normal file
@@ -0,0 +1,140 @@
# [DEF:backend.src.services.__tests__.test_rbac_permission_catalog:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, rbac, permissions, catalog, discovery, sync
# @PURPOSE: Verifies RBAC permission catalog discovery and idempotent synchronization behavior.
# @LAYER: Service Tests
# @RELATION: TESTS -> backend.src.services.rbac_permission_catalog
# @INVARIANT: Synchronization adds only missing normalized permission pairs.

# [SECTION: IMPORTS]
from types import SimpleNamespace
from unittest.mock import MagicMock

import src.services.rbac_permission_catalog as catalog
# [/SECTION: IMPORTS]


# [DEF:test_discover_route_permissions_extracts_declared_pairs_and_ignores_tests:Function]
# @PURPOSE: Ensures route-scanner extracts has_permission pairs from route files and skips __tests__.
# @PRE: Temporary route directory contains route and test files.
# @POST: Returned set includes production route permissions and excludes test-only declarations.
def test_discover_route_permissions_extracts_declared_pairs_and_ignores_tests(tmp_path, monkeypatch):
    routes_dir = tmp_path / "routes"
    routes_dir.mkdir(parents=True, exist_ok=True)

    (routes_dir / "dashboards.py").write_text(
        '\n'.join(
            [
                '_ = Depends(has_permission("plugin:migration", "READ"))',
                '_ = Depends(has_permission("plugin:migration", "EXECUTE"))',
                '_ = Depends(has_permission("tasks", "WRITE"))',
            ]
        ),
        encoding="utf-8",
    )

    tests_dir = routes_dir / "__tests__"
    tests_dir.mkdir(parents=True, exist_ok=True)
    (tests_dir / "test_fake.py").write_text(
        '_ = Depends(has_permission("plugin:ignored", "READ"))',
        encoding="utf-8",
    )

    monkeypatch.setattr(catalog, "ROUTES_DIR", routes_dir)

    discovered = catalog._discover_route_permissions()

    assert ("plugin:migration", "READ") in discovered
    assert ("plugin:migration", "EXECUTE") in discovered
    assert ("tasks", "WRITE") in discovered
    assert ("plugin:ignored", "READ") not in discovered
# [/DEF:test_discover_route_permissions_extracts_declared_pairs_and_ignores_tests:Function]


# [DEF:test_discover_declared_permissions_unions_route_and_plugin_permissions:Function]
# @PURPOSE: Ensures full catalog includes route-level permissions plus dynamic plugin EXECUTE rights.
# @PRE: Route discovery and plugin loader both return permission sources.
# @POST: Result set contains union of both sources.
def test_discover_declared_permissions_unions_route_and_plugin_permissions(monkeypatch):
    monkeypatch.setattr(
        catalog,
        "_discover_route_permissions",
        lambda: {("tasks", "READ"), ("plugin:migration", "READ")},
    )

    plugin_loader = MagicMock()
    plugin_loader.get_all_plugin_configs.return_value = [
        SimpleNamespace(id="superset-backup"),
        SimpleNamespace(id="llm_dashboard_validation"),
    ]

    discovered = catalog.discover_declared_permissions(plugin_loader=plugin_loader)

    assert ("tasks", "READ") in discovered
    assert ("plugin:migration", "READ") in discovered
    assert ("plugin:superset-backup", "EXECUTE") in discovered
    assert ("plugin:llm_dashboard_validation", "EXECUTE") in discovered
# [/DEF:test_discover_declared_permissions_unions_route_and_plugin_permissions:Function]


# [DEF:test_sync_permission_catalog_inserts_only_missing_normalized_pairs:Function]
# @PURPOSE: Ensures synchronization inserts only missing pairs and normalizes action/resource tokens.
# @PRE: DB already contains subset of permissions.
# @POST: Only missing normalized pairs are inserted and commit is executed once.
def test_sync_permission_catalog_inserts_only_missing_normalized_pairs():
    db = MagicMock()
    db.query.return_value.all.return_value = [
        SimpleNamespace(resource="tasks", action="READ"),
        SimpleNamespace(resource="plugin:migration", action="EXECUTE"),
    ]

    declared_permissions = {
        ("tasks", "read"),
        ("plugin:migration", "execute"),
        ("plugin:migration", "READ"),
        ("", "WRITE"),
        ("plugin:migration", ""),
    }

    inserted_count = catalog.sync_permission_catalog(
        db=db,
        declared_permissions=declared_permissions,
    )

    assert inserted_count == 1
    assert db.add.call_count == 1
    inserted_permission = db.add.call_args[0][0]
    assert inserted_permission.resource == "plugin:migration"
    assert inserted_permission.action == "READ"
    db.commit.assert_called_once()
# [/DEF:test_sync_permission_catalog_inserts_only_missing_normalized_pairs:Function]


# [DEF:test_sync_permission_catalog_is_noop_when_all_permissions_exist:Function]
# @PURPOSE: Ensures synchronization is idempotent when all declared pairs already exist.
# @PRE: DB contains full declared permission set.
# @POST: No inserts are added and commit is not called.
def test_sync_permission_catalog_is_noop_when_all_permissions_exist():
    db = MagicMock()
    db.query.return_value.all.return_value = [
        SimpleNamespace(resource="tasks", action="READ"),
        SimpleNamespace(resource="plugin:migration", action="READ"),
    ]

    declared_permissions = {
        ("tasks", "READ"),
        ("plugin:migration", "READ"),
    }

    inserted_count = catalog.sync_permission_catalog(
        db=db,
        declared_permissions=declared_permissions,
    )

    assert inserted_count == 0
    db.add.assert_not_called()
    db.commit.assert_not_called()
# [/DEF:test_sync_permission_catalog_is_noop_when_all_permissions_exist:Function]


# [/DEF:backend.src.services.__tests__.test_rbac_permission_catalog:Module]
@@ -9,7 +9,7 @@

import pytest
from unittest.mock import MagicMock, patch, AsyncMock
from datetime import datetime
from datetime import datetime, timezone


# [DEF:test_get_dashboards_with_status:Function]
@@ -269,4 +269,71 @@ def test_get_last_task_for_resource_no_match():
# [/DEF:test_get_last_task_for_resource_no_match:Function]


# [DEF:test_get_dashboards_with_status_handles_mixed_naive_and_aware_task_datetimes:Function]
# @TEST: get_dashboards_with_status handles mixed naive/aware datetimes without comparison errors.
# @PRE: Task list includes both timezone-aware and timezone-naive timestamps.
# @POST: Latest task is selected deterministically and no exception is raised.
@pytest.mark.asyncio
async def test_get_dashboards_with_status_handles_mixed_naive_and_aware_task_datetimes():
    with patch("src.services.resource_service.SupersetClient") as mock_client, \
         patch("src.services.resource_service.GitService"):

        from src.services.resource_service import ResourceService

        service = ResourceService()
        mock_client.return_value.get_dashboards_summary.return_value = [
            {"id": 1, "title": "Dashboard 1", "slug": "dash-1"}
        ]

        task_naive = MagicMock()
        task_naive.id = "task-naive"
        task_naive.plugin_id = "llm_dashboard_validation"
        task_naive.status = "SUCCESS"
        task_naive.params = {"dashboard_id": "1", "environment_id": "prod"}
        task_naive.started_at = datetime(2024, 1, 1, 10, 0, 0)

        task_aware = MagicMock()
        task_aware.id = "task-aware"
        task_aware.plugin_id = "llm_dashboard_validation"
        task_aware.status = "SUCCESS"
        task_aware.params = {"dashboard_id": "1", "environment_id": "prod"}
        task_aware.started_at = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)

        env = MagicMock()
        env.id = "prod"

        result = await service.get_dashboards_with_status(env, [task_naive, task_aware])

        assert result[0]["last_task"]["task_id"] == "task-aware"
# [/DEF:test_get_dashboards_with_status_handles_mixed_naive_and_aware_task_datetimes:Function]


# [DEF:test_get_last_task_for_resource_handles_mixed_naive_and_aware_created_at:Function]
# @TEST: _get_last_task_for_resource handles mixed naive/aware created_at values.
# @PRE: Matching tasks include naive and aware created_at timestamps.
# @POST: Latest task is returned without raising datetime comparison errors.
def test_get_last_task_for_resource_handles_mixed_naive_and_aware_created_at():
    from src.services.resource_service import ResourceService

    service = ResourceService()

    task_naive = MagicMock()
    task_naive.id = "task-old"
    task_naive.status = "SUCCESS"
    task_naive.params = {"resource_id": "dashboard-1"}
    task_naive.created_at = datetime(2024, 1, 1, 10, 0, 0)

    task_aware = MagicMock()
    task_aware.id = "task-new"
    task_aware.status = "RUNNING"
    task_aware.params = {"resource_id": "dashboard-1"}
    task_aware.created_at = datetime(2024, 1, 1, 12, 0, 0, tzinfo=timezone.utc)

    result = service._get_last_task_for_resource("dashboard-1", [task_naive, task_aware])

    assert result is not None
    assert result["task_id"] == "task-new"
# [/DEF:test_get_last_task_for_resource_handles_mixed_naive_and_aware_created_at:Function]


# [/DEF:backend.src.services.__tests__.test_resource_service:Module]

16
backend/src/services/clean_release/__init__.py
Normal file
@@ -0,0 +1,16 @@
# [DEF:clean_release:Module]
# @TIER: STANDARD
# @PURPOSE: Redesigned clean release compliance subsystem.
# @LAYER: Domain

from ...core.logger import logger

# [REASON] Initializing clean_release package.
logger.reason("Clean release compliance subsystem initialized.")

# Legacy compatibility exports are intentionally lazy to avoid import cycles.
__all__ = [
    "logger",
]

# [/DEF:clean_release:Module]
@@ -0,0 +1,24 @@
# [DEF:backend.tests.services.clean_release.test_audit_service:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, clean-release, audit, logging
# @PURPOSE: Validate audit hooks emit expected log patterns for clean release lifecycle.
# @LAYER: Infra
# @RELATION: TESTS -> backend.src.services.clean_release.audit_service

from unittest.mock import patch
from src.services.clean_release.audit_service import audit_preparation, audit_check_run, audit_report

@patch("src.services.clean_release.audit_service.logger")
def test_audit_preparation(mock_logger):
    audit_preparation("cand-1", "PREPARED")
    mock_logger.info.assert_called_with("[REASON] clean-release preparation candidate=cand-1 status=PREPARED")

@patch("src.services.clean_release.audit_service.logger")
def test_audit_check_run(mock_logger):
    audit_check_run("check-1", "COMPLIANT")
    mock_logger.info.assert_called_with("[REFLECT] clean-release check_run=check-1 final_status=COMPLIANT")

@patch("src.services.clean_release.audit_service.logger")
def test_audit_report(mock_logger):
    audit_report("rep-1", "cand-1")
    mock_logger.info.assert_called_with("[EXPLORE] clean-release report_id=rep-1 candidate=cand-1")
@@ -0,0 +1,112 @@
# [DEF:backend.tests.services.clean_release.test_compliance_orchestrator:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, clean-release, orchestrator, stage-state-machine
# @PURPOSE: Validate compliance orchestrator stage transitions and final status derivation.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.clean_release.compliance_orchestrator
# @INVARIANT: Failed mandatory stage forces BLOCKED terminal status.

from unittest.mock import patch

import pytest

from src.models.clean_release import (
    CheckFinalStatus,
    CheckStageName,
    CheckStageResult,
    CheckStageStatus,
)
from src.services.clean_release.compliance_orchestrator import CleanComplianceOrchestrator
from src.services.clean_release.report_builder import ComplianceReportBuilder
from src.services.clean_release.repository import CleanReleaseRepository


# [DEF:test_orchestrator_stage_failure_blocks_release:Function]
# @PURPOSE: Verify mandatory stage failure forces BLOCKED final status.
def test_orchestrator_stage_failure_blocks_release():
    repository = CleanReleaseRepository()
    orchestrator = CleanComplianceOrchestrator(repository)

    run = orchestrator.start_check_run(
        candidate_id="2026.03.03-rc1",
        policy_id="policy-enterprise-clean-v1",
        triggered_by="tester",
        execution_mode="tui",
    )
    run = orchestrator.execute_stages(
        run,
        forced_results=[
            CheckStageResult(stage=CheckStageName.DATA_PURITY, status=CheckStageStatus.PASS, details="ok"),
            CheckStageResult(stage=CheckStageName.INTERNAL_SOURCES_ONLY, status=CheckStageStatus.PASS, details="ok"),
            CheckStageResult(stage=CheckStageName.NO_EXTERNAL_ENDPOINTS, status=CheckStageStatus.FAIL, details="external"),
            CheckStageResult(stage=CheckStageName.MANIFEST_CONSISTENCY, status=CheckStageStatus.PASS, details="ok"),
        ],
    )
    run = orchestrator.finalize_run(run)

    assert run.final_status == CheckFinalStatus.BLOCKED
# [/DEF:test_orchestrator_stage_failure_blocks_release:Function]


# [DEF:test_orchestrator_compliant_candidate:Function]
# @PURPOSE: Verify happy path where all mandatory stages pass yields COMPLIANT.
def test_orchestrator_compliant_candidate():
    repository = CleanReleaseRepository()
    orchestrator = CleanComplianceOrchestrator(repository)

    run = orchestrator.start_check_run(
        candidate_id="2026.03.03-rc1",
        policy_id="policy-enterprise-clean-v1",
        triggered_by="tester",
        execution_mode="tui",
    )
    run = orchestrator.execute_stages(
        run,
        forced_results=[
            CheckStageResult(stage=CheckStageName.DATA_PURITY, status=CheckStageStatus.PASS, details="ok"),
            CheckStageResult(stage=CheckStageName.INTERNAL_SOURCES_ONLY, status=CheckStageStatus.PASS, details="ok"),
            CheckStageResult(stage=CheckStageName.NO_EXTERNAL_ENDPOINTS, status=CheckStageStatus.PASS, details="ok"),
            CheckStageResult(stage=CheckStageName.MANIFEST_CONSISTENCY, status=CheckStageStatus.PASS, details="ok"),
        ],
    )
    run = orchestrator.finalize_run(run)

    assert run.final_status == CheckFinalStatus.COMPLIANT
# [/DEF:test_orchestrator_compliant_candidate:Function]


# [DEF:test_orchestrator_missing_stage_result:Function]
# @PURPOSE: Verify incomplete mandatory stage set cannot end as COMPLIANT and results in FAILED.
def test_orchestrator_missing_stage_result():
    repository = CleanReleaseRepository()
    orchestrator = CleanComplianceOrchestrator(repository)

    run = orchestrator.start_check_run("cand-1", "pol-1", "tester", "tui")
    run = orchestrator.execute_stages(
        run,
        forced_results=[CheckStageResult(stage=CheckStageName.DATA_PURITY, status=CheckStageStatus.PASS, details="ok")],
    )
    run = orchestrator.finalize_run(run)

    assert run.final_status == CheckFinalStatus.FAILED
# [/DEF:test_orchestrator_missing_stage_result:Function]


# [DEF:test_orchestrator_report_generation_error:Function]
# @PURPOSE: Verify downstream report errors do not mutate orchestrator final status.
def test_orchestrator_report_generation_error():
    repository = CleanReleaseRepository()
    orchestrator = CleanComplianceOrchestrator(repository)

    run = orchestrator.start_check_run("cand-1", "pol-1", "tester", "tui")
    run = orchestrator.finalize_run(run)
    assert run.final_status == CheckFinalStatus.FAILED

    with patch.object(ComplianceReportBuilder, "build_report_payload", side_effect=ValueError("Report error")):
        builder = ComplianceReportBuilder(repository)
        with pytest.raises(ValueError, match="Report error"):
            builder.build_report_payload(run, [])

    assert run.final_status == CheckFinalStatus.FAILED
# [/DEF:test_orchestrator_report_generation_error:Function]
# [/DEF:backend.tests.services.clean_release.test_compliance_orchestrator:Module]
@@ -0,0 +1,41 @@
# [DEF:backend.tests.services.clean_release.test_manifest_builder:Module]
# @TIER: CRITICAL
# @SEMANTICS: tests, clean-release, manifest, deterministic
# @PURPOSE: Validate deterministic manifest generation behavior for US1.
# @LAYER: Domain
# @RELATION: VERIFIES -> backend.src.services.clean_release.manifest_builder
# @INVARIANT: Same input artifacts produce identical deterministic hash.

from src.services.clean_release.manifest_builder import build_distribution_manifest


# [DEF:test_manifest_deterministic_hash_for_same_input:Function]
# @PURPOSE: Ensure hash is stable for same candidate/policy/artifact input.
# @PRE: Same input lists are passed twice.
# @POST: Hash and summary remain identical.
def test_manifest_deterministic_hash_for_same_input():
    artifacts = [
        {"path": "a.yaml", "category": "system-init", "classification": "required-system", "reason": "required"},
        {"path": "b.yaml", "category": "test-data", "classification": "excluded-prohibited", "reason": "prohibited"},
    ]

    manifest1 = build_distribution_manifest(
        manifest_id="m1",
        candidate_id="2026.03.03-rc1",
        policy_id="policy-enterprise-clean-v1",
        generated_by="tester",
        artifacts=artifacts,
    )
    manifest2 = build_distribution_manifest(
        manifest_id="m2",
        candidate_id="2026.03.03-rc1",
        policy_id="policy-enterprise-clean-v1",
        generated_by="tester",
        artifacts=artifacts,
    )

    assert manifest1.deterministic_hash == manifest2.deterministic_hash
    assert manifest1.summary.included_count == manifest2.summary.included_count
    assert manifest1.summary.excluded_count == manifest2.summary.excluded_count
# [/DEF:test_manifest_deterministic_hash_for_same_input:Function]
# [/DEF:backend.tests.services.clean_release.test_manifest_builder:Module]
@@ -0,0 +1,114 @@
# [DEF:__tests__/test_policy_engine:Module]
# @RELATION: VERIFIES -> ../policy_engine.py
# @PURPOSE: Contract testing for CleanPolicyEngine
# [/DEF:__tests__/test_policy_engine:Module]

import pytest
from datetime import datetime

from src.models.clean_release import (
    CleanProfilePolicy,
    ResourceSourceRegistry,
    ResourceSourceEntry,
    ProfileType,
    RegistryStatus,
)
from src.services.clean_release.policy_engine import CleanPolicyEngine


# @TEST_FIXTURE: policy_enterprise_clean
@pytest.fixture
def enterprise_clean_setup():
    policy = CleanProfilePolicy(
        policy_id="POL-1",
        policy_version="1",
        active=True,
        prohibited_artifact_categories=["demo", "test"],
        required_system_categories=["core"],
        internal_source_registry_ref="REG-1",
        effective_from=datetime.now(),
        profile=ProfileType.ENTERPRISE_CLEAN,
    )
    registry = ResourceSourceRegistry(
        registry_id="REG-1",
        name="Internal Registry",
        entries=[
            ResourceSourceEntry(source_id="S1", host="internal.com", protocol="https", purpose="p1", enabled=True)
        ],
        updated_at=datetime.now(),
        updated_by="admin",
        status=RegistryStatus.ACTIVE,
    )
    return policy, registry


# @TEST_SCENARIO: policy_valid
def test_policy_valid(enterprise_clean_setup):
    policy, registry = enterprise_clean_setup
    engine = CleanPolicyEngine(policy, registry)
    result = engine.validate_policy()
    assert result.ok is True
    assert not result.blocking_reasons


# @TEST_EDGE: missing_registry_ref
def test_missing_registry_ref(enterprise_clean_setup):
    policy, registry = enterprise_clean_setup
    policy.internal_source_registry_ref = " "
    engine = CleanPolicyEngine(policy, registry)
    result = engine.validate_policy()
    assert result.ok is False
    assert "Policy missing internal_source_registry_ref" in result.blocking_reasons


# @TEST_EDGE: conflicting_registry
def test_conflicting_registry(enterprise_clean_setup):
    policy, registry = enterprise_clean_setup
    registry.registry_id = "WRONG-REG"
    engine = CleanPolicyEngine(policy, registry)
    result = engine.validate_policy()
    assert result.ok is False
    assert "Policy registry ref does not match provided registry" in result.blocking_reasons


# @TEST_INVARIANT: deterministic_classification
def test_classify_artifact(enterprise_clean_setup):
    policy, registry = enterprise_clean_setup
    engine = CleanPolicyEngine(policy, registry)

    # Required
    assert engine.classify_artifact({"category": "core", "path": "p1"}) == "required-system"
    # Prohibited
    assert engine.classify_artifact({"category": "demo", "path": "p2"}) == "excluded-prohibited"
    # Allowed
    assert engine.classify_artifact({"category": "others", "path": "p3"}) == "allowed"


# @TEST_EDGE: external_endpoint
def test_validate_resource_source(enterprise_clean_setup):
    policy, registry = enterprise_clean_setup
    engine = CleanPolicyEngine(policy, registry)

    # Internal (OK)
    res_ok = engine.validate_resource_source("internal.com")
    assert res_ok.ok is True

    # External (Blocked)
    res_fail = engine.validate_resource_source("external.evil")
    assert res_fail.ok is False
    assert res_fail.violation["category"] == "external-source"
    assert res_fail.violation["blocked_release"] is True


def test_evaluate_candidate(enterprise_clean_setup):
    policy, registry = enterprise_clean_setup
    engine = CleanPolicyEngine(policy, registry)

    artifacts = [
        {"path": "core.js", "category": "core"},
        {"path": "demo.sql", "category": "demo"},
    ]
    sources = ["internal.com", "google.com"]

    classified, violations = engine.evaluate_candidate(artifacts, sources)

    assert len(classified) == 2
    assert classified[0]["classification"] == "required-system"
    assert classified[1]["classification"] == "excluded-prohibited"

    # 1 violation for demo artifact + 1 for google.com source
    assert len(violations) == 2
    assert violations[0]["category"] == "data-purity"
    assert violations[1]["category"] == "external-source"
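`test_classify_artifact` pins a three-way precedence: required system categories win, then prohibited categories, and everything else is allowed. A self-contained restatement of that rule (a standalone function for illustration, not the engine's actual method):

```python
def classify(category, required, prohibited):
    # Precedence implied by the tests: required-system is checked first,
    # then prohibited, and any other category falls through to allowed.
    if category in required:
        return "required-system"
    if category in prohibited:
        return "excluded-prohibited"
    return "allowed"

assert classify("core", {"core"}, {"demo", "test"}) == "required-system"
assert classify("demo", {"core"}, {"demo", "test"}) == "excluded-prohibited"
assert classify("others", {"core"}, {"demo", "test"}) == "allowed"
```

Checking required before prohibited means a category listed in both sets stays in the distribution, which is the safer reading of the @TEST_INVARIANT on deterministic classification.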
@@ -0,0 +1,127 @@
# [DEF:backend.tests.services.clean_release.test_preparation_service:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, clean-release, preparation, flow
# @PURPOSE: Validate release candidate preparation flow, including policy evaluation and manifest persisting.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.clean_release.preparation_service
# @INVARIANT: Candidate preparation always persists manifest and candidate status deterministically.

import pytest
from unittest.mock import MagicMock, patch
from datetime import datetime, timezone

from src.models.clean_release import (
    CleanProfilePolicy,
    ResourceSourceRegistry,
    ResourceSourceEntry,
    ReleaseCandidate,
    ReleaseCandidateStatus,
    ProfileType,
    DistributionManifest,
)
from src.services.clean_release.preparation_service import prepare_candidate


def _mock_policy() -> CleanProfilePolicy:
    return CleanProfilePolicy(
        policy_id="pol-1",
        policy_version="1.0.0",
        active=True,
        prohibited_artifact_categories=["prohibited"],
        required_system_categories=["system"],
        external_source_forbidden=True,
        internal_source_registry_ref="reg-1",
        effective_from=datetime.now(timezone.utc),
        profile=ProfileType.ENTERPRISE_CLEAN,
    )


def _mock_registry() -> ResourceSourceRegistry:
    return ResourceSourceRegistry(
        registry_id="reg-1",
        name="Reg",
        entries=[ResourceSourceEntry(source_id="s1", host="nexus.internal", protocol="https", purpose="pkg", enabled=True)],
        updated_at=datetime.now(timezone.utc),
        updated_by="tester",
    )


def _mock_candidate(candidate_id: str) -> ReleaseCandidate:
    return ReleaseCandidate(
        candidate_id=candidate_id,
        version="1.0.0",
        profile=ProfileType.ENTERPRISE_CLEAN,
        created_at=datetime.now(timezone.utc),
        status=ReleaseCandidateStatus.DRAFT,
        created_by="tester",
        source_snapshot_ref="v1.0.0-snapshot",
    )


def test_prepare_candidate_success():
    # Setup
    repository = MagicMock()
    candidate_id = "cand-1"
    candidate = _mock_candidate(candidate_id)
    repository.get_candidate.return_value = candidate
    repository.get_active_policy.return_value = _mock_policy()
    repository.get_registry.return_value = _mock_registry()

    artifacts = [{"path": "file1.txt", "category": "system"}]
    sources = ["nexus.internal"]

    # Execute
    with patch("src.services.clean_release.preparation_service.CleanPolicyEngine") as MockEngine:
        mock_engine_instance = MockEngine.return_value
        mock_engine_instance.validate_policy.return_value.ok = True
        mock_engine_instance.evaluate_candidate.return_value = (
            [{"path": "file1.txt", "category": "system", "classification": "required-system", "reason": "system-core"}],
            [],
        )

        result = prepare_candidate(repository, candidate_id, artifacts, sources, "operator-1")

    # Verify
    assert result["status"] == ReleaseCandidateStatus.PREPARED.value
    assert candidate.status == ReleaseCandidateStatus.PREPARED
    repository.save_manifest.assert_called_once()
    repository.save_candidate.assert_called_with(candidate)


def test_prepare_candidate_with_violations():
    # Setup
    repository = MagicMock()
    candidate_id = "cand-1"
    candidate = _mock_candidate(candidate_id)
    repository.get_candidate.return_value = candidate
    repository.get_active_policy.return_value = _mock_policy()
    repository.get_registry.return_value = _mock_registry()

    artifacts = [{"path": "bad.txt", "category": "prohibited"}]
    sources = []

    # Execute
    with patch("src.services.clean_release.preparation_service.CleanPolicyEngine") as MockEngine:
        mock_engine_instance = MockEngine.return_value
        mock_engine_instance.validate_policy.return_value.ok = True
        mock_engine_instance.evaluate_candidate.return_value = (
            [{"path": "bad.txt", "category": "prohibited", "classification": "excluded-prohibited", "reason": "test-data"}],
            [{"category": "data-purity", "blocked_release": True}],
        )

        result = prepare_candidate(repository, candidate_id, artifacts, sources, "operator-1")

    # Verify
    assert result["status"] == ReleaseCandidateStatus.BLOCKED.value
    assert candidate.status == ReleaseCandidateStatus.BLOCKED
    assert len(result["violations"]) == 1


def test_prepare_candidate_not_found():
    repository = MagicMock()
    repository.get_candidate.return_value = None

    with pytest.raises(ValueError, match="Candidate not found"):
        prepare_candidate(repository, "non-existent", [], [], "op")


def test_prepare_candidate_no_active_policy():
    repository = MagicMock()
    repository.get_candidate.return_value = _mock_candidate("cand-1")
    repository.get_active_policy.return_value = None

    with pytest.raises(ValueError, match="Active clean policy not found"):
        prepare_candidate(repository, "cand-1", [], [], "op")
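The success and violation scenarios above exercise one core rule: any policy violation flips the candidate to BLOCKED, otherwise it advances to PREPARED, and missing inputs fail fast with the exact messages the tests match on. A reduced sketch of that decision (hypothetical helper, not the real `prepare_candidate` signature):

```python
def decide_preparation(candidate, policy, violations):
    # Fail fast on missing inputs, mirroring the ValueError tests above.
    if candidate is None:
        raise ValueError("Candidate not found")
    if policy is None:
        raise ValueError("Active clean policy not found")
    # Any violation blocks the candidate; otherwise it is prepared.
    return {"status": "BLOCKED" if violations else "PREPARED",
            "violations": violations}

assert decide_preparation({"id": "cand-1"}, {"id": "pol-1"}, [])["status"] == "PREPARED"
```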
@@ -0,0 +1,112 @@
# [DEF:backend.tests.services.clean_release.test_report_builder:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, clean-release, report-builder, counters
# @PURPOSE: Validate compliance report builder counter integrity and blocked-run constraints.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.clean_release.report_builder
# @INVARIANT: A blocked run requires at least one blocking violation.

from datetime import datetime, timezone

import pytest

from src.models.clean_release import (
    CheckFinalStatus,
    ComplianceCheckRun,
    ComplianceViolation,
    ExecutionMode,
    ViolationCategory,
    ViolationSeverity,
)
from src.services.clean_release.report_builder import ComplianceReportBuilder
from src.services.clean_release.repository import CleanReleaseRepository


# [DEF:_terminal_run:Function]
# @PURPOSE: Build terminal/non-terminal run fixtures for report builder tests.
def _terminal_run(status: CheckFinalStatus) -> ComplianceCheckRun:
    return ComplianceCheckRun(
        check_run_id="check-1",
        candidate_id="2026.03.03-rc1",
        policy_id="policy-enterprise-clean-v1",
        started_at=datetime.now(timezone.utc),
        finished_at=datetime.now(timezone.utc),
        final_status=status,
        triggered_by="tester",
        execution_mode=ExecutionMode.TUI,
        checks=[],
    )
# [/DEF:_terminal_run:Function]


# [DEF:_blocking_violation:Function]
# @PURPOSE: Build a blocking violation fixture for blocked report scenarios.
def _blocking_violation() -> ComplianceViolation:
    return ComplianceViolation(
        violation_id="viol-1",
        check_run_id="check-1",
        category=ViolationCategory.EXTERNAL_SOURCE,
        severity=ViolationSeverity.CRITICAL,
        location="pypi.org",
        remediation="replace",
        blocked_release=True,
        detected_at=datetime.now(timezone.utc),
    )
# [/DEF:_blocking_violation:Function]


# [DEF:test_report_builder_blocked_requires_blocking_violations:Function]
# @PURPOSE: Verify a BLOCKED run requires at least one blocking violation.
def test_report_builder_blocked_requires_blocking_violations():
    builder = ComplianceReportBuilder(CleanReleaseRepository())
    run = _terminal_run(CheckFinalStatus.BLOCKED)

    with pytest.raises(ValueError):
        builder.build_report_payload(run, [])
# [/DEF:test_report_builder_blocked_requires_blocking_violations:Function]


# [DEF:test_report_builder_blocked_with_two_violations:Function]
# @PURPOSE: Verify the report builder generates a conformant payload for a BLOCKED run with violations.
def test_report_builder_blocked_with_two_violations():
    builder = ComplianceReportBuilder(CleanReleaseRepository())
    run = _terminal_run(CheckFinalStatus.BLOCKED)
    v1 = _blocking_violation()
    v2 = _blocking_violation()
    v2.violation_id = "viol-2"
    v2.category = ViolationCategory.DATA_PURITY

    report = builder.build_report_payload(run, [v1, v2])

    assert report.check_run_id == run.check_run_id
    assert report.candidate_id == run.candidate_id
    assert report.final_status == CheckFinalStatus.BLOCKED
    assert report.violations_count == 2
    assert report.blocking_violations_count == 2
# [/DEF:test_report_builder_blocked_with_two_violations:Function]


# [DEF:test_report_builder_counter_consistency:Function]
# @PURPOSE: Verify violation counters remain consistent for a blocking payload.
def test_report_builder_counter_consistency():
    builder = ComplianceReportBuilder(CleanReleaseRepository())
    run = _terminal_run(CheckFinalStatus.BLOCKED)
    report = builder.build_report_payload(run, [_blocking_violation()])

    assert report.violations_count == 1
    assert report.blocking_violations_count == 1
# [/DEF:test_report_builder_counter_consistency:Function]


# [DEF:test_missing_operator_summary:Function]
# @PURPOSE: Validate that a non-terminal run prevents operator summary/report generation.
def test_missing_operator_summary():
    builder = ComplianceReportBuilder(CleanReleaseRepository())
    run = _terminal_run(CheckFinalStatus.RUNNING)

    with pytest.raises(ValueError) as exc:
        builder.build_report_payload(run, [])

    assert "Cannot build report for non-terminal run" in str(exc.value)
# [/DEF:test_missing_operator_summary:Function]
# [/DEF:backend.tests.services.clean_release.test_report_builder:Module]
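The counter tests above pin two invariants: the blocking count tracks the `blocked_release` flag on each violation, and a BLOCKED run with no blocking violation is a contract error. A self-contained sketch of that check (illustrative function, not the builder's actual code):

```python
def report_counters(final_status, violations):
    # blocking_violations_count only counts violations that actually
    # block the release; a BLOCKED status without any is inconsistent.
    blocking = [v for v in violations if v.get("blocked_release")]
    if final_status == "BLOCKED" and not blocking:
        raise ValueError("blocked run requires at least one blocking violation")
    return {"violations_count": len(violations),
            "blocking_violations_count": len(blocking)}

assert report_counters("BLOCKED", [{"blocked_release": True}]) == {
    "violations_count": 1, "blocking_violations_count": 1}
```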
@@ -0,0 +1,58 @@
# [DEF:backend.tests.services.clean_release.test_source_isolation:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, clean-release, source-isolation, internal-only
# @PURPOSE: Verify internal source registry validation behavior.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.clean_release.source_isolation
# @INVARIANT: External endpoints always produce blocking violations.

from datetime import datetime, timezone

from src.models.clean_release import ResourceSourceEntry, ResourceSourceRegistry
from src.services.clean_release.source_isolation import validate_internal_sources


def _registry() -> ResourceSourceRegistry:
    return ResourceSourceRegistry(
        registry_id="registry-internal-v1",
        name="Internal Sources",
        entries=[
            ResourceSourceEntry(
                source_id="src-1",
                host="repo.intra.company.local",
                protocol="https",
                purpose="artifact-repo",
                enabled=True,
            ),
            ResourceSourceEntry(
                source_id="src-2",
                host="pypi.intra.company.local",
                protocol="https",
                purpose="package-mirror",
                enabled=True,
            ),
        ],
        updated_at=datetime.now(timezone.utc),
        updated_by="tester",
        status="active",
    )


def test_validate_internal_sources_all_internal_ok():
    result = validate_internal_sources(
        registry=_registry(),
        endpoints=["repo.intra.company.local", "pypi.intra.company.local"],
    )
    assert result["ok"] is True
    assert result["violations"] == []


def test_validate_internal_sources_external_blocked():
    result = validate_internal_sources(
        registry=_registry(),
        endpoints=["repo.intra.company.local", "pypi.org"],
    )
    assert result["ok"] is False
    assert len(result["violations"]) == 1
    assert result["violations"][0]["category"] == "external-source"
    assert result["violations"][0]["blocked_release"] is True
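The two cases above reduce to set membership against the registry's enabled hosts: any endpoint outside the registry becomes a blocking external-source violation. A minimal sketch of that core (standalone function with a plain host set, not the real `validate_internal_sources` signature):

```python
def check_sources(allowed_hosts, endpoints):
    # Anything not in the enabled registry hosts is a blocking
    # external-source violation; ok is True only when nothing leaks out.
    violations = [
        {"category": "external-source", "location": host, "blocked_release": True}
        for host in endpoints
        if host not in allowed_hosts
    ]
    return {"ok": not violations, "violations": violations}

internal = {"repo.intra.company.local", "pypi.intra.company.local"}
assert check_sources(internal, ["repo.intra.company.local"])["ok"] is True
assert check_sources(internal, ["pypi.org"])["violations"][0]["blocked_release"] is True
```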
backend/src/services/clean_release/__tests__/test_stages.py (new file, 27 lines)
@@ -0,0 +1,27 @@
# [DEF:backend.tests.services.clean_release.test_stages:Module]
# @TIER: STANDARD
# @SEMANTICS: tests, clean-release, compliance, stages
# @PURPOSE: Validate final status derivation logic from stage results.
# @LAYER: Domain
# @RELATION: TESTS -> backend.src.services.clean_release.stages

from src.models.clean_release import CheckFinalStatus, CheckStageName, CheckStageResult, CheckStageStatus
from src.services.clean_release.stages import derive_final_status, MANDATORY_STAGE_ORDER


def test_derive_final_status_compliant():
    results = [CheckStageResult(stage=s, status=CheckStageStatus.PASS, details="ok") for s in MANDATORY_STAGE_ORDER]
    assert derive_final_status(results) == CheckFinalStatus.COMPLIANT


def test_derive_final_status_blocked():
    results = [CheckStageResult(stage=s, status=CheckStageStatus.PASS, details="ok") for s in MANDATORY_STAGE_ORDER]
    results[1].status = CheckStageStatus.FAIL
    assert derive_final_status(results) == CheckFinalStatus.BLOCKED


def test_derive_final_status_failed_missing():
    results = [CheckStageResult(stage=MANDATORY_STAGE_ORDER[0], status=CheckStageStatus.PASS, details="ok")]
    assert derive_final_status(results) == CheckFinalStatus.FAILED


def test_derive_final_status_failed_skipped():
    results = [CheckStageResult(stage=s, status=CheckStageStatus.PASS, details="ok") for s in MANDATORY_STAGE_ORDER]
    results[2].status = CheckStageStatus.SKIPPED
    assert derive_final_status(results) == CheckFinalStatus.FAILED
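The four tests above fully determine the derivation rule: every mandatory stage must be present and not skipped (else FAILED), any failing stage blocks (BLOCKED), and only an all-PASS run is COMPLIANT. A self-contained restatement (string statuses and hypothetical stage names stand in for the real enums and `MANDATORY_STAGE_ORDER`):

```python
def derive_status(results, mandatory):
    # results maps stage name -> "PASS" | "FAIL" | "SKIPPED".
    if any(s not in results or results[s] == "SKIPPED" for s in mandatory):
        return "FAILED"      # a missing or skipped mandatory stage is a broken run
    if any(results[s] == "FAIL" for s in mandatory):
        return "BLOCKED"     # any failing stage blocks the release
    return "COMPLIANT"

stages = ("policy", "purity", "sources", "manifest")  # hypothetical names
assert derive_status({s: "PASS" for s in stages}, stages) == "COMPLIANT"
assert derive_status({**{s: "PASS" for s in stages}, "purity": "FAIL"}, stages) == "BLOCKED"
assert derive_status({"policy": "PASS"}, stages) == "FAILED"
```

Distinguishing FAILED (the check itself did not complete) from BLOCKED (the check completed and found violations) keeps incomplete runs from ever being mistaken for a clean verdict.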
backend/src/services/clean_release/approval_service.py (new file, 178 lines)
@@ -0,0 +1,178 @@
# [DEF:backend.src.services.clean_release.approval_service:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, approval, decision, lifecycle, gate
# @PURPOSE: Enforce approval/rejection gates over immutable compliance reports.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.audit_service
# @INVARIANT: Approval is allowed only for a PASSED report bound to the candidate; decisions are append-only.

from __future__ import annotations

from datetime import datetime, timezone
from typing import List
from uuid import uuid4

from ...core.logger import belief_scope, logger
from ...models.clean_release import ApprovalDecision
from .audit_service import audit_preparation
from .enums import ApprovalDecisionType, CandidateStatus, ComplianceDecision
from .exceptions import ApprovalGateError
from .repository import CleanReleaseRepository


# [DEF:_get_or_init_decisions_store:Function]
# @PURPOSE: Provide append-only in-memory storage for approval decisions.
# @PRE: repository is initialized.
# @POST: Returns mutable decision list attached to repository.
def _get_or_init_decisions_store(repository: CleanReleaseRepository) -> List[ApprovalDecision]:
    decisions = getattr(repository, "approval_decisions", None)
    if decisions is None:
        decisions = []
        setattr(repository, "approval_decisions", decisions)
    return decisions
# [/DEF:_get_or_init_decisions_store:Function]


# [DEF:_latest_decision_for_candidate:Function]
# @PURPOSE: Resolve the latest approval decision for a candidate from the append-only store.
# @PRE: candidate_id is non-empty.
# @POST: Returns the latest ApprovalDecision or None.
def _latest_decision_for_candidate(repository: CleanReleaseRepository, candidate_id: str) -> ApprovalDecision | None:
    decisions = _get_or_init_decisions_store(repository)
    scoped = [item for item in decisions if item.candidate_id == candidate_id]
    if not scoped:
        return None
    return sorted(scoped, key=lambda item: item.decided_at or datetime.min.replace(tzinfo=timezone.utc), reverse=True)[0]
# [/DEF:_latest_decision_for_candidate:Function]


# [DEF:_resolve_candidate_and_report:Function]
# @PURPOSE: Validate candidate/report existence and ownership prior to decision persistence.
# @PRE: candidate_id and report_id are non-empty.
# @POST: Returns tuple(candidate, report); raises ApprovalGateError on contract violation.
def _resolve_candidate_and_report(
    repository: CleanReleaseRepository,
    *,
    candidate_id: str,
    report_id: str,
):
    candidate = repository.get_candidate(candidate_id)
    if candidate is None:
        raise ApprovalGateError(f"candidate '{candidate_id}' not found")

    report = repository.get_report(report_id)
    if report is None:
        raise ApprovalGateError(f"report '{report_id}' not found")

    if report.candidate_id != candidate_id:
        raise ApprovalGateError("report belongs to another candidate")

    return candidate, report
# [/DEF:_resolve_candidate_and_report:Function]


# [DEF:approve_candidate:Function]
# @PURPOSE: Persist an immutable APPROVED decision and advance the candidate lifecycle to APPROVED.
# @PRE: Candidate exists, report belongs to candidate, report final_status is PASSED, candidate not already APPROVED.
# @POST: Approval decision is appended and candidate transitions to APPROVED.
def approve_candidate(
    *,
    repository: CleanReleaseRepository,
    candidate_id: str,
    report_id: str,
    decided_by: str,
    comment: str | None = None,
) -> ApprovalDecision:
    with belief_scope("approval_service.approve_candidate"):
        logger.reason(f"[REASON] Evaluating approve gate candidate_id={candidate_id} report_id={report_id}")

        if not decided_by or not decided_by.strip():
            raise ApprovalGateError("decided_by must be non-empty")

        candidate, report = _resolve_candidate_and_report(
            repository,
            candidate_id=candidate_id,
            report_id=report_id,
        )

        if report.final_status != ComplianceDecision.PASSED.value:
            raise ApprovalGateError("approve requires PASSED compliance report")

        latest = _latest_decision_for_candidate(repository, candidate_id)
        if latest is not None and latest.decision == ApprovalDecisionType.APPROVED.value:
            raise ApprovalGateError("candidate is already approved")

        if candidate.status == CandidateStatus.APPROVED.value:
            raise ApprovalGateError("candidate is already approved")

        try:
            if candidate.status != CandidateStatus.CHECK_PASSED.value:
                raise ApprovalGateError(
                    f"candidate status '{candidate.status}' cannot transition to APPROVED"
                )
            candidate.transition_to(CandidateStatus.APPROVED)
            repository.save_candidate(candidate)
        except ApprovalGateError:
            raise
        except Exception as exc:  # noqa: BLE001
            logger.explore(f"[EXPLORE] Candidate transition to APPROVED failed candidate_id={candidate_id}: {exc}")
            raise ApprovalGateError(str(exc)) from exc

        decision = ApprovalDecision(
            id=f"approve-{uuid4()}",
            candidate_id=candidate_id,
            report_id=report_id,
            decision=ApprovalDecisionType.APPROVED.value,
            decided_by=decided_by,
            decided_at=datetime.now(timezone.utc),
            comment=comment,
        )
        _get_or_init_decisions_store(repository).append(decision)
        audit_preparation(candidate_id, "APPROVED", repository=repository, actor=decided_by)
        logger.reflect(f"[REFLECT] Approval persisted candidate_id={candidate_id} decision_id={decision.id}")
        return decision
# [/DEF:approve_candidate:Function]


# [DEF:reject_candidate:Function]
# @PURPOSE: Persist an immutable REJECTED decision without promoting the candidate lifecycle.
# @PRE: Candidate exists and report belongs to candidate.
# @POST: Rejected decision is appended; candidate lifecycle is unchanged.
def reject_candidate(
    *,
    repository: CleanReleaseRepository,
    candidate_id: str,
    report_id: str,
    decided_by: str,
    comment: str | None = None,
) -> ApprovalDecision:
    with belief_scope("approval_service.reject_candidate"):
        logger.reason(f"[REASON] Evaluating reject decision candidate_id={candidate_id} report_id={report_id}")

        if not decided_by or not decided_by.strip():
            raise ApprovalGateError("decided_by must be non-empty")

        _resolve_candidate_and_report(
            repository,
            candidate_id=candidate_id,
            report_id=report_id,
        )

        decision = ApprovalDecision(
            id=f"reject-{uuid4()}",
            candidate_id=candidate_id,
            report_id=report_id,
            decision=ApprovalDecisionType.REJECTED.value,
            decided_by=decided_by,
            decided_at=datetime.now(timezone.utc),
            comment=comment,
        )
        _get_or_init_decisions_store(repository).append(decision)
        audit_preparation(candidate_id, "REJECTED", repository=repository, actor=decided_by)
        logger.reflect(f"[REFLECT] Rejection persisted candidate_id={candidate_id} decision_id={decision.id}")
        return decision
# [/DEF:reject_candidate:Function]

# [/DEF:backend.src.services.clean_release.approval_service:Module]
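Because decisions are append-only, the effective state of a candidate is always the most recently decided entry, never an overwritten record. A minimal sketch of that resolution over plain dicts (illustrative, not the module's `_latest_decision_for_candidate`):

```python
from datetime import datetime, timezone

def latest_decision(decisions, candidate_id):
    # Append-only semantics: filter to the candidate, then take the most
    # recent entry by decided_at; an empty history yields None.
    scoped = [d for d in decisions if d["candidate_id"] == candidate_id]
    return max(scoped, key=lambda d: d["decided_at"], default=None)

log = [
    {"candidate_id": "c1", "decision": "REJECTED",
     "decided_at": datetime(2026, 3, 1, tzinfo=timezone.utc)},
    {"candidate_id": "c1", "decision": "APPROVED",
     "decided_at": datetime(2026, 3, 2, tzinfo=timezone.utc)},
]
assert latest_decision(log, "c1")["decision"] == "APPROVED"
assert latest_decision(log, "c2") is None
```

Keeping every decision makes the audit trail trivial: the full history is the store itself, and no update path can silently rewrite a past verdict.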
backend/src/services/clean_release/audit_service.py (new file, 107 lines)
@@ -0,0 +1,107 @@
# [DEF:backend.src.services.clean_release.audit_service:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, audit, lifecycle, logging
# @PURPOSE: Provide lightweight audit hooks for the clean release preparation/check/report lifecycle.
# @LAYER: Infra
# @RELATION: DEPENDS_ON -> backend.src.core.logger
# @INVARIANT: Audit hooks are append-only log actions.

from __future__ import annotations

from datetime import datetime, timezone
from typing import Any, Dict, Optional
from uuid import uuid4

from ...core.logger import logger


def _append_event(repository, payload: Dict[str, Any]) -> None:
    if repository is not None and hasattr(repository, "append_audit_event"):
        repository.append_audit_event(payload)


def audit_preparation(candidate_id: str, status: str, repository=None, actor: str = "system") -> None:
    logger.info(f"[REASON] clean-release preparation candidate={candidate_id} status={status}")
    _append_event(
        repository,
        {
            "id": f"audit-{uuid4()}",
            "action": "PREPARATION",
            "candidate_id": candidate_id,
            "actor": actor,
            "status": status,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )


def audit_check_run(
    check_run_id: str,
    final_status: str,
    repository=None,
    *,
    candidate_id: Optional[str] = None,
    actor: str = "system",
) -> None:
    logger.info(f"[REFLECT] clean-release check_run={check_run_id} final_status={final_status}")
    _append_event(
        repository,
        {
            "id": f"audit-{uuid4()}",
            "action": "CHECK_RUN",
            "run_id": check_run_id,
            "candidate_id": candidate_id,
            "actor": actor,
            "status": final_status,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )


def audit_violation(
    run_id: str,
    stage_name: str,
    code: str,
    repository=None,
    *,
    candidate_id: Optional[str] = None,
    actor: str = "system",
) -> None:
    logger.info(f"[EXPLORE] clean-release violation run_id={run_id} stage={stage_name} code={code}")
    _append_event(
        repository,
        {
            "id": f"audit-{uuid4()}",
            "action": "VIOLATION",
            "run_id": run_id,
            "candidate_id": candidate_id,
            "actor": actor,
            "stage_name": stage_name,
            "code": code,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )


def audit_report(
    report_id: str,
    candidate_id: str,
    repository=None,
    *,
    run_id: Optional[str] = None,
    actor: str = "system",
) -> None:
    logger.info(f"[EXPLORE] clean-release report_id={report_id} candidate={candidate_id}")
    _append_event(
        repository,
        {
            "id": f"audit-{uuid4()}",
            "action": "REPORT",
            "report_id": report_id,
            "run_id": run_id,
            "candidate_id": candidate_id,
            "actor": actor,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    )
# [/DEF:backend.src.services.clean_release.audit_service:Module]
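Since `_append_event` only duck-types on `append_audit_event` and tolerates a missing repository, any object exposing that one method can capture the audit trail in tests. A minimal sketch of such a sink (this class is illustrative, not part of the codebase):

```python
class InMemoryAuditRepo:
    # Minimal duck-typed sink: the only contract _append_event relies on
    # is the presence of an append_audit_event(payload) method.
    def __init__(self):
        self.events = []

    def append_audit_event(self, payload):
        self.events.append(payload)

repo = InMemoryAuditRepo()
repo.append_audit_event({"action": "PREPARATION", "candidate_id": "cand-1"})
assert repo.events[0]["action"] == "PREPARATION"
```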
backend/src/services/clean_release/candidate_service.py (new file, 107 lines)
@@ -0,0 +1,107 @@
|
||||
# [DEF:backend.src.services.clean_release.candidate_service:Module]
|
||||
# @TIER: CRITICAL
|
||||
# @SEMANTICS: clean-release, candidate, artifacts, lifecycle, validation
|
||||
# @PURPOSE: Register release candidates with validated artifacts and advance lifecycle through legal transitions.
|
||||
# @LAYER: Domain
|
||||
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
|
||||
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
|
||||
# @PRE: candidate_id must be unique; artifacts input must be non-empty and valid.
|
||||
# @POST: candidate and artifacts are persisted; candidate transitions DRAFT -> PREPARED only.
|
||||
# @INVARIANT: Candidate lifecycle transitions are delegated to domain guard logic.

from __future__ import annotations

from datetime import datetime, timezone
from typing import Any, Dict, Iterable, List

from ...models.clean_release import CandidateArtifact, ReleaseCandidate
from .enums import CandidateStatus
from .repository import CleanReleaseRepository


# [DEF:_validate_artifacts:Function]
# @PURPOSE: Validate raw artifact payload list for required fields and shape.
# @PRE: artifacts payload is provided by caller.
# @POST: Returns normalized artifact list or raises ValueError.
def _validate_artifacts(artifacts: Iterable[Dict[str, Any]]) -> List[Dict[str, Any]]:
    normalized = list(artifacts)
    if not normalized:
        raise ValueError("artifacts must not be empty")

    required_fields = ("id", "path", "sha256", "size")
    for index, artifact in enumerate(normalized):
        if not isinstance(artifact, dict):
            raise ValueError(f"artifact[{index}] must be an object")
        for field in required_fields:
            if field not in artifact:
                raise ValueError(f"artifact[{index}] missing required field '{field}'")
        if not str(artifact["id"]).strip():
            raise ValueError(f"artifact[{index}] field 'id' must be non-empty")
        if not str(artifact["path"]).strip():
            raise ValueError(f"artifact[{index}] field 'path' must be non-empty")
        if not str(artifact["sha256"]).strip():
            raise ValueError(f"artifact[{index}] field 'sha256' must be non-empty")
        if not isinstance(artifact["size"], int) or artifact["size"] <= 0:
            raise ValueError(f"artifact[{index}] field 'size' must be a positive integer")
    return normalized
# [/DEF:_validate_artifacts:Function]


# [DEF:register_candidate:Function]
# @PURPOSE: Register a candidate and persist its artifacts with legal lifecycle transition.
# @PRE: candidate_id must be unique and artifacts must pass validation.
# @POST: Candidate exists in repository with PREPARED status and artifacts persisted.
def register_candidate(
    repository: CleanReleaseRepository,
    candidate_id: str,
    version: str,
    source_snapshot_ref: str,
    created_by: str,
    artifacts: Iterable[Dict[str, Any]],
) -> ReleaseCandidate:
    if not candidate_id or not candidate_id.strip():
        raise ValueError("candidate_id must be non-empty")
    if not version or not version.strip():
        raise ValueError("version must be non-empty")
    if not source_snapshot_ref or not source_snapshot_ref.strip():
        raise ValueError("source_snapshot_ref must be non-empty")
    if not created_by or not created_by.strip():
        raise ValueError("created_by must be non-empty")

    existing = repository.get_candidate(candidate_id)
    if existing is not None:
        raise ValueError(f"candidate '{candidate_id}' already exists")

    validated_artifacts = _validate_artifacts(artifacts)

    candidate = ReleaseCandidate(
        id=candidate_id,
        version=version,
        source_snapshot_ref=source_snapshot_ref,
        created_by=created_by,
        created_at=datetime.now(timezone.utc),
        status=CandidateStatus.DRAFT.value,
    )
    repository.save_candidate(candidate)

    for artifact_payload in validated_artifacts:
        artifact = CandidateArtifact(
            id=str(artifact_payload["id"]),
            candidate_id=candidate_id,
            path=str(artifact_payload["path"]),
            sha256=str(artifact_payload["sha256"]),
            size=int(artifact_payload["size"]),
            detected_category=artifact_payload.get("detected_category"),
            declared_category=artifact_payload.get("declared_category"),
            source_uri=artifact_payload.get("source_uri"),
            source_host=artifact_payload.get("source_host"),
            metadata_json=artifact_payload.get("metadata_json", {}),
        )
        repository.save_artifact(artifact)

    candidate.transition_to(CandidateStatus.PREPARED)
    repository.save_candidate(candidate)
    return candidate
# [/DEF:register_candidate:Function]

# [/DEF:backend.src.services.clean_release.candidate_service:Module]
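The artifact payload contract enforced by `_validate_artifacts` can be exercised standalone. The sketch below mirrors the validator's core checks without importing the backend package, so the names here are local reconstructions, not the module's API:

```python
# Standalone mirror of the artifact payload contract from candidate_service:
# every artifact needs non-empty id/path/sha256 and a positive integer size.
from typing import Any, Dict, Iterable, List

REQUIRED_FIELDS = ("id", "path", "sha256", "size")

def validate_artifacts(artifacts: Iterable[Dict[str, Any]]) -> List[Dict[str, Any]]:
    normalized = list(artifacts)
    if not normalized:
        raise ValueError("artifacts must not be empty")
    for index, artifact in enumerate(normalized):
        for field in REQUIRED_FIELDS:
            if field not in artifact:
                raise ValueError(f"artifact[{index}] missing required field '{field}'")
        if not isinstance(artifact["size"], int) or artifact["size"] <= 0:
            raise ValueError(f"artifact[{index}] field 'size' must be a positive integer")
    return normalized

good = [{"id": "a1", "path": "dist/app.tar.gz", "sha256": "ab" * 32, "size": 1024}]
print(len(validate_artifacts(good)))  # 1

try:
    validate_artifacts([{"id": "a1", "path": "x", "sha256": "ff", "size": 0}])
except ValueError as exc:
    print(exc)  # artifact[0] field 'size' must be a positive integer
```

Rejecting payloads up front keeps the repository free of half-registered candidates: `register_candidate` only begins persisting after the whole list validates.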
@@ -0,0 +1,197 @@
# [DEF:backend.src.services.clean_release.compliance_execution_service:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, compliance, execution, stages, immutable-evidence
# @PURPOSE: Create and execute compliance runs with trusted snapshots, deterministic stages, violations and immutable report persistence.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.policy_resolution_service
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.stages
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.report_builder
# @INVARIANT: A run binds to exactly one candidate/manifest/policy/registry snapshot set.

from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any, Iterable, List, Optional
from uuid import uuid4

from ...core.logger import belief_scope, logger
from ...models.clean_release import ComplianceReport, ComplianceRun, ComplianceStageRun, ComplianceViolation, DistributionManifest
from .audit_service import audit_check_run, audit_report, audit_violation
from .enums import ComplianceDecision, RunStatus
from .exceptions import ComplianceRunError, PolicyResolutionError
from .policy_resolution_service import resolve_trusted_policy_snapshots
from .report_builder import ComplianceReportBuilder
from .repository import CleanReleaseRepository
from .stages import build_default_stages, derive_final_status
from .stages.base import ComplianceStage, ComplianceStageContext, build_stage_run_record


# [DEF:ComplianceExecutionResult:Class]
# @PURPOSE: Return envelope for compliance execution with run/report and persisted stage artifacts.
@dataclass
class ComplianceExecutionResult:
    run: ComplianceRun
    report: Optional[ComplianceReport]
    stage_runs: List[ComplianceStageRun]
    violations: List[ComplianceViolation]
# [/DEF:ComplianceExecutionResult:Class]


# [DEF:ComplianceExecutionService:Class]
# @PURPOSE: Execute clean-release compliance lifecycle over trusted snapshots and immutable evidence.
# @PRE: repository and config_manager are initialized.
# @POST: run state, stage records, violations and optional report are persisted consistently.
class ComplianceExecutionService:
    TASK_PLUGIN_ID = "clean-release-compliance"

    def __init__(
        self,
        *,
        repository: CleanReleaseRepository,
        config_manager,
        stages: Optional[Iterable[ComplianceStage]] = None,
    ):
        self.repository = repository
        self.config_manager = config_manager
        self.stages = list(stages) if stages is not None else build_default_stages()
        self.report_builder = ComplianceReportBuilder(repository)

    # [DEF:_resolve_manifest:Function]
    # @PURPOSE: Resolve explicit manifest or fallback to latest candidate manifest.
    # @PRE: candidate exists.
    # @POST: Returns manifest snapshot or raises ComplianceRunError.
    def _resolve_manifest(self, candidate_id: str, manifest_id: Optional[str]) -> DistributionManifest:
        with belief_scope("ComplianceExecutionService._resolve_manifest"):
            if manifest_id:
                manifest = self.repository.get_manifest(manifest_id)
                if manifest is None:
                    raise ComplianceRunError(f"manifest '{manifest_id}' not found")
                if manifest.candidate_id != candidate_id:
                    raise ComplianceRunError("manifest does not belong to candidate")
                return manifest

            manifests = self.repository.get_manifests_by_candidate(candidate_id)
            if not manifests:
                raise ComplianceRunError(f"candidate '{candidate_id}' has no manifest")
            return sorted(manifests, key=lambda item: item.manifest_version, reverse=True)[0]
    # [/DEF:_resolve_manifest:Function]

    # [DEF:_persist_stage_run:Function]
    # @PURPOSE: Persist stage run if repository supports stage records.
    # @POST: Stage run is persisted when adapter is available, otherwise no-op.
    def _persist_stage_run(self, stage_run: ComplianceStageRun) -> None:
        if hasattr(self.repository, "save_stage_run"):
            self.repository.save_stage_run(stage_run)
    # [/DEF:_persist_stage_run:Function]

    # [DEF:_persist_violations:Function]
    # @PURPOSE: Persist stage violations via repository adapters.
    # @POST: Violations are appended to repository evidence store.
    def _persist_violations(self, violations: List[ComplianceViolation]) -> None:
        for violation in violations:
            self.repository.save_violation(violation)
    # [/DEF:_persist_violations:Function]

    # [DEF:execute_run:Function]
    # @PURPOSE: Execute compliance run stages and finalize immutable report on terminal success.
    # @PRE: candidate exists and trusted policy/registry snapshots are resolvable.
    # @POST: Run and evidence are persisted; report exists for SUCCEEDED runs.
    def execute_run(
        self,
        *,
        candidate_id: str,
        requested_by: str,
        manifest_id: Optional[str] = None,
    ) -> ComplianceExecutionResult:
        with belief_scope("ComplianceExecutionService.execute_run"):
            logger.reason(f"Starting compliance execution candidate_id={candidate_id}")

            candidate = self.repository.get_candidate(candidate_id)
            if candidate is None:
                raise ComplianceRunError(f"candidate '{candidate_id}' not found")

            manifest = self._resolve_manifest(candidate_id, manifest_id)

            try:
                policy_snapshot, registry_snapshot = resolve_trusted_policy_snapshots(
                    config_manager=self.config_manager,
                    repository=self.repository,
                )
            except PolicyResolutionError as exc:
                raise ComplianceRunError(str(exc)) from exc

            run = ComplianceRun(
                id=f"run-{uuid4()}",
                candidate_id=candidate_id,
                manifest_id=manifest.id,
                manifest_digest=manifest.manifest_digest,
                policy_snapshot_id=policy_snapshot.id,
                registry_snapshot_id=registry_snapshot.id,
                requested_by=requested_by,
                requested_at=datetime.now(timezone.utc),
                started_at=datetime.now(timezone.utc),
                status=RunStatus.RUNNING.value,
            )
            self.repository.save_check_run(run)

            stage_runs: List[ComplianceStageRun] = []
            violations: List[ComplianceViolation] = []
            report: Optional[ComplianceReport] = None

            context = ComplianceStageContext(
                run=run,
                candidate=candidate,
                manifest=manifest,
                policy=policy_snapshot,
                registry=registry_snapshot,
            )

            try:
                for stage in self.stages:
                    started = datetime.now(timezone.utc)
                    result = stage.execute(context)
                    finished = datetime.now(timezone.utc)

                    stage_run = build_stage_run_record(
                        run_id=run.id,
                        stage_name=stage.stage_name,
                        result=result,
                        started_at=started,
                        finished_at=finished,
                    )
                    self._persist_stage_run(stage_run)
                    stage_runs.append(stage_run)

                    if result.violations:
                        self._persist_violations(result.violations)
                        violations.extend(result.violations)

                run.final_status = derive_final_status(stage_runs).value
                run.status = RunStatus.SUCCEEDED.value
                run.finished_at = datetime.now(timezone.utc)
                self.repository.save_check_run(run)

                report = self.report_builder.build_report_payload(run, violations)
                report = self.report_builder.persist_report(report)
                run.report_id = report.id
                self.repository.save_check_run(run)
                logger.reflect(f"[REFLECT] Compliance run completed run_id={run.id} final_status={run.final_status}")
            except Exception as exc:  # noqa: BLE001
                run.status = RunStatus.FAILED.value
                run.final_status = ComplianceDecision.ERROR.value
                run.failure_reason = str(exc)
                run.finished_at = datetime.now(timezone.utc)
                self.repository.save_check_run(run)
                logger.explore(f"[EXPLORE] Compliance run failed run_id={run.id}: {exc}")

            return ComplianceExecutionResult(
                run=run,
                report=report,
                stage_runs=stage_runs,
                violations=violations,
            )
    # [/DEF:execute_run:Function]
# [/DEF:ComplianceExecutionService:Class]

# [/DEF:backend.src.services.clean_release.compliance_execution_service:Module]
138
backend/src/services/clean_release/compliance_orchestrator.py
Normal file
@@ -0,0 +1,138 @@
# [DEF:backend.src.services.clean_release.compliance_orchestrator:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, orchestrator, compliance-gate, stages
# @PURPOSE: Execute mandatory clean compliance stages and produce final COMPLIANT/BLOCKED/FAILED outcome.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.stages
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.report_builder
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @INVARIANT: COMPLIANT is impossible when any mandatory stage fails.
# @TEST_CONTRACT: ComplianceCheckRun -> ComplianceCheckRun
# @TEST_FIXTURE: compliant_candidate -> file:backend/tests/fixtures/clean_release/fixtures_clean_release.json
# @TEST_EDGE: stage_failure_blocks_release -> Mandatory stage returns FAIL and final status becomes BLOCKED
# @TEST_EDGE: missing_stage_result -> Finalization with incomplete/empty mandatory stage set must not produce COMPLIANT
# @TEST_EDGE: report_generation_error -> Downstream reporting failure does not alter orchestrator status derivation contract
# @TEST_INVARIANT: compliant_requires_all_mandatory_pass -> VERIFIED_BY: [stage_failure_blocks_release]

from __future__ import annotations

from datetime import datetime, timezone
from typing import List, Optional
from uuid import uuid4

from .enums import (
    RunStatus,
    ComplianceDecision,
    ComplianceStageName,
    ViolationCategory,
    ViolationSeverity,
)
from ...models.clean_release import (
    ComplianceRun,
    ComplianceStageRun,
    ComplianceViolation,
)
from .policy_engine import CleanPolicyEngine
from .repository import CleanReleaseRepository
from .stages import derive_final_status


# [DEF:CleanComplianceOrchestrator:Class]
# @PURPOSE: Coordinate clean-release compliance verification stages.
class CleanComplianceOrchestrator:
    def __init__(self, repository: CleanReleaseRepository):
        self.repository = repository

    # [DEF:start_check_run:Function]
    # @PURPOSE: Initiate a new compliance run session.
    # @PRE: candidate_id and policy_id must exist in repository.
    # @POST: Returns initialized ComplianceRun in RUNNING state.
    def start_check_run(self, candidate_id: str, policy_id: str, requested_by: str, manifest_id: str) -> ComplianceRun:
        manifest = self.repository.get_manifest(manifest_id)
        policy = self.repository.get_policy(policy_id)
        if not manifest or not policy:
            raise ValueError("Manifest or Policy not found")

        check_run = ComplianceRun(
            id=f"check-{uuid4()}",
            candidate_id=candidate_id,
            manifest_id=manifest_id,
            manifest_digest=manifest.manifest_digest,
            policy_snapshot_id=policy_id,
            registry_snapshot_id=policy.registry_snapshot_id,
            requested_by=requested_by,
            requested_at=datetime.now(timezone.utc),
            status=RunStatus.RUNNING,
        )
        return self.repository.save_check_run(check_run)
    # [/DEF:start_check_run:Function]

    # [DEF:execute_stages:Function]
    # @PURPOSE: Execute compliance stages for a running check and derive preliminary final status.
    # @POST: Returns persisted ComplianceRun with status and final_status updated.
    def execute_stages(self, check_run: ComplianceRun, forced_results: Optional[List[ComplianceStageRun]] = None) -> ComplianceRun:
        if forced_results is not None:
            # In a real scenario, we'd persist these stages.
            return self.repository.save_check_run(check_run)

        # Real logic integration
        candidate = self.repository.get_candidate(check_run.candidate_id)
        policy = self.repository.get_policy(check_run.policy_snapshot_id)
        if not candidate or not policy:
            check_run.status = RunStatus.FAILED
            return self.repository.save_check_run(check_run)

        registry = self.repository.get_registry(check_run.registry_snapshot_id)
        manifest = self.repository.get_manifest(check_run.manifest_id)

        if not registry or not manifest:
            check_run.status = RunStatus.FAILED
            return self.repository.save_check_run(check_run)

        # Simulate stage execution and violation detection
        # 1. DATA_PURITY
        summary = manifest.content_json.get("summary", {})
        purity_ok = summary.get("prohibited_detected_count", 0) == 0

        if not purity_ok:
            check_run.final_status = ComplianceDecision.BLOCKED
        else:
            check_run.final_status = ComplianceDecision.PASSED

        check_run.status = RunStatus.SUCCEEDED
        check_run.finished_at = datetime.now(timezone.utc)

        return self.repository.save_check_run(check_run)
    # [/DEF:execute_stages:Function]

    # [DEF:finalize_run:Function]
    # @PURPOSE: Finalize run status based on cumulative stage results.
    # @POST: Status derivation follows strict MANDATORY_STAGE_ORDER.
    def finalize_run(self, check_run: ComplianceRun) -> ComplianceRun:
        # If not already set by execute_stages
        if not check_run.final_status:
            check_run.final_status = ComplianceDecision.PASSED

        check_run.status = RunStatus.SUCCEEDED
        check_run.finished_at = datetime.now(timezone.utc)
        return self.repository.save_check_run(check_run)
    # [/DEF:finalize_run:Function]
# [/DEF:CleanComplianceOrchestrator:Class]


# [DEF:run_check_legacy:Function]
# @PURPOSE: Legacy wrapper for compatibility with previous orchestrator call style.
# @PRE: Candidate/policy/manifest identifiers are valid for repository.
# @POST: Returns finalized ComplianceRun produced by orchestrator.
def run_check_legacy(
    repository: CleanReleaseRepository,
    candidate_id: str,
    policy_id: str,
    requested_by: str,
    manifest_id: str,
) -> ComplianceRun:
    orchestrator = CleanComplianceOrchestrator(repository)
    run = orchestrator.start_check_run(
        candidate_id=candidate_id,
        policy_id=policy_id,
        requested_by=requested_by,
        manifest_id=manifest_id,
    )
    run = orchestrator.execute_stages(run)
    return orchestrator.finalize_run(run)
# [/DEF:run_check_legacy:Function]
# [/DEF:backend.src.services.clean_release.compliance_orchestrator:Module]
50
backend/src/services/clean_release/demo_data_service.py
Normal file
@@ -0,0 +1,50 @@
# [DEF:backend.src.services.clean_release.demo_data_service:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, demo-mode, namespace, isolation, repository
# @PURPOSE: Provide deterministic namespace helpers and isolated in-memory repository creation for demo and real modes.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @INVARIANT: Demo and real namespaces must never collide for generated physical identifiers.

from __future__ import annotations

from .repository import CleanReleaseRepository


# [DEF:resolve_namespace:Function]
# @PURPOSE: Resolve canonical clean-release namespace for requested mode.
# @PRE: mode is a non-empty string identifying runtime mode.
# @POST: Returns deterministic namespace key for demo/real separation.
def resolve_namespace(mode: str) -> str:
    normalized = (mode or "").strip().lower()
    if normalized == "demo":
        return "clean-release:demo"
    return "clean-release:real"
# [/DEF:resolve_namespace:Function]


# [DEF:build_namespaced_id:Function]
# @PURPOSE: Build storage-safe physical identifier under mode namespace.
# @PRE: namespace and logical_id are non-empty strings.
# @POST: Returns deterministic "{namespace}::{logical_id}" identifier.
def build_namespaced_id(namespace: str, logical_id: str) -> str:
    if not namespace or not namespace.strip():
        raise ValueError("namespace must be non-empty")
    if not logical_id or not logical_id.strip():
        raise ValueError("logical_id must be non-empty")
    return f"{namespace}::{logical_id}"
# [/DEF:build_namespaced_id:Function]


# [DEF:create_isolated_repository:Function]
# @PURPOSE: Create isolated in-memory repository instance for selected mode namespace.
# @PRE: mode is a valid runtime mode marker.
# @POST: Returns repository instance tagged with namespace metadata.
def create_isolated_repository(mode: str) -> CleanReleaseRepository:
    namespace = resolve_namespace(mode)
    repository = CleanReleaseRepository()
    setattr(repository, "namespace", namespace)
    return repository
# [/DEF:create_isolated_repository:Function]

# [/DEF:backend.src.services.clean_release.demo_data_service:Module]
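The namespace helpers in demo_data_service are deterministic and dependency-free, so their collision-avoidance invariant can be illustrated standalone. The bodies below mirror the module's logic (re-declared locally; `CleanReleaseRepository` is omitted):

```python
# Standalone sketch of the namespace contract from demo_data_service:
# demo and real modes map to disjoint prefixes, so physical ids never collide.

def resolve_namespace(mode: str) -> str:
    normalized = (mode or "").strip().lower()
    return "clean-release:demo" if normalized == "demo" else "clean-release:real"

def build_namespaced_id(namespace: str, logical_id: str) -> str:
    if not namespace.strip() or not logical_id.strip():
        raise ValueError("namespace and logical_id must be non-empty")
    return f"{namespace}::{logical_id}"

demo_id = build_namespaced_id(resolve_namespace("demo"), "candidate-1")
real_id = build_namespaced_id(resolve_namespace("real"), "candidate-1")
print(demo_id)  # clean-release:demo::candidate-1
print(demo_id != real_id)  # True
```

Because the same logical id yields different physical ids per mode, demo seeding can never overwrite real candidates even when both repositories share a storage backend.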
85
backend/src/services/clean_release/dto.py
Normal file
@@ -0,0 +1,85 @@
# [DEF:clean_release_dto:Module]
# @TIER: STANDARD
# @PURPOSE: Data Transfer Objects for clean release compliance subsystem.
# @LAYER: Application

from datetime import datetime
from typing import List, Optional, Dict, Any

from pydantic import BaseModel, Field

from src.services.clean_release.enums import CandidateStatus, RunStatus, ComplianceDecision


class CandidateDTO(BaseModel):
    """DTO for ReleaseCandidate."""
    id: str
    version: str
    source_snapshot_ref: str
    build_id: Optional[str] = None
    created_at: datetime
    created_by: str
    status: CandidateStatus


class ArtifactDTO(BaseModel):
    """DTO for CandidateArtifact."""
    id: str
    candidate_id: str
    path: str
    sha256: str
    size: int
    detected_category: Optional[str] = None
    declared_category: Optional[str] = None
    source_uri: Optional[str] = None
    source_host: Optional[str] = None
    metadata: Dict[str, Any] = Field(default_factory=dict)


class ManifestDTO(BaseModel):
    """DTO for DistributionManifest."""
    id: str
    candidate_id: str
    manifest_version: int
    manifest_digest: str
    artifacts_digest: str
    created_at: datetime
    created_by: str
    source_snapshot_ref: str
    content_json: Dict[str, Any]


class ComplianceRunDTO(BaseModel):
    """DTO for ComplianceRun status tracking."""
    run_id: str
    candidate_id: str
    status: RunStatus
    final_status: Optional[ComplianceDecision] = None
    report_id: Optional[str] = None
    task_id: Optional[str] = None


class ReportDTO(BaseModel):
    """Compact report view."""
    report_id: str
    candidate_id: str
    final_status: ComplianceDecision
    policy_version: str
    manifest_digest: str
    violation_count: int
    generated_at: datetime


class CandidateOverviewDTO(BaseModel):
    """Read model for candidate overview."""
    candidate_id: str
    version: str
    source_snapshot_ref: str
    status: CandidateStatus
    latest_manifest_id: Optional[str] = None
    latest_manifest_digest: Optional[str] = None
    latest_run_id: Optional[str] = None
    latest_run_status: Optional[RunStatus] = None
    latest_report_id: Optional[str] = None
    latest_report_final_status: Optional[ComplianceDecision] = None
    latest_policy_snapshot_id: Optional[str] = None
    latest_policy_version: Optional[str] = None
    latest_registry_snapshot_id: Optional[str] = None
    latest_registry_version: Optional[str] = None
    latest_approval_decision: Optional[str] = None
    latest_publication_id: Optional[str] = None
    latest_publication_status: Optional[str] = None

# [/DEF:clean_release_dto:Module]
72
backend/src/services/clean_release/enums.py
Normal file
@@ -0,0 +1,72 @@
# [DEF:clean_release_enums:Module]
# @TIER: STANDARD
# @PURPOSE: Canonical enums for clean release lifecycle and compliance.
# @LAYER: Domain

from enum import Enum


class CandidateStatus(str, Enum):
    """Lifecycle states for a ReleaseCandidate."""
    DRAFT = "DRAFT"
    PREPARED = "PREPARED"
    MANIFEST_BUILT = "MANIFEST_BUILT"
    CHECK_PENDING = "CHECK_PENDING"
    CHECK_RUNNING = "CHECK_RUNNING"
    CHECK_PASSED = "CHECK_PASSED"
    CHECK_BLOCKED = "CHECK_BLOCKED"
    CHECK_ERROR = "CHECK_ERROR"
    APPROVED = "APPROVED"
    PUBLISHED = "PUBLISHED"
    REVOKED = "REVOKED"


class RunStatus(str, Enum):
    """Execution status for a ComplianceRun."""
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    SUCCEEDED = "SUCCEEDED"
    FAILED = "FAILED"
    CANCELLED = "CANCELLED"


class ComplianceDecision(str, Enum):
    """Final compliance result for a run or stage."""
    PASSED = "PASSED"
    BLOCKED = "BLOCKED"
    ERROR = "ERROR"


class ApprovalDecisionType(str, Enum):
    """Types of approval decisions."""
    APPROVED = "APPROVED"
    REJECTED = "REJECTED"


class PublicationStatus(str, Enum):
    """Status of a publication record."""
    ACTIVE = "ACTIVE"
    REVOKED = "REVOKED"


class ComplianceStageName(str, Enum):
    """Canonical names for compliance stages."""
    DATA_PURITY = "DATA_PURITY"
    INTERNAL_SOURCES_ONLY = "INTERNAL_SOURCES_ONLY"
    NO_EXTERNAL_ENDPOINTS = "NO_EXTERNAL_ENDPOINTS"
    MANIFEST_CONSISTENCY = "MANIFEST_CONSISTENCY"


class ClassificationType(str, Enum):
    """Classification types for artifacts."""
    REQUIRED_SYSTEM = "required-system"
    ALLOWED = "allowed"
    EXCLUDED_PROHIBITED = "excluded-prohibited"


class ViolationSeverity(str, Enum):
    """Severity levels for compliance violations."""
    CRITICAL = "CRITICAL"
    MAJOR = "MAJOR"
    MINOR = "MINOR"


class ViolationCategory(str, Enum):
    """Categories for compliance violations."""
    DATA_PURITY = "DATA_PURITY"
    SOURCE_ISOLATION = "SOURCE_ISOLATION"
    MANIFEST_CONSISTENCY = "MANIFEST_CONSISTENCY"
    EXTERNAL_ENDPOINT = "EXTERNAL_ENDPOINT"

# [/DEF:clean_release_enums:Module]
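The `str` mixin on these enums is what lets the services persist `.value` strings and compare them back to members interchangeably, as seen in the execution service and the facade. A minimal standalone illustration (re-declaring a small enum locally rather than importing the module):

```python
from enum import Enum

# Mirrors the str+Enum pattern used by clean_release_enums:
# members are real strings, so persisted values round-trip cleanly.
class RunStatus(str, Enum):
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    SUCCEEDED = "SUCCEEDED"

stored = RunStatus.RUNNING.value               # plain "RUNNING" string for persistence
assert stored == RunStatus.RUNNING             # str mixin: member compares equal to its value
assert RunStatus(stored) is RunStatus.RUNNING  # lossless round-trip from storage
print(isinstance(RunStatus.RUNNING, str))      # True
```

This is also why the facade can safely rehydrate stored strings with constructor calls like `RunStatus(latest_run.status)`.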
38
backend/src/services/clean_release/exceptions.py
Normal file
@@ -0,0 +1,38 @@
# [DEF:clean_release_exceptions:Module]
# @TIER: STANDARD
# @PURPOSE: Domain exceptions for clean release compliance subsystem.
# @LAYER: Domain


class CleanReleaseError(Exception):
    """Base exception for clean release subsystem."""
    pass


class CandidateNotFoundError(CleanReleaseError):
    """Raised when a release candidate is not found."""
    pass


class IllegalTransitionError(CleanReleaseError):
    """Raised when a forbidden lifecycle transition is attempted."""
    pass


class ManifestImmutableError(CleanReleaseError):
    """Raised when an attempt is made to mutate an existing manifest."""
    pass


class PolicyResolutionError(CleanReleaseError):
    """Raised when trusted policy or registry cannot be resolved."""
    pass


class ComplianceRunError(CleanReleaseError):
    """Raised when a compliance run fails or is invalid."""
    pass


class ApprovalGateError(CleanReleaseError):
    """Raised when approval requirements are not met."""
    pass


class PublicationGateError(CleanReleaseError):
    """Raised when publication requirements are not met."""
    pass

# [/DEF:clean_release_exceptions:Module]
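Deriving every domain error from a single `CleanReleaseError` base lets API or CLI layers catch any subsystem failure in one handler while still distinguishing specific cases. A minimal sketch re-declaring two of the classes locally (not importing the backend package; `run_check` is a hypothetical caller):

```python
# Minimal mirror of the clean_release exception hierarchy:
# one base class lets outer layers catch all domain failures uniformly.
class CleanReleaseError(Exception):
    """Base exception for clean release subsystem."""

class ComplianceRunError(CleanReleaseError):
    """Raised when a compliance run fails or is invalid."""

def run_check() -> None:
    raise ComplianceRunError("candidate 'demo' has no manifest")

try:
    run_check()
except CleanReleaseError as exc:  # catches every subsystem error
    print(type(exc).__name__)  # ComplianceRunError
```

Narrower handlers (`except ComplianceRunError:`) can still be placed before the broad one when a caller needs case-specific recovery.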
122
backend/src/services/clean_release/facade.py
Normal file
@@ -0,0 +1,122 @@
# [DEF:clean_release_facade:Module]
# @TIER: STANDARD
# @PURPOSE: Unified entry point for clean release operations.
# @LAYER: Application

from typing import List, Optional

from src.services.clean_release.repositories import (
    CandidateRepository, ArtifactRepository, ManifestRepository,
    PolicyRepository, ComplianceRepository, ReportRepository,
    ApprovalRepository, PublicationRepository, AuditRepository
)
from src.services.clean_release.dto import (
    CandidateDTO, ArtifactDTO, ManifestDTO, ComplianceRunDTO,
    ReportDTO, CandidateOverviewDTO
)
from src.services.clean_release.enums import CandidateStatus, RunStatus, ComplianceDecision
from src.models.clean_release import CleanPolicySnapshot, SourceRegistrySnapshot
from src.core.logger import belief_scope
from src.core.config_manager import ConfigManager


class CleanReleaseFacade:
    """
    @PURPOSE: Orchestrates repositories and services to provide a clean API for UI/CLI.
    """
    def __init__(
        self,
        candidate_repo: CandidateRepository,
        artifact_repo: ArtifactRepository,
        manifest_repo: ManifestRepository,
        policy_repo: PolicyRepository,
        compliance_repo: ComplianceRepository,
        report_repo: ReportRepository,
        approval_repo: ApprovalRepository,
        publication_repo: PublicationRepository,
        audit_repo: AuditRepository,
        config_manager: ConfigManager
    ):
        self.candidate_repo = candidate_repo
        self.artifact_repo = artifact_repo
        self.manifest_repo = manifest_repo
        self.policy_repo = policy_repo
        self.compliance_repo = compliance_repo
        self.report_repo = report_repo
        self.approval_repo = approval_repo
        self.publication_repo = publication_repo
        self.audit_repo = audit_repo
        self.config_manager = config_manager

    def resolve_active_policy_snapshot(self) -> Optional[CleanPolicySnapshot]:
        """
        @PURPOSE: Resolve the active policy snapshot based on ConfigManager.
        """
        with belief_scope("CleanReleaseFacade.resolve_active_policy_snapshot"):
            config = self.config_manager.get_config()
            policy_id = config.settings.clean_release.active_policy_id
            if not policy_id:
                return None
            return self.policy_repo.get_policy_snapshot(policy_id)

    def resolve_active_registry_snapshot(self) -> Optional[SourceRegistrySnapshot]:
        """
        @PURPOSE: Resolve the active registry snapshot based on ConfigManager.
        """
        with belief_scope("CleanReleaseFacade.resolve_active_registry_snapshot"):
            config = self.config_manager.get_config()
            registry_id = config.settings.clean_release.active_registry_id
            if not registry_id:
                return None
            return self.policy_repo.get_registry_snapshot(registry_id)

    def get_candidate_overview(self, candidate_id: str) -> Optional[CandidateOverviewDTO]:
        """
        @PURPOSE: Build a comprehensive overview for a candidate.
        """
        with belief_scope("CleanReleaseFacade.get_candidate_overview"):
            candidate = self.candidate_repo.get_by_id(candidate_id)
            if not candidate:
                return None

            manifest = self.manifest_repo.get_latest_for_candidate(candidate_id)
            runs = self.compliance_repo.list_runs_by_candidate(candidate_id)
            latest_run = runs[-1] if runs else None

            report = None
            if latest_run:
                report = self.report_repo.get_by_run(latest_run.id)

            approval = self.approval_repo.get_latest_for_candidate(candidate_id)
            publication = self.publication_repo.get_latest_for_candidate(candidate_id)

            active_policy = self.resolve_active_policy_snapshot()
            active_registry = self.resolve_active_registry_snapshot()

            return CandidateOverviewDTO(
                candidate_id=candidate.id,
                version=candidate.version,
                source_snapshot_ref=candidate.source_snapshot_ref,
                status=CandidateStatus(candidate.status),
                latest_manifest_id=manifest.id if manifest else None,
                latest_manifest_digest=manifest.manifest_digest if manifest else None,
                latest_run_id=latest_run.id if latest_run else None,
                latest_run_status=RunStatus(latest_run.status) if latest_run else None,
                latest_report_id=report.id if report else None,
                latest_report_final_status=ComplianceDecision(report.final_status) if report else None,
                latest_policy_snapshot_id=active_policy.id if active_policy else None,
                latest_policy_version=active_policy.policy_version if active_policy else None,
                latest_registry_snapshot_id=active_registry.id if active_registry else None,
                latest_registry_version=active_registry.registry_version if active_registry else None,
                latest_approval_decision=approval.decision if approval else None,
                latest_publication_id=publication.id if publication else None,
                latest_publication_status=publication.status if publication else None
            )

    def list_candidates(self) -> List[CandidateOverviewDTO]:
        """
        @PURPOSE: List all candidates with their current status.
        """
        with belief_scope("CleanReleaseFacade.list_candidates"):
            candidates = self.candidate_repo.list_all()
            overviews = (self.get_candidate_overview(c.id) for c in candidates)
            # Drop None overviews so the result matches the declared return type.
            return [overview for overview in overviews if overview is not None]

# [/DEF:clean_release_facade:Module]
backend/src/services/clean_release/manifest_builder.py (new file, 109 lines)
@@ -0,0 +1,109 @@
# [DEF:backend.src.services.clean_release.manifest_builder:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, manifest, deterministic-hash, summary
# @PURPOSE: Build deterministic distribution manifest from classified artifact input.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
# @INVARIANT: Equal semantic artifact sets produce identical deterministic hash values.

from __future__ import annotations

import hashlib
import json
from datetime import datetime, timezone
from typing import Iterable, List, Dict, Any

from ...models.clean_release import (
    ClassificationType,
    DistributionManifest,
    ManifestItem,
    ManifestSummary,
)


def _stable_hash_payload(candidate_id: str, policy_id: str, items: List[ManifestItem]) -> str:
    serialized = [
        {
            "path": item.path,
            "category": item.category,
            "classification": item.classification.value,
            "reason": item.reason,
            "checksum": item.checksum,
        }
        for item in sorted(items, key=lambda i: (i.path, i.category, i.classification.value, i.reason, i.checksum or ""))
    ]
    payload = {
        "candidate_id": candidate_id,
        "policy_id": policy_id,
        "items": serialized,
    }
    digest = hashlib.sha256(json.dumps(payload, ensure_ascii=False, sort_keys=True).encode("utf-8")).hexdigest()
    return digest


# [DEF:build_distribution_manifest:Function]
# @PURPOSE: Build DistributionManifest with deterministic hash and validated counters.
# @PRE: artifacts list contains normalized classification values.
# @POST: Returns DistributionManifest with summary counts matching items cardinality.
def build_distribution_manifest(
    manifest_id: str,
    candidate_id: str,
    policy_id: str,
    generated_by: str,
    artifacts: Iterable[Dict[str, Any]],
) -> DistributionManifest:
    items = [
        ManifestItem(
            path=a["path"],
            category=a["category"],
            classification=ClassificationType(a["classification"]),
            reason=a["reason"],
            checksum=a.get("checksum"),
        )
        for a in artifacts
    ]

    included_count = sum(1 for item in items if item.classification in {ClassificationType.REQUIRED_SYSTEM, ClassificationType.ALLOWED})
    excluded_count = sum(1 for item in items if item.classification == ClassificationType.EXCLUDED_PROHIBITED)
    prohibited_detected_count = excluded_count

    summary = ManifestSummary(
        included_count=included_count,
        excluded_count=excluded_count,
        prohibited_detected_count=prohibited_detected_count,
    )

    deterministic_hash = _stable_hash_payload(candidate_id, policy_id, items)

    return DistributionManifest(
        manifest_id=manifest_id,
        candidate_id=candidate_id,
        generated_at=datetime.now(timezone.utc),
        generated_by=generated_by,
        items=items,
        summary=summary,
        deterministic_hash=deterministic_hash,
    )
# [/DEF:build_distribution_manifest:Function]


# [DEF:build_manifest:Function]
# @PURPOSE: Legacy compatibility wrapper for old manifest builder import paths.
# @PRE: Same as build_distribution_manifest.
# @POST: Returns DistributionManifest produced by canonical builder.
def build_manifest(
    manifest_id: str,
    candidate_id: str,
    policy_id: str,
    generated_by: str,
    artifacts: Iterable[Dict[str, Any]],
) -> DistributionManifest:
    return build_distribution_manifest(
        manifest_id=manifest_id,
        candidate_id=candidate_id,
        policy_id=policy_id,
        generated_by=generated_by,
        artifacts=artifacts,
    )
# [/DEF:build_manifest:Function]
# [/DEF:backend.src.services.clean_release.manifest_builder:Module]
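The determinism invariant of `_stable_hash_payload` rests on two normalizations: items are sorted by a stable composite key, and the payload is serialized with `sort_keys=True` before hashing. A minimal standalone sketch of that scheme (`make_digest` and the sample items are illustrative, not the module's API):

```python
import hashlib
import json

def make_digest(candidate_id, policy_id, items):
    # Stable composite sort key: input order cannot affect the digest.
    ordered = sorted(items, key=lambda i: (i["path"], i["category"]))
    payload = {"candidate_id": candidate_id, "policy_id": policy_id, "items": ordered}
    # sort_keys=True removes dict-key order as a source of nondeterminism.
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode("utf-8")).hexdigest()

a = [{"path": "bin/app", "category": "binary"}, {"path": "docs/readme.md", "category": "doc"}]
b = list(reversed(a))  # same semantic set, different order
assert make_digest("cand-1", "policy-default", a) == make_digest("cand-1", "policy-default", b)
```

Equal semantic sets hash identically, while any change to `candidate_id`, `policy_id`, or item content yields a different digest.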
backend/src/services/clean_release/manifest_service.py (new file, 88 lines)
@@ -0,0 +1,88 @@
# [DEF:backend.src.services.clean_release.manifest_service:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, manifest, versioning, immutability, lifecycle
# @PURPOSE: Build immutable distribution manifests with deterministic digest and version increment.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.manifest_builder
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
# @PRE: Candidate exists and is PREPARED or MANIFEST_BUILT; artifacts are present.
# @POST: New immutable manifest is persisted with incremented version and deterministic digest.
# @INVARIANT: Existing manifests are never mutated.

from __future__ import annotations

from typing import Any, Dict, List

from ...models.clean_release import DistributionManifest
from .enums import CandidateStatus
from .manifest_builder import build_distribution_manifest
from .repository import CleanReleaseRepository


# [DEF:build_manifest_snapshot:Function]
# @PURPOSE: Create a new immutable manifest version for a candidate.
# @PRE: Candidate is prepared, artifacts are available, candidate_id is valid.
# @POST: Returns persisted DistributionManifest with monotonically incremented version.
def build_manifest_snapshot(
    repository: CleanReleaseRepository,
    candidate_id: str,
    created_by: str,
    policy_id: str = "policy-default",
) -> DistributionManifest:
    if not candidate_id or not candidate_id.strip():
        raise ValueError("candidate_id must be non-empty")
    if not created_by or not created_by.strip():
        raise ValueError("created_by must be non-empty")

    candidate = repository.get_candidate(candidate_id)
    if candidate is None:
        raise ValueError(f"candidate '{candidate_id}' not found")

    if candidate.status not in {CandidateStatus.PREPARED.value, CandidateStatus.MANIFEST_BUILT.value}:
        raise ValueError("candidate must be PREPARED or MANIFEST_BUILT to build manifest")

    artifacts = repository.get_artifacts_by_candidate(candidate_id)
    if not artifacts:
        raise ValueError("candidate artifacts are required to build manifest")

    existing = repository.get_manifests_by_candidate(candidate_id)
    for manifest in existing:
        if not manifest.immutable:
            raise ValueError("existing manifest immutability invariant violated")

    next_version = max((m.manifest_version for m in existing), default=0) + 1
    manifest_id = f"manifest-{candidate_id}-v{next_version}"

    classified_artifacts: List[Dict[str, Any]] = [
        {
            "path": artifact.path,
            "category": artifact.detected_category or "generic",
            "classification": "allowed",
            "reason": "artifact import",
            "checksum": artifact.sha256,
        }
        for artifact in artifacts
    ]

    manifest = build_distribution_manifest(
        manifest_id=manifest_id,
        candidate_id=candidate_id,
        policy_id=policy_id,
        generated_by=created_by,
        artifacts=classified_artifacts,
    )
    manifest.manifest_version = next_version
    manifest.source_snapshot_ref = candidate.source_snapshot_ref
    manifest.artifacts_digest = manifest.manifest_digest
    manifest.immutable = True
    repository.save_manifest(manifest)

    if candidate.status == CandidateStatus.PREPARED.value:
        candidate.transition_to(CandidateStatus.MANIFEST_BUILT)
        repository.save_candidate(candidate)

    return manifest
# [/DEF:build_manifest_snapshot:Function]

# [/DEF:backend.src.services.clean_release.manifest_service:Module]
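The version rule in `build_manifest_snapshot` is `max(existing, default=0) + 1`: the first manifest gets version 1, and numbering stays monotone even when versions were skipped. A sketch (the helper name is illustrative):

```python
def next_manifest_version(existing_versions):
    # default=0 makes the first manifest version 1.
    return max(existing_versions, default=0) + 1

assert next_manifest_version([]) == 1
assert next_manifest_version([1, 2, 3]) == 4
assert next_manifest_version([1, 5]) == 6  # gaps do not reset numbering
print(f"manifest-cand-1-v{next_manifest_version([1, 2])}")  # → manifest-cand-1-v3
```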
backend/src/services/clean_release/mappers.py (new file, 67 lines)
@@ -0,0 +1,67 @@
# [DEF:clean_release_mappers:Module]
# @TIER: STANDARD
# @PURPOSE: Map between domain entities (SQLAlchemy models) and DTOs.
# @LAYER: Application

from typing import List
from src.models.clean_release import (
    ReleaseCandidate, DistributionManifest, ComplianceRun,
    ComplianceStageRun, ComplianceViolation, ComplianceReport,
    CleanPolicySnapshot, SourceRegistrySnapshot, ApprovalDecision,
    PublicationRecord
)
from src.services.clean_release.dto import (
    CandidateDTO, ArtifactDTO, ManifestDTO, ComplianceRunDTO,
    ReportDTO
)
from src.services.clean_release.enums import (
    CandidateStatus, RunStatus, ComplianceDecision,
    ViolationSeverity, ViolationCategory
)

def map_candidate_to_dto(candidate: ReleaseCandidate) -> CandidateDTO:
    return CandidateDTO(
        id=candidate.id,
        version=candidate.version,
        source_snapshot_ref=candidate.source_snapshot_ref,
        build_id=candidate.build_id,
        created_at=candidate.created_at,
        created_by=candidate.created_by,
        status=CandidateStatus(candidate.status)
    )

def map_manifest_to_dto(manifest: DistributionManifest) -> ManifestDTO:
    return ManifestDTO(
        id=manifest.id,
        candidate_id=manifest.candidate_id,
        manifest_version=manifest.manifest_version,
        manifest_digest=manifest.manifest_digest,
        artifacts_digest=manifest.artifacts_digest,
        created_at=manifest.created_at,
        created_by=manifest.created_by,
        source_snapshot_ref=manifest.source_snapshot_ref,
        content_json=manifest.content_json or {}
    )

def map_run_to_dto(run: ComplianceRun) -> ComplianceRunDTO:
    return ComplianceRunDTO(
        run_id=run.id,
        candidate_id=run.candidate_id,
        status=RunStatus(run.status),
        final_status=ComplianceDecision(run.final_status) if run.final_status else None,
        task_id=run.task_id
    )

def map_report_to_dto(report: ComplianceReport) -> ReportDTO:
    # Note: ReportDTO in dto.py is a compact view
    return ReportDTO(
        report_id=report.id,
        candidate_id=report.candidate_id,
        final_status=ComplianceDecision(report.final_status),
        policy_version="unknown",  # Would need to resolve from run/snapshot
        manifest_digest="unknown",  # Would need to resolve from run/manifest
        violation_count=0,  # Would need to resolve from violations
        generated_at=report.generated_at
    )

# [/DEF:clean_release_mappers:Module]
backend/src/services/clean_release/policy_engine.py (new file, 152 lines)
@@ -0,0 +1,152 @@
# [DEF:backend.src.services.clean_release.policy_engine:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, policy, classification, source-isolation
# @PURPOSE: Evaluate artifact/source policies for enterprise clean profile with deterministic outcomes.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release.CleanProfilePolicy
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release.ResourceSourceRegistry
# @INVARIANT: Enterprise-clean policy always treats non-registry sources as violations.

from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Iterable, List, Tuple

from ...core.logger import belief_scope, logger
from ...models.clean_release import CleanPolicySnapshot, SourceRegistrySnapshot


@dataclass
class PolicyValidationResult:
    ok: bool
    blocking_reasons: List[str]


@dataclass
class SourceValidationResult:
    ok: bool
    violation: Dict | None


# [DEF:CleanPolicyEngine:Class]
# @PRE: Active policy exists and is internally consistent.
# @POST: Deterministic classification and source validation are available.
# @TEST_CONTRACT: CandidateEvaluationInput -> PolicyValidationResult|SourceValidationResult
# @TEST_SCENARIO: policy_valid -> Enterprise clean policy with matching registry returns ok=True
# @TEST_FIXTURE: policy_enterprise_clean -> file:backend/tests/fixtures/clean_release/fixtures_clean_release.json
# @TEST_EDGE: missing_registry_ref -> policy has empty registry_snapshot_id
# @TEST_EDGE: conflicting_registry -> policy registry ref does not match registry id
# @TEST_EDGE: external_endpoint -> endpoint not present in enabled internal registry entries
# @TEST_INVARIANT: deterministic_classification -> VERIFIED_BY: [policy_valid]
class CleanPolicyEngine:
    def __init__(self, policy: CleanPolicySnapshot, registry: SourceRegistrySnapshot):
        self.policy = policy
        self.registry = registry

    def validate_policy(self) -> PolicyValidationResult:
        with belief_scope("clean_policy_engine.validate_policy"):
            logger.reason("Validating enterprise-clean policy and internal registry consistency")
            reasons: List[str] = []

            # Snapshots are immutable and assumed active if resolved by facade
            if not self.policy.registry_snapshot_id.strip():
                reasons.append("Policy missing registry_snapshot_id")

            content = self.policy.content_json or {}
            profile = content.get("profile", "standard")

            if profile == "enterprise-clean":
                if not content.get("prohibited_artifact_categories"):
                    reasons.append("Enterprise policy requires prohibited artifact categories")
                if not content.get("external_source_forbidden"):
                    reasons.append("Enterprise policy requires external_source_forbidden=true")

            if self.registry.id != self.policy.registry_snapshot_id:
                reasons.append("Policy registry ref does not match provided registry")

            if not self.registry.allowed_hosts:
                reasons.append("Registry must contain allowed hosts")

            logger.reflect(f"Policy validation completed. blocking_reasons={len(reasons)}")
            return PolicyValidationResult(ok=len(reasons) == 0, blocking_reasons=reasons)

    def classify_artifact(self, artifact: Dict) -> str:
        category = (artifact.get("category") or "").strip()
        content = self.policy.content_json or {}

        required = content.get("required_system_categories", [])
        prohibited = content.get("prohibited_artifact_categories", [])

        if category in required:
            logger.reason(f"Artifact category '{category}' classified as required-system")
            return "required-system"
        if category in prohibited:
            logger.reason(f"Artifact category '{category}' classified as excluded-prohibited")
            return "excluded-prohibited"
        logger.reflect(f"Artifact category '{category}' classified as allowed")
        return "allowed"

    def validate_resource_source(self, endpoint: str) -> SourceValidationResult:
        with belief_scope("clean_policy_engine.validate_resource_source"):
            if not endpoint:
                logger.explore("Empty endpoint detected; treating as blocking external-source violation")
                return SourceValidationResult(
                    ok=False,
                    violation={
                        "category": "external-source",
                        "location": "<empty-endpoint>",
                        "remediation": "Replace with approved internal server",
                        "blocked_release": True,
                    },
                )

            # Normalize both sides so registry entries match regardless of case
            # or stray whitespace; comparing a lowered endpoint against raw
            # registry entries would silently miss mixed-case allowlist hosts.
            allowed_hosts = {host.strip().lower() for host in (self.registry.allowed_hosts or [])}
            normalized = endpoint.strip().lower()

            if normalized in allowed_hosts:
                logger.reason(f"Endpoint '{normalized}' is present in internal allowlist")
                return SourceValidationResult(ok=True, violation=None)

            logger.explore(f"Endpoint '{endpoint}' is outside internal allowlist")
            return SourceValidationResult(
                ok=False,
                violation={
                    "category": "external-source",
                    "location": endpoint,
                    "remediation": "Replace with approved internal server",
                    "blocked_release": True,
                },
            )

    def evaluate_candidate(self, artifacts: Iterable[Dict], sources: Iterable[str]) -> Tuple[List[Dict], List[Dict]]:
        with belief_scope("clean_policy_engine.evaluate_candidate"):
            logger.reason("Evaluating candidate artifacts and resource sources against enterprise policy")
            classified: List[Dict] = []
            violations: List[Dict] = []

            for artifact in artifacts:
                classification = self.classify_artifact(artifact)
                enriched = dict(artifact)
                enriched["classification"] = classification
                if classification == "excluded-prohibited":
                    violations.append(
                        {
                            "category": "data-purity",
                            "location": artifact.get("path", "<unknown-path>"),
                            "remediation": "Remove prohibited content",
                            "blocked_release": True,
                        }
                    )
                classified.append(enriched)

            for source in sources:
                source_result = self.validate_resource_source(source)
                if not source_result.ok and source_result.violation:
                    violations.append(source_result.violation)

            logger.reflect(
                f"Candidate evaluation finished. artifacts={len(classified)} violations={len(violations)}"
            )
            return classified, violations
# [/DEF:CleanPolicyEngine:Class]
# [/DEF:backend.src.services.clean_release.policy_engine:Module]
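`validate_resource_source` reduces to a set-membership check after normalization. A standalone sketch of that check — the helper and the hostnames below are illustrative, not the engine's API:

```python
def is_internal(endpoint, allowed_hosts):
    # Empty endpoints are treated as blocking external-source violations.
    if not endpoint:
        return False
    # Normalize both sides so case and stray whitespace cannot hide a match.
    allowlist = {host.strip().lower() for host in allowed_hosts}
    return endpoint.strip().lower() in allowlist

assert is_internal(" Registry.Corp.Local ", ["registry.corp.local"]) is True
assert is_internal("pypi.org", ["registry.corp.local"]) is False
assert is_internal("", ["registry.corp.local"]) is False
```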
backend/src/services/clean_release/policy_resolution_service.py (new file, 64 lines)
@@ -0,0 +1,64 @@
# [DEF:backend.src.services.clean_release.policy_resolution_service:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, policy, registry, trusted-resolution, immutable-snapshots
# @PURPOSE: Resolve trusted policy and registry snapshots from ConfigManager without runtime overrides.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.core.config_manager
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.exceptions
# @INVARIANT: Trusted snapshot resolution is based only on ConfigManager active identifiers.

from __future__ import annotations

from typing import Optional, Tuple

from ...models.clean_release import CleanPolicySnapshot, SourceRegistrySnapshot
from .exceptions import PolicyResolutionError
from .repository import CleanReleaseRepository


# [DEF:resolve_trusted_policy_snapshots:Function]
# @PURPOSE: Resolve immutable trusted policy and registry snapshots using active config IDs only.
# @PRE: ConfigManager provides active_policy_id and active_registry_id; repository contains referenced snapshots.
# @POST: Returns immutable policy and registry snapshots; runtime override attempts are rejected.
# @SIDE_EFFECT: None.
def resolve_trusted_policy_snapshots(
    *,
    config_manager,
    repository: CleanReleaseRepository,
    policy_id_override: Optional[str] = None,
    registry_id_override: Optional[str] = None,
) -> Tuple[CleanPolicySnapshot, SourceRegistrySnapshot]:
    if policy_id_override is not None or registry_id_override is not None:
        raise PolicyResolutionError("override attempt is forbidden for trusted policy resolution")

    config = config_manager.get_config()
    clean_release_settings = getattr(getattr(config, "settings", None), "clean_release", None)
    if clean_release_settings is None:
        raise PolicyResolutionError("clean_release settings are missing")

    policy_id = getattr(clean_release_settings, "active_policy_id", None)
    registry_id = getattr(clean_release_settings, "active_registry_id", None)

    if not policy_id:
        raise PolicyResolutionError("missing trusted profile: active_policy_id is not configured")
    if not registry_id:
        raise PolicyResolutionError("missing trusted registry: active_registry_id is not configured")

    policy_snapshot = repository.get_policy(policy_id)
    if policy_snapshot is None:
        raise PolicyResolutionError(f"trusted policy snapshot '{policy_id}' was not found")

    registry_snapshot = repository.get_registry(registry_id)
    if registry_snapshot is None:
        raise PolicyResolutionError(f"trusted registry snapshot '{registry_id}' was not found")

    if not bool(getattr(policy_snapshot, "immutable", False)):
        raise PolicyResolutionError("policy snapshot must be immutable")
    if not bool(getattr(registry_snapshot, "immutable", False)):
        raise PolicyResolutionError("registry snapshot must be immutable")

    return policy_snapshot, registry_snapshot
# [/DEF:resolve_trusted_policy_snapshots:Function]

# [/DEF:backend.src.services.clean_release.policy_resolution_service:Module]
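The trusted-resolution contract can be reduced to a guard that rejects overrides before any configuration is read; `resolve_ids`, the stand-in exception, and the `SimpleNamespace` settings below are illustrative only, not the service's API:

```python
from types import SimpleNamespace

class PolicyResolutionError(ValueError):
    """Stand-in for the module's exception type."""

def resolve_ids(settings, policy_override=None, registry_override=None):
    # Trusted resolution: any runtime override is rejected up front,
    # so callers cannot substitute an untrusted policy or registry.
    if policy_override is not None or registry_override is not None:
        raise PolicyResolutionError("override attempt is forbidden")
    policy_id = getattr(settings, "active_policy_id", None)
    registry_id = getattr(settings, "active_registry_id", None)
    if not policy_id or not registry_id:
        raise PolicyResolutionError("missing trusted identifiers")
    return policy_id, registry_id

settings = SimpleNamespace(active_policy_id="policy-1", active_registry_id="registry-1")
assert resolve_ids(settings) == ("policy-1", "registry-1")
```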
backend/src/services/clean_release/preparation_service.py (new file, 92 lines)
@@ -0,0 +1,92 @@
# [DEF:backend.src.services.clean_release.preparation_service:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, preparation, manifest, policy-evaluation
# @PURPOSE: Prepare release candidate by policy evaluation and deterministic manifest creation.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.policy_engine
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.manifest_builder
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @INVARIANT: Candidate preparation always persists manifest and candidate status deterministically.

from __future__ import annotations

from datetime import datetime, timezone
from typing import Dict, Iterable

from .manifest_builder import build_distribution_manifest
from .policy_engine import CleanPolicyEngine
from .repository import CleanReleaseRepository
from .enums import CandidateStatus


def prepare_candidate(
    repository: CleanReleaseRepository,
    candidate_id: str,
    artifacts: Iterable[Dict],
    sources: Iterable[str],
    operator_id: str,
) -> Dict:
    candidate = repository.get_candidate(candidate_id)
    if candidate is None:
        raise ValueError(f"Candidate not found: {candidate_id}")

    policy = repository.get_active_policy()
    if policy is None:
        raise ValueError("Active clean policy not found")

    registry = repository.get_registry(policy.registry_snapshot_id)
    if registry is None:
        raise ValueError("Registry not found for active policy")

    engine = CleanPolicyEngine(policy=policy, registry=registry)
    validation = engine.validate_policy()
    if not validation.ok:
        raise ValueError(f"Invalid policy: {validation.blocking_reasons}")

    classified, violations = engine.evaluate_candidate(artifacts=artifacts, sources=sources)

    manifest = build_distribution_manifest(
        manifest_id=f"manifest-{candidate_id}",
        candidate_id=candidate_id,
        policy_id=policy.policy_id,
        generated_by=operator_id,
        artifacts=classified,
    )
    repository.save_manifest(manifest)

    # Note: In the new model, BLOCKED is a ComplianceDecision, not a CandidateStatus.
    # CandidateStatus.PREPARED is the correct next state after preparation.
    candidate.transition_to(CandidateStatus.PREPARED)
    repository.save_candidate(candidate)

    status_value = candidate.status.value if hasattr(candidate.status, "value") else str(candidate.status)
    manifest_id_value = getattr(manifest, "manifest_id", None) or getattr(manifest, "id", "")
    return {
        "candidate_id": candidate_id,
        "status": status_value,
        "manifest_id": manifest_id_value,
        "violations": violations,
        "prepared_at": datetime.now(timezone.utc).isoformat(),
    }


# [DEF:prepare_candidate_legacy:Function]
# @PURPOSE: Legacy compatibility wrapper kept for migration period.
# @PRE: Same as prepare_candidate.
# @POST: Delegates to canonical prepare_candidate and preserves response shape.
def prepare_candidate_legacy(
    repository: CleanReleaseRepository,
    candidate_id: str,
    artifacts: Iterable[Dict],
    sources: Iterable[str],
    operator_id: str,
) -> Dict:
    return prepare_candidate(
        repository=repository,
        candidate_id=candidate_id,
        artifacts=artifacts,
        sources=sources,
        operator_id=operator_id,
    )
# [/DEF:prepare_candidate_legacy:Function]
# [/DEF:backend.src.services.clean_release.preparation_service:Module]
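In `evaluate_candidate`, every `excluded-prohibited` artifact contributes one blocking `data-purity` violation. A standalone sketch of that partition (the helper name and sample paths are illustrative):

```python
def collect_violations(classified):
    # Each prohibited artifact yields one blocking data-purity violation.
    return [
        {
            "category": "data-purity",
            "location": item.get("path", "<unknown-path>"),
            "blocked_release": True,
        }
        for item in classified
        if item.get("classification") == "excluded-prohibited"
    ]

items = [
    {"path": "dist/app.whl", "classification": "allowed"},
    {"path": "secrets/.env", "classification": "excluded-prohibited"},
]
violations = collect_violations(items)
assert len(violations) == 1
assert violations[0]["location"] == "secrets/.env"
```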
backend/src/services/clean_release/publication_service.py (new file, 173 lines)
@@ -0,0 +1,173 @@
# [DEF:backend.src.services.clean_release.publication_service:Module]
|
||||
# @TIER: CRITICAL
|
||||
# @SEMANTICS: clean-release, publication, revoke, gate, lifecycle
|
||||
# @PURPOSE: Enforce publication and revocation gates with append-only publication records.
|
||||
# @LAYER: Domain
|
||||
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
|
||||
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.approval_service
|
||||
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
|
||||
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.audit_service
|
||||
# @INVARIANT: Publication records are append-only snapshots; revoke mutates only publication status for targeted record.
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from datetime import datetime, timezone
|
||||
from typing import List
|
||||
from uuid import uuid4
|
||||
|
||||
from ...core.logger import belief_scope, logger
|
||||
from ...models.clean_release import PublicationRecord
|
||||
from .audit_service import audit_preparation
|
||||
from .enums import ApprovalDecisionType, CandidateStatus, PublicationStatus
|
||||
from .exceptions import PublicationGateError
|
||||
from .repository import CleanReleaseRepository
|
||||
|
||||
|
||||
# [DEF:_get_or_init_publications_store:Function]
|
||||
# @PURPOSE: Provide in-memory append-only publication storage.
|
||||
# @PRE: repository is initialized.
|
||||
# @POST: Returns publication list attached to repository.
|
||||
def _get_or_init_publications_store(repository: CleanReleaseRepository) -> List[PublicationRecord]:
|
||||
publications = getattr(repository, "publication_records", None)
|
||||
if publications is None:
|
||||
publications = []
|
||||
setattr(repository, "publication_records", publications)
|
||||
return publications
|
||||
# [/DEF:_get_or_init_publications_store:Function]
|
||||
|
||||
|
||||
# [DEF:_latest_publication_for_candidate:Function]
|
||||
# @PURPOSE: Resolve latest publication record for candidate.
|
||||
# @PRE: candidate_id is non-empty.
|
||||
# @POST: Returns latest record or None.
|
||||
def _latest_publication_for_candidate(
|
||||
repository: CleanReleaseRepository,
|
||||
candidate_id: str,
|
||||
) -> PublicationRecord | None:
|
||||
records = [item for item in _get_or_init_publications_store(repository) if item.candidate_id == candidate_id]
|
||||
if not records:
|
||||
return None
|
||||
return sorted(records, key=lambda item: item.published_at or datetime.min.replace(tzinfo=timezone.utc), reverse=True)[0]
|
||||
# [/DEF:_latest_publication_for_candidate:Function]
|
||||
|
||||
|
||||
# [DEF:_latest_approval_for_candidate:Function]
|
||||
# @PURPOSE: Resolve latest approval decision from repository decision store.
|
||||
# @PRE: candidate_id is non-empty.
|
||||
# @POST: Returns latest decision object or None.
|
||||
def _latest_approval_for_candidate(repository: CleanReleaseRepository, candidate_id: str):
|
||||
decisions = getattr(repository, "approval_decisions", [])
|
||||
scoped = [item for item in decisions if item.candidate_id == candidate_id]
|
||||
if not scoped:
|
||||
return None
|
||||
return sorted(scoped, key=lambda item: item.decided_at or datetime.min.replace(tzinfo=timezone.utc), reverse=True)[0]
|
||||
# [/DEF:_latest_approval_for_candidate:Function]


# [DEF:publish_candidate:Function]
# @PURPOSE: Create immutable publication record for approved candidate.
# @PRE: Candidate exists, report belongs to candidate, latest approval is APPROVED.
# @POST: New ACTIVE publication record is appended.
def publish_candidate(
    *,
    repository: CleanReleaseRepository,
    candidate_id: str,
    report_id: str,
    published_by: str,
    target_channel: str,
    publication_ref: str | None = None,
) -> PublicationRecord:
    with belief_scope("publication_service.publish_candidate"):
        logger.reason(f"[REASON] Evaluating publish gate candidate_id={candidate_id} report_id={report_id}")

        if not published_by or not published_by.strip():
            raise PublicationGateError("published_by must be non-empty")
        if not target_channel or not target_channel.strip():
            raise PublicationGateError("target_channel must be non-empty")

        candidate = repository.get_candidate(candidate_id)
        if candidate is None:
            raise PublicationGateError(f"candidate '{candidate_id}' not found")

        report = repository.get_report(report_id)
        if report is None:
            raise PublicationGateError(f"report '{report_id}' not found")
        if report.candidate_id != candidate_id:
            raise PublicationGateError("report belongs to another candidate")

        latest_approval = _latest_approval_for_candidate(repository, candidate_id)
        if latest_approval is None or latest_approval.decision != ApprovalDecisionType.APPROVED.value:
            raise PublicationGateError("publish requires APPROVED decision")

        latest_publication = _latest_publication_for_candidate(repository, candidate_id)
        if latest_publication is not None and latest_publication.status == PublicationStatus.ACTIVE.value:
            raise PublicationGateError("candidate already has active publication")

        if candidate.status == CandidateStatus.APPROVED.value:
            try:
                candidate.transition_to(CandidateStatus.PUBLISHED)
                repository.save_candidate(candidate)
            except Exception as exc:  # noqa: BLE001
                logger.explore(f"[EXPLORE] Candidate transition to PUBLISHED failed candidate_id={candidate_id}: {exc}")
                raise PublicationGateError(str(exc)) from exc

        record = PublicationRecord(
            id=f"pub-{uuid4()}",
            candidate_id=candidate_id,
            report_id=report_id,
            published_by=published_by,
            published_at=datetime.now(timezone.utc),
            target_channel=target_channel,
            publication_ref=publication_ref,
            status=PublicationStatus.ACTIVE.value,
        )
        _get_or_init_publications_store(repository).append(record)
        audit_preparation(candidate_id, "PUBLISHED", repository=repository, actor=published_by)
        logger.reflect(f"[REFLECT] Publication persisted candidate_id={candidate_id} publication_id={record.id}")
        return record
# [/DEF:publish_candidate:Function]
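The publish gate above checks its preconditions in a fixed order: input integrity, approval status, then publication idempotence. A minimal standalone sketch of that ordering (hypothetical names; the real service uses repository-backed models and `PublicationGateError`):

```python
# Simplified publish-gate sketch; entities are plain dicts instead of models.
class GateError(Exception):
    pass


def check_publish_gate(candidate, report, latest_approval, latest_publication):
    # 1. Input integrity: candidate must exist and the report must belong to it.
    if candidate is None:
        raise GateError("candidate not found")
    if report is None or report["candidate_id"] != candidate["id"]:
        raise GateError("report belongs to another candidate")
    # 2. Approval gate: only the most recent decision counts.
    if latest_approval is None or latest_approval["decision"] != "APPROVED":
        raise GateError("publish requires APPROVED decision")
    # 3. Idempotence gate: at most one ACTIVE publication per candidate.
    if latest_publication is not None and latest_publication["status"] == "ACTIVE":
        raise GateError("candidate already has active publication")
    return True
```

Ordering matters: a caller who sees "publish requires APPROVED decision" can trust that the candidate and report themselves were valid.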


# [DEF:revoke_publication:Function]
# @PURPOSE: Revoke existing publication record without deleting history.
# @PRE: publication_id exists in repository publication store.
# @POST: Target publication status becomes REVOKED and updated record is returned.
def revoke_publication(
    *,
    repository: CleanReleaseRepository,
    publication_id: str,
    revoked_by: str,
    comment: str | None = None,
) -> PublicationRecord:
    with belief_scope("publication_service.revoke_publication"):
        logger.reason(f"[REASON] Evaluating revoke gate publication_id={publication_id}")

        if not revoked_by or not revoked_by.strip():
            raise PublicationGateError("revoked_by must be non-empty")
        if not publication_id or not publication_id.strip():
            raise PublicationGateError("publication_id must be non-empty")

        records = _get_or_init_publications_store(repository)
        record = next((item for item in records if item.id == publication_id), None)
        if record is None:
            raise PublicationGateError(f"publication '{publication_id}' not found")
        if record.status == PublicationStatus.REVOKED.value:
            raise PublicationGateError("publication is already revoked")

        record.status = PublicationStatus.REVOKED.value
        candidate = repository.get_candidate(record.candidate_id)
        if candidate is not None:
            # Lifecycle remains publication-driven; republishing after a revoke is supported by a new publication record.
            repository.save_candidate(candidate)

        audit_preparation(
            record.candidate_id,
            f"REVOKED:{comment or ''}".strip(":"),
            repository=repository,
            actor=revoked_by,
        )
        logger.reflect(f"[REFLECT] Publication revoked publication_id={publication_id}")
        return record
# [/DEF:revoke_publication:Function]

# [/DEF:backend.src.services.clean_release.publication_service:Module]
backend/src/services/clean_release/report_builder.py (new file)
@@ -0,0 +1,68 @@
# [DEF:backend.src.services.clean_release.report_builder:Module]
# @TIER: CRITICAL
# @SEMANTICS: clean-release, report, audit, counters, violations
# @PURPOSE: Build and persist compliance reports with consistent counter invariants.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.repository
# @INVARIANT: blocking_violations_count never exceeds violations_count.
# @TEST_CONTRACT: ComplianceCheckRun,List[ComplianceViolation] -> ComplianceReport
# @TEST_FIXTURE: blocked_with_two_violations -> file:backend/tests/fixtures/clean_release/fixtures_clean_release.json
# @TEST_EDGE: empty_violations_for_blocked -> BLOCKED run with zero blocking violations raises ValueError
# @TEST_EDGE: counter_mismatch -> blocking counter cannot exceed total violations counter
# @TEST_EDGE: missing_operator_summary -> non-terminal run prevents report creation and summary generation
# @TEST_INVARIANT: blocking_count_le_total_count -> VERIFIED_BY: [counter_mismatch, empty_violations_for_blocked]

from __future__ import annotations

from datetime import datetime, timezone
from uuid import uuid4
from typing import List

from .enums import RunStatus, ComplianceDecision
from ...models.clean_release import ComplianceRun, ComplianceReport, ComplianceViolation
from .repository import CleanReleaseRepository


class ComplianceReportBuilder:
    def __init__(self, repository: CleanReleaseRepository):
        self.repository = repository

    def build_report_payload(self, check_run: ComplianceRun, violations: List[ComplianceViolation]) -> ComplianceReport:
        if check_run.status == RunStatus.RUNNING:
            raise ValueError("Cannot build report for non-terminal run")

        violations_count = len(violations)
        blocking_violations_count = sum(
            1
            for v in violations
            if bool(getattr(v, "blocked_release", False))
            or bool(getattr(v, "evidence_json", {}).get("blocked_release", False))
        )

        if check_run.final_status == ComplianceDecision.BLOCKED and blocking_violations_count <= 0:
            raise ValueError("Blocked run requires at least one blocking violation")

        summary = (
            "Compliance passed with no blocking violations"
            if check_run.final_status == ComplianceDecision.PASSED
            else f"Blocked with {blocking_violations_count} blocking violation(s)"
        )

        return ComplianceReport(
            id=f"CCR-{uuid4()}",
            run_id=check_run.id,
            candidate_id=check_run.candidate_id,
            generated_at=datetime.now(timezone.utc),
            final_status=check_run.final_status,
            summary_json={
                "operator_summary": summary,
                "violations_count": violations_count,
                "blocking_violations_count": blocking_violations_count,
            },
            immutable=True,
        )

    def persist_report(self, report: ComplianceReport) -> ComplianceReport:
        return self.repository.save_report(report)
# [/DEF:backend.src.services.clean_release.report_builder:Module]
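The builder's counter invariant (blocking count never exceeds the total, and a BLOCKED run must have at least one blocking violation) can be exercised in isolation. A sketch with violations as plain dicts rather than ORM models (field names mirror the module but are assumptions here):

```python
# Simplified counter logic from ComplianceReportBuilder.
def count_blocking(violations):
    # A violation blocks the release if either its flag or its evidence says so.
    return sum(
        1
        for v in violations
        if v.get("blocked_release", False)
        or v.get("evidence_json", {}).get("blocked_release", False)
    )


def summarize(final_status, violations):
    blocking = count_blocking(violations)
    if final_status == "BLOCKED" and blocking <= 0:
        raise ValueError("Blocked run requires at least one blocking violation")
    # Invariant: the blocking count can never exceed the total count,
    # because every blocking violation is also counted in the total.
    assert blocking <= len(violations)
    return {"violations_count": len(violations), "blocking_violations_count": blocking}
```

The invariant holds by construction, which is why the module's `@TEST_INVARIANT` tag points at the two edge-case tests rather than a separate proof.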
backend/src/services/clean_release/repositories/__init__.py (new file)
@@ -0,0 +1,28 @@
# [DEF:clean_release_repositories:Module]
# @TIER: STANDARD
# @PURPOSE: Export all clean release repositories.

from .candidate_repository import CandidateRepository
from .artifact_repository import ArtifactRepository
from .manifest_repository import ManifestRepository
from .policy_repository import PolicyRepository
from .compliance_repository import ComplianceRepository
from .report_repository import ReportRepository
from .approval_repository import ApprovalRepository
from .publication_repository import PublicationRepository
from .audit_repository import AuditRepository, CleanReleaseAuditLog

__all__ = [
    "CandidateRepository",
    "ArtifactRepository",
    "ManifestRepository",
    "PolicyRepository",
    "ComplianceRepository",
    "ReportRepository",
    "ApprovalRepository",
    "PublicationRepository",
    "AuditRepository",
    "CleanReleaseAuditLog",
]

# [/DEF:clean_release_repositories:Module]
@@ -0,0 +1,53 @@

# [DEF:approval_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query approval decisions.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import ApprovalDecision
from src.core.logger import belief_scope


class ApprovalRepository:
    """
    @PURPOSE: Encapsulates database operations for ApprovalDecision.
    """

    def __init__(self, db: Session):
        self.db = db

    def save(self, decision: ApprovalDecision) -> ApprovalDecision:
        """
        @PURPOSE: Persist an approval decision.
        @POST: Decision is committed and refreshed.
        """
        with belief_scope("ApprovalRepository.save"):
            self.db.add(decision)
            self.db.commit()
            self.db.refresh(decision)
            return decision

    def get_by_id(self, decision_id: str) -> Optional[ApprovalDecision]:
        """
        @PURPOSE: Retrieve a decision by ID.
        """
        with belief_scope("ApprovalRepository.get_by_id"):
            return self.db.query(ApprovalDecision).filter(ApprovalDecision.id == decision_id).first()

    def get_latest_for_candidate(self, candidate_id: str) -> Optional[ApprovalDecision]:
        """
        @PURPOSE: Retrieve the latest decision for a candidate.
        """
        with belief_scope("ApprovalRepository.get_latest_for_candidate"):
            return (
                self.db.query(ApprovalDecision)
                .filter(ApprovalDecision.candidate_id == candidate_id)
                .order_by(ApprovalDecision.decided_at.desc())
                .first()
            )

    def list_by_candidate(self, candidate_id: str) -> List[ApprovalDecision]:
        """
        @PURPOSE: List all decisions for a specific candidate.
        """
        with belief_scope("ApprovalRepository.list_by_candidate"):
            return self.db.query(ApprovalDecision).filter(ApprovalDecision.candidate_id == candidate_id).all()

# [/DEF:approval_repository:Module]
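`get_latest_for_candidate` defines "latest" as the maximum `decided_at` timestamp, which is what the publish gate relies on. The same semantics in plain Python, with decisions as dicts (hypothetical field names mirroring the model):

```python
from datetime import datetime


def latest_for_candidate(decisions, candidate_id):
    # Filter to one candidate, then pick the newest decision by timestamp;
    # default=None mirrors .first() returning None for an empty result.
    mine = [d for d in decisions if d["candidate_id"] == candidate_id]
    return max(mine, key=lambda d: d["decided_at"], default=None)
```

Only the newest decision matters: an earlier REJECTED followed by APPROVED lets the candidate through, and vice versa.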
@@ -0,0 +1,54 @@

# [DEF:artifact_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query candidate artifacts.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import CandidateArtifact
from src.core.logger import belief_scope


class ArtifactRepository:
    """
    @PURPOSE: Encapsulates database operations for CandidateArtifact.
    """

    def __init__(self, db: Session):
        self.db = db

    def save(self, artifact: CandidateArtifact) -> CandidateArtifact:
        """
        @PURPOSE: Persist an artifact.
        @POST: Artifact is committed and refreshed.
        """
        with belief_scope("ArtifactRepository.save"):
            self.db.add(artifact)
            self.db.commit()
            self.db.refresh(artifact)
            return artifact

    def save_all(self, artifacts: List[CandidateArtifact]) -> List[CandidateArtifact]:
        """
        @PURPOSE: Persist multiple artifacts in a single transaction.
        """
        with belief_scope("ArtifactRepository.save_all"):
            self.db.add_all(artifacts)
            self.db.commit()
            for artifact in artifacts:
                self.db.refresh(artifact)
            return artifacts

    def get_by_id(self, artifact_id: str) -> Optional[CandidateArtifact]:
        """
        @PURPOSE: Retrieve an artifact by ID.
        """
        with belief_scope("ArtifactRepository.get_by_id"):
            return self.db.query(CandidateArtifact).filter(CandidateArtifact.id == artifact_id).first()

    def list_by_candidate(self, candidate_id: str) -> List[CandidateArtifact]:
        """
        @PURPOSE: List all artifacts for a specific candidate.
        """
        with belief_scope("ArtifactRepository.list_by_candidate"):
            return self.db.query(CandidateArtifact).filter(CandidateArtifact.candidate_id == candidate_id).all()

# [/DEF:artifact_repository:Module]
@@ -0,0 +1,46 @@

# [DEF:audit_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query audit logs for clean release operations.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.core.logger import belief_scope

from src.models.clean_release import CleanReleaseAuditLog


class AuditRepository:
    """
    @PURPOSE: Encapsulates database operations for CleanReleaseAuditLog.
    """

    def __init__(self, db: Session):
        self.db = db

    def log(self, action: str, actor: str, candidate_id: Optional[str] = None, details: Optional[dict] = None) -> CleanReleaseAuditLog:
        """
        @PURPOSE: Create an audit log entry.
        """
        with belief_scope("AuditRepository.log"):
            entry = CleanReleaseAuditLog(
                action=action,
                actor=actor,
                candidate_id=candidate_id,
                details_json=details or {},
            )
            self.db.add(entry)
            self.db.commit()
            self.db.refresh(entry)
            return entry

    def list_by_candidate(self, candidate_id: str) -> List[CleanReleaseAuditLog]:
        """
        @PURPOSE: List all audit entries for a specific candidate.
        """
        with belief_scope("AuditRepository.list_by_candidate"):
            return self.db.query(CleanReleaseAuditLog).filter(CleanReleaseAuditLog.candidate_id == candidate_id).all()

# [/DEF:audit_repository:Module]
@@ -0,0 +1,47 @@

# [DEF:candidate_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query release candidates.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import ReleaseCandidate
from src.core.logger import belief_scope


class CandidateRepository:
    """
    @PURPOSE: Encapsulates database operations for ReleaseCandidate.
    """

    def __init__(self, db: Session):
        self.db = db

    def save(self, candidate: ReleaseCandidate) -> ReleaseCandidate:
        """
        @PURPOSE: Persist a release candidate.
        @POST: Candidate is committed and refreshed.
        """
        with belief_scope("CandidateRepository.save"):
            # [REASON] add() within the session handles both create and update here.
            # Note: In a real implementation, we might want to use a separate DB model
            # if the domain model differs significantly from the DB schema.
            # For now, we assume the domain model is compatible with SQLAlchemy Base if registered.
            self.db.add(candidate)
            self.db.commit()
            self.db.refresh(candidate)
            return candidate

    def get_by_id(self, candidate_id: str) -> Optional[ReleaseCandidate]:
        """
        @PURPOSE: Retrieve a candidate by ID.
        """
        with belief_scope("CandidateRepository.get_by_id"):
            return self.db.query(ReleaseCandidate).filter(ReleaseCandidate.id == candidate_id).first()

    def list_all(self) -> List[ReleaseCandidate]:
        """
        @PURPOSE: List all candidates.
        """
        with belief_scope("CandidateRepository.list_all"):
            return self.db.query(ReleaseCandidate).all()

# [/DEF:candidate_repository:Module]
@@ -0,0 +1,87 @@

# [DEF:compliance_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query compliance runs, stage runs, and violations.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import ComplianceRun, ComplianceStageRun, ComplianceViolation
from src.core.logger import belief_scope


class ComplianceRepository:
    """
    @PURPOSE: Encapsulates database operations for Compliance execution records.
    """

    def __init__(self, db: Session):
        self.db = db

    def save_run(self, run: ComplianceRun) -> ComplianceRun:
        """
        @PURPOSE: Persist a compliance run.
        """
        with belief_scope("ComplianceRepository.save_run"):
            self.db.add(run)
            self.db.commit()
            self.db.refresh(run)
            return run

    def get_run(self, run_id: str) -> Optional[ComplianceRun]:
        """
        @PURPOSE: Retrieve a compliance run by ID.
        """
        with belief_scope("ComplianceRepository.get_run"):
            return self.db.query(ComplianceRun).filter(ComplianceRun.id == run_id).first()

    def list_runs_by_candidate(self, candidate_id: str) -> List[ComplianceRun]:
        """
        @PURPOSE: List all runs for a specific candidate.
        """
        with belief_scope("ComplianceRepository.list_runs_by_candidate"):
            return self.db.query(ComplianceRun).filter(ComplianceRun.candidate_id == candidate_id).all()

    def save_stage_run(self, stage_run: ComplianceStageRun) -> ComplianceStageRun:
        """
        @PURPOSE: Persist a stage execution record.
        """
        with belief_scope("ComplianceRepository.save_stage_run"):
            self.db.add(stage_run)
            self.db.commit()
            self.db.refresh(stage_run)
            return stage_run

    def list_stages_by_run(self, run_id: str) -> List[ComplianceStageRun]:
        """
        @PURPOSE: List all stage runs for a specific compliance run.
        """
        with belief_scope("ComplianceRepository.list_stages_by_run"):
            return self.db.query(ComplianceStageRun).filter(ComplianceStageRun.run_id == run_id).all()

    def save_violation(self, violation: ComplianceViolation) -> ComplianceViolation:
        """
        @PURPOSE: Persist a compliance violation.
        """
        with belief_scope("ComplianceRepository.save_violation"):
            self.db.add(violation)
            self.db.commit()
            self.db.refresh(violation)
            return violation

    def save_violations(self, violations: List[ComplianceViolation]) -> List[ComplianceViolation]:
        """
        @PURPOSE: Persist multiple violations.
        """
        with belief_scope("ComplianceRepository.save_violations"):
            self.db.add_all(violations)
            self.db.commit()
            for v in violations:
                self.db.refresh(v)
            return violations

    def list_violations_by_run(self, run_id: str) -> List[ComplianceViolation]:
        """
        @PURPOSE: List all violations for a specific compliance run.
        """
        with belief_scope("ComplianceRepository.list_violations_by_run"):
            return self.db.query(ComplianceViolation).filter(ComplianceViolation.run_id == run_id).all()

# [/DEF:compliance_repository:Module]
@@ -0,0 +1,53 @@

# [DEF:manifest_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query distribution manifests.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import DistributionManifest
from src.core.logger import belief_scope


class ManifestRepository:
    """
    @PURPOSE: Encapsulates database operations for DistributionManifest.
    """

    def __init__(self, db: Session):
        self.db = db

    def save(self, manifest: DistributionManifest) -> DistributionManifest:
        """
        @PURPOSE: Persist a manifest.
        @POST: Manifest is committed and refreshed.
        """
        with belief_scope("ManifestRepository.save"):
            self.db.add(manifest)
            self.db.commit()
            self.db.refresh(manifest)
            return manifest

    def get_by_id(self, manifest_id: str) -> Optional[DistributionManifest]:
        """
        @PURPOSE: Retrieve a manifest by ID.
        """
        with belief_scope("ManifestRepository.get_by_id"):
            return self.db.query(DistributionManifest).filter(DistributionManifest.id == manifest_id).first()

    def get_latest_for_candidate(self, candidate_id: str) -> Optional[DistributionManifest]:
        """
        @PURPOSE: Retrieve the latest manifest for a candidate.
        """
        with belief_scope("ManifestRepository.get_latest_for_candidate"):
            return (
                self.db.query(DistributionManifest)
                .filter(DistributionManifest.candidate_id == candidate_id)
                .order_by(DistributionManifest.manifest_version.desc())
                .first()
            )

    def list_by_candidate(self, candidate_id: str) -> List[DistributionManifest]:
        """
        @PURPOSE: List all manifests for a specific candidate.
        """
        with belief_scope("ManifestRepository.list_by_candidate"):
            return self.db.query(DistributionManifest).filter(DistributionManifest.candidate_id == candidate_id).all()

# [/DEF:manifest_repository:Module]
@@ -0,0 +1,52 @@

# [DEF:policy_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query policy and registry snapshots.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import CleanPolicySnapshot, SourceRegistrySnapshot
from src.core.logger import belief_scope


class PolicyRepository:
    """
    @PURPOSE: Encapsulates database operations for Policy and Registry snapshots.
    """

    def __init__(self, db: Session):
        self.db = db

    def save_policy_snapshot(self, snapshot: CleanPolicySnapshot) -> CleanPolicySnapshot:
        """
        @PURPOSE: Persist a policy snapshot.
        """
        with belief_scope("PolicyRepository.save_policy_snapshot"):
            self.db.add(snapshot)
            self.db.commit()
            self.db.refresh(snapshot)
            return snapshot

    def get_policy_snapshot(self, snapshot_id: str) -> Optional[CleanPolicySnapshot]:
        """
        @PURPOSE: Retrieve a policy snapshot by ID.
        """
        with belief_scope("PolicyRepository.get_policy_snapshot"):
            return self.db.query(CleanPolicySnapshot).filter(CleanPolicySnapshot.id == snapshot_id).first()

    def save_registry_snapshot(self, snapshot: SourceRegistrySnapshot) -> SourceRegistrySnapshot:
        """
        @PURPOSE: Persist a registry snapshot.
        """
        with belief_scope("PolicyRepository.save_registry_snapshot"):
            self.db.add(snapshot)
            self.db.commit()
            self.db.refresh(snapshot)
            return snapshot

    def get_registry_snapshot(self, snapshot_id: str) -> Optional[SourceRegistrySnapshot]:
        """
        @PURPOSE: Retrieve a registry snapshot by ID.
        """
        with belief_scope("PolicyRepository.get_registry_snapshot"):
            return self.db.query(SourceRegistrySnapshot).filter(SourceRegistrySnapshot.id == snapshot_id).first()

# [/DEF:policy_repository:Module]
@@ -0,0 +1,53 @@

# [DEF:publication_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query publication records.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import PublicationRecord
from src.core.logger import belief_scope


class PublicationRepository:
    """
    @PURPOSE: Encapsulates database operations for PublicationRecord.
    """

    def __init__(self, db: Session):
        self.db = db

    def save(self, record: PublicationRecord) -> PublicationRecord:
        """
        @PURPOSE: Persist a publication record.
        @POST: Record is committed and refreshed.
        """
        with belief_scope("PublicationRepository.save"):
            self.db.add(record)
            self.db.commit()
            self.db.refresh(record)
            return record

    def get_by_id(self, record_id: str) -> Optional[PublicationRecord]:
        """
        @PURPOSE: Retrieve a record by ID.
        """
        with belief_scope("PublicationRepository.get_by_id"):
            return self.db.query(PublicationRecord).filter(PublicationRecord.id == record_id).first()

    def get_latest_for_candidate(self, candidate_id: str) -> Optional[PublicationRecord]:
        """
        @PURPOSE: Retrieve the latest record for a candidate.
        """
        with belief_scope("PublicationRepository.get_latest_for_candidate"):
            return (
                self.db.query(PublicationRecord)
                .filter(PublicationRecord.candidate_id == candidate_id)
                .order_by(PublicationRecord.published_at.desc())
                .first()
            )

    def list_by_candidate(self, candidate_id: str) -> List[PublicationRecord]:
        """
        @PURPOSE: List all records for a specific candidate.
        """
        with belief_scope("PublicationRepository.list_by_candidate"):
            return self.db.query(PublicationRecord).filter(PublicationRecord.candidate_id == candidate_id).all()

# [/DEF:publication_repository:Module]
@@ -0,0 +1,50 @@

# [DEF:report_repository:Module]
# @TIER: STANDARD
# @PURPOSE: Persist and query compliance reports.
# @LAYER: Infra

from typing import Optional, List
from sqlalchemy.orm import Session
from src.models.clean_release import ComplianceReport
from src.core.logger import belief_scope


class ReportRepository:
    """
    @PURPOSE: Encapsulates database operations for ComplianceReport.
    """

    def __init__(self, db: Session):
        self.db = db

    def save(self, report: ComplianceReport) -> ComplianceReport:
        """
        @PURPOSE: Persist a compliance report.
        @POST: Report is committed and refreshed.
        """
        with belief_scope("ReportRepository.save"):
            self.db.add(report)
            self.db.commit()
            self.db.refresh(report)
            return report

    def get_by_id(self, report_id: str) -> Optional[ComplianceReport]:
        """
        @PURPOSE: Retrieve a report by ID.
        """
        with belief_scope("ReportRepository.get_by_id"):
            return self.db.query(ComplianceReport).filter(ComplianceReport.id == report_id).first()

    def get_by_run(self, run_id: str) -> Optional[ComplianceReport]:
        """
        @PURPOSE: Retrieve a report for a specific compliance run.
        """
        with belief_scope("ReportRepository.get_by_run"):
            return self.db.query(ComplianceReport).filter(ComplianceReport.run_id == run_id).first()

    def list_by_candidate(self, candidate_id: str) -> List[ComplianceReport]:
        """
        @PURPOSE: List all reports for a specific candidate.
        """
        with belief_scope("ReportRepository.list_by_candidate"):
            return self.db.query(ComplianceReport).filter(ComplianceReport.candidate_id == candidate_id).all()

# [/DEF:report_repository:Module]
backend/src/services/clean_release/repository.py (new file)
@@ -0,0 +1,124 @@
# [DEF:backend.src.services.clean_release.repository:Module]
|
||||
# @TIER: STANDARD
|
||||
# @SEMANTICS: clean-release, repository, persistence, in-memory
|
||||
# @PURPOSE: Provide repository adapter for clean release entities with deterministic access methods.
|
||||
# @LAYER: Infra
|
||||
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
|
||||
# @INVARIANT: Repository operations are side-effect free outside explicit save/update calls.
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
from dataclasses import dataclass, field
|
||||
from typing import Any, Dict, List, Optional
|
||||
|
||||
from ...models.clean_release import (
|
||||
CleanPolicySnapshot,
|
||||
ComplianceRun,
|
||||
ComplianceReport,
|
||||
ComplianceStageRun,
|
||||
ComplianceViolation,
|
||||
DistributionManifest,
|
||||
ReleaseCandidate,
|
||||
SourceRegistrySnapshot,
|
||||
)
|
||||
|
||||
|
||||
# [DEF:CleanReleaseRepository:Class]
|
||||
# @PURPOSE: Data access object for clean release lifecycle.
|
||||
@dataclass
|
||||
class CleanReleaseRepository:
|
||||
candidates: Dict[str, ReleaseCandidate] = field(default_factory=dict)
|
||||
policies: Dict[str, CleanPolicySnapshot] = field(default_factory=dict)
|
||||
registries: Dict[str, SourceRegistrySnapshot] = field(default_factory=dict)
|
||||
artifacts: Dict[str, object] = field(default_factory=dict)
|
||||
manifests: Dict[str, DistributionManifest] = field(default_factory=dict)
|
||||
check_runs: Dict[str, ComplianceRun] = field(default_factory=dict)
|
||||
stage_runs: Dict[str, ComplianceStageRun] = field(default_factory=dict)
|
||||
reports: Dict[str, ComplianceReport] = field(default_factory=dict)
|
||||
violations: Dict[str, ComplianceViolation] = field(default_factory=dict)
|
||||
audit_events: List[Dict[str, Any]] = field(default_factory=list)
|
||||
|
||||
def save_candidate(self, candidate: ReleaseCandidate) -> ReleaseCandidate:
|
||||
self.candidates[candidate.id] = candidate
|
||||
return candidate
|
||||
|
||||
def get_candidate(self, candidate_id: str) -> Optional[ReleaseCandidate]:
|
||||
return self.candidates.get(candidate_id)
|
||||
|
||||
def save_policy(self, policy: CleanPolicySnapshot) -> CleanPolicySnapshot:
|
||||
self.policies[policy.id] = policy
|
||||
return policy
|
||||
|
||||
def get_policy(self, policy_id: str) -> Optional[CleanPolicySnapshot]:
|
||||
return self.policies.get(policy_id)
|
||||
|
||||
def get_active_policy(self) -> Optional[CleanPolicySnapshot]:
|
||||
# In-memory repo doesn't track 'active' flag on snapshot,
|
||||
# this should be resolved by facade using ConfigManager.
|
||||
return next(iter(self.policies.values()), None)
|
||||
|
||||
def save_registry(self, registry: SourceRegistrySnapshot) -> SourceRegistrySnapshot:
|
||||
self.registries[registry.id] = registry
|
||||
return registry
|
||||
|
||||
def get_registry(self, registry_id: str) -> Optional[SourceRegistrySnapshot]:
|
||||
return self.registries.get(registry_id)
|
||||
|
    def save_artifact(self, artifact) -> object:
        self.artifacts[artifact.id] = artifact
        return artifact

    def get_artifacts_by_candidate(self, candidate_id: str) -> List[object]:
        return [a for a in self.artifacts.values() if a.candidate_id == candidate_id]

    def save_manifest(self, manifest: DistributionManifest) -> DistributionManifest:
        self.manifests[manifest.id] = manifest
        return manifest

    def get_manifest(self, manifest_id: str) -> Optional[DistributionManifest]:
        return self.manifests.get(manifest_id)

    def save_distribution_manifest(self, manifest: DistributionManifest) -> DistributionManifest:
        return self.save_manifest(manifest)

    def get_distribution_manifest(self, manifest_id: str) -> Optional[DistributionManifest]:
        return self.get_manifest(manifest_id)

    def save_check_run(self, check_run: ComplianceRun) -> ComplianceRun:
        self.check_runs[check_run.id] = check_run
        return check_run

    def get_check_run(self, check_run_id: str) -> Optional[ComplianceRun]:
        return self.check_runs.get(check_run_id)

    def save_compliance_run(self, run: ComplianceRun) -> ComplianceRun:
        return self.save_check_run(run)

    def get_compliance_run(self, run_id: str) -> Optional[ComplianceRun]:
        return self.get_check_run(run_id)

    def save_report(self, report: ComplianceReport) -> ComplianceReport:
        existing = self.reports.get(report.id)
        if existing is not None:
            raise ValueError(f"immutable report snapshot already exists for id={report.id}")
        self.reports[report.id] = report
        return report

    def get_report(self, report_id: str) -> Optional[ComplianceReport]:
        return self.reports.get(report_id)

    def save_violation(self, violation: ComplianceViolation) -> ComplianceViolation:
        self.violations[violation.id] = violation
        return violation

    def get_violations_by_run(self, run_id: str) -> List[ComplianceViolation]:
        return [v for v in self.violations.values() if v.run_id == run_id]

    def get_manifests_by_candidate(self, candidate_id: str) -> List[DistributionManifest]:
        return [m for m in self.manifests.values() if m.candidate_id == candidate_id]

    def clear_history(self) -> None:
        self.check_runs.clear()
        self.reports.clear()
        self.violations.clear()
# [/DEF:CleanReleaseRepository:Class]
# [/DEF:backend.src.services.clean_release.repository:Module]
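The `save_report` method above enforces write-once report snapshots: a second save under an existing id raises instead of overwriting. A minimal sketch of that rule, using a hypothetical dict-backed stand-in for the project's repository and `ComplianceReport` model:

```python
# Hypothetical stand-in for CleanReleaseRepository.save_report; the real
# class and ComplianceReport model live in the project codebase.
class ReportStore:
    def __init__(self) -> None:
        self.reports: dict = {}

    def save_report(self, report_id: str, report: dict) -> dict:
        # Mirrors the write-once rule: a second save under the same id
        # must fail so report snapshots stay immutable.
        if self.reports.get(report_id) is not None:
            raise ValueError(f"immutable report snapshot already exists for id={report_id}")
        self.reports[report_id] = report
        return report


store = ReportStore()
store.save_report("rep-1", {"status": "PASSED"})
try:
    store.save_report("rep-1", {"status": "BLOCKED"})
except ValueError as exc:
    print(exc)
```

The stored snapshot is never replaced, so downstream consumers can treat a report id as a stable, immutable reference.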
33	backend/src/services/clean_release/source_isolation.py	Normal file
@@ -0,0 +1,33 @@
# [DEF:backend.src.services.clean_release.source_isolation:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, source-isolation, internal-only, validation
# @PURPOSE: Validate that all resource endpoints belong to the approved internal source registry.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release.ResourceSourceRegistry
# @INVARIANT: Any endpoint outside enabled registry entries is treated as external-source violation.

from __future__ import annotations

from typing import Dict, Iterable, List

from ...models.clean_release import ResourceSourceRegistry


def validate_internal_sources(registry: ResourceSourceRegistry, endpoints: Iterable[str]) -> Dict:
    allowed_hosts = {entry.host.strip().lower() for entry in registry.entries if entry.enabled}
    violations: List[Dict] = []

    for endpoint in endpoints:
        normalized = (endpoint or "").strip().lower()
        if not normalized or normalized not in allowed_hosts:
            violations.append(
                {
                    "category": "external-source",
                    "location": endpoint or "<empty-endpoint>",
                    "remediation": "Replace with approved internal server",
                    "blocked_release": True,
                }
            )

    return {"ok": len(violations) == 0, "violations": violations}
# [/DEF:backend.src.services.clean_release.source_isolation:Module]
80	backend/src/services/clean_release/stages/__init__.py	Normal file
@@ -0,0 +1,80 @@
# [DEF:backend.src.services.clean_release.stages:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, compliance, stages, state-machine
# @PURPOSE: Define compliance stage order and helper functions for deterministic run-state evaluation.
# @LAYER: Domain
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
# @INVARIANT: Stage order remains deterministic for all compliance runs.

from __future__ import annotations

from typing import Dict, Iterable, List

from ..enums import ComplianceDecision, ComplianceStageName
from ....models.clean_release import ComplianceStageRun
from .base import ComplianceStage
from .data_purity import DataPurityStage
from .internal_sources_only import InternalSourcesOnlyStage
from .manifest_consistency import ManifestConsistencyStage
from .no_external_endpoints import NoExternalEndpointsStage

MANDATORY_STAGE_ORDER: List[ComplianceStageName] = [
    ComplianceStageName.DATA_PURITY,
    ComplianceStageName.INTERNAL_SOURCES_ONLY,
    ComplianceStageName.NO_EXTERNAL_ENDPOINTS,
    ComplianceStageName.MANIFEST_CONSISTENCY,
]


# [DEF:build_default_stages:Function]
# @PURPOSE: Build default deterministic stage pipeline implementation order.
# @PRE: None.
# @POST: Returns stage instances in mandatory execution order.
def build_default_stages() -> List[ComplianceStage]:
    return [
        DataPurityStage(),
        InternalSourcesOnlyStage(),
        NoExternalEndpointsStage(),
        ManifestConsistencyStage(),
    ]
# [/DEF:build_default_stages:Function]


# [DEF:stage_result_map:Function]
# @PURPOSE: Convert stage result list to dictionary by stage name.
# @PRE: stage_results may be empty or contain unique stage names.
# @POST: Returns stage->status dictionary for downstream evaluation.
def stage_result_map(stage_results: Iterable[ComplianceStageRun]) -> Dict[ComplianceStageName, ComplianceDecision]:
    return {ComplianceStageName(result.stage_name): ComplianceDecision(result.decision) for result in stage_results if result.decision}
# [/DEF:stage_result_map:Function]


# [DEF:missing_mandatory_stages:Function]
# @PURPOSE: Identify mandatory stages that are absent from run results.
# @PRE: stage_status_map contains zero or more known stage statuses.
# @POST: Returns ordered list of missing mandatory stages.
def missing_mandatory_stages(stage_status_map: Dict[ComplianceStageName, ComplianceDecision]) -> List[ComplianceStageName]:
    return [stage for stage in MANDATORY_STAGE_ORDER if stage not in stage_status_map]
# [/DEF:missing_mandatory_stages:Function]


# [DEF:derive_final_status:Function]
# @PURPOSE: Derive final run status from stage results with deterministic blocking behavior.
# @PRE: Stage statuses correspond to compliance checks.
# @POST: Returns one of PASSED/BLOCKED/ERROR according to mandatory stage outcomes.
def derive_final_status(stage_results: Iterable[ComplianceStageRun]) -> ComplianceDecision:
    status_map = stage_result_map(stage_results)
    missing = missing_mandatory_stages(status_map)
    if missing:
        return ComplianceDecision.ERROR

    for stage in MANDATORY_STAGE_ORDER:
        decision = status_map.get(stage)
        if decision == ComplianceDecision.ERROR:
            return ComplianceDecision.ERROR
        if decision == ComplianceDecision.BLOCKED:
            return ComplianceDecision.BLOCKED

    return ComplianceDecision.PASSED
# [/DEF:derive_final_status:Function]
# [/DEF:backend.src.services.clean_release.stages:Module]
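The derivation rule in `derive_final_status` is: a missing mandatory stage yields ERROR, otherwise the first non-PASSED stage in mandatory order decides the run. A standalone sketch with a hypothetical `Decision` enum and plain stage-name strings standing in for the project's `ComplianceDecision` and `ComplianceStageName`:

```python
from enum import Enum
from typing import Dict, List


# Hypothetical stand-in for the project's ComplianceDecision enum.
class Decision(str, Enum):
    PASSED = "PASSED"
    BLOCKED = "BLOCKED"
    ERROR = "ERROR"


MANDATORY_ORDER: List[str] = [
    "data_purity",
    "internal_sources_only",
    "no_external_endpoints",
    "manifest_consistency",
]


def derive_final_status(stage_decisions: Dict[str, Decision]) -> Decision:
    # Any absent mandatory stage makes the run ERROR, regardless of the rest.
    if any(stage not in stage_decisions for stage in MANDATORY_ORDER):
        return Decision.ERROR
    # Otherwise the first ERROR or BLOCKED stage, in mandatory order, wins.
    for stage in MANDATORY_ORDER:
        if stage_decisions[stage] == Decision.ERROR:
            return Decision.ERROR
        if stage_decisions[stage] == Decision.BLOCKED:
            return Decision.BLOCKED
    return Decision.PASSED
```

Because the loop walks `MANDATORY_ORDER`, the result is deterministic no matter the insertion order of the stage results, which is the module's stated invariant.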
123	backend/src/services/clean_release/stages/base.py	Normal file
@@ -0,0 +1,123 @@
# [DEF:backend.src.services.clean_release.stages.base:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, compliance, stages, contracts, base
# @PURPOSE: Define shared contracts and helpers for pluggable clean-release compliance stages.
# @LAYER: Domain
# @RELATION: CALLED_BY -> backend.src.services.clean_release.compliance_execution_service
# @RELATION: DEPENDS_ON -> backend.src.models.clean_release
# @INVARIANT: Stage execution is deterministic for equal input context.

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any, Dict, List, Protocol
from uuid import uuid4

from ....core.logger import belief_scope, logger
from ....models.clean_release import (
    CleanPolicySnapshot,
    ComplianceDecision,
    ComplianceRun,
    ComplianceStageRun,
    ComplianceViolation,
    DistributionManifest,
    ReleaseCandidate,
    SourceRegistrySnapshot,
)
from ..enums import ComplianceStageName, ViolationSeverity


# [DEF:ComplianceStageContext:Class]
# @PURPOSE: Immutable input envelope passed to each compliance stage.
@dataclass(frozen=True)
class ComplianceStageContext:
    run: ComplianceRun
    candidate: ReleaseCandidate
    manifest: DistributionManifest
    policy: CleanPolicySnapshot
    registry: SourceRegistrySnapshot
# [/DEF:ComplianceStageContext:Class]


# [DEF:StageExecutionResult:Class]
# @PURPOSE: Structured stage output containing decision, details and violations.
@dataclass
class StageExecutionResult:
    decision: ComplianceDecision
    details_json: Dict[str, Any] = field(default_factory=dict)
    violations: List[ComplianceViolation] = field(default_factory=list)
# [/DEF:StageExecutionResult:Class]


# [DEF:ComplianceStage:Class]
# @PURPOSE: Protocol for pluggable stage implementations.
class ComplianceStage(Protocol):
    stage_name: ComplianceStageName

    def execute(self, context: ComplianceStageContext) -> StageExecutionResult:
        ...
# [/DEF:ComplianceStage:Class]


# [DEF:build_stage_run_record:Function]
# @PURPOSE: Build persisted stage run record from stage result.
# @PRE: run_id and stage_name are non-empty.
# @POST: Returns ComplianceStageRun with deterministic identifiers and timestamps.
def build_stage_run_record(
    *,
    run_id: str,
    stage_name: ComplianceStageName,
    result: StageExecutionResult,
    started_at: datetime | None = None,
    finished_at: datetime | None = None,
) -> ComplianceStageRun:
    with belief_scope("build_stage_run_record"):
        now = datetime.now(timezone.utc)
        return ComplianceStageRun(
            id=f"stg-{uuid4()}",
            run_id=run_id,
            stage_name=stage_name.value,
            status="SUCCEEDED" if result.decision != ComplianceDecision.ERROR else "FAILED",
            started_at=started_at or now,
            finished_at=finished_at or now,
            decision=result.decision.value,
            details_json=result.details_json,
        )
# [/DEF:build_stage_run_record:Function]


# [DEF:build_violation:Function]
# @PURPOSE: Construct a compliance violation with normalized defaults.
# @PRE: run_id, stage_name, code and message are non-empty.
# @POST: Returns immutable-style violation payload ready for persistence.
def build_violation(
    *,
    run_id: str,
    stage_name: ComplianceStageName,
    code: str,
    message: str,
    artifact_path: str | None = None,
    severity: ViolationSeverity = ViolationSeverity.MAJOR,
    evidence_json: Dict[str, Any] | None = None,
    blocked_release: bool = True,
) -> ComplianceViolation:
    with belief_scope("build_violation"):
        logger.reflect(f"Building violation stage={stage_name.value} code={code}")
        return ComplianceViolation(
            id=f"viol-{uuid4()}",
            run_id=run_id,
            stage_name=stage_name.value,
            code=code,
            severity=severity.value,
            artifact_path=artifact_path,
            artifact_sha256=None,
            message=message,
            evidence_json={
                **(evidence_json or {}),
                "blocked_release": blocked_release,
            },
        )
# [/DEF:build_violation:Function]

# [/DEF:backend.src.services.clean_release.stages.base:Module]
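Two normalization rules in `stages/base.py` are easy to miss: `build_stage_run_record` maps any non-ERROR decision (including BLOCKED) to a `SUCCEEDED` execution status, and `build_violation` always stamps `blocked_release` into `evidence_json` while preserving caller-supplied evidence. A small sketch of both, with plain strings and dicts standing in for the project enums and models:

```python
from typing import Any, Dict, Optional


def stage_status(decision: str) -> str:
    # Mirrors build_stage_run_record: a BLOCKED stage still *executed*
    # successfully; only ERROR marks the execution itself as FAILED.
    return "SUCCEEDED" if decision != "ERROR" else "FAILED"


def merge_evidence(evidence_json: Optional[Dict[str, Any]], blocked_release: bool = True) -> Dict[str, Any]:
    # Mirrors build_violation: blocked_release is always present in the
    # stored evidence, and caller-supplied keys are kept as-is.
    return {**(evidence_json or {}), "blocked_release": blocked_release}
```

Separating "did the stage run" (status) from "what did it decide" (decision) keeps operational failures distinguishable from policy blocks in the persisted records.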
66	backend/src/services/clean_release/stages/data_purity.py	Normal file
@@ -0,0 +1,66 @@
# [DEF:backend.src.services.clean_release.stages.data_purity:Module]
# @TIER: STANDARD
# @SEMANTICS: clean-release, compliance-stage, data-purity
# @PURPOSE: Evaluate manifest purity counters and emit blocking violations for prohibited artifacts.
# @LAYER: Domain
# @RELATION: IMPLEMENTS -> backend.src.services.clean_release.stages.base.ComplianceStage
# @RELATION: DEPENDS_ON -> backend.src.services.clean_release.stages.base
# @INVARIANT: prohibited_detected_count > 0 always yields BLOCKED stage decision.

from __future__ import annotations

from ....core.logger import belief_scope, logger
from ..enums import ComplianceDecision, ComplianceStageName, ViolationSeverity
from .base import ComplianceStageContext, StageExecutionResult, build_violation


# [DEF:DataPurityStage:Class]
# @PURPOSE: Validate manifest summary for prohibited artifacts.
# @PRE: context.manifest.content_json contains summary block or defaults to safe counters.
# @POST: Returns PASSED when no prohibited artifacts are detected, otherwise BLOCKED with violations.
class DataPurityStage:
    stage_name = ComplianceStageName.DATA_PURITY

    def execute(self, context: ComplianceStageContext) -> StageExecutionResult:
        with belief_scope("DataPurityStage.execute"):
            summary = context.manifest.content_json.get("summary", {})
            prohibited_count = int(summary.get("prohibited_detected_count", 0) or 0)
            included_count = int(summary.get("included_count", 0) or 0)

            logger.reason(
                f"Data purity evaluation run={context.run.id} included={included_count} prohibited={prohibited_count}"
            )

            if prohibited_count <= 0:
                return StageExecutionResult(
                    decision=ComplianceDecision.PASSED,
                    details_json={
                        "included_count": included_count,
                        "prohibited_detected_count": 0,
                    },
                    violations=[],
                )

            violation = build_violation(
                run_id=context.run.id,
                stage_name=self.stage_name,
                code="DATA_PURITY_PROHIBITED_ARTIFACTS",
                message=f"Detected {prohibited_count} prohibited artifact(s) in manifest snapshot",
                severity=ViolationSeverity.CRITICAL,
                evidence_json={
                    "prohibited_detected_count": prohibited_count,
                    "manifest_id": context.manifest.id,
                },
                blocked_release=True,
            )
            return StageExecutionResult(
                decision=ComplianceDecision.BLOCKED,
                details_json={
                    "included_count": included_count,
                    "prohibited_detected_count": prohibited_count,
                },
                violations=[violation],
            )
# [/DEF:DataPurityStage:Class]

# [/DEF:backend.src.services.clean_release.stages.data_purity:Module]
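The decision rule of `DataPurityStage` reduces to a check on the manifest's summary counters: a missing summary defaults to safe zeros, and any positive `prohibited_detected_count` blocks the release. A minimal re-creation with a plain dict standing in for the project's `DistributionManifest` content:

```python
from typing import Any, Dict


def evaluate_purity(manifest_content: Dict[str, Any]) -> Dict[str, Any]:
    # Missing or empty summary falls back to zero counters, matching the
    # stage's @PRE of "defaults to safe counters".
    summary = manifest_content.get("summary", {})
    prohibited = int(summary.get("prohibited_detected_count", 0) or 0)
    included = int(summary.get("included_count", 0) or 0)
    # Module invariant: any prohibited artifact yields a BLOCKED decision.
    decision = "PASSED" if prohibited <= 0 else "BLOCKED"
    return {
        "decision": decision,
        "included_count": included,
        "prohibited_detected_count": prohibited,
    }
```

The `or 0` guards also coerce explicit `None` counters to zero, so a manifest with `"prohibited_detected_count": None` still evaluates as clean rather than raising.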
Some files were not shown because too many files have changed in this diff.