Log Entry #27695

ID: 27695
Timestamp: 2025-12-28 10:24:58.282806
Client: root
IP: 145.224.72.140
Model: claude-sonnet-4-20250514
Status: completed
Tokens: 5,325 (input: 2,654, output: 2,671)
Duration: 231 ms
Request time: 2025-12-28 10:24:58.282806
Response time: 2025-12-28 10:24:58.513503

Request

{
    "event": "PreToolUse",
    "tool_name": "mcp__mcp-docs__docs_update",
    "tool_input": {
        "id": 156,
        "content": "# MCP Server Refactoring Plan\n\n**Version:** 1.1  \n**Erstellt:** 2025-12-28  \n**Aktualisiert:** 2025-12-28 (nach Supervision)  \n**Status:** Korrigiert  \n**Task:** #507\n\n---\n\n## 1. Executive Summary\n\n### Ist-Zustand\n- **5 MCP-Server**: mcp-db, mcp-tasks, mcp-contracts, mcp-docs, mcp-code\n- **91 Python-Dateien** (ohne venv)\n- **9.248 LOC** gesamt\n- **Signifikante Code-Duplikation** (~25% redundanter Code)\n- **Inkonsistente Patterns** zwischen Servern\n\n### Ziel\nKonsolidierung zu einer modularen, wartbaren Architektur unter Einhaltung von:\n- **DRY** (Don't Repeat Yourself)\n- **SRP** (Single Responsibility Principle)\n- **SOLID** Principles\n- **Clean Architecture**\n- **KISS** (Keep It Simple, Stupid)\n- **YAGNI** (You Aren't Gonna Need It)\n\n### Erwartete Verbesserungen\n- **~35% weniger Code** durch Elimination von Duplikaten\n- **Einheitliche Patterns** für alle Server\n- **Verbesserte Wartbarkeit** durch Modularisierung\n- **Schnellere Feature-Entwicklung** durch shared Components\n\n---\n\n## 2. 
Analyse der Code-Duplikation\n\n### 2.1 Kritische Duplikate (Sofort beheben)\n\n#### db_connection.py (4 Dateien, ~240 LOC dupliziert)\n\n| Server | Zeilen | Bibliothek | Besonderheit |\n|--------|--------|------------|--------------|\n| mcp-db | 59 | mysql.connector + **Pooling** | Dynamisches DB-Switching via `USE` |\n| mcp-tasks | 62 | pymysql | Festes `DB_NAME` |\n| mcp-contracts | 60 | pymysql | Festes `DB_NAME` |\n| mcp-docs | 60 | pymysql | Festes `DB_NAME` |\n\n**Problem:** 3 von 4 Dateien sind nahezu identisch (mcp-tasks, mcp-contracts, mcp-docs).\n\n**Lösung:** Zentralisieren in `shared\/infrastructure\/db_connection.py`\n\n```python\n# shared\/infrastructure\/db_connection.py\nfrom contextlib import contextmanager\nfrom typing import Generator, Protocol, runtime_checkable\n\nimport pymysql\nfrom pymysql.connections import Connection\n\n\n@runtime_checkable\nclass AppDatabaseConfig(Protocol):\n    \"\"\"Protocol für App-Datenbank Konfiguration\"\"\"\n    DB_HOST: str\n    DB_PORT: int\n    DB_USER: str\n    DB_PASSWORD: str\n    DB_NAME: str  # Kann None sein für mcp-db (dynamisches Switching)\n\n\n@runtime_checkable\nclass LogDatabaseConfig(Protocol):\n    \"\"\"Protocol für Log-Datenbank Konfiguration (separater User)\"\"\"\n    LOG_DB_HOST: str\n    LOG_DB_NAME: str\n    LOG_DB_USER: str\n    LOG_DB_PASSWORD: str\n\n\nclass DatabaseConnection:\n    \"\"\"Zentrale Datenbankverbindungsverwaltung\"\"\"\n    \n    @classmethod\n    @contextmanager\n    def get_connection(\n        cls, \n        config: AppDatabaseConfig,\n        database: str | None = None,\n        autocommit: bool = False\n    ) -> Generator[Connection, None, None]:\n        \"\"\"\n        Context Manager für App-DB Connection.\n        \n        Args:\n            config: Server-Konfiguration mit DB-Credentials\n            database: Optional - überschreibt config.DB_NAME (für mcp-db)\n            autocommit: True für Logging, False für Transaktionen\n        \"\"\"\n        conn = 
None\n        db_name = database or getattr(config, 'DB_NAME', None)\n        \n        try:\n            conn = pymysql.connect(\n                host=config.DB_HOST,\n                port=getattr(config, 'DB_PORT', 3306),\n                user=config.DB_USER,\n                password=config.DB_PASSWORD,\n                database=db_name,\n                charset=\"utf8mb4\",\n                cursorclass=pymysql.cursors.DictCursor,\n                autocommit=autocommit,\n            )\n            yield conn\n            if not autocommit:\n                conn.commit()\n        except Exception:\n            if conn and not autocommit:\n                conn.rollback()\n            raise\n        finally:\n            if conn:\n                conn.close()\n    \n    @classmethod\n    @contextmanager\n    def get_log_connection(\n        cls, \n        config: LogDatabaseConfig\n    ) -> Generator[Connection, None, None]:\n        \"\"\"\n        Separate Verbindung für Logging (autocommit=True).\n        \n        Args:\n            config: Server-Konfiguration mit LOG_DB_* Credentials\n        \"\"\"\n        conn = None\n        try:\n            conn = pymysql.connect(\n                host=config.LOG_DB_HOST,\n                user=config.LOG_DB_USER,\n                password=config.LOG_DB_PASSWORD,\n                database=config.LOG_DB_NAME,\n                charset=\"utf8mb4\",\n                cursorclass=pymysql.cursors.DictCursor,\n                autocommit=True,\n            )\n            yield conn\n        finally:\n            if conn:\n                conn.close()\n```\n\n#### protokoll_logger.py (4 Dateien, ~240 LOC dupliziert)\n\n| Server | Zeilen | Unterschied |\n|--------|--------|-------------|\n| mcp-db | 60 | Eigenes Pooling, LogEntry aus domain |\n| mcp-tasks | 62 | Extra: task_id Handling |\n| mcp-contracts | 60 | Standard Pattern |\n| mcp-docs | 59 | Standard Pattern |\n\n**Problem:** mcp-contracts und mcp-docs sind identisch bis auf 
Log-Message.\n\n**Lösung:** Zentralisieren in `shared\/infrastructure\/protokoll_logger.py`\n\n```python\n# shared\/infrastructure\/protokoll_logger.py\nfrom dataclasses import dataclass\nfrom datetime import datetime\nfrom typing import Optional\nimport logging\nimport sys\n\nfrom shared.infrastructure.db_connection import DatabaseConnection, LogDatabaseConfig\n\n\n@dataclass\nclass LogEntry:\n    \"\"\"Standardisierter Log-Eintrag für alle MCP-Server\"\"\"\n    timestamp: datetime\n    client_name: str\n    request: str\n    status: str\n    duration_ms: int\n    error_message: Optional[str] = None\n    tool_name: Optional[str] = None\n    context_id: Optional[int] = None  # task_id, contract_id, etc.\n\n\nclass ProtokollLogger:\n    \"\"\"Fail-Safe Logger für mcp_log Tabelle\"\"\"\n    \n    def __init__(self, client_name: str, config: LogDatabaseConfig):\n        self.client_name = client_name\n        self.config = config\n        self._logger = logging.getLogger(f\"mcp.{client_name}\")\n    \n    def log(self, entry: LogEntry) -> None:\n        \"\"\"Schreibt Log-Eintrag. 
Fehler gehen nur zu stderr.\"\"\"\n        try:\n            with DatabaseConnection.get_log_connection(self.config) as conn:\n                with conn.cursor() as cursor:\n                    request_str = self._format_request(entry)\n                    cursor.execute(\n                        \"\"\"INSERT INTO mcp_log\n                           (timestamp, client_name, request, status, duration_ms, error_message)\n                           VALUES (%s, %s, %s, %s, %s, %s)\"\"\",\n                        (\n                            entry.timestamp,\n                            self.client_name,\n                            request_str[:500],\n                            entry.status,\n                            entry.duration_ms,\n                            entry.error_message[:500] if entry.error_message else None,\n                        )\n                    )\n        except Exception as e:\n            print(f\"CRITICAL: {self.client_name} log failed: {e}\", file=sys.stderr)\n    \n    def _format_request(self, entry: LogEntry) -> str:\n        \"\"\"Formatiert Request mit Tool-Name und Context-ID\"\"\"\n        parts = []\n        if entry.tool_name:\n            parts.append(f\"[{entry.tool_name}]\")\n        if entry.context_id:\n            parts.append(f\"id={entry.context_id}\")\n        parts.append(entry.request[:400] if entry.request else \"\")\n        return \" \".join(parts)\n\n\n# Singleton-Registry (KEIN lru_cache wegen nicht-hashbarer Config)\n_logger_instances: dict[str, ProtokollLogger] = {}\n\n\ndef get_logger(client_name: str, config: LogDatabaseConfig) -> ProtokollLogger:\n    \"\"\"\n    Singleton-Factory für Logger.\n    \n    Verwendet dict-Registry statt lru_cache, da Config-Objekte\n    nicht hashable sind.\n    \"\"\"\n    if client_name not in _logger_instances:\n        _logger_instances[client_name] = ProtokollLogger(client_name, config)\n    return _logger_instances[client_name]\n```\n\n#### config.py (5 Dateien, ~300 
LOC mit Overlap)\n\n**Gemeinsame Felder in allen Servern:**\n- DB_HOST, DB_PORT, DB_USER, DB_PASSWORD\n- LOG_DB_HOST, LOG_DB_NAME, LOG_DB_USER, LOG_DB_PASSWORD\n\n**Lösung:** Base Config in shared\/\n\n```python\n# shared\/config_base.py\nimport os\nfrom typing import ClassVar\n\n\nclass BaseConfig:\n    \"\"\"\n    Gemeinsame Konfiguration für alle MCP-Server.\n    \n    Alle Server erben diese Basis und fügen spezifische Felder hinzu.\n    \"\"\"\n    \n    # App-Datenbank (für CRUD-Operationen)\n    DB_HOST: ClassVar[str] = os.getenv(\"DB_HOST\", \"localhost\")\n    DB_PORT: ClassVar[int] = int(os.getenv(\"DB_PORT\", \"3306\"))\n    DB_USER: ClassVar[str] = os.getenv(\"DB_USER\", \"root\")\n    DB_PASSWORD: ClassVar[str] = os.getenv(\"DB_PASSWORD\", \"\")\n    DB_NAME: ClassVar[str] = os.getenv(\"DB_NAME\", \"ki_dev\")\n    \n    # Log-Datenbank (separater User mit nur INSERT-Rechten)\n    LOG_DB_HOST: ClassVar[str] = os.getenv(\"LOG_DB_HOST\", \"localhost\")\n    LOG_DB_NAME: ClassVar[str] = os.getenv(\"LOG_DB_NAME\", \"ki_dev\")\n    LOG_DB_USER: ClassVar[str] = os.getenv(\"LOG_DB_USER\", \"mcp_logger\")\n    LOG_DB_PASSWORD: ClassVar[str] = os.getenv(\"LOG_DB_PASSWORD\", \"\")\n\n\n# mcp-tasks\/config.py\nfrom shared.config_base import BaseConfig\nfrom typing import List\n\n\nclass Config(BaseConfig):\n    \"\"\"Task-spezifische Konfiguration\"\"\"\n    \n    OLLAMA_HOST: str = os.getenv(\"OLLAMA_HOST\", \"http:\/\/localhost:11434\")\n    OLLAMA_DEFAULT_MODEL: str = os.getenv(\"OLLAMA_MODEL\", \"mistral\")\n    VALID_STATUSES: List[str] = [\"pending\", \"in_progress\", \"completed\", \"failed\", \"cancelled\"]\n    # ...\n```\n\n### 2.2 Server-Struktur Duplikation\n\n#### server.py (5 Dateien, ~250 LOC)\n\nAlle Server folgen dem gleichen Pattern:\n1. Logging Setup\n2. dotenv laden\n3. FastMCP initialisieren\n4. Tools importieren\n5. Tools registrieren\n6. 
main() mit mcp.run()\n\n**Lösung:** Factory Pattern mit explizitem .env-Pfad\n\n```python\n# shared\/server_factory.py\nimport logging\nfrom pathlib import Path\nfrom typing import Callable, List\n\nfrom dotenv import load_dotenv\nfrom mcp.server.fastmcp import FastMCP\n\n\ndef setup_logging(server_name: str, level: int = logging.INFO) -> logging.Logger:\n    \"\"\"\n    Konfiguriert Logger für einen MCP-Server.\n    \n    Verwendet explizite Handler statt logging.ba... [TRUNCATED-d5e55836d595f043]"
    }
}
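The plan embedded in this request leans on `@runtime_checkable` Protocols (`AppDatabaseConfig`, `LogDatabaseConfig`) so that shared infrastructure can accept any server's config without importing it. A minimal, stand-alone sketch of that structural check, using the `LogDatabaseConfig` field names from the logged plan (the `BrokenConfig` class is illustrative, not from the plan):

```python
# Sketch: structural typing for server configs, assuming the LOG_DB_* field
# names from the logged refactoring plan.
from typing import Protocol, runtime_checkable


@runtime_checkable
class LogDatabaseConfig(Protocol):
    """Structural type: any object exposing these attributes qualifies."""
    LOG_DB_HOST: str
    LOG_DB_NAME: str
    LOG_DB_USER: str
    LOG_DB_PASSWORD: str


class BaseConfig:
    """A server config that satisfies the protocol without inheriting it."""
    LOG_DB_HOST = "localhost"
    LOG_DB_NAME = "ki_dev"
    LOG_DB_USER = "mcp_logger"
    LOG_DB_PASSWORD = ""


class BrokenConfig:
    """Hypothetical config missing LOG_DB_PASSWORD: fails the check."""
    LOG_DB_HOST = "localhost"
    LOG_DB_NAME = "ki_dev"
    LOG_DB_USER = "mcp_logger"


print(isinstance(BaseConfig(), LogDatabaseConfig))   # True
print(isinstance(BrokenConfig(), LogDatabaseConfig)) # False
```

Note that `isinstance` against a data protocol only checks attribute presence, not types; it is a smoke test, not full validation.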

Response

{
    "tool_response": [
        {
            "type": "text",
            "text": "{\n  \"success\": true,\n  \"doc\": {\n    \"id\": 156,\n    \"parent_id\": 35,\n    \"slug\": \"mcp-server-refactoring-plan\",\n    \"path\": \"\/mcp\/mcp-server-refactoring-plan\",\n    \"title\": \"MCP Server Refactoring Plan\",\n    \"description\": \"Vollständiger Refactoring-Plan für alle 5 MCP-Server nach DRY, SRP, SOLID, Clean Architecture\",\n    \"content\": \"# MCP Server Refactoring Plan\\n\\n**Version:** 1.1  \\n**Erstellt:** 2025-12-28  \\n**Aktualisiert:** 2025-12-28 (nach Supervision)  \\n**Status:** Korrigiert  \\n**Task:** #507\\n\\n---\\n\\n## 1. Executive Summary\\n\\n### Ist-Zustand\\n- **5 MCP-Server**: mcp-db, mcp-tasks, mcp-contracts, mcp-docs, mcp-code\\n- **91 Python-Dateien** (ohne venv)\\n- **9.248 LOC** gesamt\\n- **Signifikante Code-Duplikation** (~25% redundanter Code)\\n- **Inkonsistente Patterns** zwischen Servern\\n\\n### Ziel\\nKonsolidierung zu einer modularen, wartbaren Architektur unter Einhaltung von:\\n- **DRY** (Don't Repeat Yourself)\\n- **SRP** (Single Responsibility Principle)\\n- **SOLID** Principles\\n- **Clean Architecture**\\n- **KISS** (Keep It Simple, Stupid)\\n- **YAGNI** (You Aren't Gonna Need It)\\n\\n### Erwartete Verbesserungen\\n- **~35% weniger Code** durch Elimination von Duplikaten\\n- **Einheitliche Patterns** für alle Server\\n- **Verbesserte Wartbarkeit** durch Modularisierung\\n- **Schnellere Feature-Entwicklung** durch shared Components\\n\\n---\\n\\n## 2. 
Analyse der Code-Duplikation\\n\\n### 2.1 Kritische Duplikate (Sofort beheben)\\n\\n#### db_connection.py (4 Dateien, ~240 LOC dupliziert)\\n\\n| Server | Zeilen | Bibliothek | Besonderheit |\\n|--------|--------|------------|--------------|\\n| mcp-db | 59 | mysql.connector + **Pooling** | Dynamisches DB-Switching via `USE` |\\n| mcp-tasks | 62 | pymysql | Festes `DB_NAME` |\\n| mcp-contracts | 60 | pymysql | Festes `DB_NAME` |\\n| mcp-docs | 60 | pymysql | Festes `DB_NAME` |\\n\\n**Problem:** 3 von 4 Dateien sind nahezu identisch (mcp-tasks, mcp-contracts, mcp-docs).\\n\\n**Lösung:** Zentralisieren in `shared\/infrastructure\/db_connection.py`\\n\\n```python\\n# shared\/infrastructure\/db_connection.py\\nfrom contextlib import contextmanager\\nfrom typing import Generator, Protocol, runtime_checkable\\n\\nimport pymysql\\nfrom pymysql.connections import Connection\\n\\n\\n@runtime_checkable\\nclass AppDatabaseConfig(Protocol):\\n    \\\"\\\"\\\"Protocol für App-Datenbank Konfiguration\\\"\\\"\\\"\\n    DB_HOST: str\\n    DB_PORT: int\\n    DB_USER: str\\n    DB_PASSWORD: str\\n    DB_NAME: str  # Kann None sein für mcp-db (dynamisches Switching)\\n\\n\\n@runtime_checkable\\nclass LogDatabaseConfig(Protocol):\\n    \\\"\\\"\\\"Protocol für Log-Datenbank Konfiguration (separater User)\\\"\\\"\\\"\\n    LOG_DB_HOST: str\\n    LOG_DB_NAME: str\\n    LOG_DB_USER: str\\n    LOG_DB_PASSWORD: str\\n\\n\\nclass DatabaseConnection:\\n    \\\"\\\"\\\"Zentrale Datenbankverbindungsverwaltung\\\"\\\"\\\"\\n    \\n    @classmethod\\n    @contextmanager\\n    def get_connection(\\n        cls, \\n        config: AppDatabaseConfig,\\n        database: str | None = None,\\n        autocommit: bool = False\\n    ) -> Generator[Connection, None, None]:\\n        \\\"\\\"\\\"\\n        Context Manager für App-DB Connection.\\n        \\n        Args:\\n            config: Server-Konfiguration mit DB-Credentials\\n            database: Optional - überschreibt config.DB_NAME (für 
mcp-db)\\n            autocommit: True für Logging, False für Transaktionen\\n        \\\"\\\"\\\"\\n        conn = None\\n        db_name = database or getattr(config, 'DB_NAME', None)\\n        \\n        try:\\n            conn = pymysql.connect(\\n                host=config.DB_HOST,\\n                port=getattr(config, 'DB_PORT', 3306),\\n                user=config.DB_USER,\\n                password=config.DB_PASSWORD,\\n                database=db_name,\\n                charset=\\\"utf8mb4\\\",\\n                cursorclass=pymysql.cursors.DictCursor,\\n                autocommit=autocommit,\\n            )\\n            yield conn\\n            if not autocommit:\\n                conn.commit()\\n        except Exception:\\n            if conn and not autocommit:\\n                conn.rollback()\\n            raise\\n        finally:\\n            if conn:\\n                conn.close()\\n    \\n    @classmethod\\n    @contextmanager\\n    def get_log_connection(\\n        cls, \\n        config: LogDatabaseConfig\\n    ) -> Generator[Connection, None, None]:\\n        \\\"\\\"\\\"\\n        Separate Verbindung für Logging (autocommit=True).\\n        \\n        Args:\\n            config: Server-Konfiguration mit LOG_DB_* Credentials\\n        \\\"\\\"\\\"\\n        conn = None\\n        try:\\n            conn = pymysql.connect(\\n                host=config.LOG_DB_HOST,\\n                user=config.LOG_DB_USER,\\n                password=config.LOG_DB_PASSWORD,\\n                database=config.LOG_DB_NAME,\\n                charset=\\\"utf8mb4\\\",\\n                cursorclass=pymysql.cursors.DictCursor,\\n                autocommit=True,\\n            )\\n            yield conn\\n        finally:\\n            if conn:\\n                conn.close()\\n```\\n\\n#### protokoll_logger.py (4 Dateien, ~240 LOC dupliziert)\\n\\n| Server | Zeilen | Unterschied |\\n|--------|--------|-------------|\\n| mcp-db | 60 | Eigenes Pooling, LogEntry aus domain 
|\\n| mcp-tasks | 62 | Extra: task_id Handling |\\n| mcp-contracts | 60 | Standard Pattern |\\n| mcp-docs | 59 | Standard Pattern |\\n\\n**Problem:** mcp-contracts und mcp-docs sind identisch bis auf Log-Message.\\n\\n**Lösung:** Zentralisieren in `shared\/infrastructure\/protokoll_logger.py`\\n\\n```python\\n# shared\/infrastructure\/protokoll_logger.py\\nfrom dataclasses import dataclass\\nfrom datetime import datetime\\nfrom typing import Optional\\nimport logging\\nimport sys\\n\\nfrom shared.infrastructure.db_connection import DatabaseConnection, LogDatabaseConfig\\n\\n\\n@dataclass\\nclass LogEntry:\\n    \\\"\\\"\\\"Standardisierter Log-Eintrag für alle MCP-Server\\\"\\\"\\\"\\n    timestamp: datetime\\n    client_name: str\\n    request: str\\n    status: str\\n    duration_ms: int\\n    error_message: Optional[str] = None\\n    tool_name: Optional[str] = None\\n    context_id: Optional[int] = None  # task_id, contract_id, etc.\\n\\n\\nclass ProtokollLogger:\\n    \\\"\\\"\\\"Fail-Safe Logger für mcp_log Tabelle\\\"\\\"\\\"\\n    \\n    def __init__(self, client_name: str, config: LogDatabaseConfig):\\n        self.client_name = client_name\\n        self.config = config\\n        self._logger = logging.getLogger(f\\\"mcp.{client_name}\\\")\\n    \\n    def log(self, entry: LogEntry) -> None:\\n        \\\"\\\"\\\"Schreibt Log-Eintrag. 
Fehler gehen nur zu stderr.\\\"\\\"\\\"\\n        try:\\n            with DatabaseConnection.get_log_connection(self.config) as conn:\\n                with conn.cursor() as cursor:\\n                    request_str = self._format_request(entry)\\n                    cursor.execute(\\n                        \\\"\\\"\\\"INSERT INTO mcp_log\\n                           (timestamp, client_name, request, status, duration_ms, error_message)\\n                           VALUES (%s, %s, %s, %s, %s, %s)\\\"\\\"\\\",\\n                        (\\n                            entry.timestamp,\\n                            self.client_name,\\n                            request_str[:500],\\n                            entry.status,\\n                            entry.duration_ms,\\n                            entry.error_message[:500] if entry.error_message else None,\\n                        )\\n                    )\\n        except Exception as e:\\n            print(f\\\"CRITICAL: {self.client_name} log failed: {e}\\\", file=sys.stderr)\\n    \\n    def _format_request(self, entry: LogEntry) -> str:\\n        \\\"\\\"\\\"Formatiert Request mit Tool-Name und Context-ID\\\"\\\"\\\"\\n        parts = []\\n        if entry.tool_name:\\n            parts.append(f\\\"[{entry.tool_name}]\\\")\\n        if entry.context_id:\\n            parts.append(f\\\"id={entry.context_id}\\\")\\n        parts.append(entry.request[:400] if entry.request else \\\"\\\")\\n        return \\\" \\\".join(parts)\\n\\n\\n# Singleton-Registry (KEIN lru_cache wegen nicht-hashbarer Config)\\n_logger_instances: dict[str, ProtokollLogger] = {}\\n\\n\\ndef get_logger(client_name: str, config: LogDatabaseConfig) -> ProtokollLogger:\\n    \\\"\\\"\\\"\\n    Singleton-Factory für Logger.\\n    \\n    Verwendet dict-Registry statt lru_cache, da Config-Objekte\\n    nicht hashable sind.\\n    \\\"\\\"\\\"\\n    if client_name not in _logger_instances:\\n        _logger_instances[client_name] = 
ProtokollLogger(client_name, config)\\n    return _logger_instances[client_name]\\n```\\n\\n#### config.py (5 Dateien, ~300 LOC mit Overlap)\\n\\n**Gemeinsame Felder in allen Servern:**\\n- DB_HOST, DB_PORT, DB_USER, DB_PASSWORD\\n- LOG_DB_HOST, LOG_DB_NAME, LOG_DB_USER, LOG_DB_PASSWORD\\n\\n**Lösung:** Base Config in shared\/\\n\\n```python\\n# shared\/config_base.py\\nimport os\\nfrom typing import ClassVar\\n\\n\\nclass BaseConfig:\\n    \\\"\\\"\\\"\\n    Gemeinsame Konfiguration für alle MCP-Server.\\n    \\n    Alle Server erben diese Basis und fügen spezifische Felder hinzu.\\n    \\\"\\\"\\\"\\n    \\n    # App-Datenbank (für CRUD-Operationen)\\n    DB_HOST: ClassVar[str] = os.getenv(\\\"DB_HOST\\\", \\\"localhost\\\")\\n    DB_PORT: ClassVar[int] = int(os.getenv(\\\"DB_PORT\\\", \\\"3306\\\"))\\n    DB_USER: ClassVar[str] = os.getenv(\\\"DB_USER\\\", \\\"root\\\")\\n    DB_PASSWORD: ClassVar[str] = os.getenv(\\\"DB_PASSWORD\\\", \\\"\\\")\\n    DB_NAME: ClassVar[str] = os.getenv(\\\"DB_NAME\\\", \\\"ki_dev\\\")\\n    \\n    # Log-Datenbank (separater User mit nur INSERT-Rechten)\\n    LOG_DB_HOST: ClassVar[str] = os.getenv(\\\"LOG_DB_HOST\\\", \\\"localhost\\\")\\n    LOG_DB_NAME: ClassVar[str] = os.getenv(\\\"LOG_DB_NAME\\\", \\\"ki_dev\\\")\\n    LOG_DB_USER: ClassVar[str] = os.getenv(\\\"LOG_DB_USER\\\", \\\"mcp_logger\\\")\\n    LOG_DB_PASSWORD: ClassVar[str] = os.getenv(\\\"LOG_DB_PASSWORD\\\", \\\"\\\")\\n\\n\\n# mcp-tasks\/config.py\\nfrom shared.config_base import BaseConfig\\nfrom typing import List\\n\\n\\nclass Config(BaseConfig):\\n    \\\"\\\"\\\"Task-spezifische Konfiguration\\\"\\\"\\\"\\n    \\n    OLLAMA_HOST: str = os.getenv(\\\"OLLAMA_HOST\\\", \\\"http:\/\/localhost:11434\\\")\\n    OLLAMA_DEFAULT_MODEL: str = os.getenv(\\\"OLLAMA_MODEL\\\", \\\"mistral\\\")\\n    VALID_STAT... [TRUNCATED-00e18aa19e270665]"
        }
    ]
}
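The logged plan justifies its `get_logger` dict-registry with the comment "KEIN lru_cache wegen nicht-hashbarer Config" (no `lru_cache` because the config is not hashable). A runnable sketch of that trade-off; `FakeLogger` and `UnhashableConfig` are illustrative stand-ins, not names from the plan:

```python
# Sketch: dict-based singleton registry vs. lru_cache with an unhashable
# config argument. Stand-in classes replace the real ProtokollLogger/config.
from functools import lru_cache


class FakeLogger:
    def __init__(self, client_name, config):
        self.client_name = client_name
        self.config = config


_instances: dict[str, FakeLogger] = {}


def get_logger(client_name, config):
    # Keyed by name only: the first config wins; later calls reuse the instance.
    if client_name not in _instances:
        _instances[client_name] = FakeLogger(client_name, config)
    return _instances[client_name]


class UnhashableConfig:
    __hash__ = None  # simulate a config object that cannot be hashed


cfg = UnhashableConfig()
a = get_logger("mcp-docs", cfg)
b = get_logger("mcp-docs", cfg)
print(a is b)  # True: one instance per client name

# lru_cache must hash every argument, so the same call raises TypeError:
cached = lru_cache(maxsize=None)(lambda name, config: FakeLogger(name, config))
try:
    cached("mcp-docs", cfg)
except TypeError as e:
    print("lru_cache fails:", type(e).__name__)
```

One consequence of keying by name alone: passing a different config for an existing `client_name` is silently ignored, which the plan's version shares.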
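The `DatabaseConnection.get_connection` context manager in the logged plan commits on clean exit, rolls back on exception, and always closes. That control flow can be exercised without a database; `StubConnection` below is an illustrative stand-in for the `pymysql` connection:

```python
# Sketch: commit-on-success / rollback-on-error context manager, mirroring
# the transaction flow of the plan's get_connection (stubbed, no pymysql).
from contextlib import contextmanager


class StubConnection:
    """Records lifecycle calls instead of talking to MySQL."""
    def __init__(self):
        self.events = []
    def commit(self):
        self.events.append("commit")
    def rollback(self):
        self.events.append("rollback")
    def close(self):
        self.events.append("close")


@contextmanager
def get_connection(autocommit=False):
    conn = StubConnection()
    try:
        yield conn
        if not autocommit:
            conn.commit()       # clean exit: persist the transaction
    except Exception:
        if not autocommit:
            conn.rollback()     # error path: undo partial work
        raise
    finally:
        conn.close()            # always release the connection


with get_connection() as ok:
    pass
print(ok.events)  # ['commit', 'close']

try:
    with get_connection() as bad:
        raise RuntimeError("query failed")
except RuntimeError:
    pass
print(bad.events)  # ['rollback', 'close']
```

The `with ... as` target stays bound after the block, which is why `bad.events` remains inspectable even though the body raised.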