Log Entry #30252

ID: 30252
Timestamp: 2025-12-30 15:31:46.528283
Client: root
IP: 145.224.72.245
Model: claude-sonnet-4-20250514
Status: completed
Tokens: 1,396 (Input: 72, Output: 1,324)
Duration: 147 ms
Request time: 2025-12-30 15:31:46.528283
Response time: 2025-12-30 15:31:46.674894

Request

{
    "event": "PreToolUse",
    "tool_name": "Bash",
    "tool_input": {
        "command": "journalctl -u ollama --since \"2025-12-30 15:00\" --until \"2025-12-30 15:25\" --no-pager 2>\/dev\/null | tail -30 || systemctl status ollama --no-pager | head -20",
        "description": "Check Ollama logs around error time"
    }
}
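The request's command chains `journalctl ... | tail -30 || systemctl status ...`. One subtlety worth noting: `||` tests the pipeline's exit status, which in a default (non-pipefail) shell is the status of the *last* stage, `tail`. Since `tail` succeeds even on empty input, the `systemctl status` fallback rarely fires even when `journalctl` itself fails. A minimal sketch of that behavior, assuming bash:

```shell
# In a default shell, a pipeline's exit status is the LAST command's,
# so `journalctl ... | tail` succeeds whenever `tail` does and the
# `|| fallback` branch is effectively dead code.
false | cat && echo "pipeline reported success"   # prints: pipeline reported success

# With `set -o pipefail` (bash/zsh), the first failing stage propagates,
# and a `|| fallback` after the pipeline would actually fire.
set -o pipefail
false | cat || echo "fallback taken"              # prints: fallback taken
```

If the fallback is meant to trigger on an empty or failed log query, enabling `pipefail` first (or testing the captured output for emptiness) would make the intent explicit.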

Response

{
    "tool_response": {
        "stdout": "Dec 30 15:21:05 st ollama[843]: time=2025-12-30T15:21:05.934+01:00 level=INFO source=device.go:272 msg=\"total memory\" size=\"723.5 MiB\"\nDec 30 15:21:05 st ollama[843]: time=2025-12-30T15:21:05.934+01:00 level=INFO source=sched.go:517 msg=\"loaded runners\" count=1\nDec 30 15:21:05 st ollama[843]: time=2025-12-30T15:21:05.934+01:00 level=INFO source=server.go:1338 msg=\"waiting for llama runner to start responding\"\nDec 30 15:21:05 st ollama[843]: time=2025-12-30T15:21:05.941+01:00 level=INFO source=server.go:1372 msg=\"waiting for server to become available\" status=\"llm server loading model\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.193+01:00 level=INFO source=server.go:1376 msg=\"llama runner started in 0.32 seconds\"\nDec 30 15:21:06 st ollama[843]: [GIN] 2025\/12\/30 - 15:21:06 | 200 |  543.589003ms |       127.0.0.1 | POST     \"\/api\/embeddings\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.540+01:00 level=INFO source=server.go:245 msg=\"enabling flash attention\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.540+01:00 level=INFO source=server.go:429 msg=\"starting runner\" cmd=\"\/usr\/local\/bin\/ollama runner --ollama-engine --model \/usr\/share\/ollama\/.ollama\/models\/blobs\/sha256-ccc0cddac56136ef0969cf2e3e9ac051124c937be42503b47ec570dead85ff87 --port 44809\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.541+01:00 level=INFO source=sched.go:443 msg=\"system memory\" total=\"62.6 GiB\" free=\"59.3 GiB\" free_swap=\"32.0 GiB\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.541+01:00 level=INFO source=server.go:746 msg=\"loading model\" \"model layers\"=63 requested=-1\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.548+01:00 level=INFO source=runner.go:1405 msg=\"starting ollama engine\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.548+01:00 level=INFO source=runner.go:1440 msg=\"Server listening on 127.0.0.1:44809\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.552+01:00 level=INFO source=runner.go:1278 msg=load request=\"{Operation:fit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}\"\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.578+01:00 level=INFO source=ggml.go:136 msg=\"\" architecture=gemma3 file_type=Q4_0 name=\"\" description=\"\" num_tensors=1247 num_key_values=40\nDec 30 15:21:06 st ollama[843]: load_backend: loaded CPU backend from \/usr\/local\/lib\/ollama\/libggml-cpu-alderlake.so\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.581+01:00 level=INFO source=ggml.go:104 msg=system CPU.0.SSE3=1 CPU.0.SSSE3=1 CPU.0.AVX=1 CPU.0.AVX_VNNI=1 CPU.0.AVX2=1 CPU.0.F16C=1 CPU.0.FMA=1 CPU.0.BMI2=1 CPU.0.LLAMAFILE=1 CPU.1.LLAMAFILE=1 compiler=cgo(gcc)\nDec 30 15:21:06 st ollama[843]: time=2025-12-30T15:21:06.715+01:00 level=INFO source=runner.go:1278 msg=load request=\"{Operation:alloc LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.008+01:00 level=INFO source=runner.go:1278 msg=load request=\"{Operation:commit LoraPath:[] Parallel:1 BatchSize:512 FlashAttention:Enabled KvSize:4096 KvCacheType: NumThreads:6 GPULayers:[] MultiUserCache:false ProjectorPath: MainGPU:0 UseMmap:false}\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.008+01:00 level=INFO source=ggml.go:482 msg=\"offloading 0 repeating layers to GPU\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.008+01:00 level=INFO source=ggml.go:486 msg=\"offloading output layer to CPU\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.008+01:00 level=INFO source=ggml.go:494 msg=\"offloaded 0\/63 layers to GPU\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.008+01:00 level=INFO source=device.go:245 msg=\"model weights\" device=CPU size=\"19.5 GiB\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.009+01:00 level=INFO source=device.go:256 msg=\"kv cache\" device=CPU size=\"944.0 MiB\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.009+01:00 level=INFO source=device.go:267 msg=\"compute graph\" device=CPU size=\"177.5 MiB\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.009+01:00 level=INFO source=device.go:272 msg=\"total memory\" size=\"20.6 GiB\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.009+01:00 level=INFO source=sched.go:517 msg=\"loaded runners\" count=2\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.009+01:00 level=INFO source=server.go:1338 msg=\"waiting for llama runner to start responding\"\nDec 30 15:21:07 st ollama[843]: time=2025-12-30T15:21:07.009+01:00 level=INFO source=server.go:1372 msg=\"waiting for server to become available\" status=\"llm server loading model\"\nDec 30 15:21:09 st ollama[843]: time=2025-12-30T15:21:09.528+01:00 level=INFO source=server.go:1376 msg=\"llama runner started in 2.99 seconds\"\nDec 30 15:23:06 st ollama[843]: [GIN] 2025\/12\/30 - 15:23:06 | 500 |          2m0s |       127.0.0.1 | POST     \"\/api\/generate\"",
        "stderr": "",
        "interrupted": false,
        "isImage": false
    }
}
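A plausible reading of the captured log: the final GIN access-log line reports a 500 on `POST /api/generate` after exactly `2m0s`, which looks like a timeout rather than a crash; the load messages just before it (architecture `gemma3`, `Q4_0`, `offloaded 0/63 layers to GPU`, 19.5 GiB of weights held on CPU) suggest CPU-only inference slow enough to hit that limit. When triaging such logs, a quick filter for failed GIN request lines is often enough. A minimal sketch (the heredoc lines below are samples shaped like the output above, not a live query):

```shell
# Filter GIN access-log lines for HTTP statuses >= 400.
# GIN's columns are pipe-separated: prefix | status | latency | client | method+path,
# so field 2 (with surrounding spaces, coerced numeric by `+ 0`) is the status code.
grep '\[GIN\]' <<'EOF' | awk -F'|' '$2 + 0 >= 400 {print}'
Dec 30 15:21:06 st ollama[843]: [GIN] 2025/12/30 - 15:21:06 | 200 |  543.589003ms |       127.0.0.1 | POST     "/api/embeddings"
Dec 30 15:23:06 st ollama[843]: [GIN] 2025/12/30 - 15:23:06 | 500 |          2m0s |       127.0.0.1 | POST     "/api/generate"
EOF
```

Run against the full journal (`journalctl -u ollama --no-pager | grep '\[GIN\]' | awk ...`), this isolates the failing `/api/generate` call while dropping the healthy `/api/embeddings` request.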