
Getting ollama to work with Trilium #7470

@deajan

Description


This has been an issue for me ever since LLM integration was first added to Trilium: I cannot get the AI integration to work.

My setup:

  • Windows 11
  • Local installed ollama 0.12.6
  • Trilium Notes 0.99.3 desktop synchronized with 0.99.3 server

Ollama itself seems to work flawlessly (it is configured to listen on 0.0.0.0, though that should not matter since the Trilium desktop app runs on the same machine):

curl http://localhost:11434
Ollama is running

ollama list
NAME                        ID              SIZE      MODIFIED
mxbai-embed-large:latest    468836162de7    669 MB    4 months ago
llama3.1:8b                 46e0c10c039e    4.9 GB    4 months ago

ollama run llama3.1:8b
>>> Why is Trilium Notes one of the best notes application ?
Trilium Notes has gained a loyal following among note-taking enthusiasts due to its unique features and
capabilities. Here are some reasons why it's considered one of the top note-taking applications:
[...]
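To double-check that the model itself can emit tool calls with populated arguments (the backend logs later show `search_suggestion` being called with empty `{}` arguments), Ollama's chat endpoint can be queried directly with a tool definition. This is a sketch; the `search_notes` schema below is a hypothetical example, not Trilium's actual tool definition:

```shell
# Build the request payload first so it can be inspected before sending.
cat > /tmp/toolcheck.json <<'EOF'
{
  "model": "llama3.1:8b",
  "stream": false,
  "messages": [{"role": "user", "content": "Search my notes for Trilium"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "search_notes",
      "description": "Search notes by keyword",
      "parameters": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"]
      }
    }
  }]
}
EOF
# Sanity-check the JSON, then send it to the local Ollama instance
# (assumes the default port 11434 used throughout this report).
python3 -m json.tool /tmp/toolcheck.json > /dev/null && echo "payload OK"
curl -s http://localhost:11434/api/chat -d @/tmp/toolcheck.json \
  || echo "no local Ollama reachable"
```

If the response's `message.tool_calls[0].function.arguments` comes back empty here too, the problem is on the model side rather than in Trilium's tool registry.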

I can also use the direct Ollama GUI prompt:
Image

Trilium is configured to use ollama:

Image

Running an AI chat, I get the following (note the "failed to process message" popup at the top of the window):
Image

Here's the full result of the AI page:
Image

Reading the backend logs at 10:56:25.095, Ollama appears to send a proper response back to Trilium.
At 10:57:11.981, Trilium tries to execute one of its 12 registered tools, but the registry lists all 12 of them as "unknown".

I had previously enabled AI/LLM with vector search, which has since been removed from Trilium. Is there anything I need to check in my DB? Perhaps some leftover configuration that I can no longer undo now that the vector search settings page is gone?
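One way to look for such leftovers: Trilium stores settings as rows in an `options` table in its SQLite database (the sync log below shows option names like `aiEnabled` and `zoomFactor`). The sketch below builds a throwaway demo DB with those two names to show the query shape; pointing `DB_PATH` at the real `document.db` in the data dir from the logs would list any AI-related rows actually present. The `%embedding%`/`%ollama%` patterns are guesses at plausible option names, not confirmed ones:

```shell
# DB_PATH defaults to a demo file; override it with the real document.db path.
DB_PATH="${DB_PATH:-/tmp/demo-document.db}"
python3 - "$DB_PATH" <<'EOF'
import sqlite3, sys

con = sqlite3.connect(sys.argv[1])
# Demo-only setup: the real document.db already has this table populated.
con.execute("CREATE TABLE IF NOT EXISTS options (name TEXT PRIMARY KEY, value TEXT)")
con.executemany("INSERT OR REPLACE INTO options VALUES (?, ?)",
                [("aiEnabled", "true"), ("zoomFactor", "0.7")])
con.commit()
# List option rows whose names look AI-related (SQLite LIKE is case-insensitive).
for name, value in con.execute(
        "SELECT name, value FROM options WHERE name LIKE '%ai%' "
        "OR name LIKE '%embedding%' OR name LIKE '%ollama%'"):
    print(name, "=", value)
EOF
```

Against the demo DB this prints only `aiEnabled = true`; on a real instance it would surface any stale vector-search-era options worth clearing.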

anonymized-full-2025-10-23T090550.zip

TriliumNext Version

0.99.3

What operating system are you using?

Windows

What is your setup?

Local + server sync

Operating System Version

Windows 11

Error logs

Backend logs

10:55:14.173 
 _____     _ _ _
|_   _| __(_) (_)_   _ _ __ ___   | \ | | ___ | |_ ___  ___
  | || '__| | | | | | | '_ ` _ \  |  \| |/ _ \| __/ _ \/ __|
  | || |  | | | | |_| | | | | | | | |\  | (_) | ||  __/\__ \
  |_||_|  |_|_|_|\__,_|_| |_| |_| |_| \_|\___/ \__\___||___/ 0.99.3

10:55:14.174 📦 Versions:    app=0.99.3 db=233 sync=36 clipper=1.0
10:55:14.174 🔧 Build:       2025-10-22 20:11:30 (5ff07820d3)
10:55:14.174 📂 Data dir:    C:\Users\redacted\AppData\Roaming\trilium-data
10:55:14.174 ⏰ UTC time:    2025-10-23 08:55:13
10:55:14.175 💻 CPU:         Intel(R) Xeon(R) E-2286M  CPU @ 2.40GHz (16-core @ 2400 Mhz)
10:55:14.175 💾 DB size:     103.4 MiB
10:55:14.175 
10:55:14.208 Becca (note cache) load took 32ms
10:55:14.272 Marking attributes _hidden_lexcludeFromNoteMap as deleted
10:55:14.279 Created new note '_help_nRqcgfTb97uV', branch '_help_poXkQfguuA0U__help_nRqcgfTb97uV' of type 'doc', mime ''
10:55:14.285 Updating attribute _help_fDLvzOx29Pfg_ldocName from "User Guide/User Guide/Installation & Setup/Server Installation/2. Reverse proxy/Apache" to "User Guide/User Guide/Installation & Setup/Server Installation/2. Reverse proxy/Apache using Docker"
10:55:14.287 Created new note '_help_LLzSMXACKhUs', branch '_help_vcjrb3VVYPZI__help_LLzSMXACKhUs' of type 'doc', mime ''
10:55:14.289 Updating attribute _help_l2VkvOwUNfZj_ldocName from "User Guide/User Guide/Installation & Setup/Server Installation/TLS Configuration" to "User Guide/User Guide/Installation & Setup/Server Installation/HTTPS (TLS)"
10:55:14.290 Updating attribute _help_l2VkvOwUNfZj_liconClass from "bx bx-file" to "bx bx-lock-alt"
10:55:14.290 Updating attribute _help_0hzsNCP31IAB_liconClass from "bx bx-lock-alt" to "bx bx-user"
10:55:14.291 Updating attribute _help_ZjLYv08Rp3qC_ldocName from "User Guide/User Guide/Basic Concepts and Features/Navigation/Quick edit.clone" to "User Guide/User Guide/Basic Concepts and Features/UI Elements/Quick edit"
10:55:14.293 Updating attribute _help_NRnIZmSMc5sj_ldocName from "User Guide/User Guide/Basic Concepts and Features/Notes/Export as PDF" to "User Guide/User Guide/Basic Concepts and Features/Notes/Printing & Exporting as PDF"
10:55:14.294 Updating attribute _help_NRnIZmSMc5sj_liconClass from "bx bxs-file-pdf" to "bx bx-printer"
10:55:14.294 Updating attribute _help_ZjLYv08Rp3qC_ldocName from "User Guide/User Guide/Basic Concepts and Features/UI Elements/Quick edit" to "User Guide/User Guide/Basic Concepts and Features/Navigation/Quick edit.clone"
10:55:14.297 Created new note '_help_zP3PMqaG71Ct', branch '_help_GTwFsgaA0lCt__help_zP3PMqaG71Ct' of type 'doc', mime ''
10:55:14.301 Created new note '_help_vElnKeDNPSVl', branch '_help_CdNpE2pqjmI6__help_vElnKeDNPSVl' of type 'doc', mime ''
10:55:14.305 Created new note '_template_presentation_slide', branch '_templates__template_presentation_slide' of type 'text', mime 'text/html'
10:55:14.308 Created new note '_template_presentation', branch '_templates__template_presentation' of type 'book', mime ''
10:55:14.314 Created new note '_template_presentation_first', branch '_template_presentation__template_presentation_first' of type 'text', mime 'text/html'
10:55:14.317 Created new note '_template_presentation_second', branch '_template_presentation__template_presentation_second' of type 'text', mime 'text/html'
10:55:14.329 Trusted reverse proxy: false
10:55:14.330 App HTTP server starting up at port 37840
10:55:14.535 Registered global shortcut Ctrl+Alt+P for action createNoteIntoInbox
10:55:14.543 Listening on port 37840
10:55:14.614 CSRF token generation: Successful
10:55:14.806 404 GET /assets/v0.99.3/stylesheets/print.css
10:55:15.079 200 GET /api/tree with 222116 bytes took 8ms
10:55:15.088 200 GET /api/keyboard-actions with 21784 bytes took 7ms
10:55:15.090 200 GET /api/autocomplete/notesCount with 4 bytes took 1ms
10:55:15.092 200 GET /api/options with 11193 bytes took 0ms
10:55:15.095 200 GET /api/options/locales with 1023 bytes took 0ms
10:55:15.163 Slow 200 GET /api/script/widgets with 649465 bytes took 17ms
10:55:15.171 200 POST /api/tree/load with 6670 bytes took 0ms
10:55:15.177 200 POST /api/tree/load with 3865 bytes took 0ms
10:55:15.180 200 POST /api/tree/load with 4025 bytes took 1ms
10:55:15.223 200 POST /api/tree/load with 3631 bytes took 1ms
10:55:15.224 200 POST /api/tree/load with 3997 bytes took 0ms
10:55:15.442 200 GET /api/keyboard-shortcuts-for-notes with 2 bytes took 1ms
10:55:15.443 200 GET /api/system-checks with 27 bytes took 0ms
10:55:15.444 200 POST /api/tree/load with 3996 bytes took 0ms
10:55:15.445 200 POST /api/tree/load with 14392 bytes took 0ms
10:55:15.447 200 GET /api/tree?subTreeNoteId=26w3HbI3WNC5 with 30142 bytes took 1ms
10:55:15.447 200 POST /api/tree/load with 25357 bytes took 0ms
10:55:15.463 Slow 200 GET /api/recent-changes/undefined with 2 bytes took 15ms
10:55:15.464 200 POST /api/tree/load with 5638 bytes took 0ms
10:55:15.465 200 GET /api/notes/yyE8TjPO2E76/blob with 305 bytes took 1ms
10:55:15.473 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:15.602 200 POST /api/tree/load with 13229 bytes took 1ms
10:55:15.603 200 POST /api/tree/load with 15665 bytes took 0ms
10:55:15.604 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:15.605 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:15.605 200 GET /api/notes/cFBXVpayRIld/blob with 750 bytes took 0ms
10:55:15.664 JS Info: Creating new CKEditor
10:55:15.821 200 GET /api/note-map/cFBXVpayRIld/backlink-count with 11 bytes took 0ms
10:55:15.822 200 GET /api/notes/cFBXVpayRIld/attachments with 2 bytes took 0ms
10:55:15.823 200 GET /api/notes/root with 315 bytes took 0ms
10:55:15.825 200 GET /api/search/%23textSnippet with 2 bytes took 1ms
10:55:15.827 200 GET /api/sql/schema with 7987 bytes took 0ms
10:55:15.829 200 GET /api/sql/schema with 7987 bytes took 0ms
10:55:15.831 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:15.832 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:15.833 200 GET /api/sql/schema with 7987 bytes took 0ms
10:55:15.834 200 GET /api/sql/schema with 7987 bytes took 0ms
10:55:15.835 200 GET /api/notes/_template_text_snippet with 341 bytes took 0ms
10:55:15.966 200 GET /api/notes/_template_list_view with 326 bytes took 1ms
10:55:16.211 200 GET /api/notes/_template_grid_view with 326 bytes took 0ms
10:55:16.213 200 GET /api/notes/_template_calendar with 324 bytes took 0ms
10:55:16.219 200 GET /api/notes/_template_table with 318 bytes took 0ms
10:55:16.224 200 GET /api/notes/_template_geo_map with 322 bytes took 1ms
10:55:16.226 200 GET /api/notes/_template_board with 318 bytes took 0ms
10:55:16.237 200 GET /api/notes/_template_presentation with 332 bytes took 0ms
10:55:16.242 200 GET /api/search-templates with 31 bytes took 1ms
10:55:16.244 200 POST /api/tree/load with 7860 bytes took 0ms
10:55:16.247 200 GET /api/notes/lkhYxdEfTBqV with 332 bytes took 0ms
10:55:16.249 200 GET /api/notes/MvMzuwE8RyhL with 334 bytes took 0ms
10:55:16.499 JS Info: CKEditor state changed to ready
10:55:16.602 No handler matched for custom 'fonts/InconsolataNerdFontMono-Bold.woff2' request.
10:55:17.394 200 GET /api/script/startup with 2172 bytes took 2ms
10:55:17.396 200 POST /api/tree/load with 3861 bytes took 0ms
10:55:17.398 200 PUT /api/special-notes/api-script-launcher with 350 bytes took 1ms
10:55:17.399 200 POST /api/tree/load with 4144 bytes took 0ms
10:55:17.411 200 PUT /api/special-notes/api-script-launcher with 355 bytes took 2ms
10:55:17.521 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:17.522 200 POST /api/tree/load with 6529 bytes took 0ms
10:55:17.545 200 GET /api/notes/_options/blob with 155 bytes took 0ms
10:55:17.547 200 GET /api/tree?subTreeNoteId=_hidden with 69428 bytes took 1ms
10:55:17.548 204 PUT /api/options with 0 bytes took 1ms
10:55:17.555 200 GET /api/notes/_options/attachments with 2 bytes took 0ms
10:55:17.565 200 GET /api/note-map/_options/backlink-count with 11 bytes took 0ms
10:55:17.619 204 PUT /api/branches/root__hidden/expanded/1 with 0 bytes took 2ms
10:55:17.620 200 GET /api/tree?subTreeNoteId=_options with 10825 bytes took 0ms
10:55:17.621 204 PUT /api/branches/_hidden__options/expanded/1 with 0 bytes took 0ms
10:55:17.622 200 POST /api/tree/load with 10824 bytes took 0ms
10:55:18.230 Table counts: notes: 1921, revisions: 3751, attachments: 969, branches: 2031, attributes: 2365, etapi_tokens: 2, blobs: 5090
10:55:18.310 All consistency checks passed with no errors detected (took 81ms)
10:55:18.792 200 GET /api/notes/_optionsAi/blob with 157 bytes took 0ms
10:55:18.792 200 GET /api/notes/yyE8TjPO2E76/blob with 305 bytes took 0ms
10:55:18.795 204 PUT /api/options with 0 bytes took 1ms
10:55:18.807 200 GET /api/note-map/_optionsAi/backlink-count with 11 bytes took 0ms
10:55:18.808 200 GET /api/notes/_optionsAi/attachments with 2 bytes took 1ms
10:55:18.820 Slow 200 GET /api/llm/providers/ollama/models?baseUrl=http%3A%2F%2Flocalhost%3A11434 with 700 bytes took 22ms
10:55:19.329 200 POST /api/tree/load with 14713 bytes took 0ms
10:55:19.715 Sending message to all clients: {"type":"sync-push-in-progress","lastSyncedPush":45760}
10:55:19.716 Sync hGf9ulNkYs: Pushing 51 sync changes in 195ms
10:55:19.716 Nothing to push
10:55:19.780 Sending message to all clients: {"type":"sync-pull-in-progress","lastSyncedPush":45820}
10:55:19.781 updated: {attributes: [sKgsikhWsExQ, GjaydihDWFNf], options: [aiEnabled], branches: [root_mPWpU5v6g7wD, root_BYTDpCdA4RpY], blobs: [bWgda934XcAEaumXMGYl, 8aIrPG3r0mP1TZmgQXM4], notes: [mPWpU5v6g7wD, BYTDpCdA4RpY]}, alreadyUpdated: 0, erased: 0, alreadyErased: 0
10:55:19.783 Sync sgrle17Ib1: Pulled 9 changes in 13 KB, starting at entityChangeId=69526166 in 63ms and applied them in 4ms, 0 outstanding pulls
10:55:19.785 200 POST /api/tree/load with 2381 bytes took 0ms
10:55:19.828 200 POST /api/tree/load with 2381 bytes took 1ms
10:55:19.842 Finished pull
10:55:19.842 Nothing to push
10:55:20.074 Content hash computation took 21ms
10:55:20.074 Content hash checks PASSED
10:55:20.074 Sending message to all clients: {"type":"sync-finished","lastSyncedPush":45831}
10:55:28.147 Updating option 'zoomFactor' to '0.9'
10:55:28.148 204 PUT /api/options with 0 bytes took 1ms
10:55:28.544 Updating option 'zoomFactor' to '0.8'
10:55:28.545 204 PUT /api/options with 0 bytes took 1ms
10:55:28.920 Updating option 'zoomFactor' to '0.7'
10:55:28.921 204 PUT /api/options with 0 bytes took 1ms
10:55:51.606 Created new note 'vzd1tWqllrug', branch 'root_vzd1tWqllrug' of type 'aiChat', mime 'application/json'
10:55:51.607 200 POST /api/notes/root/children?target=into&targetBranchId= with 554 bytes took 3ms
10:55:51.610 200 POST /api/tree/load with 2515 bytes took 0ms
10:55:51.841 200 GET /api/notes/vzd1tWqllrug/blob with 199 bytes took 0ms
10:55:51.842 200 GET /api/notes/yyE8TjPO2E76/blob with 305 bytes took 0ms
10:55:51.859 204 PUT /api/options with 0 bytes took 1ms
10:55:52.015 200 GET /api/note-map/vzd1tWqllrug/backlink-count with 11 bytes took 0ms
10:55:52.015 200 GET /api/notes/vzd1tWqllrug/attachments with 2 bytes took 0ms
10:55:52.036 200 GET /api/sql/schema with 7987 bytes took 1ms
10:55:52.044 200 GET /api/notes/vzd1tWqllrug/attachments with 2 bytes took 1ms
10:55:52.044 200 GET /api/note-map/vzd1tWqllrug/backlink-count with 11 bytes took 0ms
10:55:52.834 204 PUT /api/options with 0 bytes took 1ms
10:56:08.051 204 PUT /api/notes/vzd1tWqllrug/data with 0 bytes took 4ms
10:56:08.083 === Starting streamMessage ===
10:56:08.084 === Starting background streaming process ===
10:56:08.084 200 POST /api/llm/chat/vzd1tWqllrug/messages/stream with 0 bytes took 1ms
10:56:08.084 [WS-SERVER] Sending LLM stream message: chatNoteId=vzd1tWqllrug, content=false, thinking=true, toolExecution=false, done=false
10:56:08.085 [WS-SERVER] Sent LLM stream message to 1 clients
10:56:08.085 Initializing LLM tools during AIServiceManager construction...
10:56:08.085 initializeAgentTools called, but tools are already initialized in constructor
10:56:08.085 Agent tools initialized successfully
10:56:08.086 Created and cached new ollama service
10:56:08.087 Initializing LLM tools...
10:56:08.087 Successfully registered 12 LLM tools: search_notes, keyword_search_notes, attribute_search, search_suggestion, read_note, create_note, update_note, summarize_note, manage_attributes, manage_relationships, extract_content, calendar_integration
10:56:08.087 LLM tools initialized successfully
10:56:08.087 ContextExtractionStage initialized
10:56:08.087 AgentToolsContextStage initialized
10:56:08.088 ========== STARTING CHAT PIPELINE ==========
10:56:08.088 Executing chat pipeline with 2 messages
10:56:08.088 Found 12 tools already registered
10:56:08.088 ========== MODEL SELECTION ==========
10:56:08.088 Executing pipeline stage: ModelSelection
10:56:08.088 [ModelSelectionStage] Input options: {"model":"llama3.1:8b","stream":true}
10:56:08.088 [ModelSelectionStage] Stream option in input: true, type: boolean
10:56:08.088 [ModelSelectionStage] After copy, stream: true, type: boolean
10:56:08.088 Using explicitly specified model: llama3.1:8b
10:56:08.088 [ModelSelectionStage] Returning early with stream: true
10:56:08.088 Pipeline stage ModelSelection completed in 0ms
10:56:08.088 Selected model: llama3.1:8b, enableTools: undefined
10:56:08.088 Enhanced context option check: input.options={"useAdvancedContext":true,"model":"llama3.1:8b","stream":true,"chatNoteId":"vzd1tWqllrug"}
10:56:08.088 Enhanced context decision: useEnhancedContext=true, hasQuery=true
10:56:08.088 ========== STAGE 1: USER QUERY ==========
10:56:08.088 Processing query with: question="Why is Trilium Notes one of the best notes applica...", noteId=undefined, showThinking=true
10:56:08.088 ========== STAGE 2: QUERY DECOMPOSITION ==========
10:56:08.088 Performing query decomposition to generate effective search queries
10:56:08.089 Decomposing query: "Why is Trilium Notes one of the best notes application ?"
10:56:08.089 Sending decomposition prompt to LLM for query: "Why is Trilium Notes one of the best notes application ?"
10:56:08.091 200 GET /api/notes/vzd1tWqllrug/blob with 658 bytes took 1ms
10:56:08.092 === Starting simplified handleSendMessage ===
10:56:08.092 LLM POST message: chatNoteId=vzd1tWqllrug, contentLength=0
10:56:08.092 ERROR: Error processing message: Error: Content cannot be empty
10:56:08.092 200 POST /api/llm/chat/vzd1tWqllrug/messages with 66 bytes took 0ms
10:56:08.104 204 PUT /api/notes/vzd1tWqllrug/data with 0 bytes took 2ms
10:56:08.123 200 GET /api/notes/vzd1tWqllrug/blob with 658 bytes took 0ms
10:56:08.161 Using model llama3.1:8b from provider ollama
10:56:08.161 Model capabilities: {"supportsTools":true,"supportsStreaming":true,"contextWindow":8192}
10:56:08.162 Bypassing formatter for Ollama request with 2 messages
10:56:08.162 Sending request to Ollama with model: llama3.1:8b
10:56:08.162 Creating new Ollama client with base URL: http://localhost:11434
10:56:08.162 Ollama client successfully created
10:56:08.162 Sending non-streaming request to Ollama
10:56:08.162 Ollama API request to: http://localhost:11434/api/chat
10:56:08.162 Ollama API request method: POST
10:56:08.162 Ollama API request headers: {"Content-Type":"application/json","Accept":"application/json","User-Agent":"ollama-js/0.6.0 (x64 win32 Node.js/v22.20.0)"}
10:56:14.581 Sending message to all clients: {"type":"sync-push-in-progress","lastSyncedPush":45831}
10:56:14.582 Sync rdc3Z79zJy: Pushing 3 sync changes in 75ms
10:56:14.582 Nothing to push
10:56:14.644 Finished pull
10:56:14.645 Nothing to push
10:56:14.955 Content hash computation took 46ms
10:56:14.956 Content hash checks PASSED
10:56:14.956 Sending message to all clients: {"type":"sync-finished","lastSyncedPush":45844}
10:56:25.094 Ollama API response status: 200
10:56:25.094 Received response from Ollama
10:56:25.095 Received LLM response for decomposition: {"Trilium Notes features and benefits specific to note-taking efficiency":"",
"Comparison with other popular note-taking applications":"",
"User reviews and ratings of Trilium Notes":"",
"Unique selli...
10:56:25.095 Cleaned JSON string: {"Trilium Notes features and benefits specific to note-taking efficiency":"",
"Comparison with other popular note-taking applications":"",
"User reviews and ratings of Trilium Notes":"",
"Unique selling points of Trilium Notes compared to similar apps":"",
"Specific use cases where Trilium Notes excels, such as organization or collaboration":"}  I will format this into a JSON array for you. Here it is: [" 	}
10:56:25.095 Extracted JSON structure: {"Trilium Notes features and benefits specific to note-taking efficiency":"",
"Comparison with other popular note-taking applications":"",
"User reviews and ratings of Trilium Notes":"",
"Unique selling points of Trilium Notes compared to similar apps":"",
"Specific use cases where Trilium Notes excels, such as organization or collaboration":"}  I will format this into a JSON array for you. Here it is: [" 	}
10:56:25.095 Extracted 6 queries from JSON object
10:56:25.095 Added original query to sub-queries list
10:56:25.095 Final sub-queries for vector search: "Why is Trilium Notes one of the best notes application ?", "Trilium Notes features and benefits specific to note-taking efficiency", "Comparison with other popular note-taking applications", "User reviews and ratings of Trilium Notes", "Unique selling points of Trilium Notes compared to similar apps", "Specific use cases where Trilium Notes excels, such as organization or collaboration", "}  I will format this into a JSON array for you. Here it is: ["
10:56:25.095 Query decomposed with complexity 3/10 into 7 search queries
10:56:25.095 ========== STAGE 3: VECTOR SEARCH (DISABLED) ==========
10:56:25.095 Vector search has been removed - LLM will rely on tool calls for context
10:56:25.095 Vector search disabled - using tool-based context extraction instead
10:56:25.095 ========== SEMANTIC CONTEXT EXTRACTION ==========
10:56:25.095 Executing pipeline stage: SemanticContextExtraction
10:56:25.095 Semantic context extraction disabled - vector search has been removed. Using tool-based context instead for note global
10:56:25.095 Pipeline stage SemanticContextExtraction completed in 0ms
10:56:25.095 Extracted semantic context (0 chars)
10:56:25.095 ========== STAGE 4: MESSAGE PREPARATION ==========
10:56:25.095 Executing pipeline stage: MessagePreparation
10:56:25.095 Preparing messages for provider: llama3.1, context: false, system prompt: false, tools: false
10:56:25.095 Formatted 2 messages into 3 messages for provider: llama3.1
10:56:25.095 Pipeline stage MessagePreparation completed in 0ms
10:56:25.095 Prepared 3 messages for LLM, tools enabled: false
10:56:25.096 [ChatPipeline] Request type info - Format: not specified, Options from pipelineInput: {"stream":true}
10:56:25.096 [ChatPipeline] Stream settings - config.enableStreaming: true, format parameter: undefined, modelSelection.options.stream: true, streamCallback available: true
10:56:25.096 [ChatPipeline] Stream callback available, enabling streaming
10:56:25.096 [ChatPipeline] Final streaming decision: stream=true, will stream to client=true
10:56:25.096 ========== STAGE 5: LLM COMPLETION ==========
10:56:25.096 Executing pipeline stage: LLMCompletion
10:56:25.096 ========== LLM COMPLETION STAGE - INPUT MESSAGES ==========
10:56:25.096 Total input messages: 3
10:56:25.096 Message 0 (system): You are an intelligent AI assistant for Trilium Notes, a hierarchical note-taking application. Help the user with their notes, knowledge management, a...
10:56:25.096 Message 1 (user): Why is Trilium Notes one of the best notes application ?
10:56:25.096 Message 2 (user): Why is Trilium Notes one of the best notes application ?
10:56:25.096 LLM completion options: {"model":"llama3.1:8b","stream":true,"hasToolExecutionStatus":false}
10:56:25.096 [LLMCompletionStage] Stream explicitly set to: true
10:56:25.096 Adding 12 tools to LLM request
10:56:25.096 Generating LLM completion, provider: auto, model: llama3.1:8b
10:56:25.096 [LLMCompletionStage] Using auto-selected service
10:56:25.096 [AIServiceManager] generateChatCompletion called with options: {"model":"llama3.1:8b","stream":true,"enableTools":true}
10:56:25.096 [AIServiceManager] Stream option type: boolean
10:56:25.096 [AIServiceManager] Using selected provider ollama with options.stream: true
10:56:25.177 Using model llama3.1:8b from provider ollama
10:56:25.177 Model capabilities: {"supportsTools":true,"supportsStreaming":true,"contextWindow":8192}
10:56:25.177 Ollama formatter received 3 messages
10:56:25.177 Message 0 - role: system, keys: role, content, content length: 283
10:56:25.177 Message 1 - role: user, keys: role, content, content length: 56
10:56:25.177 Message 2 - role: user, keys: role, content, content length: 56
10:56:25.177 Adding tool instructions to system prompt for Ollama
10:56:25.177 Using new system message: # Trilium Base System Prompt

You are an AI assi...
10:56:25.177 Adding message with role user without context injection, keys: role, content
10:56:25.177 Adding message with role user without context injection, keys: role, content
10:56:25.177 Ollama formatter produced 3 formatted messages
10:56:25.177 Formatted message 0 - role: system, keys: role, content, content length: 4626
10:56:25.178 Formatted message 1 - role: user, keys: role, content, content length: 56
10:56:25.178 Formatted message 2 - role: user, keys: role, content, content length: 56
10:56:25.178 Sending to Ollama with formatted messages: 3 (with tool instructions)
10:56:25.178 Sending 12 tool definitions to Ollama
10:56:25.178 Sending request to Ollama with model: llama3.1:8b
10:56:25.178 Using streaming mode with Ollama client
10:56:25.178 Ollama streaming details: model=llama3.1:8b, streamCallback=provided
10:56:25.178 Performing Ollama health check...
10:56:25.178 Ollama API request to: http://localhost:11434/api/tags
10:56:25.178 Ollama API request method: GET
10:56:25.178 Ollama API request headers: {"Content-Type":"application/json","Accept":"application/json","User-Agent":"ollama-js/0.6.0 (x64 win32 Node.js/v22.20.0)"}
10:56:25.180 Ollama API response status: 200
10:56:25.180 Ollama health check successful
10:56:25.180 Making Ollama streaming request after successful health check
10:56:25.180 Ollama API request to: http://localhost:11434/api/chat
10:56:25.181 Ollama API request method: POST
10:56:25.181 Ollama API request headers: {"Content-Type":"application/json","Accept":"application/json","User-Agent":"ollama-js/0.6.0 (x64 win32 Node.js/v22.20.0)"}
10:57:11.794 Ollama API response status: 200
10:57:11.794 Starting Ollama stream processing with model llama3.1:8b
10:57:11.795 Processing Ollama stream chunk #1, done=false, has content=false, content length=0
10:57:11.795 Detected 1 tool calls in stream chunk
10:57:11.795 Sending chunk to callback: chunkNumber=1, contentLength=0, done=false
10:57:11.795 Successfully called streamCallback with first chunk
10:57:11.979 Processing Ollama stream chunk #2, done=true, has content=false, content length=0
10:57:11.979 Empty final chunk received with done=true flag
10:57:11.979 Sending chunk to callback: chunkNumber=2, contentLength=0, done=true
10:57:11.979 Successfully called streamCallback with done=true flag
10:57:11.979 Completed Ollama streaming: processed 2 chunks, final content: 0 chars
10:57:11.979 Pipeline stage LLMCompletion completed in 46883ms
10:57:11.979 Received LLM response from model: llama3.1:8b, provider: Ollama
10:57:11.979 ========== TOOL EXECUTION DECISION ==========
10:57:11.979 Tools enabled in options: true
10:57:11.979 Response provider: Ollama
10:57:11.980 Response model: llama3.1:8b
10:57:11.980 [TOOL CALL DEBUG] Starting tool call detection for provider: Ollama
10:57:11.980 [TOOL CALL DEBUG] Response properties: text, model, provider, tool_calls, usage
10:57:11.980 [TOOL CALL DEBUG] tool_calls exists as a direct property
10:57:11.980 [TOOL CALL DEBUG] tool_calls type: object
10:57:11.980 [TOOL CALL DEBUG] tool_calls is an array with length: 1
10:57:11.980 Response has tool_calls property with 1 tools
10:57:11.980 Tool calls details: [{"id":"tool-call-1761209831979-0","type":"function","function":{"name":"search_suggestion","arguments":"{}"}}]
10:57:11.980 Response has tool_calls: true
10:57:11.980 [TOOL CALL DEBUG] Final tool_calls that will be used: [{"id":"tool-call-1761209831979-0","type":"function","function":{"name":"search_suggestion","arguments":"{}"}}]
10:57:11.980 ========== STAGE 6: TOOL EXECUTION ==========
10:57:11.980 Response contains 1 tool calls, processing...
10:57:11.980 ========== TOOL CALL DETAILS ==========
10:57:11.980 Tool call 1: name=search_suggestion, id=tool-call-1761209831979-0
10:57:11.980 Arguments: {}
10:57:11.980 [WS-SERVER] Sending LLM stream message: chatNoteId=vzd1tWqllrug, content=false, thinking=false, toolExecution=true, done=false
10:57:11.980 [WS-SERVER] Sent LLM stream message to 1 clients
10:57:11.980 ========== TOOL ITERATION 1/5 ==========
10:57:11.980 ========== PIPELINE TOOL EXECUTION FLOW ==========
10:57:11.980 About to call toolCalling.execute with 1 tool calls
10:57:11.980 Tool calls being passed to stage: [{"id":"tool-call-1761209831979-0","type":"function","function":{"name":"search_suggestion","arguments":"{}"}}]
10:57:11.980 Executing pipeline stage: ToolCalling
10:57:11.981 ========== TOOL CALLING STAGE ENTRY ==========
10:57:11.981 Response provider: Ollama, model: llama3.1:8b
10:57:11.981 LLM requested 1 tool calls from provider: Ollama
10:57:11.981 Available tools in registry: 12
10:57:11.981 Available tools: unknown, unknown, unknown, unknown, unknown, unknown, unknown, unknown, unknown, unknown, unknown, unknown
10:57:11.981 ========== STARTING TOOL EXECUTION ==========
10:57:11.981 Executing 1 tool calls in parallel
10:57:11.981 Validating 1 tools before execution
10:57:11.981 ========== TOOL CALL 1 OF 1 ==========
10:57:11.982 Tool call 1 received - Name: search_suggestion, ID: tool-call-1761209831979-0
10:57:11.982 Tool parameters: {}
10:57:11.982 Tool validated successfully: search_suggestion
10:57:11.982 Received string arguments in tool calling stage: {}...
10:57:11.982 Parsed JSON arguments: 
10:57:11.982 ================ EXECUTING TOOL: search_suggestion ================
10:57:11.982 Tool parameters: 
10:57:11.982 Parameters values: 
10:57:11.982 Starting tool execution for search_suggestion...
10:57:11.982 Executing search_suggestion tool - Type: "undefined", UserQuery: ""
10:57:11.982 ================ TOOL EXECUTION COMPLETED in 0ms ================
10:57:11.982 Tool execution completed in 0ms - Result: Object with keys: error, validTypes
10:57:11.982 ========== TOOL EXECUTION COMPLETE ==========
10:57:11.982 Completed execution of 1 tools in 1ms
10:57:11.982 -------- Tool Result for search_suggestion (ID: tool-call-1761209831979-0) --------
10:57:11.982 Result type: object
10:57:11.982 Result preview: {
  "error": "Invalid search type: undefined",
  "validTypes": [
    "basic",
    "attribute",
    "content",
    "relation",
    "date",
    "advance...
10:57:11.982 Tool result status: SUCCESS
10:57:11.982 ========== FOLLOW-UP DECISION ==========
10:57:11.982 Follow-up needed: true
10:57:11.982 Reasoning: Has tool results to process  
10:57:11.982 Total messages to return to pipeline: 5
10:57:11.982 Last 3 messages in conversation:
10:57:11.982 Message 2 (user): Why is Trilium Notes one of the best notes application ?
10:57:11.982 Message 3 (assistant): 
10:57:11.982 Message 4 (tool): {
  "error": "Invalid search type: undefined",
  "validTypes": [
    "basic",
    "attribute",
    "...
10:57:11.982 Pipeline stage ToolCalling completed in 2ms
10:57:11.982 ToolCalling stage execution complete, got result with needsFollowUp: true
10:57:11.982 ========== TOOL EXECUTION RESULTS ==========
10:57:11.982 Received 1 tool results
10:57:11.982 Tool result 1: tool_call_id=tool-call-1761209831979-0, content={
  "error": "Invalid search type: undefined",
  "validTypes": [
    "basic",
    "attribute",
    "content",
    "relation",
    "date",
    "advanced"
  ]
}
10:57:11.982 Tool result status: SUCCESS
10:57:11.983 Tool result for: search_suggestion
10:57:11.983 [WS-SERVER] Sending LLM stream message: chatNoteId=vzd1tWqllrug, content=false, thinking=false, toolExecution=true, done=false
10:57:11.983 [WS-SERVER] Sent LLM stream message to 1 clients
10:57:11.983 ========== TOOL FOLLOW-UP REQUIRED ==========
10:57:11.983 Tool execution complete, sending results back to LLM
10:57:11.983 [WS-SERVER] Sending LLM stream message: chatNoteId=vzd1tWqllrug, content=false, thinking=false, toolExecution=true, done=false
10:57:11.983 [WS-SERVER] Sent LLM stream message to 1 clients
10:57:11.983 Created tool execution status for Ollama: 1 entries
10:57:11.983 Tool status 1: search_suggestion - success
10:57:11.983 ========== SENDING TOOL RESULTS TO LLM FOR FOLLOW-UP ==========
10:57:11.983 Total messages being sent: 5
10:57:11.983 Message 2 (user): Why is Trilium Notes one of the best notes application ?
10:57:11.983 Message 3 (assistant): 
10:57:11.983   Has 1 tool calls
10:57:11.983 Message 4 (tool): {
  "error": "Invalid search type: undefined",
  "validTypes": [
    "basic",
    "attribute",
    "...
10:57:11.983   Tool call ID: tool-call-1761209831979-0
10:57:11.983 LLM follow-up request options: {"model":"llama3.1:8b","enableTools":true,"stream":true,"provider":"Ollama"}
10:57:11.983 Executing pipeline stage: LLMCompletion
10:57:11.983 ========== LLM COMPLETION STAGE - INPUT MESSAGES ==========
10:57:11.983 Total input messages: 5
10:57:11.983 Contains 1 tool result messages - likely a tool follow-up request
10:57:11.983 Message 2 (user): Why is Trilium Notes one of the best notes application ?
10:57:11.983 Message 3 (assistant): 
10:57:11.983   Contains 1 tool calls
10:57:11.983 Message 4 (tool): {
  "error": "Invalid search type: undefined",
  "validTypes": [
    "basic",
    "attribute",
    "content",
    "relation",
    "date",
    "advance...
10:57:11.983   Tool call ID: tool-call-1761209831979-0
10:57:11.983 LLM completion options: {"model":"llama3.1:8b","enableTools":true,"stream":true,"hasToolExecutionStatus":true}
10:57:11.983 [LLMCompletionStage] Stream explicitly set to: true
10:57:11.983 Adding 12 tools to LLM request
10:57:11.983 Generating LLM completion, provider: auto, model: llama3.1:8b
10:57:11.983 [LLMCompletionStage] Using auto-selected service
10:57:11.983 [AIServiceManager] generateChatCompletion called with options: {"model":"llama3.1:8b","stream":true,"enableTools":true}
10:57:11.983 [AIServiceManager] Stream option type: boolean
10:57:11.984 [AIServiceManager] Using selected provider ollama with options.stream: true
10:57:12.063 Using model llama3.1:8b from provider ollama
10:57:12.063 Model capabilities: {"supportsTools":true,"supportsStreaming":true,"contextWindow":8192}
10:57:12.063 Adding tool execution feedback to messages
10:57:12.063 Added tool execution feedback: 1 statuses
10:57:12.063 Ollama formatter received 6 messages
10:57:12.063 Message 0 - role: system, keys: role, content, content length: 283
10:57:12.063 Message 1 - role: user, keys: role, content, content length: 56
10:57:12.063 Message 2 - role: user, keys: role, content, content length: 56
10:57:12.063 Message 3 - role: assistant, keys: role, content, tool_calls, content length: 0
10:57:12.063 Message 3 has 1 tool_calls
10:57:12.063 Message 4 - role: tool, keys: role, content, name, tool_call_id, content length: 158
10:57:12.063 Message 4 has tool_call_id: tool-call-1761209831979-0
10:57:12.063 Message 4 has name: search_suggestion
10:57:12.063 Message 5 - role: system, keys: role, content, content length: 82
10:57:12.063 Adding tool instructions to system prompt for Ollama
10:57:12.063 Using new system message: # Trilium Base System Prompt

You are an AI assi...
10:57:12.063 Adding message with role user without context injection, keys: role, content
10:57:12.063 Adding message with role user without context injection, keys: role, content
10:57:12.063 Adding message with role assistant without context injection, keys: role, content, tool_calls
10:57:12.063 Adding message with role tool without context injection, keys: role, content, name, tool_call_id
10:57:12.063 Ollama formatter produced 5 formatted messages
10:57:12.064 Formatted message 0 - role: system, keys: role, content, content length: 4626
10:57:12.064 Formatted message 1 - role: user, keys: role, content, content length: 56
10:57:12.064 Formatted message 2 - role: user, keys: role, content, content length: 56
10:57:12.064 Formatted message 3 - role: assistant, keys: role, content, tool_calls, content length: 0
10:57:12.064 Formatted message 3 has 1 tool_calls
10:57:12.064 Formatted message 4 - role: tool, keys: role, content, name, tool_call_id, content length: 158
10:57:12.064 Formatted message 4 has tool_call_id: tool-call-1761209831979-0
10:57:12.064 Formatted message 4 has name: search_suggestion
10:57:12.064 Sending to Ollama with formatted messages: 5 (with tool instructions)
10:57:12.064 Sending 12 tool definitions to Ollama
10:57:12.064 Sending request to Ollama with model: llama3.1:8b
10:57:12.064 Using streaming mode with Ollama client
10:57:12.064 Ollama streaming details: model=llama3.1:8b, streamCallback=provided
10:57:12.064 Performing Ollama health check...
10:57:12.064 Ollama API request to: http://localhost:11434/api/tags
10:57:12.064 Ollama API request method: GET
10:57:12.064 Ollama API request headers: {"Content-Type":"application/json","Accept":"application/json","User-Agent":"ollama-js/0.6.0 (x64 win32 Node.js/v22.20.0)"}
10:57:12.066 Ollama API response status: 200
10:57:12.066 Ollama health check successful
10:57:12.066 Making Ollama streaming request after successful health check
10:57:12.066 Ollama API request to: http://localhost:11434/api/chat
10:57:12.066 Ollama API request method: POST
10:57:12.066 Ollama API request headers: {"Content-Type":"application/json","Accept":"application/json","User-Agent":"ollama-js/0.6.0 (x64 win32 Node.js/v22.20.0)"}
10:57:13.889 Ollama API response status: 200
10:57:13.889 Starting Ollama stream processing with model llama3.1:8b
10:57:13.889 Processing Ollama stream chunk #1, done=false, has content=true, content length=5
10:57:13.889 First content chunk [5 chars]: "Based"
10:57:13.889 Sending chunk to callback: chunkNumber=1, contentLength=5, done=false
10:57:13.889 Successfully called streamCallback with first chunk
10:57:14.499 Nothing to push
10:57:14.558 Finished pull
10:57:14.559 Nothing to push
10:57:14.799 Content hash computation took 53ms
10:57:14.799 Content hash checks PASSED
10:57:14.799 Sending message to all clients: {"type":"sync-finished","lastSyncedPush":45844}
10:57:15.148 Processing Ollama stream chunk #10, done=false, has content=true, content length=2
10:57:16.457 Processing Ollama stream chunk #20, done=false, has content=true, content length=4
10:57:17.841 Processing Ollama stream chunk #30, done=false, has content=true, content length=3
10:57:19.313 Processing Ollama stream chunk #40, done=false, has content=true, content length=3
10:57:20.898 Processing Ollama stream chunk #50, done=false, has content=true, content length=2
10:57:22.490 Processing Ollama stream chunk #60, done=false, has content=true, content length=4
10:57:24.044 Processing Ollama stream chunk #70, done=false, has content=true, content length=7
10:57:25.571 Processing Ollama stream chunk #80, done=false, has content=true, content length=3
10:57:27.075 Processing Ollama stream chunk #90, done=false, has content=true, content length=5
10:57:28.600 Processing Ollama stream chunk #100, done=false, has content=true, content length=6
10:57:30.130 Processing Ollama stream chunk #110, done=false, has content=true, content length=4
10:57:31.747 Processing Ollama stream chunk #120, done=false, has content=true, content length=2
10:57:33.315 Processing Ollama stream chunk #130, done=false, has content=true, content length=5
10:57:34.816 Processing Ollama stream chunk #140, done=true, has content=false, content length=0
10:57:34.816 Empty final chunk received with done=true flag
10:57:34.816 Sending chunk to callback: chunkNumber=140, contentLength=0, done=true
10:57:34.816 Successfully called streamCallback with done=true flag
10:57:34.816 Completed Ollama streaming: processed 140 chunks, final content: 603 chars
10:57:34.817 Response contains no tool calls - plain text response
10:57:34.817 Pipeline stage LLMCompletion completed in 22834ms
10:57:34.817 ========== LLM FOLLOW-UP RESPONSE RECEIVED ==========
10:57:34.817 Follow-up response model: llama3.1:8b, provider: Ollama
10:57:34.817 Follow-up response text: Based on the Trilium Notes system, I will try another search variation.

{
  "name": "search_result",
  "parameters": {
    "note_id": "abc123def456",...
10:57:34.817 Follow-up contains tool calls: false
10:57:34.817 ========== TOOL EXECUTION COMPLETE ==========
10:57:34.817 No more tool calls, breaking tool execution loop
10:57:34.817 Resuming streaming with final response: 603 chars
10:57:34.817 [WS-SERVER] Sending LLM stream message: chatNoteId=vzd1tWqllrug, content=true, thinking=false, toolExecution=false, done=true
10:57:34.817 [WS-SERVER] Sent LLM stream message to 1 clients
10:57:34.817 Sent final response with done=true signal and text content
10:57:34.817 ========== FINAL RESPONSE PROCESSING ==========
10:57:34.817 Executing pipeline stage: ResponseProcessing
10:57:34.817 Processing LLM response from model: llama3.1:8b
10:57:34.817 Token usage - prompt: 1066, completion: 140, total: 1206
10:57:34.817 Pipeline stage ResponseProcessing completed in 0ms
10:57:34.817 Final response processed, returning to user (603 chars)
10:57:34.817 ========== PIPELINE COMPLETE ==========
10:57:34.835 204 PUT /api/notes/vzd1tWqllrug/data with 0 bytes took 2ms
10:57:34.836 200 GET /api/llm/chat/vzd1tWqllrug with 1757 bytes took 1ms
10:57:34.854 200 GET /api/notes/vzd1tWqllrug/blob with 2921 bytes took 0ms
10:57:49.113 Updating option 'zoomFactor' to '0.6'
10:57:49.113 204 PUT /api/options with 0 bytes took 0ms
10:58:14.536 Sending message to all clients: {"type":"sync-push-in-progress","lastSyncedPush":45844}
10:58:14.536 Sync Q8efM9ZBoc: Pushing 2 sync changes in 73ms
10:58:14.537 Nothing to push
10:58:14.595 Finished pull
10:58:14.595 Nothing to push
10:58:14.820 Content hash computation took 21ms
10:58:14.820 Content hash checks PASSED
10:58:14.820 Sending message to all clients: {"type":"sync-finished","lastSyncedPush":45846}
10:59:12.438 200 GET /api/sql/schema with 7987 bytes took 0ms
10:59:12.439 200 GET /api/notes/_backendLog/blob with 157 bytes took 0ms
10:59:12.451 204 PUT /api/options with 0 bytes took 2ms
