1 change: 1 addition & 0 deletions .env
@@ -0,0 +1 @@
_APP_EMBEDDING_MODELS=embeddinggemma
18 changes: 18 additions & 0 deletions docker-compose.yml
@@ -0,0 +1,18 @@
---
version: '3'

services:
ollama:
build: .
restart: unless-stopped
args:
MODELS: ${_APP_EMBEDDING_MODELS:-embeddinggemma}
# duration to keep model in memory
OLLAMA_KEEP_ALIVE: 24h
ports:
- "11434:11434"
volumes:
- appwrite-models:/root/.ollama

volumes:
appwrite-models:
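
For reference, a minimal smoke-test sketch (not part of this change; the model tag and prompt are illustrative): once the stack is up, the preloaded embedding model can be exercised through Ollama's HTTP API on the published port.

    # assumes the compose stack is running and embeddinggemma was pulled at build time
    curl -s http://localhost:11434/api/embeddings \
      -d '{"model": "embeddinggemma", "prompt": "hello world"}'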
23 changes: 23 additions & 0 deletions dockerfile
@@ -0,0 +1,23 @@
FROM ollama/ollama:0.12.7

# Preload specific models
ARG MODELS
# needed to set in the environment
ARG OLLAMA_KEEP_ALIVE
ENV OLLAMA_KEEP_ALIVE=${OLLAMA_KEEP_ALIVE:-24h}

# Pre-pull models at build time for Docker layer caching
RUN ollama serve & \
sleep 5 && \
for m in $MODELS; do \
echo "Pulling model $m..."; \
ollama pull $m || exit 1; \
done && \
pkill ollama
Comment on lines +12 to +16
🛠️ Refactor suggestion | 🟠 Major

Quote the model variable in the build-time pull loop.

Unquoted expansions risk word-splitting if a model name contains spaces or special characters. (The word-boundary grep suggestion applies to the runtime check in the ENTRYPOINT, reviewed separately below.)

   for m in $MODELS; do \
       echo "Pulling model $m..."; \
-      ollama pull $m || exit 1; \
+      ollama pull "$m" || exit 1; \
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
   for m in $MODELS; do \
       echo "Pulling model $m..."; \
-      ollama pull $m || exit 1; \
+      ollama pull "$m" || exit 1; \
   done && \
   pkill ollama
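
As a standalone illustration of the quoting concern (a sketch with hypothetical model names, not part of the PR): the unquoted $MODELS in the for header is what splits the list into individual tags, while quoting "$m" ensures each tag reaches ollama pull as a single argument even if it contains unusual characters.

    MODELS='embeddinggemma nomic-embed-text'   # hypothetical list of model tags
    for m in $MODELS; do                       # left unquoted on purpose: split into one tag per iteration
        echo "Pulling model $m..."
        ollama pull "$m" || exit 1             # quoted so the tag is passed to the CLI as a single argument
    done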


# Expose Ollama default port
EXPOSE 11434

# On container start, quickly ensure models exist (no re-download unless missing)
ENTRYPOINT ["/bin/bash", "-c", "(sleep 2; for m in $MODELS; do ollama list | grep -q $m || ollama pull $m; done) & exec ollama $0"]
⚠️ Potential issue | 🔴 Critical

Fix the ENTRYPOINT command logic—$0 won't pass CMD correctly.

In the bash -c context, $0 refers to bash itself, not the CMD argument. This means exec ollama $0 passes "bash" to ollama (invalid), and CMD ["serve"] is never invoked. The background model check also won't block container startup if a pull fails, so the service may start with missing models.

-ENTRYPOINT ["/bin/bash", "-c", "(sleep 2; for m in $MODELS; do ollama list | grep -q $m || ollama pull $m; done) & exec ollama $0"]
+ENTRYPOINT ["/bin/bash", "-c", "for m in \"$MODELS\"; do ollama list | grep -qw \"$m\" || ollama pull \"$m\" || exit 1; done && exec ollama serve"]
 CMD ["serve"]

This change:

  • Removes the $0 reference and explicitly uses serve.
  • Makes the model check synchronous so the container fails if models are unavailable.
  • Adds word-boundary matching to grep (-w) and quotes variables to handle model names safely.
  • Removes the unused CMD since the command is now fully specified in ENTRYPOINT.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In dockerfile around line 22, the ENTRYPOINT uses bash -c with $0 (which is
bash) and runs the model-check in background so CMD isn't passed and failed
pulls are ignored; change ENTRYPOINT to run a synchronous shell script/command
that quotes $MODELS, iterates over each model, uses word-boundary grep (-w)
and/or exact matching, attempts ollama pull and exits non-zero on any pull
failure so startup fails if models are missing, then exec ollama serve
explicitly (remove reliance on $0 and remove the now-unused CMD).
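
One way to act on that prompt is a small entrypoint script. The sketch below is illustrative only: the entrypoint.sh name is assumed, it assumes MODELS is exported as a runtime environment variable (the Dockerfile above only declares it as a build ARG), and it starts the daemon in the background so that ollama list and ollama pull have a server to talk to, while a failed pull still stops the container instead of being silently ignored.

    #!/usr/bin/env bash
    set -euo pipefail
    # start the daemon so the list/pull calls below have a server to talk to
    ollama serve &
    server_pid=$!
    sleep 2
    # verify each model; a failed pull makes the script (and the container) exit non-zero
    for m in ${MODELS:-}; do                   # unquoted on purpose: one tag per iteration
        ollama list | grep -qw "$m" || ollama pull "$m"
    done
    # keep the server in the foreground for the container's lifetime
    wait "$server_pid"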

CMD ["serve"]