-# Changelog
+# CHANGELOG
 
-All notable changes to mem0-mcp-selfhosted are documented here.
 
-## [0.1.0] - 2026-02-27
+## v0.1.1 (2026-02-27)
 
-First public release. Self-hosted mem0 MCP server for Claude Code with 11 tools, dual LLM provider support, and knowledge graph integration.
+### Bug Fixes
+
+- Use NEO4J_DATABASE env var instead of config dict for non-default database
+  ([`74e1188`](https://github.com/elvismdev/mem0-mcp-selfhosted/commit/74e1188d38154846ec8b12602fde1d757197873b))
+
+mem0ai's graph_memory.py passes config as positional args to Neo4jGraph(), where position 3 is
+`token`, not `database`. Setting database in the config dict causes it to land in the token
+parameter, resulting in an AuthenticationError. Use the NEO4J_DATABASE env var, which
+langchain_neo4j reads via get_from_dict_or_env().
+
+Upstream: mem0ai #3906, #3981, #4085 (none merged)
+
+Resolves: PAR-57
 
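As an annotation on the hunk above: a minimal sketch of the workaround, assuming mem0's usual `graph_store` config shape. The database name and credentials are placeholders, not values from the project.

```python
import os

# Per the changelog entry above: mem0ai hands the graph-store config dict to
# Neo4jGraph() positionally, and the third positional slot is `token`, not
# `database` -- so a "database" key in the dict would be misread as the auth
# token and raise AuthenticationError. Export NEO4J_DATABASE instead;
# langchain_neo4j falls back to it via get_from_dict_or_env().
os.environ["NEO4J_DATABASE"] = "memories"  # placeholder database name

config = {
    "graph_store": {
        "provider": "neo4j",
        "config": {
            "url": "bolt://localhost:7687",
            "username": "neo4j",
            "password": "secret",  # placeholder credentials
            # NOTE: deliberately no "database" key here.
        },
    },
}
```

With this in place, `Memory.from_config(config)` should connect to the non-default database without tripping the positional-argument bug.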
-### New Features
 
-- **11 MCP tools** — 9 memory tools (`add_memory`, `search_memories`, `get_memories`, `get_memory`, `update_memory`, `delete_memory`, `delete_all_memories`, `list_entities`, `delete_entities`) + 2 graph tools (`search_graph`, `get_entity`) + `memory_assistant` prompt
-- **Dual LLM providers** — Anthropic (Claude) and Ollama as configurable main LLM for fact extraction and memory updates. Set `MEM0_PROVIDER=ollama` for a fully local setup with no cloud dependencies
-- **Knowledge graph** — Neo4j-backed entity and relationship extraction via `enable_graph` toggle. Supports 5 graph LLM providers: `anthropic`, `ollama`, `gemini`, `gemini_split`, and `anthropic_oat`
-- **Split-model graph pipeline** — `gemini_split` routes entity extraction to Gemini and contradiction detection to Claude, combining Gemini's extraction quality with Claude's reasoning
-- **Zero-config Anthropic auth** — Automatically reads Claude Code's OAT token from `~/.claude/.credentials.json`. No API key needed for Claude Code users
-- **OAT token self-refresh** — Proactive pre-expiry refresh + 3-step defensive retry (piggyback on credentials file, self-refresh via OAuth, wait-and-retry). Long-running sessions survive token rotation seamlessly
-- **`MEM0_PROVIDER` cascade** — Single env var configures both main LLM and graph LLM providers. `MEM0_OLLAMA_URL` cascades to all Ollama-backed services. Per-service overrides still work
-- **Structured outputs** — Claude Opus/Sonnet/Haiku 4.x models use native JSON schema via `output_config` for reliable fact extraction
-- **Ollama defense-in-depth** — 6 layers for reliable structured output from Ollama: `/no_think` injection, deterministic params, think-tag stripping, JSON extraction, and retry on empty responses
-- **Per-call graph toggle** — `enable_graph` parameter on `add_memory` and `search_memories` with thread-safe locking
-- **Wildcard graph search** — Pass `*` to `search_graph` to list all entities
-- **Qdrant Facet API** — `list_entities` uses server-side aggregation (Qdrant v1.12+) with scroll+dedupe fallback for older versions
-- **Safe bulk delete** — Never calls `memory.delete_all()`. Iterates and deletes individually with explicit graph cleanup
+## v0.1.0 (2026-02-27)
 
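The **Safe bulk delete** bullet in the feature list above describes a pattern worth pinning down: enumerate memories and delete them one by one rather than calling `memory.delete_all()`. A minimal sketch, with the client duck-typed; the `get_all()`/`delete()` shapes are assumptions about mem0's Memory API, not code from this project.

```python
def safe_bulk_delete(memory, user_id: str) -> int:
    """Delete a user's memories individually instead of via delete_all().

    Assumed client shape: get_all(user_id=...) returns {"results": [{"id": ...}, ...]}
    and delete(memory_id) removes one record. Deleting one-by-one keeps any
    per-record graph-cleanup hooks in play.
    """
    deleted = 0
    for item in memory.get_all(user_id=user_id).get("results", []):
        memory.delete(item["id"])
        deleted += 1
    return deleted
```

A bulk wipe then becomes an ordinary loop that the caller can observe, log, and interrupt, which is the safety property the bullet is after.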
 ### Bug Fixes
 
-- Fix `anthropic_oat` provider not registered in `LlmFactory`, preventing explicit use
-- Fix `is_oat_token(None)` crash in proactive refresh when no Anthropic token configured
-- Fix `response.content[0]` IndexError when Anthropic API returns empty content
-- Fix thread-safety race condition in `safe_bulk_delete` reading mutable `enable_graph` state
-- Fix contradiction model defaulting to Ollama model name when sent to Anthropic API
-- Fix Anthropic provider not registered for `gemini_split` contradiction LLM
-- Fix `MEM0_QDRANT_TIMEOUT` rejected by Pydantic — use pre-configured `QdrantClient` instead
-- Fix Gemini `_parse_response` signature mismatch after upstream `tools` parameter addition
-- Fix Neo4j `CypherSyntaxError` on LLM-generated relationship names with hyphens or leading digits
-
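The last bug-fix bullet above (Neo4j `CypherSyntaxError` on hyphenated or digit-leading relationship names) implies a sanitizer along these lines. This is an illustrative sketch of the general technique; the project's actual rules may differ.

```python
import re

def sanitize_rel_type(name: str) -> str:
    """Make an LLM-generated relationship name a valid Cypher identifier.

    Unquoted Cypher identifiers may not contain hyphens or other punctuation
    and may not start with a digit -- both show up in LLM output.
    """
    # Replace anything outside [A-Za-z0-9_] with an underscore.
    cleaned = re.sub(r"[^A-Za-z0-9_]", "_", name)
    # Identifiers cannot start with a digit; prefix one if needed.
    if cleaned and cleaned[0].isdigit():
        cleaned = "_" + cleaned
    return cleaned.upper()

sanitize_rel_type("works-for")   # -> "WORKS_FOR"
sanitize_rel_type("3d-printed")  # -> "_3D_PRINTED"
```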
-### Infrastructure
-
-- **301 tests** — Unit, contract, integration, MCP protocol, and concurrency test suites
-- **Centralized env helpers** — `env()`, `opt_env()`, `bool_env()` with consistent whitespace stripping across all modules
-- **Telemetry suppression** — mem0ai PostHog telemetry disabled before any imports
-- **Relationship sanitizer** — Monkey-patches mem0ai's sanitizer at startup for Neo4j identifier compliance
-- **Gemini null content guard** — Patches `GeminiLLM._parse_response` to handle `content=None` responses
-- **Transient retry** — Anthropic API 500/502/503/529 errors retried with exponential backoff
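The centralized env helpers named above (`env()`, `opt_env()`, `bool_env()`) suggest a small, testable surface. The signatures below are assumptions for illustration, not the project's actual code; only the names and the whitespace-stripping behavior come from the changelog.

```python
import os

def env(name: str, default: str) -> str:
    """Lookup with a default; strips surrounding whitespace."""
    return os.environ.get(name, default).strip()

def opt_env(name: str):
    """Optional lookup: a stripped value, or None when unset or blank."""
    value = os.environ.get(name)
    if value is None:
        return None
    value = value.strip()
    return value or None

def bool_env(name: str, default: bool = False) -> bool:
    """Interpret common truthy strings case-insensitively."""
    value = opt_env(name)
    if value is None:
        return default
    return value.lower() in {"1", "true", "yes", "on"}
```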
+- **ci**: Use angular parser compatible with PSR v9.15.2
+  ([`b5bc6ab`](https://github.com/elvismdev/mem0-mcp-selfhosted/commit/b5bc6ab45edff26f07fc73774c7e0c57d22cb40d))
+
+The v9 GitHub Action does not recognize the "conventional" parser name (v10+ only). Reverts to
+"angular" and the changelog.changelog_file format.
+
+### Continuous Integration
+
+- Add python-semantic-release configuration and GitHub Actions workflow
+  ([`2473ee4`](https://github.com/elvismdev/mem0-mcp-selfhosted/commit/2473ee4ec9c0db90b2bb412d3714caae7dc41498))
+
+Automated versioning via Conventional Commits analysis, changelog generation, git tagging
+(v{version}), and GitHub Release creation on push to main.
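The two python-semantic-release entries above boil down to one configuration detail. A `pyproject.toml` sketch compatible with PSR v9; the `version_toml` path is an assumption about this project's layout, not something stated in the changelog.

```toml
[tool.semantic_release]
commit_parser = "angular"   # PSR v9 does not accept the "conventional" name (v10+ only)
version_toml = ["pyproject.toml:project.version"]

[tool.semantic_release.changelog]
changelog_file = "CHANGELOG.md"   # v9-style key; v10 nests this under default_templates
```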