LanceDB-backed long-term memory provider for OpenCode
Welcome to lancedb-opencode-pro! This plugin empowers OpenCode with a durable, long-term memory system powered by LanceDB.
To help you find what you need quickly, please select the guide that best fits your needs:
You see a "Memory store unavailable" error or the plugin fails to load. 📖 Read the Compatibility Guide (10 min)
- Fix: `rm -rf ~/.cache/opencode/node_modules/` then restart
- OpenCode v1.4.3 fully supported (the NAPI issue was a cache problem, not a loader bug)
- Diagnosis checklist and solutions
You are new to this project and want to get it running quickly. 📖 Read the Quick Start Guide (15 min)
- Complete installation steps & examples
- Basic memory operations
- Troubleshooting common issues
You have it running and want to tune retrieval, use OpenAI, or share memory across projects. 📖 Read the Advanced Configuration (30 min)
- Hybrid retrieval (RRF, recency boost, importance weighting)
- Memory injection controls (budget/adaptive modes)
- OpenAI Embedding setup
- Cross-project memory sharing (global scope)
You want to understand the architecture, run tests, or contribute to the source code. 📖 Read the Development Workflow (20 min)
- Local environment setup
- OpenCode skills usage guide
- Testing and validation processes
- Release workflow
Register the plugin in `~/.config/opencode/opencode.json`:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": [
    "lancedb-opencode-pro"
  ]
}
```

Create `~/.config/opencode/lancedb-opencode-pro.json`:
```json
{
  "provider": "lancedb-opencode-pro",
  "dbPath": "~/.opencode/memory/lancedb",
  "embedding": {
    "provider": "ollama",
    "model": "nomic-embed-text",
    "baseUrl": "http://127.0.0.1:11434"
  }
}
```

Verify that Ollama is accessible:

```shell
curl http://127.0.0.1:11434/api/tags
```

Then restart OpenCode. That's it! OpenCode will now automatically capture key decisions and inject them into future conversations.
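With the `ollama` provider configured above, embeddings come from Ollama's HTTP API. A minimal sketch of what such a request could look like — the `buildEmbeddingRequest` helper is hypothetical, not the plugin's actual code; `/api/embeddings` is Ollama's standard embedding endpoint:

```typescript
// Hypothetical sketch of how an "ollama" embedding provider could build
// its request; names are illustrative, not the plugin's actual API.
function buildEmbeddingRequest(model: string, text: string, baseUrl: string) {
  return {
    url: `${baseUrl}/api/embeddings`, // Ollama's embedding endpoint
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt: text }),
    },
  };
}

// fetch(req.url, req.init) would return a JSON body with an
// `embedding: number[]` field for the given prompt.
const req = buildEmbeddingRequest(
  "nomic-embed-text",
  "hello",
  "http://127.0.0.1:11434"
);
```

The `curl .../api/tags` check above merely confirms the server is up; the actual embedding calls go to this endpoint with the configured model name.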
- Automatically extracts decisions, lessons, and patterns from assistant responses.
- Configurable minimum capture length (default: 80 chars).
- Auto-categorization into project or global scope.
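The capture behavior above can be pictured as two small checks; a hedged sketch, where the 80-character threshold matches the documented default but the function names and the categorization heuristic are purely illustrative:

```typescript
// Illustrative capture gate: the 80-char minimum mirrors the documented
// default; everything else is a hypothetical sketch, not plugin code.
interface CaptureConfig { minCaptureLength: number }

type Scope = "project" | "global";

function shouldCapture(
  text: string,
  cfg: CaptureConfig = { minCaptureLength: 80 }
): boolean {
  // Skip short responses unlikely to contain a durable decision or lesson.
  return text.trim().length >= cfg.minCaptureLength;
}

function categorize(text: string): Scope {
  // Naive auto-categorization: project-specific wording stays in the
  // project scope, generic lessons go to the shared global scope.
  return /\b(this repo|this project|src\/)\b/i.test(text) ? "project" : "global";
}
```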
- Vector Search + BM25 Lexical Search
- Reciprocal Rank Fusion (RRF)
- Recency boost & Importance weighting
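Reciprocal Rank Fusion merges the vector and BM25 result lists by rank position rather than raw score. A minimal sketch of the standard RRF formula — `k = 60` is the conventional default from the RRF literature; the plugin's actual constant and any recency/importance adjustments may differ:

```typescript
// Reciprocal Rank Fusion: each document scores sum(1 / (k + rank_i))
// across the ranked lists it appears in; k = 60 is the common default.
function rrf(rankings: string[][], k = 60): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, idx) => {
      const rank = idx + 1; // ranks are 1-based
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank));
    });
  }
  return scores;
}

// Fuse a vector-search ranking with a BM25 ranking.
const fused = [...rrf([["a", "b", "c"], ["b", "c", "a"]]).entries()]
  .sort((x, y) => y[1] - x[1])
  .map(([id]) => id);
```

Because RRF uses only rank positions, it sidesteps the problem of vector distances and BM25 scores living on incompatible scales.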
| Tool | Description | Documentation |
|---|---|---|
| `memory_search` | Hybrid search for long-term memory | Doc |
| `memory_delete` | Delete a single memory entry | Doc |
| `memory_clear` | Clear all memories in a specific scope | Doc |
| `memory_stats` | View memory statistics | Doc |
| `memory_remember` | Manually store a memory | Doc |
| `memory_forget` | Remove or disable a memory | Doc |
| `memory_what_did_you_learn` | Show recent learning summaries | Doc |
| `memory_dashboard` | Weekly learning dashboard with trends | Doc |
| `memory_kpi` | Learning effectiveness KPIs (retry-to-success, memory lift) | Doc |
(For full details on tools like Effectiveness Feedback, Cross-Project Sharing, Deduplication, Citations, and Episodic Learning, please refer to the Advanced Configuration.)
```json
{
  "provider": "lancedb-opencode-pro",
  "dbPath": "~/.opencode/memory/lancedb",
  "embedding": {
    "provider": "ollama",
    "model": "nomic-embed-text",
    "baseUrl": "http://127.0.0.1:11434"
  },
  "retrieval": {
    "mode": "hybrid",
    "vectorWeight": 0.7,
    "bm25Weight": 0.3,
    "minScore": 0.2
  }
}
```

Full configuration options, including injection control, deduplication, and OpenAI setup, are documented in docs/ADVANCED_CONFIG.md.
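Under this config, the hybrid score is a weighted blend of the two retrieval signals, with `minScore` pruning weak matches. A hedged sketch of how such weights could be applied, assuming both component scores are already normalized to [0, 1] — illustrative only, not the plugin's actual implementation:

```typescript
// Hypothetical weighted fusion matching the config above: component
// scores are assumed normalized to [0, 1]; hits scoring below
// minScore are discarded.
interface Hit { id: string; vectorScore: number; bm25Score: number }

function fuse(
  hits: Hit[],
  vectorWeight = 0.7,
  bm25Weight = 0.3,
  minScore = 0.2
) {
  return hits
    .map((h) => ({
      id: h.id,
      score: vectorWeight * h.vectorScore + bm25Weight * h.bm25Score,
    }))
    .filter((h) => h.score >= minScore) // drop weak matches
    .sort((a, b) => b.score - a.score); // best first
}
```

Raising `minScore` trades recall for precision: fewer, but more relevant, memories get injected.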
Configuration sources, highest priority first:

- Environment variables (`LANCEDB_OPENCODE_PRO_*`, e.g. `LANCEDB_OPENCODE_PRO_CONFIG_PATH`)
- Project sidecar: `.opencode/lancedb-opencode-pro.json`
- Global sidecar: `~/.config/opencode/lancedb-opencode-pro.json`
- Legacy sidecar or built-in defaults
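The lookup order above can be sketched as a first-match-wins resolver (illustrative only; the function and its signature are not part of the plugin's API, and the legacy-sidecar step is collapsed into the final fallback):

```typescript
// Illustrative first-match-wins config resolution mirroring the list
// above; the env var name and sidecar paths follow the documentation,
// the helper itself is hypothetical.
function resolveConfigPath(
  env: Record<string, string | undefined>,
  exists: (p: string) => boolean
): string {
  const explicit = env["LANCEDB_OPENCODE_PRO_CONFIG_PATH"];
  if (explicit) return explicit; // 1. environment variable wins
  const projectPath = ".opencode/lancedb-opencode-pro.json";
  if (exists(projectPath)) return projectPath; // 2. project sidecar
  const globalPath = "~/.config/opencode/lancedb-opencode-pro.json";
  if (exists(globalPath)) return globalPath; // 3. global sidecar
  return "<built-in defaults>"; // 4. legacy sidecar / defaults
}
```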
Quick validation using Docker:

```shell
docker compose build --no-cache && docker compose up -d
docker compose exec opencode-dev npm run verify
```

Full pre-release validation:

```shell
docker compose exec opencode-dev npm run verify:full
```

Performance benchmark (optional):

```shell
# Mock mode (fast)
docker compose exec opencode-dev ./scripts/run-perf-benchmark.sh
# Real Ollama mode
docker compose exec opencode-dev ./scripts/run-perf-benchmark.sh --real
```

See docs/memory-validation-checklist.md for more details.
Primary method: npm package (recommended)
Alternatively, install via .tgz release asset or build from source. See Install Options.
- v0.8.3: @opencode-ai/sdk and plugin upgrade to 1.4.7, Effect type mock fix, SDK compatibility docs
- v0.8.2: Plugin version mismatch fix (internal)
- v0.8.1: @opencode-ai/sdk and plugin upgrade to 1.4.3, bunfig.toml test isolation fix
- v0.8.0: Structured Logging via client.app.log() per OpenCode best practices, Test Environment Isolation Fix
- v0.7.0: OpenCode SDK v1.3.14 Compatibility, Node 22 memory_search Race Condition Fix
- v0.6.3: Index Creation Guard (defer on empty/insufficient tables, fix #70), LanceDB 0.27.2
- v0.6.2: Index Race Condition Fix (concurrent-process conflict handling, jitter backoff)
- v0.6.1: Event TTL/Archival, Index Creation Resilience, Duplicate Consolidation Performance
- v0.6.0: Learning Dashboard, KPI Pipeline, Feedback-Driven Ranking, Task-Type Aware Injection
See CHANGELOG.md for older versions and the full history of changes.
- Read docs/DEVELOPMENT_WORKFLOW.md
- Understand the OpenSpec specs in `openspec/specs/`
- Use `backlog-to-openspec` to create proposals
- Issues: Submit errors or requests on GitHub Issues.
- License: MIT License - see LICENSE.
Last Updated: 2026-04-17 Latest Version: v0.8.3