# Configuration

robotmem uses a three-layer configuration, each layer overriding the one before it: dataclass defaults → `~/.robotmem/config.json` → environment variables.
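As a sketch of that precedence (the field names mirror the settings tables below, but the loader itself and the per-setting environment-variable names are hypothetical — only `ROBOTMEM_HOME` is documented on this page):

```python
import json
import os
from dataclasses import dataclass, fields
from pathlib import Path

# Hypothetical illustration of the three-layer precedence; robotmem's
# real loader may differ.
@dataclass
class Config:
    embed_backend: str = "onnx"  # layer 1: dataclass default
    onnx_model: str = "BAAI/bge-small-en-v1.5"
    top_k: int = 10

def load_config(home: Path) -> Config:
    cfg = Config()
    # Layer 2: config.json overrides defaults (only keys present apply).
    path = home / "config.json"
    if path.exists():
        for key, value in json.loads(path.read_text()).items():
            if hasattr(cfg, key):
                setattr(cfg, key, value)
    # Layer 3: environment variables override everything
    # (ROBOTMEM_<FIELD> naming is assumed for illustration).
    for f in fields(cfg):
        env = os.environ.get(f"ROBOTMEM_{f.name.upper()}")
        if env is not None:
            setattr(cfg, f.name, env if f.type is str else f.type(env))
    return cfg
```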
## Config File

Location: `~/.robotmem/config.json` (created automatically on first run).

Only non-default values need to be specified:

```json
{
  "embed_backend": "onnx",
  "onnx_model": "BAAI/bge-small-en-v1.5"
}
```
## All Settings

### Database

| Setting | Default | Description |
|---|---|---|
| `db_path` | `~/.robotmem/memory.db` | SQLite database file path |
### Embedding

| Setting | Default | Description |
|---|---|---|
| `embed_backend` | `"onnx"` | Embedding backend: `"onnx"` or `"ollama"` |
| `onnx_model` | `"BAAI/bge-small-en-v1.5"` | FastEmbed ONNX model name |
| `onnx_dim` | `384` | ONNX model embedding dimension |
| `fastembed_cache_dir` | `""` (system default) | FastEmbed model cache directory |
| `embedding_model` | `"nomic-embed-text"` | Ollama embedding model name |
| `embedding_dim` | `768` | Ollama model embedding dimension |
| `ollama_url` | `"http://localhost:11434"` | Ollama API endpoint |
| `embed_api` | `"ollama"` | Ollama API style: `"ollama"` or `"openai_compat"` |
### Search

| Setting | Default | Description |
|---|---|---|
| `top_k` | `10` | Default number of results |
| `rrf_k` | `60` | RRF fusion constant (higher gives more weight to lower-ranked hits) |
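The `rrf_k` constant plugs into the standard Reciprocal Rank Fusion formula, where each document scores `sum(1 / (k + rank))` across the ranked lists it appears in. A minimal sketch of that formula (robotmem's exact fusion code is not shown on this page):

```python
def rrf_fuse(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked id lists with Reciprocal Rank Fusion.

    A larger k flattens the 1/(k + rank) curve, so lower-ranked hits
    contribute relatively more to the fused score.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25 = ["m3", "m1", "m2"]    # ids ranked by keyword relevance
vector = ["m1", "m2", "m4"]  # ids ranked by embedding similarity
fused = rrf_fuse([bm25, vector], k=60)
# "m1" wins: it ranks highly in both lists.
```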
### Memory Defaults

| Setting | Default | Description |
|---|---|---|
| `collection` | `"default"` | Default collection name |
| `default_confidence` | `0.9` | Initial confidence for new memories |
| `default_decay_rate` | `0.01` | Default time-decay rate per day |
| `min_confidence` | `0.3` | Default minimum confidence for recall filtering |
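How these defaults interact depends on the decay function, which this page does not spell out; assuming a simple per-day exponential model purely for illustration:

```python
def decayed_confidence(confidence: float, decay_rate: float, age_days: float) -> float:
    # ASSUMED exponential model (not confirmed by the docs):
    # confidence shrinks by decay_rate per day.
    return confidence * (1.0 - decay_rate) ** age_days

# Under this assumption, a memory starting at the default 0.9 with a
# 0.01/day decay rate would fall below min_confidence (0.3), and thus
# out of default recall results, after roughly 110 days.
c = decayed_confidence(0.9, 0.01, 110)
```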
## Environment Variables

| Variable | Effect |
|---|---|
| `ROBOTMEM_HOME` | Override config/database directory (default: `~/.robotmem`) |

```sh
# Example: use a project-specific database
export ROBOTMEM_HOME=/path/to/project/.robotmem
python -m robotmem
```
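The documented behavior amounts to a single directory lookup; a hypothetical helper (`robotmem_home` is not a real robotmem function) showing how the paths resolve:

```python
import os
from pathlib import Path

def robotmem_home() -> Path:
    # Hypothetical helper: ROBOTMEM_HOME overrides the default ~/.robotmem.
    override = os.environ.get("ROBOTMEM_HOME")
    return Path(override) if override else Path.home() / ".robotmem"

# The config file and database then live under this directory:
config_path = robotmem_home() / "config.json"
db_path = robotmem_home() / "memory.db"
```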
## Embedding Backend Comparison

| Aspect | ONNX (default) | Ollama |
|---|---|---|
| Setup | Zero config | Requires ollama serve + model pull |
| Speed | ~5ms/query | ~20-50ms/query |
| Model size | 67MB (auto-downloaded) | 274MB (nomic-embed-text) |
| CPU/GPU | Pure CPU | CPU (GPU optional) |
| Offline | Fully offline after first download | Requires local Ollama server |
| Dimension | 384d | 768d |
| Quality | MTEB retrieval 51.68 | Higher for some tasks |
| Multilingual | Limited | Better with multilingual models |
## Switch to Ollama

```json
{
  "embed_backend": "ollama",
  "embedding_model": "nomic-embed-text",
  "embedding_dim": 768,
  "ollama_url": "http://localhost:11434"
}
```

Then pull the model:

```sh
ollama pull nomic-embed-text
```
## OpenAI-Compatible API

For embedding servers that expose an OpenAI-compatible `/v1/embeddings` endpoint:

```json
{
  "embed_backend": "ollama",
  "embed_api": "openai_compat",
  "embedding_model": "your-model-name",
  "embedding_dim": 768,
  "ollama_url": "http://your-server:8080"
}
```
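In this mode the server receives the standard OpenAI embeddings payload, `{"model": ..., "input": [...]}`. A sketch that only builds such a request without sending it (the URL and model name are the placeholders from the config above, not real endpoints):

```python
import json
import urllib.request

def embed_request(base_url: str, model: str, texts: list[str]) -> urllib.request.Request:
    """Build a POST against an OpenAI-compatible /v1/embeddings endpoint."""
    body = json.dumps({"model": model, "input": texts}).encode()
    return urllib.request.Request(
        f"{base_url}/v1/embeddings",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = embed_request("http://your-server:8080", "your-model-name", ["grasp the red cube"])
# A compatible server responds with {"data": [{"embedding": [...]}], ...};
# the vector length must match the configured embedding_dim.
```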
## Database PRAGMA Settings

robotmem configures SQLite with production-optimized settings (not configurable):

| PRAGMA | Value | Rationale |
|---|---|---|
| `journal_mode` | `WAL` | Concurrent readers with a single writer |
| `busy_timeout` | `5000` ms | Wait up to 5 s for a locked database |
| `synchronous` | `NORMAL` | Balance between safety and performance |
| `cache_size` | `-8000` (8 MB) | Reasonable memory usage for robot workloads |
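These settings can be reproduced with the standard `sqlite3` module; a minimal sketch (note that WAL mode requires a file-backed database, so an in-memory connection would silently stay in `memory` journal mode):

```python
import sqlite3

def open_db(path: str) -> sqlite3.Connection:
    # Apply the same PRAGMAs the table above describes.
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute("PRAGMA busy_timeout=5000")
    conn.execute("PRAGMA synchronous=NORMAL")
    conn.execute("PRAGMA cache_size=-8000")  # negative value = size in KiB
    return conn
```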
## Collections

Collections are logical namespaces for separating memories. Common patterns:

```python
# Per-robot collections
start_session(collection="fetch-001")
start_session(collection="ur5e-003")

# Per-task collections
learn(insight="...", collection="grasping")
learn(insight="...", collection="navigation")

# Default collection
learn(insight="...")  # uses "default"
```

The `recall`, `learn`, and `save_perception` tools all accept a `collection` parameter.
## Graceful Degradation

robotmem is designed to work even when components fail:
| Component | Failure | Behavior |
|---|---|---|
| sqlite-vec | Not installed | Vector search disabled, BM25-only |
| Embedding (ONNX/Ollama) | Model unavailable | Vector search disabled, BM25-only |
| jieba | Not installed | CJK tokenization disabled, English search works |
| FTS5 | Creation fails | Full-text search disabled (rare) |
The MCP tools always return a valid response — errors are logged but never crash the server.
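That degradation pattern boils down to optional imports gating feature flags; an illustrative sketch under that assumption, not robotmem's actual code:

```python
import logging

logger = logging.getLogger("robotmem")

def try_import(name: str) -> bool:
    """Return True if the optional dependency is importable; never raise."""
    try:
        __import__(name)
        return True
    except ImportError:
        logger.warning("%s not installed; related features disabled", name)
        return False

# Each failed import degrades one capability instead of crashing the server:
HAVE_VEC = try_import("sqlite_vec")  # False -> BM25-only search
HAVE_JIEBA = try_import("jieba")     # False -> no CJK tokenization
```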