# Introduction
robotmem is a persistent memory system for robotic AI agents. It lets robots remember skills, learn from failures, and build on past experience — across sessions, across reboots, across lifetimes.
## Why robotmem?
Robots run thousands of experiments, but each episode starts from zero. robotmem stores every experience — parameters, trajectories, successes and failures — and retrieves the most relevant ones to guide future decisions.
## Core Capabilities

### 6 API Tools
| Tool | Purpose |
|---|---|
| `learn` | Record physical experience (parameters, strategies, lessons) |
| `recall` | Retrieve experience — BM25 + vector hybrid search with `context_filter` and `spatial_sort` |
| `save_perception` | Store perception/trajectory/force data (visual / tactile / proprioceptive / auditory / procedural) |
| `forget` / `update` | Delete or correct erroneous memories |
| `start_session` / `end_session` | Episode lifecycle (automatic consolidation + proactive recall) |
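To show how the six tools fit together over one episode, here is a toy in-memory stand-in. This is illustrative only: the class, its crude keyword matching, and the method signatures are assumptions that mirror the table above, not robotmem's actual API.

```python
# Toy stand-in for the six tools above (illustrative only;
# robotmem's real signatures and storage may differ).
class ToyMemory:
    def __init__(self):
        self.records = []   # learned experiences
        self.session = None

    def start_session(self, name):
        self.session = name

    def learn(self, text, context=None):
        self.records.append({"text": text, "context": context or {}})
        return len(self.records) - 1   # memory id

    def recall(self, query):
        # Crude keyword match standing in for BM25 + vector hybrid search
        return [r for r in self.records if r and query in r["text"]]

    def update(self, mem_id, text):
        self.records[mem_id]["text"] = text

    def forget(self, mem_id):
        self.records[mem_id] = None

    def end_session(self):
        self.session = None


mem = ToyMemory()
mem.start_session("episode-1")
mem.learn("push to target succeeded with grip_force 12.5 N")
hits = mem.recall("push")
mem.end_session()
```

The point is the lifecycle: a session brackets a batch of `learn` calls, and `forget`/`update` exist because recorded experience can be wrong and needs correcting.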
### 5 Perception Types
robotmem understands 5 types of robotic perception:
- Visual — camera images, scene descriptions, object detection
- Tactile — force/torque sensors, contact events, grip feedback
- Auditory — sound events, voice commands, acoustic signatures
- Proprioceptive — joint angles, end-effector position, velocity
- Procedural — action sequences, trajectories, multi-step plans
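As a sketch of how the five modalities might partition stored perception records, here is a minimal filter over tagged records. The field names (`type`, `data`, and the payload keys) are illustrative assumptions, not robotmem's actual schema.

```python
# Illustrative perception records tagged by the five types above
# (field names are assumptions, not robotmem's schema).
records = [
    {"type": "visual",         "data": {"scene": "red cube on table"}},
    {"type": "tactile",        "data": {"contact_force_n": 3.2}},
    {"type": "auditory",       "data": {"event": "gripper click"}},
    {"type": "proprioceptive", "data": {"joint_angles_deg": [0, 45, 90, 0, 30, 0, 0]}},
    {"type": "procedural",     "data": {"steps": ["approach", "grasp", "lift"]}},
]

def by_type(records, ptype):
    """Filter perception records down to a single modality."""
    return [r for r in records if r["type"] == ptype]

tactile = by_type(records, "tactile")
```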
### Structured Experience Retrieval

Not just vector search — robotmem understands the structure of robotic experience:

```python
# Only retrieve successful experiences
recall(query="push to target", context_filter='{"task.success": true}')

# Find the nearest spatial scenario
recall(query="grasp object", spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}')

# Combined: success + final distance < 0.05 m
recall(
    query="push",
    context_filter='{"task.success": true, "params.final_distance.value": {"$lt": 0.05}}',
)
```
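The semantics of the two structured arguments can be sketched in plain Python. This is a toy evaluator, assuming dotted paths into the context JSON, equality matching, and a `$lt` operator as shown in the examples above; it is not the library's code.

```python
import math

def get_path(ctx, dotted):
    """Resolve a dotted path like 'params.final_distance.value' in a nested dict."""
    for key in dotted.split("."):
        ctx = ctx[key]
    return ctx

def matches(ctx, flt):
    """Toy context_filter: equality checks plus a {'$lt': x} operator."""
    for path, cond in flt.items():
        val = get_path(ctx, path)
        if isinstance(cond, dict) and "$lt" in cond:
            if not val < cond["$lt"]:
                return False
        elif val != cond:
            return False
    return True

def spatial_sort(memories, field, target):
    """Toy spatial_sort: order memories by Euclidean distance to target."""
    return sorted(memories, key=lambda m: math.dist(get_path(m, field), target))

mems = [
    {"task": {"success": True},  "params": {"final_distance": {"value": 0.03}},
     "spatial": {"object_position": [1.3, 0.7, 0.42]}},
    {"task": {"success": False}, "params": {"final_distance": {"value": 0.20}},
     "spatial": {"object_position": [0.2, 0.1, 0.40]}},
]
ok = [m for m in mems
      if matches(m, {"task.success": True,
                     "params.final_distance.value": {"$lt": 0.05}})]
nearest = spatial_sort(mems, "spatial.object_position", [1.25, 0.6, 0.42])
```

Combining the two, as in the last `recall` example, just means filtering first and sorting the survivors by distance.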
### Context JSON — 4 Partitions

Every memory's context is structured into 4 semantic partitions:

```json
{
  "params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
  "spatial": {"object_position": [1.3, 0.7, 0.42], "target_position": [1.25, 0.6, 0.42]},
  "robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
  "task": {"name": "push_to_target", "success": true, "steps": 38}
}
```
`recall` automatically extracts `params` / `spatial` / `robot` / `task` as top-level fields in every returned memory.
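That extraction amounts to lifting the four partition keys out of the context and onto each result. A minimal sketch, where the result shape is an assumption rather than robotmem's documented return type:

```python
PARTITIONS = ("params", "spatial", "robot", "task")

def extract_partitions(memory):
    """Lift the four context partitions to top-level result fields
    (illustrative of the behavior described above, not robotmem's code)."""
    result = {"text": memory.get("text", "")}
    ctx = memory.get("context", {})
    for part in PARTITIONS:
        result[part] = ctx.get(part, {})
    return result

raw = {
    "text": "pushed cube to target in 38 steps",
    "context": {
        "params": {"grip_force": {"value": 12.5, "unit": "N", "type": "scalar"}},
        "spatial": {"object_position": [1.3, 0.7, 0.42]},
        "robot": {"id": "fetch-001", "type": "Fetch", "dof": 7},
        "task": {"name": "push_to_target", "success": True, "steps": 38},
    },
}
hit = extract_partitions(raw)
```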
## Architecture at a Glance

```text
SQLite + FTS5 + vec0
├── BM25 full-text search (jieba CJK tokenization)
├── Vector search (FastEmbed ONNX, pure CPU)
├── RRF fusion ranking
├── Structured filtering (context_filter)
└── Spatial nearest-neighbor sorting (spatial_sort)
```
- Pure CPU — no GPU required
- Single-file database: `~/.robotmem/memory.db`
- MCP Server (6 tools) or direct Python import
- Web management UI: `robotmem web`
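The RRF fusion step in the stack above combines the BM25 and vector rankings by summing reciprocal ranks. A standard sketch follows; the constant `k = 60` is the conventional default from the RRF literature, not necessarily robotmem's setting, and the memory ids are made up.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).
    `rankings` is a list of ranked id lists, best first. k=60 is the
    common default, not necessarily what robotmem uses."""
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["m1", "m2", "m3"]    # full-text ranking
vector_hits = ["m1", "m3", "m4"]  # embedding ranking
fused = rrf_fuse([bm25_hits, vector_hits])
```

RRF needs no score calibration between the two retrievers — only ranks — which is why it is a common choice for fusing lexical and vector results.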
## Comparison
| Dimension | MemoryVLA (Academic) | Mem0 (Product) | robotmem |
|---|---|---|---|
| Target users | Specific VLA model | Text AI | Robotic AI |
| Memory format | Vectors (unreadable) | Text | Natural language + perception + params |
| Structured filter | Not supported | Not supported | Supported (context_filter) |
| Spatial retrieval | Not supported | Not supported | Supported (spatial_sort) |
| Physical params | Not supported | Not supported | Supported (params partition) |
| Installation | Compile paper code | `pip install` | `pip install` |
| Database | Embedded | Cloud service | Local SQLite |
## What's Next
- Installation — Get robotmem running in 2 minutes
- Getting Started — Your first episode with `learn`, `recall`, and sessions
- Configuration — Customize embedding backend, database path, and more