Find Robot Memories by Location
"Show me only successful grasps near position [1.3, 0.7, 0.42]."
That's a real query you can run against your robot's memory. Not "find documents about grasping" — find actual grasping experiences that happened near a specific physical location, and only the ones that worked.
No existing memory system can do this. Text memory systems (Mem0, Zep, Letta) search by semantic similarity — "what text is most similar to my query." That's useless when your robot needs to know: what worked here, at this position, in this part of the workspace?
Why Spatial Search Matters
Robots operate in physical space. The best strategy for grasping an object at [1.3, 0.7, 0.42] is probably similar to what worked at [1.3, 0.7, 0.40] — not what worked at [0.5, 1.2, 0.8] on the other side of the table.
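The numbers make the point. Here is a standalone sketch of the distance arithmetic, using the coordinates from the example above (plain Python, independent of any library):

```python
import math

def euclidean(a, b):
    """Straight-line distance between two 3-D points."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

target = [1.3, 0.7, 0.42]
nearby = [1.3, 0.7, 0.40]   # same spot on the table, 2 cm lower
far    = [0.5, 1.2, 0.8]    # other side of the workspace

print(euclidean(target, nearby))  # ~0.02 — essentially the same place
print(euclidean(target, far))     # ~1.02 — a different region entirely
```

A 2 cm offset and a meter offset are not remotely comparable, which is exactly what text-similarity search throws away.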
When you combine spatial proximity with structured filtering, you get precise experience retrieval:
- Successful grasps near the current target
- Failed attempts in a specific region (to avoid repeating mistakes)
- Force profiles used at similar heights
This is spatial experience retrieval — a retrieval task that, as far as we know, has no prior implementation in any robot memory system.
The API
One import, one function call. That's it.
from robotmem import recall

memories = recall(
    "grasp object",
    context_filter={"task.success": True},
    spatial_sort={"field": "spatial.target", "target": [1.3, 0.7, 0.42]}
)
This does three things simultaneously:
- Semantic search — finds memories related to "grasp object" using BM25 + vector hybrid search
- Context filtering — keeps only memories where task.success is True (a structured JSON path query)
- Spatial sorting — ranks results by Euclidean distance to [1.3, 0.7, 0.42]
The result: your robot gets the most relevant successful experiences, ranked by physical proximity to its current task.
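This is not robotmem's internal implementation, but the pipeline it describes — filter on structured context, then rank survivors by distance — can be sketched in a few lines of plain Python (the memory dicts and helper names here are illustrative, not part of the library's API):

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def get_path(ctx, dotted):
    """Resolve a dotted JSON path like 'spatial.target' in a nested dict."""
    for key in dotted.split("."):
        ctx = ctx[key]
    return ctx

def retrieve(memories, context_filter, spatial_sort):
    # Keep only memories whose context matches every filter condition ...
    kept = [m for m in memories
            if all(get_path(m["context"], path) == want
                   for path, want in context_filter.items())]
    # ... then rank the survivors by distance to the query point.
    point, field = spatial_sort["target"], spatial_sort["field"]
    return sorted(kept, key=lambda m: euclidean(get_path(m["context"], field), point))

memories = [
    {"content": "grasp A", "context": {"task": {"success": True},  "spatial": {"target": [0.5, 1.2, 0.8]}}},
    {"content": "grasp B", "context": {"task": {"success": False}, "spatial": {"target": [1.3, 0.7, 0.41]}}},
    {"content": "grasp C", "context": {"task": {"success": True},  "spatial": {"target": [1.3, 0.7, 0.40]}}},
]

ranked = retrieve(memories,
                  context_filter={"task.success": True},
                  spatial_sort={"field": "spatial.target", "target": [1.3, 0.7, 0.42]})
print([m["content"] for m in ranked])  # ['grasp C', 'grasp A'] — B fails the filter
```

Note that grasp B is dropped despite being the second-closest: a failed attempt two centimeters away is noise, not guidance.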
Structured Context Filtering
The context_filter parameter supports JSON path queries on the structured context stored with each memory. Some examples:
# Only successful experiences
context_filter={"task.success": True}
# Only experiences with force above 10N
context_filter={"params.force_peak": {"$gt": 10}}
# Only FetchPush experiences
context_filter={"env": "FetchPush-v4"}
Context is stored as JSON when you save a perception, and robotmem parses it at recall time for filtering. This means you can attach any structured metadata to your memories and query it later — without building a custom database schema.
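robotmem's exact matching semantics aren't spelled out above, but a minimal sketch of this kind of JSON-path matcher — exact equality plus an operator form like {"$gt": 10} — looks something like this (illustrative only; the helper names are mine, not the library's):

```python
def get_path(ctx, dotted, default=None):
    """Walk a dotted path like 'params.force_peak' through nested dicts."""
    for key in dotted.split("."):
        if not isinstance(ctx, dict) or key not in ctx:
            return default
        ctx = ctx[key]
    return ctx

def matches(context, context_filter):
    """True if the context satisfies every condition in the filter."""
    for path, cond in context_filter.items():
        value = get_path(context, path)
        if isinstance(cond, dict) and "$gt" in cond:   # operator form
            if value is None or not value > cond["$gt"]:
                return False
        elif value != cond:                            # exact-match form
            return False
    return True

ctx = {"task": {"success": True}, "params": {"force_peak": 12.5}, "env": "FetchPush-v4"}
print(matches(ctx, {"task.success": True}))              # True
print(matches(ctx, {"params.force_peak": {"$gt": 10}}))  # True
print(matches(ctx, {"params.force_peak": {"$gt": 20}}))  # False
```

Because the filter is just data, you can build it at runtime from whatever the robot currently knows about its task.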
Try It
pip install robotmem
python -c "
from robotmem import learn, recall

# Store a grasping experience with spatial context
learn(
    insight='Grasped red cup: force=12.5N, 28 steps',
    context='{\"task\": {\"success\": true}, \"spatial\": {\"target\": [1.3, 0.7, 0.42]}}'
)

# Retrieve by location
result = recall('grasp', spatial_sort={'field': 'spatial.target', 'target': [1.3, 0.7, 0.40]})
print(result['memories'][0]['content'])
"
Search Robot Memory by Location
Spatial queries + structured filtering. Open source.