# Getting Started

This guide walks you through a complete robot learning episode using robotmem's 6 API tools.
## The Episode Lifecycle

```text
start_session()          ← Begin episode
  │
  ├── learn()            ← Record declarative experience
  ├── save_perception()  ← Store sensor/trajectory data
  ├── recall()           ← Retrieve relevant past experience
  │
end_session()            ← Consolidate + proactive recall
```
## Step 1: Start a Session

Every episode begins with `start_session`. This creates a session context that groups all memories from this episode.

```python
from robotmem import start_session

session = start_session(
    context='{"robot_id": "arm-01", "robot_model": "UR5e", "environment": "kitchen-3F", "task_domain": "pick-and-place"}'
)
# Returns: {"session_id": "abc-123-...", "collection": "default", "active_memories_count": 42}
```
The `context` parameter accepts any JSON — robotmem doesn't enforce a schema, but the 4-partition convention is recommended:

```json
{
  "params": {"grip_force": {"value": 12.5, "unit": "N"}},
  "spatial": {"object_position": [1.3, 0.7, 0.42]},
  "robot": {"id": "arm-01", "type": "UR5e"},
  "task": {"name": "pick-and-place", "domain": "kitchen"}
}
```
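If you assemble contexts in Python, a tiny helper keeps them aligned with this convention. `make_context` is a hypothetical convenience for this guide, not part of the robotmem API:

```python
import json

def make_context(params=None, spatial=None, robot=None, task=None):
    """Build a 4-partition context JSON string; empty partitions are omitted."""
    partitions = {"params": params, "spatial": spatial, "robot": robot, "task": task}
    return json.dumps({k: v for k, v in partitions.items() if v is not None})
```

The resulting string can be passed directly as the `context` argument of `start_session` or `learn`.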
## Step 2: Record Experience

### `learn` — Declarative Memory

Use `learn` to record insights, parameters, strategies, and lessons:

```python
from robotmem import learn

# Record a successful strategy
result = learn(
    insight="grip_force=12.5N produces optimal grasp success rate for cylindrical objects",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}',
    session_id=session["session_id"],
)
# Returns: {"status": "created", "memory_id": 1, "auto_inferred": {"category": "observation", ...}}
```
What happens automatically:

- Auto-classify: categorizes the insight as observation / constraint / pattern / postmortem / etc.
- Confidence estimation: scores 0.80–0.95 based on content-richness signals
- Deduplication: a 3-layer check (exact → Jaccard → cosine) prevents redundant memories
- Tag inference: assigns semantic tags from a controlled vocabulary of 50+ tags
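The guide doesn't show robotmem's deduplication internals, but the 3-layer idea (exact → Jaccard → cosine) can be illustrated with a minimal sketch; the function names and thresholds here are assumptions, not robotmem's actual values:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two insight strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def cosine(u, v) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def is_duplicate(new_text, new_vec, existing, j_thresh=0.9, c_thresh=0.95):
    """3-layer check against stored (text, embedding) pairs:
    exact match, then lexical overlap, then semantic similarity."""
    for text, vec in existing:
        if new_text == text:                      # layer 1: exact
            return True
        if jaccard(new_text, text) >= j_thresh:   # layer 2: Jaccard
            return True
        if cosine(new_vec, vec) >= c_thresh:      # layer 3: cosine
            return True
    return False
```

Each layer is cheaper than the next one it guards, which is the usual motivation for ordering the checks this way.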
### `save_perception` — Procedural Memory

Use `save_perception` for sensor data, trajectories, and multi-step procedures:

```python
from robotmem import save_perception

result = save_perception(
    description="Successful grasp of red cup: 30 steps, force peak 12.5N at step 15",
    perception_type="procedural",  # visual | tactile | auditory | proprioceptive | procedural
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], [0.2, -0.2, 0.1, 0.9]], "steps": 30}',
    metadata='{"format": "xyz_gripper", "hz": 10}',
    session_id=session["session_id"],
)
# Returns: {"memory_id": 2, "perception_type": "procedural", "has_embedding": true}
```
## Step 3: Retrieve Experience

### `recall` — Hybrid Search

`recall` combines BM25 full-text search with vector similarity, fusing both rankings into a single relevance score:

```python
from robotmem import recall

memories = recall(
    query="how to grasp a cup",
    n=5,                 # Return the top 5 matches
    min_confidence=0.3,  # Filter out low-confidence memories
)
# Returns: {"memories": [...], "total": 5, "mode": "hybrid", "query_ms": 8.2}
```
Each returned memory includes:

```python
{
    "id": 1,
    "content": "grip_force=12.5N produces optimal grasp success rate",
    "type": "fact",                # "fact" or "perception"
    "perception_type": None,       # "visual"/"tactile"/... for perceptions
    "category": "observation",     # auto-classified category
    "confidence": 0.85,            # 0.0–1.0
    "params": {"grip_force": {"value": 12.5, "unit": "N"}},  # extracted from context
    "spatial": {...},              # extracted from context
    "robot": {...},                # extracted from context
    "task": {"success": True},     # extracted from context
    "_rrf_score": 1.0,             # normalized relevance score
    "created_at": "2026-03-09T10:30:00",
}
```
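The `_rrf_score` field suggests the hybrid mode fuses the BM25 and vector rankings with Reciprocal Rank Fusion. A minimal sketch, assuming the standard RRF formula with the conventional k=60 constant and normalization so the top hit scores 1.0 (both assumptions, not confirmed robotmem internals):

```python
def rrf_fuse(bm25_ids, vector_ids, k=60):
    """Fuse two ranked lists of memory IDs with Reciprocal Rank Fusion:
    score(d) = sum over rankings of 1 / (k + rank(d)),
    then normalize so the best-scoring ID gets 1.0."""
    scores = {}
    for ranking in (bm25_ids, vector_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    top = max(scores.values())
    return {d: s / top for d, s in sorted(scores.items(), key=lambda kv: -kv[1])}
```

An ID ranked well by both searches beats an ID ranked first by only one, which is why hybrid retrieval tends to be more robust than either search alone.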
### Structured Filtering (`context_filter`)

Filter memories by structured fields within their context JSON:

```python
# Only successful experiences
recall(query="grasp", context_filter='{"task.success": true}')

# Range filter: distance < 0.05m
recall(query="push", context_filter='{"params.final_distance.value": {"$lt": 0.05}}')

# Combined
recall(query="grasp", context_filter='{"task.success": true, "robot.type": "UR5e"}')
```

Supported operators: `$lt`, `$lte`, `$gt`, `$gte`, `$ne`, and exact equality.
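To make the filter semantics concrete, here is a sketch of how such clauses could be evaluated against a memory's context — an illustration of the dotted-path and operator syntax, not robotmem's actual implementation:

```python
import json

_OPS = {
    "$lt": lambda a, b: a < b, "$lte": lambda a, b: a <= b,
    "$gt": lambda a, b: a > b, "$gte": lambda a, b: a >= b,
    "$ne": lambda a, b: a != b,
}

def _lookup(ctx, dotted):
    """Resolve a dotted path like 'params.final_distance.value'."""
    for key in dotted.split("."):
        if not isinstance(ctx, dict) or key not in ctx:
            return None
        ctx = ctx[key]
    return ctx

def matches(context: dict, context_filter: str) -> bool:
    """True if the context satisfies every clause in the filter (AND)."""
    for path, cond in json.loads(context_filter).items():
        value = _lookup(context, path)
        if isinstance(cond, dict):  # operator clause, e.g. {"$lt": 0.05}
            if not all(value is not None and _OPS[op](value, ref)
                       for op, ref in cond.items()):
                return False
        elif value != cond:         # bare value means exact equality
            return False
    return True
```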
### Spatial Sorting (`spatial_sort`)

Sort results by proximity to a target position:

```python
recall(
    query="grasp object",
    spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}',
)

# With distance cutoff
recall(
    query="grasp",
    spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42], "max_distance": 0.1}',
)
```
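Conceptually, spatial sorting reduces to Euclidean distance from the target. A sketch under that assumption — `sort_by_proximity` is an illustrative name, not a robotmem function:

```python
def sort_by_proximity(memories, field, target, max_distance=None):
    """Sort memory dicts by Euclidean distance from `target`,
    optionally dropping anything beyond `max_distance`."""
    def position(mem):
        node = mem
        for key in field.split("."):  # dotted path, e.g. "spatial.object_position"
            node = node.get(key) if isinstance(node, dict) else None
        return node

    def distance(pos):
        return sum((a - b) ** 2 for a, b in zip(pos, target)) ** 0.5

    scored = [(distance(pos), m) for m in memories
              if isinstance(pos := position(m), list)]
    if max_distance is not None:
        scored = [(d, m) for d, m in scored if d <= max_distance]
    return [m for _, m in sorted(scored, key=lambda dm: dm[0])]
```

Memories without a position at the given field are simply excluded, which mirrors how a distance cutoff would otherwise have nothing to compare against.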
### Episode Replay

Pass `session_id` to recall all memories from a specific episode in chronological order:

```python
recall(query="*", session_id="abc-123-...")
```
## Step 4: End the Session

`end_session` triggers three automatic processes:

```python
from robotmem import end_session

result = end_session(
    session_id=session["session_id"],
    outcome_score=0.85,  # Optional: 0.0–1.0 episode success score
)
```
What happens automatically:

1. Time Decay: memories not accessed recently lose confidence gradually
   - Formula: confidence_new = confidence × (1 - decay_rate) ^ days_since_last_access
   - Default decay_rate: 0.01
   - Frequently recalled memories maintain high confidence
2. Consolidation: similar memories within the session are merged
   - Jaccard similarity > 0.50 → group and keep the most confident
   - Protected categories (constraint, postmortem, gotcha) are never consolidated
   - Perceptions are never consolidated
3. Proactive Recall: returns historical memories related to this session's latest experience
   - Searches across all sessions for similar content
   - Excludes the current session's own memories
   - Returns up to 5 related memories for the next episode
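The decay formula translates directly into code. A one-line sketch:

```python
def decay_confidence(confidence, days_since_last_access, decay_rate=0.01):
    """Exponential time decay: confidence × (1 - decay_rate) ^ days."""
    return confidence * (1 - decay_rate) ** days_since_last_access
```

At the default rate, a memory untouched for 30 days retains roughly 74% of its confidence; recalling it resets the clock.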
`end_session` response:

```python
{
    "status": "ended",
    "session_id": "abc-123-...",
    "summary": {
        "memory_count": 15,
        "by_type": {"fact": 12, "perception": 3},
        "by_category": {"observation": 8, "pattern": 3, "decision": 1}
    },
    "decayed_count": 42,
    "consolidated": {"merged_groups": 2, "superseded_count": 3, "compression_ratio": 0.2},
    "related_memories": [
        {"id": 5, "content": "Similar grasp strategy from kitchen-2F", ...},
    ],
}
```
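The consolidation rules (Jaccard > 0.50 grouping, keep the most confident, never touch protected categories or perceptions) can be sketched as a greedy pass over the session's memories; the greedy grouping strategy here is an assumption about how robotmem might do it:

```python
PROTECTED_CATEGORIES = {"constraint", "postmortem", "gotcha"}

def jaccard(a, b):
    """Token-set Jaccard similarity between two content strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def consolidate(memories, threshold=0.50):
    """Group eligible memories whose content overlaps (Jaccard > threshold)
    and keep the most confident member of each group."""
    protected = [m for m in memories
                 if m["type"] != "fact" or m["category"] in PROTECTED_CATEGORIES]
    eligible = [m for m in memories if m not in protected]

    groups = []
    for mem in eligible:
        for group in groups:  # join the first group whose seed is similar
            if jaccard(mem["content"], group[0]["content"]) > threshold:
                group.append(mem)
                break
        else:
            groups.append([mem])

    survivors = protected + [max(g, key=lambda m: m["confidence"]) for g in groups]
    superseded = len(memories) - len(survivors)
    return survivors, superseded
```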
## Step 5: Correct Mistakes

### `forget` — Soft Delete

```python
from robotmem import forget

forget(memory_id=3, reason="Incorrect force reading due to sensor calibration error")
# Returns: {"status": "forgotten", "memory_id": 3, "reason": "..."}
```

### `update` — Modify Content

```python
from robotmem import update

update(
    memory_id=1,
    new_content="grip_force=11.0N produces optimal grasp (recalibrated sensor)",
    context='{"params": {"grip_force": {"value": 11.0, "unit": "N"}}}',
)
# Returns: {"status": "updated", "memory_id": 1, "old_content": "...", "new_content": "..."}
```

Updating a memory automatically re-runs classification, regenerates the embedding, and re-infers tags.
## Complete Example

```python
from robotmem import learn, recall, save_perception, start_session, end_session

# --- Episode Start ---
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')

# --- Record Experience ---
learn(
    insight="grip_force=12.5N works best for red cups",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}',
    session_id=session["session_id"],
)
save_perception(
    description="Grasp trajectory: 30 steps, success",
    perception_type="procedural",
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8]]}',
    session_id=session["session_id"],
)

# --- Recall (the next episode would do this) ---
memories = recall(
    query="how to grasp a cup",
    context_filter='{"task.success": true}',
)
for m in memories["memories"]:
    print(f"#{m['id']} {m['content'][:60]} score={m['_rrf_score']:.2f}")

# --- Episode End ---
result = end_session(session_id=session["session_id"], outcome_score=0.85)
print(f"Consolidated: {result['consolidated']['superseded_count']} memories superseded")
print(f"Related memories from history: {len(result['related_memories'])}")
```
## What's Next

- Configuration — Customize embedding backend and search parameters
- API Reference — Complete parameter documentation for all 6 tools
- Architecture — Understand the search pipeline and database design