# RobotMem + LIBERO
Persistent memory for lifelong robot learning — transfer skills learned in one LIBERO task suite to the next without forgetting.
## Quick Start
```python
import os

from libero.libero import benchmark, get_libero_path
from libero.libero.envs import OffScreenRenderEnv
from robotmem import RobotMemory

mem = RobotMemory(db="libero_lifelong.db")

# LIBERO-Long is registered in LIBERO's benchmark registry as "libero_10"
task_suite = benchmark.get_benchmark("libero_10")()

for task_id in range(task_suite.n_tasks):
    task = task_suite.get_task(task_id)
    bddl_file = os.path.join(
        get_libero_path("bddl_files"), task.problem_folder, task.bddl_file
    )
    env = OffScreenRenderEnv(bddl_file_name=bddl_file)
    obs = env.reset()

    # Recall prior skills that match the current observation
    prior = mem.recall(obs["agentview_image"], top_k=3)
    # `policy` is your own policy (behavior cloning, diffusion, RL, ...)
    action = policy.predict(obs, prior_experiences=prior)
    obs, reward, done, info = env.step(action)

    # Persist successful trajectories for future tasks
    mem.save_perception(
        observation=obs,
        action=action,
        reward=reward,
        metadata={"task": task.name, "suite": "libero_long"},
    )
```
## What This Integration Does
LIBERO is a benchmark designed to study lifelong learning in robotic manipulation. It offers 130 procedurally generated tasks organized into four suites: LIBERO-Spatial, LIBERO-Object, LIBERO-Goal, and LIBERO-100, whose ten long-horizon tasks are known as LIBERO-Long (the remaining 90 as LIBERO-90). Each suite tests a different axis of knowledge transfer. The core challenge is catastrophic forgetting: as a robot learns new tasks, it tends to lose the skills it learned before.
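The suite names used in code differ from the paper-facing names. You can list what is registered locally with LIBERO's `benchmark.get_benchmark_dict()`; note that LIBERO-Long appears under the registry key `libero_10`:

```python
from libero.libero import benchmark

# Print every registered task suite and its task count
# (e.g. libero_spatial, libero_object, libero_goal, libero_90, libero_10).
for name, suite_cls in benchmark.get_benchmark_dict().items():
    suite = suite_cls()  # suite classes are instantiated with default settings
    print(f"{name}: {suite.n_tasks} tasks")
```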
RobotMem solves this by providing a persistent, queryable memory layer that sits alongside LIBERO's task suites. Every successful trajectory — the observation, the action taken, the reward received, and the task context — is stored in a local SQLite database. When the agent encounters a new task, it can recall relevant prior experiences using semantic similarity search over observations. This means a policy trained on LIBERO-Spatial can leverage manipulation primitives it learned there when it encounters a new LIBERO-Goal task, without retraining on the old data.
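Conceptually, recall is a nearest-neighbor search: the query image is embedded, and stored observations are ranked by similarity to it. The sketch below shows the ranking step in plain NumPy; RobotMem's actual embedding model and index are internal details, so treat this as an illustration rather than the implementation:

```python
import numpy as np

def cosine_top_k(query_emb: np.ndarray, stored_embs: np.ndarray, k: int) -> np.ndarray:
    """Return indices of the k stored embeddings most similar to the query."""
    q = query_emb / np.linalg.norm(query_emb)
    s = stored_embs / np.linalg.norm(stored_embs, axis=1, keepdims=True)
    scores = s @ q                         # cosine similarity to every memory
    return np.argsort(scores)[::-1][:k]    # best matches first
```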
The integration is intentionally lightweight. RobotMem does not modify LIBERO's environment wrappers or task definitions. It operates as a side-channel memory that your policy can query at inference time. This makes it compatible with any learning algorithm you choose — behavior cloning, diffusion policy, transformer-based policies, or reinforcement learning approaches.
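One way to use that side-channel design is a thin wrapper that queries memory before every action. In the sketch below, `base_policy` and its `predict(obs, prior_experiences=...)` signature are placeholders for whatever policy you bring, not part of RobotMem's API:

```python
from robotmem import RobotMemory

class MemoryAugmentedPolicy:
    """Sketch: inject recalled experiences into any policy's predict call."""

    def __init__(self, base_policy, mem: RobotMemory, top_k: int = 3):
        self.base_policy = base_policy  # your BC / diffusion / RL policy
        self.mem = mem
        self.top_k = top_k

    def predict(self, obs):
        # Side-channel query: LIBERO's envs and task definitions are untouched.
        prior = self.mem.recall(obs["agentview_image"], top_k=self.top_k)
        return self.base_policy.predict(obs, prior_experiences=prior)
```

Key features at a glance: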
- Cross-suite skill transfer — Recall manipulation skills from LIBERO-Spatial when solving LIBERO-Goal tasks, bridging the gap between spatial and semantic knowledge.
- Anti-forgetting buffer — Persist successful trajectories permanently so prior task knowledge is never lost, no matter how many new tasks you train on.
- Observation-based retrieval — Query the memory using raw camera observations (agentview or eye-in-hand) via vector similarity, no manual feature engineering required.
- Task-aware metadata — Tag each memory with suite name, task ID, success rate, and custom annotations for fine-grained retrieval filtering (see the sketch after this list).
- Zero-config persistence — All data is stored in a single SQLite file that survives training restarts, machine reboots, and experiment reruns.
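For example, to bias retrieval toward a single suite you can over-fetch and filter on the stored metadata. The record shape below (items carrying a `metadata` dict) is an assumption about what `recall` returns; adjust it to the actual return type:

```python
# Hypothetical record shape: each recalled item carries the metadata it was
# saved with, so retrieval can be narrowed after the similarity search.
prior = mem.recall(obs["agentview_image"], top_k=20)
spatial_prior = [
    r for r in prior if r.metadata.get("suite") == "libero_spatial"
][:3]
```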
## Why LIBERO Needs Persistent Memory
Standard lifelong learning approaches use replay buffers that live in RAM and vanish when training ends. LIBERO's benchmark protocol evaluates forward and backward transfer across task sequences, but most baselines lack a mechanism to store and retrieve experiences across training runs. RobotMem fills this gap: it gives every experiment a durable memory that accumulates knowledge over the robot's entire lifetime, not just a single training session.
In practice, this means you can train on LIBERO-Spatial today, shut down your machine, and resume tomorrow with LIBERO-Object — and your agent still remembers the spatial manipulation skills it learned yesterday. This is closer to how real robots need to operate: learning continuously without discarding what they already know.
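Concretely, the only artifact that has to survive between sessions is the SQLite file. A minimal sketch of that two-day workflow, with `obs`, `action`, `reward`, and `task` coming from env loops like the one in Quick Start:

```python
from robotmem import RobotMemory

# Day 1: collect experience on LIBERO-Spatial.
mem = RobotMemory(db="libero_lifelong.db")
mem.save_perception(
    observation=obs, action=action, reward=reward,
    metadata={"task": task.name, "suite": "libero_spatial"},
)

# Day 2, in a fresh process after a reboot: reopen the same file, and a
# LIBERO-Object rollout can immediately recall yesterday's skills.
mem = RobotMemory(db="libero_lifelong.db")
prior = mem.recall(obs["agentview_image"], top_k=3)
```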