RobotMem + LeRobot

Give your LeRobot policies persistent memory and sync learned experiences directly to HuggingFace Hub.

pip install robotmem

Quick Start

from robotmem import RobotMemory
from lerobot.common.policies.act.modeling_act import ACTPolicy

mem = RobotMemory(sync_backend="huggingface_hub")

# After a successful manipulation episode
mem.save_perception(
    observation=obs_tensor,
    action=action_seq,
    reward=cumulative_reward,
    tags=["pick-and-place", "ACT", "real-world"],
)

# Before a new episode, recall relevant experiences
prior = mem.recall(
    query="pick red cube from table",
    modality="vision+action",
    top_k=5,
)
policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim")
policy.load_memory_context(prior)

What This Integration Does

LeRobot, created by HuggingFace, is the most popular open-source framework for real-world robot learning. It provides pre-trained policies, standardized datasets, and simulation environments that thousands of researchers use daily. However, LeRobot policies treat every episode as independent. There is no built-in mechanism for a robot to remember what it learned from yesterday's training session and apply that knowledge today. This is where RobotMem bridges the gap.

When you integrate RobotMem into your LeRobot workflow, every manipulation experience becomes a persistent memory entry. These entries include raw observations, action sequences, reward signals, and semantic tags that describe the task context. RobotMem indexes these memories using a hybrid retrieval system that combines vector similarity search with structured metadata filtering. Before each new episode, your policy can recall the most relevant prior experiences and condition its behavior accordingly.
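The hybrid retrieval idea described above can be sketched in a few lines. This is an illustrative toy, not RobotMem's actual internals: the entry schema, the tag-filter-then-rank order, and plain cosine similarity are all assumptions made for demonstration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_recall(entries, query_vec, required_tags, top_k=5):
    """Filter by structured metadata first, then rank survivors by
    vector similarity -- the two stages the hybrid index combines."""
    candidates = [e for e in entries if set(required_tags) <= set(e["tags"])]
    ranked = sorted(candidates,
                    key=lambda e: cosine(e["embedding"], query_vec),
                    reverse=True)
    return ranked[:top_k]

# Toy memory bank with made-up embeddings and tags.
entries = [
    {"id": "ep-001", "embedding": [0.9, 0.1, 0.0], "tags": ["pick-and-place", "real-world"]},
    {"id": "ep-002", "embedding": [0.1, 0.9, 0.0], "tags": ["pick-and-place", "sim"]},
    {"id": "ep-003", "embedding": [0.8, 0.2, 0.1], "tags": ["pouring", "real-world"]},
]

best = hybrid_recall(entries, query_vec=[1.0, 0.0, 0.0],
                     required_tags=["pick-and-place"], top_k=1)
print(best[0]["id"])  # -> ep-001: ep-003 is filtered out by tag before ranking
```

Note that the tag filter runs before similarity ranking, so a visually similar episode from the wrong task context (here, "pouring") never reaches the policy.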

The HuggingFace Hub sync backend is particularly powerful for LeRobot users. Experiences saved on one machine are automatically pushed to your Hub repository, making them accessible across your entire fleet of robots. A robot training in your lab can share its learned experiences with a robot deployed in a warehouse, enabling cross-environment knowledge transfer without manual data pipelines.
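To make the sync flow concrete, here is a hypothetical sketch of what a backend might push per episode. The entry schema, repo name, and file layout are assumptions for illustration only; the commented-out upload shows the standard `huggingface_hub` client call a backend like this could use.

```python
import io
import json

def package_episode(episode_id, tags, reward):
    """Serialize an episode's metadata into a JSON payload suitable
    for pushing to a Hub repository. Schema is illustrative."""
    record = {"episode_id": episode_id, "tags": tags, "reward": reward}
    return io.BytesIO(json.dumps(record).encode("utf-8"))

payload = package_episode("ep-042", ["pick-and-place", "real-world"], 0.93)

# With the real huggingface_hub client, the push could look like:
# from huggingface_hub import HfApi
# HfApi().upload_file(
#     path_or_fileobj=payload,
#     path_in_repo="memories/ep-042.json",
#     repo_id="your-org/robot-memories",  # hypothetical repo
#     repo_type="dataset",
# )
```

Because every robot in the fleet reads and writes the same repository, pulling the latest memories is just a download of new JSON entries, with no bespoke data pipeline in between.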

Why LeRobot Teams Choose RobotMem

The HuggingFace ecosystem already solved model sharing and dataset hosting. RobotMem completes the picture by adding experience sharing. Instead of retraining from scratch when deploying to a new environment, your robot starts with a memory bank of relevant past experiences; early adopters report up to 40% faster convergence on manipulation tasks when using memory-conditioned policies. The integration leaves your existing LeRobot training loop intact: you add just two function calls, one to save after each episode and one to recall before the next.
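The two-call pattern slots into an episode loop as sketched below. `InMemoryStub` is a made-up stand-in that mimics the save/recall surface from the Quick Start so the shape of the loop is visible; in real use you would construct `RobotMemory` instead.

```python
class InMemoryStub:
    """Illustrative stand-in for RobotMemory; stores entries in a list."""

    def __init__(self):
        self.entries = []

    def save_perception(self, observation, action, reward, tags):
        self.entries.append({"observation": observation, "action": action,
                             "reward": reward, "tags": tags})

    def recall(self, query, top_k=5):
        # The real recall ranks by relevance; the stub just returns
        # the most recent entries.
        return self.entries[-top_k:]

mem = InMemoryStub()
for episode in range(3):
    # Call 1: recall prior experience before the episode starts.
    prior = mem.recall(query="pick red cube from table", top_k=5)
    # ... run the episode, conditioning the policy on `prior` ...
    # Call 2: save the finished episode afterward.
    mem.save_perception(observation=[0.0], action=[1.0],
                        reward=float(episode), tags=["demo"])

print(len(mem.entries))  # -> 3
```

Everything between the two calls, including data collection, policy inference, and optimization, is your unmodified LeRobot code.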

Start Building Robots That Remember

pip install robotmem