# RobotMem + Isaac Lab
Persistent memory for NVIDIA GPU-accelerated robot training — store and recall perceptions across millions of simulation steps.
## Quick Start
```python
import robotmem
from omni.isaac.lab.envs import ManagerBasedRLEnv

# Connect to (or create) a persistent memory store for this training run.
mem = robotmem.connect("isaac-training-run")

env = ManagerBasedRLEnv(cfg=my_env_cfg)
obs, info = env.reset()

for step in range(num_steps):
    actions = policy(obs)
    obs, reward, terminated, truncated, info = env.step(actions)

    # Persist this step's perception; move GPU tensors to host memory first.
    mem.save_perception(
        observation=obs.cpu().numpy(),
        action=actions.cpu().numpy(),
        reward=float(reward.mean()),
        metadata={"step": step, "env": "Isaac-Reach-v0"},
    )

    # Periodically pull relevant past experience back into the policy.
    if step % 1000 == 0:
        past = mem.recall("high reward grasping strategies")
        policy.update_from_experience(past)
```
## What This Integration Does
Isaac Lab, developed by NVIDIA, is the leading framework for GPU-accelerated robot learning. It runs thousands of parallel environments on a single GPU, generating massive amounts of training data every second. Without persistent memory, all of that hard-earned experience vanishes the moment your training script exits or your machine restarts.
RobotMem bridges this gap by giving your Isaac Lab agents a durable memory layer. Every observation, action, and reward can be stored via save_perception and retrieved later using semantic search through recall. This means your robot can reference what it learned in a previous training session, compare strategies across different reward functions, and build up a growing body of experience that persists indefinitely.
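To build intuition for how semantic recall can work, here is a toy sketch that ranks stored experience descriptions against a natural-language query by cosine similarity. It uses a bag-of-words embedding for illustration only; RobotMem's actual encoder and index are not shown here, and `embed`, `cosine`, `memories`, and `recall` are all hypothetical names.

```python
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding; real systems use learned sentence encoders."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Stand-ins for indexed memory descriptions.
memories = [
    "low reward random exploration",
    "high reward grasping strategies with firm grip",
    "collision with table during reach",
]

def recall(query, k=1):
    """Return the k stored descriptions most similar to the query."""
    q = embed(query)
    return sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)[:k]

print(recall("high reward grasping strategies"))
# → ['high reward grasping strategies with firm grip']
```

The same query-by-meaning pattern is what lets `mem.recall("high reward grasping strategies")` surface relevant episodes without knowing their numeric indices.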
Because Isaac Lab environments run on GPU tensors, perception data must be moved to host memory before it can be persisted. You call .cpu().numpy() on your tensors, and RobotMem handles the rest: serialization, indexing, and semantic embedding all happen behind the scenes. The memory database lives on disk, survives process crashes, and can be shared across machines.
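If you want to centralize that host-side conversion, a small duck-typed helper can accept either a torch-style tensor or a plain NumPy array. This is a sketch only; `to_storable` is a hypothetical helper, not part of the RobotMem API.

```python
import numpy as np

def to_storable(x):
    """Convert a (possibly GPU-resident) tensor to a NumPy array for storage.

    Duck-typed: works with torch-style tensors (which expose .cpu() and
    .numpy()) and with plain NumPy arrays, without importing torch here.
    """
    if hasattr(x, "cpu"):      # torch.Tensor-like: move off the GPU first
        x = x.cpu()
    if hasattr(x, "numpy"):    # then materialize as a NumPy array
        x = x.numpy()
    return np.asarray(x)

obs = np.ones((4, 3), dtype=np.float32)   # already on host: passes through
print(to_storable(obs).shape)             # → (4, 3)
```

A helper like this keeps the training loop free of per-framework conversion logic and gives you one place to add dtype casts or downsampling later.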
- High-frequency persistence — capture perception data at thousands of steps per second without blocking your GPU training loop
- Semantic recall — query past experiences using natural language, not just numeric indices
- Cross-session continuity — resume training with full access to prior episodes, reward curves, and learned behaviors
- Multi-environment support — tag memories by environment config, reward function, or robot morphology for structured retrieval
- Zero infrastructure — no database server to manage; RobotMem uses local storage with optional sync
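To make the metadata tagging idea concrete, here is a minimal in-memory sketch of metadata-scoped retrieval: records are indexed by key/value pairs so queries can be restricted to one environment or reward function. `TaggedStore` is illustrative only, not RobotMem's actual storage engine.

```python
from collections import defaultdict

class TaggedStore:
    """Records indexed by metadata key/value pairs for scoped retrieval."""

    def __init__(self):
        self.records = []
        self.index = defaultdict(list)   # (key, value) -> list of record ids

    def save(self, payload, **metadata):
        """Store a payload and index it under each metadata pair."""
        rid = len(self.records)
        self.records.append({"payload": payload, "metadata": metadata})
        for kv in metadata.items():
            self.index[kv].append(rid)
        return rid

    def query(self, **metadata):
        """Return records matching every given metadata pair."""
        ids = None
        for kv in metadata.items():
            matches = set(self.index.get(kv, []))
            ids = matches if ids is None else ids & matches
        return [self.records[i] for i in sorted(ids or [])]

store = TaggedStore()
store.save("grasp trajectory", env="Isaac-Reach-v0", reward_fn="dense")
store.save("lift trajectory", env="Isaac-Lift-v0", reward_fn="sparse")
print(len(store.query(env="Isaac-Reach-v0")))  # → 1
```

Tagging each `save_perception` call with environment name, reward function, or robot morphology in its `metadata` dict enables exactly this kind of structured lookup later.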
## When to Use This
This integration is ideal when you are running reinforcement learning experiments in Isaac Lab and want to preserve experience data across training runs. It is particularly useful for curriculum learning, where the agent needs to recall what it mastered at earlier difficulty levels, and for multi-task training, where experiences from one task can inform performance on another. If you are doing sim-to-real transfer, persistent memory lets you carry simulation experience directly into your real-world deployment pipeline.