# RobotMem + ManiSkill
Persistent experience replay for Gymnasium-compatible manipulation tasks — never lose episode data between training runs.
## Quick Start
```python
import gymnasium as gym
import mani_skill.envs  # registers ManiSkill environments with Gymnasium
import robotmem

mem = robotmem.connect("maniskill-pick-cube")
env = gym.make("PickCube-v1", obs_mode="state", render_mode="rgb_array")

for episode in range(100):
    obs, info = env.reset()  # reset at the start of every episode
    done = False
    while not done:
        action = env.action_space.sample()  # replace with your policy
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        mem.save_perception(
            observation=obs,
            action=action,
            reward=reward,
            metadata={
                "episode": episode,
                "task": "PickCube-v1",
                "success": info.get("success", False),
            },
        )

# Recall successful grasps from all prior episodes
expert_moves = mem.recall("successful pick and place actions")
```
## What This Integration Does
ManiSkill is a widely used benchmark for robotic manipulation research, built on the SAPIEN physics engine and fully compatible with the Gymnasium API. It provides dozens of manipulation tasks (picking, stacking, pouring, assembly) that researchers use to develop and evaluate robot learning algorithms. However, the standard training loop discards all episode data once the process ends.
RobotMem adds a persistent memory layer to your ManiSkill training pipeline. Each step's observation, action, reward, and metadata are captured through save_perception and stored in a local database that survives across sessions, crashes, and even hardware changes. Later, you can query this memory using natural language via recall — asking for "successful pick and place actions" or "episodes where the gripper slipped" returns semantically relevant experiences.
Because ManiSkill follows the Gymnasium interface, integrating RobotMem requires no changes to your environment setup. You wrap your existing training loop with a few lines of RobotMem calls and immediately gain access to a growing experience library. This library becomes increasingly valuable over time: early exploration data helps bootstrap new experiments, successful trajectories serve as demonstrations for imitation learning, and failure cases provide negative examples for reward shaping.
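For instance, a later session can reconnect to the same memory and query it before any new training begins. The sketch below uses only the connect and recall calls from the Quick Start; the query strings are illustrative.

```python
import robotmem

# Reconnect to the memory written by earlier runs; data persists on disk.
mem = robotmem.connect("maniskill-pick-cube")

# Seed an imitation-learning buffer with past successes.
demos = mem.recall("successful pick and place actions")

# Mine failure cases as negative examples for reward shaping.
failures = mem.recall("episodes where the gripper slipped")
```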
- Gymnasium-native — works with any ManiSkill environment through the standard `gym.make` interface, no custom wrappers needed
- Episode-level tagging — attach task name, success flag, robot config, and custom metadata to every perception for structured filtering (see the sketch after this list)
- Semantic search over trajectories — find relevant past experiences using natural language queries instead of manual indexing
- Cross-task transfer — experiences from PickCube can inform StackCube; RobotMem's semantic index finds relevant knowledge across task boundaries
- Lightweight and local — no cloud services, no GPU overhead for storage; runs entirely on your workstation alongside SAPIEN
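As a hedged sketch of episode-level tagging and cross-task transfer, the loop below writes experiences from two tasks into one shared memory. Only the connect, save_perception, and recall calls from the Quick Start are assumed; the memory name, metadata keys, and query string are illustrative.

```python
import gymnasium as gym
import mani_skill.envs  # registers ManiSkill environments with Gymnasium
import robotmem

# One shared pool for several tasks (hypothetical memory name).
mem = robotmem.connect("maniskill-shared")

for task_id in ("PickCube-v1", "StackCube-v1"):
    env = gym.make(task_id, obs_mode="state")
    obs, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        mem.save_perception(
            observation=obs,
            action=action,
            reward=reward,
            metadata={"task": task_id, "success": info.get("success", False)},
        )
    env.close()

# Semantic recall can surface PickCube grasps while training StackCube.
prior_grasps = mem.recall("stable grasps on the cube")
```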
## When to Use This
Use this integration whenever you are running ManiSkill experiments and want to preserve training data across sessions. It is especially valuable for experience replay research, where you need a persistent buffer that outlives individual training runs. If you are doing multi-task manipulation learning, RobotMem lets you build a shared experience pool across PickCube, StackCube, PegInsertionSide, and other tasks. For sim-to-real workflows, the persistent memory acts as a bridge — your real robot can recall what the simulated agent learned, filtered by task relevance and success rate.
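A minimal sim-to-real sketch, with the caveat that the structure of recall's return value is not documented here; the per-experience metadata attribute below is an assumption made for illustration only.

```python
import robotmem

# On the real robot, reconnect to the store the simulated agent wrote to.
mem = robotmem.connect("maniskill-pick-cube")

# Ask for simulated experiences relevant to the real robot's current task.
candidates = mem.recall("successful pick and place actions")

# Client-side filter on the metadata saved during training.
# Assumes each returned experience exposes a .metadata dict (hypothetical).
successes = [e for e in candidates if e.metadata.get("success")]
```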