# RobotMem + robosuite
Persistent memory for robosuite manipulation experiments — store every interaction and recall what worked across sessions.
## Quick Start
```python
import robotmem
import robosuite as suite
from robosuite.wrappers import GymWrapper

mem = robotmem.connect("robosuite-lift-experiment")

env = suite.make("Lift", robots="Panda", has_renderer=False, use_camera_obs=False)
env = GymWrapper(env)

obs, info = env.reset()
for step in range(5000):
    action = env.action_space.sample()  # replace with your policy
    obs, reward, terminated, truncated, info = env.step(action)
    mem.save_perception(
        observation=obs,
        action=action,
        reward=reward,
        metadata={"step": step, "task": "Lift", "robot": "Panda"},
    )
    if terminated or truncated:
        prior_successes = mem.recall("high reward lifting with Panda gripper")
        obs, info = env.reset()
```
## What This Integration Does
robosuite is the academic standard for robotic manipulation simulation. Developed by the ARISE Initiative at Stanford, it provides a modular framework for creating manipulation tasks with different robots, grippers, and objects, all powered by the MuJoCo physics engine. Researchers worldwide use robosuite to develop algorithms for pick-and-place, assembly, door opening, and dozens of other contact-rich tasks. But each experiment typically starts from scratch — prior session data is either lost or manually saved in ad-hoc formats.
RobotMem gives your robosuite agents a structured, searchable memory that persists across every training run. With the save_perception API, each step's observation vector, action, reward signal, and custom metadata are written to a local database. The recall API lets you query this database using natural language — asking "high reward lifting with Panda gripper" returns the most relevant stored experiences, ranked by semantic similarity. This transforms your experimental history from scattered log files into a queryable knowledge base.
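To make the ranked-retrieval idea concrete, here is a toy stand-in for the recall mechanism. This is an illustrative sketch only, not RobotMem's internals: the real store ranks by semantic similarity over embeddings, while this sketch approximates relevance with simple keyword overlap. The `Perception` class and `toy_recall` function are hypothetical names introduced for this example.

```python
from dataclasses import dataclass, field

@dataclass
class Perception:
    description: str  # text summary of the stored step
    reward: float
    metadata: dict = field(default_factory=dict)

def toy_recall(store, query, top_k=3):
    """Rank stored perceptions by word overlap with the query.

    A crude proxy for semantic similarity, used here only to
    illustrate the shape of a recall result.
    """
    terms = set(query.lower().split())
    scored = [
        (len(terms & set(p.description.lower().split())), p)
        for p in store
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:top_k] if score > 0]

store = [
    Perception("high reward lifting with Panda gripper", 1.0, {"robot": "Panda"}),
    Perception("failed grasp, cube slipped", 0.0, {"robot": "Panda"}),
    Perception("Sawyer door opening attempt", 0.2, {"robot": "Sawyer"}),
]
results = toy_recall(store, "high reward lifting with Panda gripper")
```

The key point is the return shape: a query yields a ranked list of stored experiences, most relevant first, which your training loop can then inspect or replay.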
The integration works through robosuite's GymWrapper, which exposes the standard Gymnasium interface. This means you do not need to modify your environment configuration or task definitions. Add RobotMem to your existing training script with three lines of code: connect to a memory store, save perceptions inside your step loop, and recall experiences whenever your agent needs guidance from past sessions. The memory store handles serialization, indexing, and retrieval automatically.
- GymWrapper compatible — integrates directly with robosuite's Gymnasium wrapper, no changes to environment setup or task definitions required
- Multi-robot memory — tag experiences by robot type (Panda, Sawyer, IIWA, Jaco) and retrieve robot-specific knowledge for transfer experiments
- Contact-rich task support — store fine-grained force and torque data alongside observations for tasks like peg insertion and nut assembly
- Experiment provenance — every perception is timestamped and tagged with task, robot, and session metadata for full reproducibility
- Offline analysis — query the memory database after training to analyze failure modes, compare reward distributions, or extract demonstration trajectories
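The multi-robot and provenance features above all hinge on the metadata dict passed to `save_perception`. A minimal sketch, using plain Python dicts rather than RobotMem's storage layer, of how tagged records support robot- and task-specific retrieval (the `filter_by` helper is hypothetical):

```python
# Records shaped like save_perception's arguments, tagged with metadata.
records = [
    {"reward": 0.9, "metadata": {"robot": "Panda", "task": "Lift", "session": "s1"}},
    {"reward": 0.4, "metadata": {"robot": "Sawyer", "task": "Lift", "session": "s1"}},
    {"reward": 0.8, "metadata": {"robot": "Panda", "task": "NutAssembly", "session": "s2"}},
]

def filter_by(records, **tags):
    """Return records whose metadata matches every given tag."""
    return [r for r in records
            if all(r["metadata"].get(k) == v for k, v in tags.items())]

panda_lifts = filter_by(records, robot="Panda", task="Lift")
```

Because every record carries its task, robot, and session tags, the same store can serve transfer experiments (query across robots) and reproducibility audits (query by session).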
## When to Use This
This integration is designed for researchers running manipulation experiments in robosuite who want to preserve and reuse training data. It is particularly valuable for benchmark comparisons, where you need to track performance across different algorithms on the same task. If you are doing imitation learning, RobotMem's persistent memory serves as a demonstration buffer that grows with every expert rollout. For multi-robot studies, you can store experiences from Panda, Sawyer, and IIWA agents in the same memory and query across robot types to find transferable manipulation strategies. The integration also supports collaborative research — share your memory database with colleagues so they can build on your experimental results rather than re-running from scratch.
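For the imitation-learning use case, the stored records can be filtered into a demonstration buffer. The sketch below is illustrative, assuming records shaped like `save_perception`'s arguments; `extract_demonstrations` and the reward threshold are assumptions for this example, not part of the RobotMem API:

```python
import numpy as np

def extract_demonstrations(records, reward_threshold=0.8):
    """Keep only high-reward steps as (observation, action) pairs,
    suitable for feeding a behavior-cloning dataset."""
    return [(r["observation"], r["action"])
            for r in records if r["reward"] >= reward_threshold]

# Two stored steps: one successful, one not.
records = [
    {"observation": np.zeros(3), "action": np.ones(2), "reward": 0.95},
    {"observation": np.ones(3),  "action": np.zeros(2), "reward": 0.10},
]
demos = extract_demonstrations(records)
```

Since the memory persists across sessions, this buffer grows with every expert rollout rather than being rebuilt per run.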