RobotMem vs Letta (MemGPT)
Which memory system should you use in 2026?
Quick Summary
Letta (formerly MemGPT) is a stateful agent framework where an LLM manages its own memory autonomously — deciding when to move information from in-context working memory to archival storage, and when to retrieve it. This is a powerful paradigm for long-horizon text-based tasks like research assistants or coding agents. RobotMem takes a different approach: it uses a deterministic, typed storage engine to persist structured sensory data from physical robots — RGB images, joint trajectories, force-torque arrays, audio clips — without relying on an LLM to make storage decisions. Where Letta is intelligent about managing text, RobotMem is efficient about managing sensor streams. Both are offline-capable, both are open source, but they serve fundamentally different systems.
Bottom line: Choose RobotMem for physical robots that produce sensory data and need fast, deterministic memory retrieval at the edge. Choose Letta for long-horizon LLM agent tasks where the agent itself should reason about what to remember and what to forget.
Feature Comparison
| Capability | RobotMem | Letta (MemGPT) |
|---|---|---|
| Target use case | ✓ Physical robots | LLM-driven AI agents |
| Multi-modal perception | ✓ 5 types (visual, tactile, auditory, proprioceptive, procedural) | ✕ Text / document focus |
| Trajectory storage | ✓ Joint trajectories + timestamps | ✕ Not supported |
| Numeric parameters | ✓ Force, velocity, torque arrays | ✕ Not supported |
| Offline / edge capable | ✓ Local ONNX embedding, no internet required | ~ Offline possible with local LLM (Ollama), but complex setup |
| Visual deduplication | ✓ dHash perceptual hashing | ✕ Not supported |
| MCP protocol | ✓ Built-in MCP server | ✕ Not supported |
| Natural language storage | ✓ Supported | ✓ Core feature |
| Framework / model agnostic | ✓ ROS, MuJoCo, Isaac Gym, dm_control | ~ Works with multiple LLM backends |
| LLM-managed memory | ✕ Deterministic storage (no LLM needed) | ✓ Core architecture — LLM decides what to remember |
| Storage latency | ✓ Fast — no LLM inference at write time | ~ LLM inference required at each memory decision |
| Community & ecosystem | ~ Early stage, growing | ✓ Active research community, 12k+ GitHub stars |
| Setup complexity | ✓ Simple — pip install robotmem | ~ More configuration required (server + LLM backend) |
| License | Apache 2.0 | Apache 2.0 |
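The visual-deduplication row above names dHash, a standard perceptual-hashing technique. As a point of reference (this is the textbook algorithm, not RobotMem's actual implementation), a minimal pure-Python sketch looks like this: downscale to a (hash_size+1) × hash_size grid, then emit one bit per adjacent-pixel brightness comparison, so near-identical frames land a small Hamming distance apart.

```python
def dhash(gray, hash_size=8):
    """Difference hash of a grayscale image (nested list of ints).

    Downscale to (hash_size+1) columns x hash_size rows via nearest-neighbour
    sampling, then set one bit per pixel: 1 if it is brighter than its
    right-hand neighbour. Returns a 64-bit int for hash_size=8.
    """
    h, w = len(gray), len(gray[0])
    small = [
        [gray[r * h // hash_size][c * w // (hash_size + 1)]
         for c in range(hash_size + 1)]
        for r in range(hash_size)
    ]
    bits = 0
    for row in small:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a, b):
    """Number of differing bits between two hashes; small = near-duplicate."""
    return bin(a ^ b).count("1")
```

A duplicate frame hashes to Hamming distance 0 from the original, so a store can skip it without ever running an embedding model.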
Detailed Comparison
1. LLM-Managed vs. Deterministic Memory
Letta's defining insight is that an LLM can manage its own memory — the model itself decides to call archival_memory_insert or archival_memory_search based on what it thinks is important. This allows for flexible, intelligent memory management in open-ended agent tasks. The trade-off is that every memory operation requires an LLM inference step, adding latency and cost. RobotMem takes the opposite approach: storage is deterministic — when a sensory episode arrives, it is stored immediately without LLM involvement. Search uses vector similarity over pre-computed ONNX embeddings. For robots processing hundreds of frames per second, the LLM-in-the-loop model would be a bottleneck.
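The write path described above can be reduced to an append plus a vector index. The sketch below is illustrative only (class and method names are hypothetical, not RobotMem's API): storing an episode is a constant-time append with no model call, and retrieval ranks stored episodes by cosine similarity against a query embedding that would, in practice, come from the ONNX model.

```python
import math


class EpisodeStore:
    """Deterministic write path: episodes are appended with pre-computed
    embeddings; no LLM decides what to keep. Names here are hypothetical."""

    def __init__(self):
        self._episodes = []  # list of (embedding, payload) pairs

    def store(self, embedding, payload):
        # O(1) append at write time -- no inference in the loop.
        self._episodes.append((embedding, payload))

    def search(self, query, k=3):
        """Return the k payloads whose embeddings are most similar to query."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(y * y for y in b))
            return dot / (na * nb) if na and nb else 0.0

        ranked = sorted(self._episodes,
                        key=lambda e: cosine(query, e[0]),
                        reverse=True)
        return [payload for _, payload in ranked[:k]]
```

The design point is that both operations are pure functions of the data: the same episode stream always produces the same store, which is what makes write latency predictable at sensor rates.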
2. Data Types: Sensory Episodes vs. Archival Text
Letta's archival memory is optimized for text: documents, notes, conversation summaries. RobotMem's storage model was designed from the ground up for the data robots actually produce: typed perception records with fields for image tensors, proprioceptive joint vectors, force-torque readings, and audio waveforms. It also stores trajectory replays — sequences of joint angles with timestamps — which are essential for imitation learning and motion primitives. No text serialization of this data would preserve the fidelity needed for replaying or analyzing robot behavior.
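To make "typed perception records" concrete, here is one way such a schema could look (field names and types are illustrative assumptions, not RobotMem's actual data model): each modality gets a native numeric field rather than a text serialization, and a trajectory is a sequence of timestamped joint vectors that can be replayed losslessly.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

JointVector = List[float]


@dataclass
class PerceptionRecord:
    """One sensory episode; field names are illustrative, not RobotMem's schema."""
    timestamp: float
    image: Optional[List[List[int]]] = None     # H x W grayscale tensor
    joints: Optional[JointVector] = None        # proprioceptive joint angles (rad)
    force_torque: Optional[List[float]] = None  # 6-axis force-torque reading
    audio: Optional[List[float]] = None         # PCM audio samples


@dataclass
class Trajectory:
    """Replayable motion: timestamped joint vectors kept as numbers, not text."""
    waypoints: List[Tuple[float, JointVector]] = field(default_factory=list)

    def append(self, t: float, joints: JointVector) -> None:
        self.waypoints.append((t, list(joints)))

    def duration(self) -> float:
        return self.waypoints[-1][0] - self.waypoints[0][0] if self.waypoints else 0.0
```

Keeping the numbers intact is what enables imitation-learning replay: a trajectory round-trips exactly, which a text summary of the same motion cannot guarantee.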
3. Offline Capability: ONNX vs. Local LLM
Both tools can operate offline, but with different complexity profiles. Letta requires a full LLM backend — typically 7B-70B parameters via Ollama or similar — to function, since the LLM is core to memory management, not just retrieval. On an NVIDIA Jetson or Raspberry Pi 5, running a 7B parameter model alongside the robot control loop is often infeasible. RobotMem's local inference uses a compact ONNX embedding model that runs comfortably on edge hardware, with no LLM inference required during normal operation.
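A back-of-envelope budget shows why. All numbers below are illustrative assumptions (a 200 Hz perception loop, a few milliseconds for a compact ONNX embedding on a Jetson-class board, and hundreds of milliseconds for a single local 7B-LLM call), not measured benchmarks:

```python
# Per-frame latency budget; every number here is an illustrative assumption.
control_rate_hz = 200                 # assumed robot perception rate
budget_ms = 1000 / control_rate_hz    # 5 ms available per frame

embed_ms = 4.0    # assumed compact ONNX embedding on edge hardware
llm_ms = 500.0    # assumed single local 7B-LLM inference on the same hardware

print(f"budget {budget_ms:.1f} ms/frame: "
      f"embedding fits={embed_ms <= budget_ms}, "
      f"LLM-in-the-loop fits={llm_ms <= budget_ms}")
```

Under these assumptions an embedding-only write path fits the per-frame budget with headroom, while an LLM call per memory decision overshoots it by two orders of magnitude.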
4. Where Letta Wins: Long-Horizon Agent Reasoning
Letta's architecture shines for tasks where the agent itself must reason about what is worth remembering over long time horizons — research assistants that read papers over weeks, coding agents that learn project conventions over months, or personal assistants that track preferences across years. The LLM-driven memory model is uniquely suited to these open-ended, text-centric tasks. RobotMem stores whatever the robot perceives without judgment; Letta's agent decides what matters. For high-level cognitive agents running on cloud compute with reliable internet, Letta is the more research-forward choice.
Frequently Asked Questions
What is the main difference between RobotMem and Letta (MemGPT)?
RobotMem uses a deterministic storage engine for robot sensory data — images, joint trajectories, force readings — with no LLM involvement at write time. Letta uses an LLM to autonomously manage its own memory, deciding what to store and recall. RobotMem is optimized for high-frequency sensory data on edge hardware. Letta is optimized for intelligent, long-horizon reasoning in text-based agent tasks.
Can I use Letta for robot memory?
Letta can store text descriptions of robot observations, but it lacks native support for multi-modal sensor types, trajectory arrays, visual deduplication, or the MCP protocol. More critically, Letta requires LLM inference at each memory operation — at robot data rates (hundreds of frames per episode), this creates prohibitive latency. RobotMem handles robot data natively and stores it without LLM overhead.
Is RobotMem free to use?
Yes. RobotMem is open source under Apache 2.0 and installs with pip install robotmem. Letta is also Apache 2.0 open source, but requires more setup — running a Letta server and configuring an LLM backend (local or cloud). Both are free; RobotMem has a lower barrier to getting started.
Ready to give your robot persistent memory?
Open source, fully offline, one pip install away.
Get Started on GitHub · Read the Docs