
Quickstart

This guide walks you through a complete robot learning episode using robotmem's six API tools.

Episode lifecycle

start_session()          ← start the episode
    │
    ├── learn()          ← record declarative experience
    ├── save_perception()← store sensor/trajectory data
    ├── recall()         ← retrieve relevant past experience
    │
end_session()            ← consolidation + active recall

Step 1: Start a session

Every episode begins with start_session. This creates a session context that groups all memories from the episode.

from robotmem import start_session

session = start_session(
    context='{"robot_id": "arm-01", "robot_model": "UR5e", "environment": "kitchen-3F", "task_domain": "pick-and-place"}'
)
# Returns: {"session_id": "abc-123-...", "collection": "default", "active_memories_count": 42}

The context parameter accepts arbitrary JSON; robotmem does not enforce a schema, but the four-partition convention below is recommended:

{
    "params":  {"grip_force": {"value": 12.5, "unit": "N"}},
    "spatial": {"object_position": [1.3, 0.7, 0.42]},
    "robot":   {"id": "arm-01", "type": "UR5e"},
    "task":    {"name": "pick-and-place", "domain": "kitchen"}
}
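
The four-partition context above can be built as a plain Python dict and serialized with json.dumps before being passed to start_session. A minimal sketch (robotmem accepts any JSON string here):

```python
import json

# Build the recommended four-partition context as a dict, then
# serialize it: robotmem tools take context as a JSON string.
context = {
    "params":  {"grip_force": {"value": 12.5, "unit": "N"}},
    "spatial": {"object_position": [1.3, 0.7, 0.42]},
    "robot":   {"id": "arm-01", "type": "UR5e"},
    "task":    {"name": "pick-and-place", "domain": "kitchen"},
}
context_json = json.dumps(context)
```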

Step 2: Record experience

learn — declarative memory

Use learn to record insights, parameters, strategies, and lessons learned:

from robotmem import learn

# Record a successful strategy
result = learn(
    insight="grip_force=12.5N produces optimal grasp success rate for cylindrical objects",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}',
    session_id=session["session_id"],
)
# Returns: {"status": "created", "memory_id": 1, "auto_inferred": {"category": "observation", ...}}

Processing performed automatically:
  - Auto-classification: categorized as observation / constraint / pattern / postmortem, etc.
  - Confidence estimation: scored 0.80-0.95 based on content-richness signals
  - Deduplication: a 3-layer check (exact match → Jaccard → cosine) prevents redundant memories
  - Tag inference: semantic tags assigned from a controlled vocabulary of 50+ terms
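
The 3-layer deduplication check can be illustrated with a short sketch. The thresholds and helper functions below are illustrative assumptions, not robotmem's actual internals:

```python
def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard overlap between two texts."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = sum(x * x for x in u) ** 0.5
    nv = sum(x * x for x in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def is_duplicate(new_text, new_vec, old_text, old_vec,
                 jaccard_thr=0.9, cosine_thr=0.95):
    if new_text == old_text:                          # layer 1: exact match
        return True
    if jaccard(new_text, old_text) >= jaccard_thr:    # layer 2: token overlap
        return True
    return cosine(new_vec, old_vec) >= cosine_thr     # layer 3: embedding similarity
```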

save_perception — procedural memory

Use save_perception to store sensor data, trajectories, and multi-step procedures:

from robotmem import save_perception

result = save_perception(
    description="Successful grasp of red cup: 30 steps, force peak 12.5N at step 15",
    perception_type="procedural",  # visual | tactile | auditory | proprioceptive | procedural
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], [0.2, -0.2, 0.1, 0.9]], "steps": 30}',
    metadata='{"format": "xyz_gripper", "hz": 10}',
    session_id=session["session_id"],
)
# Returns: {"memory_id": 2, "perception_type": "procedural", "has_embedding": true}

Step 3: Retrieve experience

recall combines BM25 full-text search with vector similarity search for best results:

from robotmem import recall

memories = recall(
    query="how to grasp a cup",
    n=5,                    # return the top 5
    min_confidence=0.3,     # filter out low-confidence memories
)
# Returns: {"memories": [...], "total": 5, "mode": "hybrid", "query_ms": 8.2}

Each returned memory contains:

{
    "id": 1,
    "content": "grip_force=12.5N produces optimal grasp success rate",
    "type": "fact",                    # "fact" or "perception"
    "perception_type": None,           # perception type: "visual"/"tactile"/...
    "category": "observation",         # auto-classified category
    "confidence": 0.85,                # 0.0-1.0
    "params": {"grip_force": {"value": 12.5, "unit": "N"}},   # extracted from context
    "spatial": {...},                  # extracted from context
    "robot": {...},                    # extracted from context
    "task": {"success": True},         # extracted from context
    "_rrf_score": 1.0,                # normalized relevance score
    "created_at": "2026-03-09T10:30:00",
}
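
The "hybrid" mode and the _rrf_score field suggest rank-based fusion of the BM25 and vector result lists. Below is a sketch of standard Reciprocal Rank Fusion with the conventional k=60 constant; robotmem's exact fusion formula is an assumption here, not documented behavior:

```python
def rrf_fuse(bm25_ids: list[int], vector_ids: list[int], k: int = 60) -> dict[int, float]:
    """Combine two rankings with Reciprocal Rank Fusion,
    normalizing so the best document scores 1.0."""
    scores: dict[int, float] = {}
    for ranking in (bm25_ids, vector_ids):
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    top = max(scores.values())
    return {doc_id: s / top for doc_id, s in scores.items()}
```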

Structured filtering (context_filter)

Filter memories by structured fields in the context JSON:

# successful experiences only
recall(query="grasp", context_filter='{"task.success": true}')

# range filter: distance < 0.05 m
recall(query="push", context_filter='{"params.final_distance.value": {"$lt": 0.05}}')

# combined conditions
recall(query="grasp", context_filter='{"task.success": true, "robot.type": "UR5e"}')

Supported operators: $lt, $lte, $gt, $gte, $ne, and exact equality.
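
The filter semantics above can be sketched as a small matcher that resolves dotted paths and applies the operators. This is purely illustrative of the behavior, not robotmem's implementation:

```python
import operator

# Map the supported operator keys to Python comparisons.
OPS = {"$lt": operator.lt, "$lte": operator.le,
       "$gt": operator.gt, "$gte": operator.ge, "$ne": operator.ne}

def get_path(ctx: dict, dotted: str):
    """Resolve a dotted path like "params.final_distance.value"."""
    for key in dotted.split("."):
        if not isinstance(ctx, dict) or key not in ctx:
            return None
        ctx = ctx[key]
    return ctx

def matches(ctx: dict, context_filter: dict) -> bool:
    """Return True if the memory's context satisfies every condition."""
    for path, cond in context_filter.items():
        value = get_path(ctx, path)
        if isinstance(cond, dict):  # operator form, e.g. {"$lt": 0.05}
            if not all(op in OPS and value is not None and OPS[op](value, ref)
                       for op, ref in cond.items()):
                return False
        elif value != cond:         # exact equality
            return False
    return True
```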

Spatial sorting (spatial_sort)

Sort results by proximity to a target position:

recall(
    query="grasp object",
    spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42]}',
)

# with a distance threshold
recall(
    query="grasp",
    spatial_sort='{"field": "spatial.object_position", "target": [1.3, 0.7, 0.42], "max_distance": 0.1}',
)

Episode replay

Pass a session_id to retrieve all memories from a specific episode in chronological order:

recall(query="*", session_id="abc-123-...")

Step 4: End the session

end_session triggers three automatic processing passes:

from robotmem import end_session

result = end_session(
    session_id=session["session_id"],
    outcome_score=0.85,  # optional: 0.0-1.0 episode success score
)

Processing performed automatically:

  1. Time decay: memories that have not been accessed for a long time gradually lose confidence
     - Formula: confidence_new = confidence × (1 - decay_rate) ^ days_since_last_access
     - Default decay_rate: 0.01
     - Frequently recalled memories keep their high confidence

  2. Consolidation: similar memories within the session are merged
     - Jaccard similarity > 0.50 → grouped, keeping the highest-confidence memory
     - Protected categories (constraint, postmortem, gotcha) are never merged
     - Perception data is never merged

  3. Active recall: returns past memories related to this session's newest experiences
     - Searches all sessions for similar content
     - Excludes the current session's own memories
     - Returns up to 5 related memories for use in the next episode
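
The decay formula from step 1 is straightforward to compute. With the default decay_rate of 0.01, a memory untouched for 30 days retains roughly 74% of its confidence:

```python
# confidence_new = confidence * (1 - decay_rate) ** days_since_last_access

def decayed_confidence(confidence: float, days_since_last_access: float,
                       decay_rate: float = 0.01) -> float:
    """Apply the time-decay formula used by end_session (step 1 above)."""
    return confidence * (1 - decay_rate) ** days_since_last_access
```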

end_session response:

{
    "status": "ended",
    "session_id": "abc-123-...",
    "summary": {
        "memory_count": 15,
        "by_type": {"fact": 12, "perception": 3},
        "by_category": {"observation": 8, "pattern": 3, "decision": 1}
    },
    "decayed_count": 42,
    "consolidated": {"merged_groups": 2, "superseded_count": 3, "compression_ratio": 0.2},
    "related_memories": [
        {"id": 5, "content": "Similar grasp strategy from kitchen-2F", ...},
    ],
}

Step 5: Correct mistakes

forget — soft delete

from robotmem import forget

forget(memory_id=3, reason="Incorrect force reading due to sensor calibration error")
# Returns: {"status": "forgotten", "memory_id": 3, "reason": "..."}

update — modify content

from robotmem import update

update(
    memory_id=1,
    new_content="grip_force=11.0N produces optimal grasp (recalibrated sensor)",
    context='{"params": {"grip_force": {"value": 11.0, "unit": "N"}}}',
)
# Returns: {"status": "updated", "memory_id": 1, "old_content": "...", "new_content": "..."}

Updating a memory automatically re-runs classification, regenerates the vector embedding, and re-infers the tags.

Full example

from robotmem import learn, recall, save_perception, start_session, end_session

# --- episode start ---
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')

# --- record experience ---
learn(
    insight="grip_force=12.5N works best for red cups",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}',
    session_id=session["session_id"],
)

save_perception(
    description="Grasp trajectory: 30 steps, success",
    perception_type="procedural",
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8]]}',
    session_id=session["session_id"],
)

# --- retrieval (what the next episode would do) ---
memories = recall(
    query="how to grasp a cup",
    context_filter='{"task.success": true}',
)
for m in memories["memories"]:
    print(f"#{m['id']}  {m['content'][:60]}  score={m['_rrf_score']:.2f}")

# --- episode end ---
result = end_session(session_id=session["session_id"], outcome_score=0.85)
print(f"Consolidation: {result['consolidated']['superseded_count']} memories merged")
print(f"Related past memories: {len(result['related_memories'])}")

Next steps