External references (not same metric)
Use primary sources and align benchmark names, slice sizes, and configs before comparing ranks across papers or tools.
Docs
Quick links for session memory, MCP search configuration, and benchmark roadmaps. Full prose lives in the repo Markdown files (open on GitHub for the best reading experience).
| Peer | Editorial overall (0–10) | D / S / I / A | LME R@5 (harness) |
|---|---|---|---|
> If I have seen further it is by standing on the shoulders of Giants.
>
> — Isaac Newton, letter to Robert Hooke, 1675 (echoing Bernard of Chartres, *nanos gigantum humeris insidentes*)
Inspired by MemPalace (Ben Sigman on X). This repo is a concrete plugin implementation for Claude Code and adjacent agent tooling.
Per-chat notes are stored under raw/memory/, opt-in via memory.enabled in config.json. This is separate from the benchmark docs below.
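A minimal config.json fragment for the opt-in switch might look like this; the nested shape is an assumption (the source only names the memory.enabled key):

```json
{
  "memory": {
    "enabled": true
  }
}
```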
Configure mcp.search_backend in the vault config: fts5 (BM25), grep, chromadb, or hybrid (FTS + Chroma results combined via reciprocal-rank fusion, tunable with mcp.hybrid_rrf_k).
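As a sketch of how the hybrid backend could merge the two rankings, here is plain reciprocal-rank fusion; rrf_fuse and the example document IDs are illustrative, and the k parameter plays the role of mcp.hybrid_rrf_k (60 is a common default in the RRF literature, not necessarily this repo's):

```python
def rrf_fuse(ranked_lists, k=60):
    """Reciprocal-rank fusion: score(d) = sum over lists of 1 / (k + rank).

    ranked_lists: iterable of rankings, each a list of doc IDs, best first.
    Returns doc IDs sorted by fused score, best first.
    """
    scores = {}
    for ranking in ranked_lists:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical hit lists from the two backends:
fts_hits = ["a", "b", "c"]     # FTS5 / BM25 ranking
chroma_hits = ["a", "c", "d"]  # Chroma vector ranking
print(rrf_fuse([fts_hits, chroma_hits]))
```

A document ranked well by both backends (here "a") outscores one ranked highly by only a single backend, which is the point of the fusion step.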
Under docs/memory/benchmarks/, “memory” means long-context retrieval evaluation (LME, LoCoMo, ConvoMem)—not session memory in raw/memory/.
- runs.jsonl — mirror for charts
- metrics-dashboard.json — live snapshot for this page (vault LME, LoCoMo, peer editorial reference)