Claude Code Plugin

llm-wiki

In short: install the plugin → type /llm-wiki:setup in chat → capture in raw/ → curate wiki/ → /llm-wiki:build-og when you want a static viewer. From a shell, llm-wiki build-site and llm-wiki build-og are the same build (alias names).

Turn scattered source material into a maintained wiki your agent can keep improving over time — slash-command first (/llm-wiki:setup, /llm-wiki:ingest, /llm-wiki:build-og), not flag hell.

Every slash command is a curated prompt under commands/: setup, ingest, query, memory, MCP, and more. You stay in Claude Code; the repo still gives you raw/, wiki/, and schema-backed workflows when you need depth.

If I have seen further it is by standing on the shoulders of Giants.

Isaac Newton, letter to Robert Hooke, 1675
(echoing Bernard of Chartres, nanos gigantum humeris insidentes)

Pattern credit and thanks to Karpathy · LLM Wiki gist · Karpathy on X. Also inspired by MemPalace · Milla & Ben on X (more context). This repo is a concrete plugin implementation for Claude Code and adjacent agent tooling.

Install once
/plugin marketplace add https://github.com/SkinnnyJay/wiki-llm
/plugin install llm-wiki@llm-wiki-local

Type / in chat — each commands/<name>.md file is /llm-wiki:<name>. No need to hunt for bin/llm-wiki for daily work: use /llm-wiki:ingest, /llm-wiki:query, /llm-wiki:memory, /llm-wiki:status, /llm-wiki:build-og, and /llm-wiki:mcp as often as /llm-wiki:setup or /llm-wiki:configure. Full catalog is under Skills & commands.

1. Set up the vault
/llm-wiki:setup
2. Ingest one source
/llm-wiki:ingest
3. Build the viewer
/llm-wiki:build-og

The prompts carry the flags and adapters for you. Prefer a terminal? Same steps map to llm-wiki setup, ingest …, and llm-wiki build-site (alias build-og, same build) — see CLI reference.

How the landing pieces map to the actual repo

01

Vault Structure

The vault keeps raw inputs separate from curated knowledge. That lets you preserve the source of truth, revise derived notes safely, and keep schema-driven rules nearby.

llm-wiki/
  raw/        # immutable source captures
  wiki/       # curated markdown pages
  .schema/    # validation and structure rules
  outputs/    # optional reports and generated artifacts
  • raw/ protects the original material.
  • wiki/ holds maintained markdown the agent can improve over time.
  • .schema/ keeps ingest, query, and lint behavior disciplined.
  • Generated viewer artifacts can live alongside the vault without replacing the markdown source.
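For orientation, the whole layout amounts to four directories — a minimal sketch of the scaffold (the real /llm-wiki:setup also writes templates, schema rules, and CLAUDE.md, so this is illustrative only):

```python
from pathlib import Path

def scaffold_vault(root: Path) -> Path:
    """Create the bare llm-wiki/ layout: raw inputs, curated wiki,
    schema rules, and optional generated outputs."""
    vault = root / "llm-wiki"
    for sub in ("raw", "wiki", ".schema", "outputs"):
        (vault / sub).mkdir(parents=True, exist_ok=True)
    return vault
```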
02

Ingest Pipeline

The pipeline is designed so new sources enter as recorded inputs first, then move through review and curation into the wiki layer.

Source (file, URL, HN, optional connectors) → raw/ (captured without rewriting history) → wiki/ (curated markdown and linked notes)
# In chat
/llm-wiki:ingest
# CLI equivalent (when you want exact flags)
llm-wiki ingest file ./notes.md --out clips/notes.md
llm-wiki ingest url https://example.com --out clips/example.md
  • Files, URLs, and Hacker News threads can all enter the same workflow.
  • Optional connectors can extend capture without changing the vault model.
  • The review path helps keep prompt-injected or low-trust material from silently becoming wiki truth.
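The "captured without rewriting history" step is simple in spirit: store the source verbatim plus provenance, and curate later. A sketch under illustrative assumptions — the frontmatter fields here (source, sha256, captured) are hypothetical, not the plugin's actual schema:

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def capture_raw(text: str, source: str, raw_dir: Path) -> Path:
    """Illustrative capture step: write the source verbatim under raw/
    with minimal provenance frontmatter; curation into wiki/ comes later."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
    captured = datetime.now(timezone.utc).isoformat(timespec="seconds")
    body = (
        f"---\nsource: {source}\nsha256: {digest}\n"
        f"captured: {captured}\n---\n\n{text}"
    )
    path = raw_dir / f"{digest}.md"
    path.write_text(body, encoding="utf-8")
    return path
```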
03

Static Viewer

After ingest and curation, run the build so wiki/.og/ gets wiki-data.json plus the static viewer (graph UI, search, page reader). In chat that is /llm-wiki:build-og. In the shell, llm-wiki build-site and llm-wiki build-og are the same subcommand — two names, one pipeline. build-site is the canonical spelling; build-og is an alias that matches the .og output folder and the slash command name.

# Build (slash — runs the same pipeline as build-site / build-og)
/llm-wiki:build-og
# Either name emits wiki/.og/ + wiki-data.json — pick one:
llm-wiki build-site
# llm-wiki build-og

# Optional: build + local HTTP (port from viewer.port, default 8765)
llm-wiki build-og --serve-background
llm-wiki build-og --stop-serving
# Identical flags on build-site, e.g. llm-wiki build-site --serve-background

# Foreground (Ctrl+C stops): llm-wiki build-site --serve

Static files must be served over HTTP, not file://, so graph assets and wiki data load predictably. The slash command runs the build only. From a shell, --serve / --serve-background work on both build-site and build-og (same flags); --stop-serving stops the background server (pid in wiki/.og/.viewer-http.pid). Docs often show build-og next to --serve* because the name matches wiki/.og/. You can use ./scripts/serve-viewer.sh after a build if you prefer.
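Serving the built viewer is plain static-file HTTP. Roughly what ./scripts/serve-viewer.sh amounts to, sketched in Python (the script's actual behavior and flags may differ):

```python
import functools
import threading
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path

def serve_viewer(root: Path, port: int = 8765) -> ThreadingHTTPServer:
    """Serve a built viewer directory (e.g. wiki/.og/) over local HTTP
    in a background thread; call .shutdown() on the result to stop."""
    handler = functools.partial(SimpleHTTPRequestHandler, directory=str(root))
    httpd = ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd
```

Passing port=0 asks the OS for an ephemeral port, handy for tests; the default 8765 mirrors the viewer.port default mentioned above.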

04

Skills & Commands

The main surface is slash commands in Claude Code — each one is a prompt backed by the same skills as the CLI. Use / for day-to-day vault work; reach for bin/llm-wiki when you want automation, scripts, or exact flags. For the full terminal reference, see CLI reference below.

Claude Code slash commands

This is the path we optimize for: prompt bodies live in commands/; invoke as /llm-wiki:<name> (hyphenated filenames stay hyphenated, e.g. graph-knowledge). After plugin changes, run /reload-plugins in Claude Code. Each name below links to the markdown source on GitHub.

Vault setup & health

For the first run, a re-run of the wizard, or whenever you need to tune config or sanity-check the vault — a different rhythm from the capture-and-read commands below.

/llm-wiki:setup
New vault, re-wizard, vault-only, or memory-only — wiki-setup.
/llm-wiki:configure
Quick tweaks to config.json — same flow as MCP wiki_configure or llm-wiki configure -i when you use the CLI.
/llm-wiki:status
Vault health — integrations, search backend, KG; MCP wiki_status.
Core workflow

What you reach for most often once the vault exists: add sources, ask questions, ship the viewer, keep quality in check.

/llm-wiki:ingest
Land sources in raw/, then merge to wiki/ — adapters, limits, provenance.
/llm-wiki:query
Answer from the wiki with citations; optional save back to wiki/ — wiki-query.
/llm-wiki:build-og
Emit wiki/.og/ + wiki-data.json. CLI: llm-wiki build-site (canonical name) and llm-wiki build-og (alias, same flags — use either).
/llm-wiki:lint
Wiki health — orphans, broken wikilinks, contradictions — wiki-lint.
Memory & MCP
/llm-wiki:memory
Session memory under raw/memory/ — save, list, recall, prune — wiki-session-memory.
/llm-wiki:mcp
MCP server (stdio or SSE), editor install, search/KG backends — same as llm-wiki mcp … when you need the terminal.
Graphs & integrations
/llm-wiki:graph
Link graph / D3 views — graph --mode links|knowledge.
/llm-wiki:graph-knowledge
Knowledge-style cluster view (shortcut for graph --mode knowledge).
/llm-wiki:integrations
Adapter readiness, keys, Firecrawl-style tooling — integrations status|validate|wizard|set-key.
Raw layer
/llm-wiki:raw-prepare
Validate/clean raw/, raw finish, prep for wiki-ingest — wiki-raw-prepare.
Research & benchmarks
/llm-wiki:research
Ad-hoc topic research (not the same as query, which reads existing wiki/).
/llm-wiki:research-loop
Batch tasks from research-tasks.json / YAML.
/llm-wiki:benchmark
Retrieval benchmarks (LME / LoCoMo / ConvoMem) — ties to memory hub charts.
Vault git
/llm-wiki:git-status
Short status inside the vault repo.
/llm-wiki:git-log
History / log views scoped to the vault.
/llm-wiki:git-diff
Diffs for vault changes.
/llm-wiki:git-snapshot
Snapshot workflow helpers.
/llm-wiki:git-lifecycle
Lifecycle views across vault git history.

Try this first

Install the plugin once, then stay in the slash menu for the loop.

Install /plugin install llm-wiki@llm-wiki-local
Setup /llm-wiki:setup
Ingest /llm-wiki:ingest
Build /llm-wiki:build-og

Core (slash first)

  • /llm-wiki:setup scaffolds the vault — same as llm-wiki setup.
  • /llm-wiki:ingest drives sources into raw/ — adapters and flags live in the prompt.
  • /llm-wiki:build-og emits the static reader into wiki/.og/ — same pipeline as llm-wiki build-site or llm-wiki build-og (alias).

Keep it healthy

  • /llm-wiki:configure or /llm-wiki:status for config and health checks.
  • /llm-wiki:raw-prepare and /llm-wiki:lint for raw cleanup and wiki quality.
  • Skills like wiki-ingest and wiki-query align with the slash flows — use CLI only when you need non-interactive scripts.

Also works with: bin/llm-wiki or python3 scripts/llm_wiki.py for CI and power users; Cursor and Codex via AGENTS.md; manual clone for plugin development; Obsidian as an optional reader for llm-wiki/wiki/.

Commands, workflows, and ethos live in README, WORKFLOWS.md, ETHOS.md, and the repo skills.

05

Memory & retrieval

Open memory hub — charts & doc links

Three ideas share the word memory here: session memory (per-chat notes under raw/memory/ when enabled), search backends for MCP and the wiki (lexical, semantic, or hybrid with reciprocal-rank fusion), and benchmark docs for long-context retrieval evaluation (LME, LoCoMo, ConvoMem)—separate from vault content.

# In chat — session notes & MCP search context
/llm-wiki:memory
/llm-wiki:mcp

# Retrieval benchmarks (plugin repo)
/llm-wiki:benchmark
# After memory.enabled — CLI one-liners
llm-wiki memory save --current "Pinned fact for this chat"
llm-wiki memory list

# Same benchmarks from the terminal (plugin repo)
llm-wiki benchmark suites
  • Session memory: opt-in via memory.enabled; start from /llm-wiki:memory — CLI memory {save|log|list|show|recall|prune} matches the same feature; hooks can write .current-session for --current.
  • MCP search: mcp.search_backend selects fts5 (BM25), grep, chromadb (embeddings), or hybrid (FTS5 + Chroma + RRF; mcp.hybrid_rrf_k defaults to 60).
  • Benchmarks: CLI and methodology in benchmarks/README.md; charts and links on the Memory docs hub (this site).
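Reciprocal-rank fusion itself is a one-formula merge: each document scores the sum of 1/(k + rank) across the ranked lists it appears in, with k = 60 matching the mcp.hybrid_rrf_k default. A sketch of the technique, not the plugin's implementation:

```python
def rrf_fuse(result_lists, k: int = 60):
    """Reciprocal-rank fusion: merge ranked result lists (e.g. FTS5/BM25
    hits and Chroma embedding hits) into one ranking. Ranks are 1-based;
    documents appearing high in multiple lists rise to the top."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```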

“Memory” in docs/memory/benchmarks/ means long-context retrieval evaluation, not the session-memory feature under raw/memory/.

Terminal & automation (CLI)

Slash commands above are the friendly path. When you need scripts, CI, or exact flags, run bin/llm-wiki or python3 scripts/llm_wiki.py from the repo root or vault. Use --help on any subcommand.

Vault & config

llm-wiki configure · -i interactive
Write or edit config.json (paths, viewer, integrations, persona).
llm-wiki setup --root .
Scaffold llm-wiki/ from templates (raw, wiki, schema, CLAUDE.md).
llm-wiki teardown
Remove generated artifacts; optional --purge to remove the vault tree.
llm-wiki validate · --wikilinks
Check vault layout; optionally fail if [[wikilinks]] point to missing pages.
llm-wiki check · --plugin-repo
Fast sanity checks (config, optional compileall); plugin repo: agent docs + scripts.
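The --wikilinks pass on validate boils down to "every [[target]] must name an existing page". A simplified sketch — the real check may handle aliases, anchors, and nested paths differently:

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def broken_wikilinks(wiki_dir: Path):
    """Return (page, target) pairs for [[wikilinks]] whose target page
    does not exist under wiki/. Sketch only: strips aliases/anchors."""
    pages = {p.stem for p in wiki_dir.rglob("*.md")}
    broken = []
    for page in wiki_dir.rglob("*.md"):
        for target in WIKILINK.findall(page.read_text(encoding="utf-8")):
            if target.strip() not in pages:
                broken.append((page.name, target.strip()))
    return broken
```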

Ingest & integrations

llm-wiki ingest …
Bring sources into raw/ via adapters (file, url, HN, etc.); see ingest --list.
llm-wiki deps check
Verify optional Python deps (e.g. PDF ingest helpers).
llm-wiki integrations (status · validate · wizard · set-key)
See which adapters are ready, validate config, interactive setup, or store API keys for tools like Firecrawl.

Build, graphs & context

llm-wiki build-site
Canonical subcommand: write wiki-data.json and the static viewer into wiki/.og/. Respects viewer.enabled.
llm-wiki build-og
Alias for build-site — identical behavior and flags (--serve, --serve-background, --stop-serving, --if-stale, --port). The name matches the .og folder and /llm-wiki:build-og.
llm-wiki graph · --mode links|knowledge
Generate a D3 graph in .tmp/llm-wiki-graph/ (link view or cluster view).
llm-wiki graph-knowledge
Shortcut for graph --mode knowledge (connected-component clusters).
llm-wiki list-topics
Show tag index with wiki coverage.
llm-wiki wake-up · --update-claude
Print L0+L1 context blob; optionally refresh memory stack in vault CLAUDE.md.

Raw layer

llm-wiki raw validate <path> · --autofix
Deterministic checks on a file under raw/; safe autofixes optional.
llm-wiki raw record <path>
Append one JSON line to the preparation audit log after a cleanup step.
llm-wiki raw finish <path>
Validate, log, and optional [prepare] git commit when git is enabled.
llm-wiki raw rebuild-index
Rebuild hashes/tags indexes from raw/ frontmatter.
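Rebuilding the tag index is a scan over raw/ frontmatter. A simplified sketch assuming inline `tags: [a, b]` lists — the real command also rebuilds hash indexes and may accept other YAML forms:

```python
import re
from pathlib import Path

def rebuild_tag_index(raw_dir: Path):
    """Sketch of a tag-index rebuild: read each file's YAML-ish
    frontmatter and map tag -> list of file names."""
    index = {}
    for path in sorted(raw_dir.rglob("*.md")):
        match = re.match(r"---\n(.*?)\n---", path.read_text(encoding="utf-8"), re.S)
        if not match:
            continue  # no frontmatter block, skip
        tags_line = re.search(r"^tags:\s*\[(.*?)\]", match.group(1), re.M)
        if tags_line:
            for tag in tags_line.group(1).split(","):
                index.setdefault(tag.strip(), []).append(path.name)
    return index
```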

Ops & automation

llm-wiki git (init · status · log · diff · snapshot · query · lifecycle)
Vault-scoped git helpers for snapshots, history, and lifecycle views.
llm-wiki research-loop
Run batch research tasks from research-tasks.json (or YAML with PyYAML).
llm-wiki security scan <file>
Heuristic security scan on one file (e.g. prompt-injection patterns).
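A heuristic scan like this is pattern matching over lines. A sketch with illustrative patterns — not the plugin's actual rule set:

```python
import re

# Illustrative prompt-injection phrasings; a real rule set would be broader.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
    r"you are now",
]

def scan_for_injection(text: str):
    """Flag (line_number, pattern) pairs for lines matching any known
    injection phrasing, case-insensitively."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in INJECTION_PATTERNS:
            if re.search(pattern, line, re.I):
                hits.append((lineno, pattern))
    return hits
```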

Memory, search & benchmarks

llm-wiki memory (save · log · list · show · recall · prune)
Per-chat session notes under raw/memory/ when memory.enabled; --current uses .current-session.
llm-wiki mcp · --transport sse · mcp install
stdio JSON-RPC server (default) or HTTP SSE; vault tools including search (see mcp.search_backend).
llm-wiki benchmark (run · suites · report · history · compare · analyze)
Retrieval benchmarks (LME / LoCoMo / ConvoMem); see benchmarks/README.md and Memory docs hub.

Plugin repo & CI

llm-wiki sync-agent-docs · --check
Regenerate AGENTS.md / rules from docs/AGENTS.shared.md; --check fails if drift.
llm-wiki smoke-test
Run the pytest suite from the plugin root (contracts, CLI, vault flow).
llm-wiki test-report
Executable CLI/doc checks; optional --json for machine-readable output.