OpenClaw is one of the most interesting agent projects to ship this year. It is a personal AI assistant CLI that runs locally on your Mac, Windows, or Linux box and gives an LLM real control of your computer — files, shell, browser automation, and chat apps like WhatsApp, Telegram, Discord, Slack, Signal, and iMessage. As one user put it, it is “a smart model with eyes and hands at a desk with keyboard and mouse.”
It is also stateless by default. Every conversation starts cold. And that is the gap CortexDB fills.
The memory gap in local agents
Local agents have a structural disadvantage when it comes to memory. Cloud chat products can lean on a backend database; a local CLI has to ship its own. OpenClaw ships a built-in SQLite engine that handles the basics, and there is a community Honcho plugin for cross-session memory. Both work. Both also stop where a serious memory system needs to start: distribution, hybrid retrieval, knowledge graphs, governance, adaptive ranking.
CortexDB is a long-term memory layer purpose-built for AI agents: event-sourced, distributed, hybrid-retrieval, with a bitemporal knowledge graph and cross-encoder reranking. It is what you reach for when you want your agent to remember not just that something happened, but who said it, when, in what context, and how it relates to everything else you have ever told it.
The setup
Install the plugin:
```
openclaw plugins install @cortexdb/openclaw
openclaw gateway --force
```

Add the plugin entry to openclaw.json:
```json
{
  "plugins": {
    "entries": {
      "openclaw-cortexdb": {
        "enabled": true,
        "config": {
          "apiKey": "${CORTEXDB_API_KEY}",
          "userId": "alice"
        }
      }
    }
  }
}
```

What happens on every turn
The plugin hooks into three OpenClaw extension points:
- Before the prompt is built — CortexDB pulls the most relevant long-term context for the user's message and appends it to the system prompt as a preamble. Not just keyword or vector hits, but a fused, reranked, diversity-filtered context block.
- While the model is thinking — the model can invoke five tools directly:
  memory_search, memory_list, memory_store, memory_get, and memory_forget. The surface mirrors the Mem0 OpenClaw plugin shape.
- After the assistant replies — the completed exchange is persisted on a non-blocking path. CortexDB's write path then runs PII scanning, partition computation, WAL append, entity extraction, ontology validation, graph indexing, fulltext indexing, vector embedding, temporal indexing, and replication.
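The wiring of the first and third hooks can be sketched against a hypothetical, minimal interface — the real @cortexdb/openclaw hook names and signatures may differ, and FakeCortex below is an in-memory stand-in so the sketch runs on its own:

```typescript
// Hypothetical sketch of the before-prompt and after-reply hooks.
// The actual plugin API surface is an assumption, not OpenClaw's real one.
type Turn = { userMessage: string; assistantReply?: string };

interface MemoryBackend {
  recall(query: string): Promise<string>; // fused, reranked context block
  persist(turn: Turn): Promise<void>;     // write path (PII scan, indexing, …)
}

// In-memory stand-in for CortexDB so this sketch is self-contained.
class FakeCortex implements MemoryBackend {
  private log: Turn[] = [];
  async recall(_query: string): Promise<string> {
    return this.log.map((t) => t.userMessage).join("\n");
  }
  async persist(turn: Turn): Promise<void> {
    this.log.push(turn);
  }
}

// Hook 1: before the prompt is built, prepend recalled context as a preamble.
async function buildSystemPrompt(
  base: string,
  backend: MemoryBackend,
  userMessage: string
): Promise<string> {
  const context = await backend.recall(userMessage);
  return context ? `${context}\n---\n${base}` : base;
}

// Hook 3: after the reply, persist on a non-blocking (fire-and-forget) path.
function persistAsync(backend: MemoryBackend, turn: Turn): void {
  void backend.persist(turn);
}
```

The key property to preserve in a real implementation is the non-blocking write: the user sees the reply immediately while indexing happens in the background.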
Six things change
Plugging in a memory provider sounds incremental. It is not. Six things change the moment CortexDB is the backend.
- Hybrid retrieval, not just vectors. Six retrieval channels run in parallel on every recall — fulltext, entity-name, synonym, vector, graph BFS, and temporal — fused with reciprocal rank fusion and reranked with a cross-encoder.
- A real knowledge graph. Every memory is parsed by an LLM that extracts entities and relationships, stored in a vertex-cut graph with bitemporal validity windows.
- Adaptive ranking. Six scoring signals are fused with weights that adapt per agent and per query type from a feedback loop. No fine-tuning required.
- Always-on profile preamble. An L0 profile block and L1 session summary are prepended to every recall result, so your agent never has identity drift.
- Governance built in. PII scanning, right-to-be-forgotten, legal holds, and audit trails are in the write path, not bolted on.
- Distributed from day one. Consistent hashing, configurable replication, hinted handoff, anti-entropy repair. The Raft control plane handles cluster decisions; data writes never pay consensus latency.
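Reciprocal rank fusion, the merge step named in the first point, is simple enough to sketch. Assuming each channel returns an ordered list of memory IDs, every ID accumulates 1/(k + rank) per list it appears in, with k = 60 as the conventional smoothing constant:

```typescript
// Reciprocal rank fusion: merge ranked ID lists from several retrieval
// channels (fulltext, vector, graph, …) into one ordering. An ID that
// ranks well in multiple channels outscores one that tops a single channel.
function rrfFuse(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((id, rank) => {
      // rank is 0-based here, so the top hit contributes 1 / (k + 1).
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + rank + 1));
    });
  }
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([id]) => id);
}
```

For example, with fulltext `["a", "b", "c"]`, vector `["b", "c", "a"]`, and graph `["b", "d"]`, memory `b` wins: it tops two of the three lists, even though `a` tops one. A production system would fuse six lists and hand the result to the cross-encoder for reranking.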
A concrete example
Monday morning. You boot OpenClaw and say:
I'm building a desktop app called Lumen. It uses Tauri on the frontend and a Rust backend with sqlx and Postgres. The marketing launch is May 15.
CortexDB extracts entities (Lumen, Tauri, Rust, sqlx, Postgres) and edges (Lumen WorksOn Tauri, Lumen DependsOn sqlx, etc.).
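As data, that extraction might look like the following — an illustrative shape, not CortexDB's actual schema; the entity kinds and dates are placeholders:

```typescript
// Illustrative extraction result. Edges carry a bitemporal window:
// validFrom is when the fact became true, recordedAt is when it was learned.
type Entity = { name: string; kind: string };
type Edge = {
  from: string;
  relation: string;
  to: string;
  validFrom: string;
  recordedAt: string;
  validTo?: string; // open-ended until invalidated
};

const extraction: { entities: Entity[]; edges: Edge[] } = {
  entities: [
    { name: "Lumen", kind: "Project" },
    { name: "Tauri", kind: "Framework" },
    { name: "Rust", kind: "Language" },
    { name: "sqlx", kind: "Library" },
    { name: "Postgres", kind: "Database" },
  ],
  edges: [
    { from: "Lumen", relation: "WorksOn", to: "Tauri", validFrom: "2025-05-05", recordedAt: "2025-05-05" },
    { from: "Lumen", relation: "DependsOn", to: "sqlx", validFrom: "2025-05-05", recordedAt: "2025-05-05" },
    { from: "Lumen", relation: "DependsOn", to: "Postgres", validFrom: "2025-05-05", recordedAt: "2025-05-05" },
  ],
};
```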
Friday evening, new terminal. You say:
Can you draft the press release for the launch?
The agent already knows the project is Lumen, the launch is May 15, and the stack is Tauri/Rust/Postgres. The draft comes back grounded on the first try.
Three weeks later. You say:
Forget that we're using Postgres — we switched to SQLite.
The model calls memory_forget. The Postgres edge is tombstoned, the deletion is recorded in the audit log, and future recalls return only the SQLite fact. Bitemporal validity means “what stack did we plan on Monday?” still returns the original answer. History preserved; current state correct.
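A minimal sketch of why tombstoning behaves this way — illustrative only, not CortexDB's storage code: a "forget" closes the fact's validity window instead of deleting the row, so an as-of query still sees history while current-state queries see only the replacement.

```typescript
// Each fact has a valid-time window; forgetting closes the window
// (a tombstone) rather than deleting, so as-of queries preserve history.
type Fact = {
  subject: string;
  relation: string;
  object: string;
  validFrom: number;
  validTo: number;
};

const FOREVER = Number.MAX_SAFE_INTEGER;

class BitemporalStore {
  private facts: Fact[] = [];

  assert(subject: string, relation: string, object: string, at: number): void {
    this.facts.push({ subject, relation, object, validFrom: at, validTo: FOREVER });
  }

  // Tombstone: close the open validity window instead of deleting the fact.
  forget(subject: string, relation: string, object: string, at: number): void {
    for (const f of this.facts) {
      if (f.subject === subject && f.relation === relation && f.object === object && f.validTo === FOREVER) {
        f.validTo = at;
      }
    }
  }

  // "What was true at time t?" — half-open interval [validFrom, validTo).
  asOf(subject: string, relation: string, t: number): string[] {
    return this.facts
      .filter((f) => f.subject === subject && f.relation === relation && f.validFrom <= t && t < f.validTo)
      .map((f) => f.object);
  }
}
```

With the Lumen example: assert the Postgres dependency on Monday, tombstone it and assert SQLite three weeks later, and an as-of query at Monday still answers Postgres while a current query answers SQLite.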
Why it matters
OpenClaw's whole pitch is that an agent on your desktop, with eyes and hands, can do what a coworker can do. Coworkers have memory. They remember the project, the people, the decisions, the failed experiments, and the half-finished thoughts from three months ago. Without long-term memory, your OpenClaw agent is a coworker with amnesia. With CortexDB, it is a coworker with a hippocampus.
Try it
The plugin is open source. Install @cortexdb/openclaw, point it at your CortexDB instance, and give your OpenClaw agent a memory that actually sticks. Full docs are on the OpenClaw integration page.