Now available — Start building today

The Long-Term Memory Layer for AI

A cognitive memory engine with a 6-phase recall pipeline, neural reranking, and adaptive query planning. 76.4% on LoCoMo. 53+ integrations. Production-ready.

quickstart.py
python
from cortexdb import Cortex

client = Cortex(api_key="your-key")
client.remember(content="Q3 revenue: $2.4M",
                tenant_id="acme")
result = client.recall(query="Q3 revenue?",
                       tenant_id="acme")
76.4% · LoCoMo Benchmark · LLM Judge (1,540 QA pairs)
53+ · Integrations · frameworks & tools
16 · Connectors · Slack, GitHub, Jira, and more
6 · IDE Integrations · MCP server

Architecture

How it works

A 6-phase cognitive recall pipeline backed by a distributed, Raft-consensus cluster.

Cognitive Recall Pipeline
Phase 1: Query
Phase 2: Planner
Phase 3: 4-Channel Search
Phase 4: Reranking
Phase 5: Gate
Phase 6: KG Enrichment
Recall · ~737 ms latency
Distributed Cluster · Raft Consensus
Healthy · RF 3
Leader + Nodes 2–5 · horizontal scaling
Data replication · Raft consensus · Token routing
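The quorum rule behind the cluster's durability guarantee can be sketched in a few lines. This is a toy illustration of majority-quorum acknowledgement under replication factor 3, the same rule Raft relies on; it is not CortexDB's actual implementation.

```python
# Toy sketch: a write is durable once a majority of the replication
# factor (RF) acknowledges it. Illustrative only.

def quorum(rf: int) -> int:
    """Majority quorum for a given replication factor."""
    return rf // 2 + 1

def write_committed(acks: int, rf: int = 3) -> bool:
    """A write commits once a majority of replicas acknowledge it."""
    return acks >= quorum(rf)

print(quorum(3))           # → 2: majority of 3 replicas
print(write_committed(2))  # → True: 2 of 3 acks, committed
print(write_committed(1))  # → False: not yet committed
```

With RF 3 the cluster tolerates one node failure without losing acknowledged writes, which is why the node diagram above shows five nodes but only needs a majority to make progress.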

Features

Built for production AI workloads

Everything your AI agents need for reliable, accurate long-term memory. No compromises.

Event-Sourced Memory

Raw content preserved exactly as received. Never rewritten or hallucinated by an LLM. Full audit trail of every change.
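The event-sourcing idea reduces to a few lines: content is appended verbatim to an immutable log, and the audit trail is the log itself. A minimal sketch, not the actual storage engine:

```python
import time
from dataclasses import dataclass, field

@dataclass(frozen=True)
class MemoryEvent:
    """Immutable record: raw content exactly as received."""
    content: str
    tenant_id: str
    timestamp: float = field(default_factory=time.time)

class EventLog:
    """Append-only log: events are never rewritten or deleted."""
    def __init__(self):
        self._events: list[MemoryEvent] = []

    def append(self, content: str, tenant_id: str) -> MemoryEvent:
        event = MemoryEvent(content, tenant_id)
        self._events.append(event)
        return event

    def audit_trail(self, tenant_id: str) -> list[MemoryEvent]:
        """Full history for a tenant, in write order."""
        return [e for e in self._events if e.tenant_id == tenant_id]

log = EventLog()
log.append("Q3 revenue: $2.4M", tenant_id="acme")
log.append("Q3 revenue restated: $2.5M", tenant_id="acme")
# Both versions survive: a correction is a new event, not a rewrite.
print([e.content for e in log.audit_trail("acme")])
```

Because corrections append rather than overwrite, the original fact is always recoverable.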

Knowledge Graph

Auto-built knowledge graph with entity extraction, causal chains, and contradiction detection. Async enrichment pipeline extracts atomic facts with resolved dates — the graph enriches retrieval without blocking writes.
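Contradiction detection on a triple store can be sketched as follows. This is a deliberately naive model (any differing value for the same subject/relation is flagged), not CortexDB's extraction or resolution logic:

```python
from collections import defaultdict

class KnowledgeGraph:
    """Toy triple store: (subject, relation, object) facts with
    naive contradiction detection on single-valued relations."""
    def __init__(self):
        # (subject, relation) -> set of stored objects
        self._facts = defaultdict(set)

    def add_fact(self, subject: str, relation: str, obj: str) -> set:
        """Store a fact; return previously stored objects that conflict."""
        key = (subject, relation)
        conflicts = self._facts[key] - {obj}
        self._facts[key].add(obj)
        return conflicts

kg = KnowledgeGraph()
kg.add_fact("acme", "headquartered_in", "Berlin")
conflicts = kg.add_fact("acme", "headquartered_in", "Munich")
print(conflicts)  # → {'Berlin'}: the conflict is flagged, not overwritten
```

Note that the conflicting value is surfaced rather than silently replaced, mirroring the event-sourced philosophy: the graph records both claims and lets retrieval decide.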

Cognitive Recall Pipeline

Six-phase retrieval pipeline: adaptive query planning, 4-channel hybrid search (BM25 + vector + entity + synonym diversity), neural cross-encoder reranking, statistical irrelevance detection, knowledge graph enrichment, and multi-signal adaptive scoring. Not a vector search wrapper — a ground-up cognitive memory engine.
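One common way to merge ranked lists from several search channels is reciprocal rank fusion; CortexDB's actual multi-signal scoring is not public, so treat this as an illustrative stand-in for the 4-channel merge step:

```python
def reciprocal_rank_fusion(channels: dict[str, list[str]], k: int = 60) -> list[str]:
    """Merge ranked result lists from several channels.
    Each doc scores sum(1 / (k + rank)) over channels that return it."""
    scores: dict[str, float] = {}
    for ranking in channels.values():
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] = scores.get(doc, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical memory IDs ranked by each channel:
channels = {
    "bm25":    ["m3", "m1", "m7"],
    "vector":  ["m1", "m3", "m9"],
    "entity":  ["m1", "m4"],
    "synonym": ["m9", "m3"],
}
print(reciprocal_rank_fusion(channels)[:3])  # → ['m1', 'm3', 'm9']
```

A document ranked moderately well by several channels (m1) beats one ranked first by a single channel, which is the point of hybrid retrieval: no single signal dominates.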

Crash Durable

Write-ahead logging and an embedded storage engine designed for zero data loss on crash. Once a write is acknowledged, it is durable.

Multi-Tenant

Full isolation per tenant with namespace support. Predictable performance at any scale with automatic resource management.
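Namespace isolation means every read and write is scoped to a tenant, so one tenant can never see another's data. A minimal sketch of the idea (naive substring matching stands in for real retrieval):

```python
from collections import defaultdict

class TenantStore:
    """Toy namespace isolation: every operation is scoped to a
    tenant_id, so tenants never see each other's data."""
    def __init__(self):
        self._namespaces = defaultdict(list)

    def remember(self, content: str, tenant_id: str) -> None:
        self._namespaces[tenant_id].append(content)

    def recall(self, query: str, tenant_id: str) -> list[str]:
        # Search is confined to one tenant's namespace.
        return [m for m in self._namespaces[tenant_id]
                if query.lower() in m.lower()]

store = TenantStore()
store.remember("Q3 revenue: $2.4M", tenant_id="acme")
store.remember("Q3 revenue: $9.9M", tenant_id="globex")
print(store.recall("revenue", tenant_id="acme"))  # → ['Q3 revenue: $2.4M']
```

The same pattern is visible in the SDK: `remember` and `recall` both take `tenant_id`, and a query for one tenant can only ever return that tenant's memories.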

16 Data Connectors

Slack, GitHub, Jira, Notion, Salesforce, and 11 more built-in. Ingest your team's knowledge automatically.

Architecture

Lossless vs. lossy

Other memory systems rewrite your data through an LLM before storing it. CortexDB preserves the original.

CortexDB

Lossless, event-sourced

Raw content stored as immutable events

Async enrichment extracts atomic facts + knowledge graph

4-channel retrieval + neural cross-encoder reranking

Adaptive query planner learns optimal strategy per query type

Others

Lossy, LLM-rewritten

Content rewritten by LLM before storage

LLM on critical write path — slow, unpredictable cost

Single-channel vector search (no reranking)

No irrelevance detection — always returns something
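The difference comes down to a gate on the score distribution: return nothing when no result stands out from the noise floor. The actual quad-signal gate is internal; this two-signal toy (an absolute floor plus a separation check) shows the shape of the idea, with made-up thresholds:

```python
import statistics

def gate(scores: list[float], min_z: float = 1.2, min_score: float = 0.3) -> list[int]:
    """Return indices of results worth surfacing; empty when
    nothing stands out from the noise floor."""
    if not scores or max(scores) < min_score:
        return []  # nothing clears the absolute relevance floor
    if len(scores) >= 3:
        mu = statistics.mean(scores)
        sigma = statistics.stdev(scores)
        if sigma > 0 and (max(scores) - mu) / sigma < min_z:
            return []  # top hit is indistinguishable from the rest
    return [i for i, s in enumerate(scores) if s >= min_score]

print(gate([0.12, 0.10, 0.11, 0.09]))  # off-topic query → []
print(gate([0.91, 0.15, 0.12, 0.10]))  # one clear match → [0]
```

A plain vector search would return the 0.12-scored result anyway, handing the LLM irrelevant context to hallucinate from; returning empty is the safer answer.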

Comparison

How CortexDB is different

Architectural differences that matter at scale.

Feature | CortexDB | Competition
Data preservation | Lossless (raw events) | Lossy (LLM-rewritten)
Write path | No LLM on write path | LLM required on every write
Retrieval | 4-channel hybrid + neural reranking | Vector-only
Irrelevance detection | Quad-signal gate (returns empty when nothing matches) | Always returns something (hallucination risk)
Knowledge graph | Auto-built with async enrichment | Separate add-on
Query understanding | Adaptive planner (6 query types, online learning) | One-size-fits-all
Crash durability | WAL + embedded storage (zero data loss) | Depends on vector DB
Benchmark (LoCoMo) | 76.4% | 66.9% (Mem0)
Data connectors | 16 built-in | 0
Cluster mode | Built-in Raft consensus | N/A
Event sourcing | Full audit trail | None
Multi-tenant | Namespace isolation | Limited
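"Adaptive planner with online learning" can be pictured as a per-query-type bandit: track how well each retrieval strategy performs and exploit the best one. A toy epsilon-greedy sketch with invented strategy names and feedback, not CortexDB's planner:

```python
import random
from collections import defaultdict

STRATEGIES = ["vector_heavy", "bm25_heavy", "balanced"]

class AdaptivePlanner:
    """Toy online learner: per query type, track each strategy's
    average reward and pick the current best (epsilon-greedy)."""
    def __init__(self, epsilon: float = 0.1, seed: int = 0):
        self.epsilon = epsilon
        self.rng = random.Random(seed)
        # (query_type, strategy) -> [total_reward, trials]
        self.stats = defaultdict(lambda: [0.0, 0])

    def choose(self, query_type: str) -> str:
        if self.rng.random() < self.epsilon:
            return self.rng.choice(STRATEGIES)  # explore occasionally
        def avg(s: str) -> float:
            total, n = self.stats[(query_type, s)]
            return total / n if n else 0.0
        return max(STRATEGIES, key=avg)         # exploit the best so far

    def update(self, query_type: str, strategy: str, reward: float) -> None:
        entry = self.stats[(query_type, strategy)]
        entry[0] += reward
        entry[1] += 1

planner = AdaptivePlanner(epsilon=0.0)  # pure exploitation for the demo
# Warm-up with pretend feedback: bm25 works best for temporal queries.
for s in STRATEGIES:
    for _ in range(5):
        planner.update("temporal", s, 1.0 if s == "bm25_heavy" else 0.2)
print(planner.choose("temporal"))  # → bm25_heavy
```

In production the planner would keep `epsilon > 0` so it continues to explore as query distributions drift; the demo pins it to zero to make the output deterministic.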

Benchmarks

Proven accuracy, not marketing claims

We evaluate against published academic benchmarks and share the results. No cherry-picked demos — real numbers on standardized tests.

LoCoMo Benchmark

Industry-standard long-term conversational memory benchmark. 10 conversations, 1,540 QA pairs across 4 categories.

76.4% · LLM Judge

Memori · 82.0%
Zep · 79.1%
LangMem · 78.1%
CortexDB · 76.4%
Mem0 · 66.9%

Binary CORRECT/WRONG judge (GPT-4o-mini). Categories 1–4. Same methodology as all published results.

Internal Precision Benchmark

69 queries across 107 memories. Tests domain filtering, irrelevance detection, causal reasoning, and temporal understanding.

94% pass rate

Domain Filtering & Irrelevance (28/28 queries): CortexDB 100% · Vector DB 0%

Preference Understanding (5/5 queries): CortexDB 100% · Vector DB 80%

Enterprise Incident Response (15/18 queries): CortexDB 83% · Vector DB 72%

Personal AI Assistant (17/18 queries): CortexDB 94% · Vector DB 83%

Overall F1 Score: CortexDB 87.4% · Vector DB 47.7%

Average recall latency

~737 ms · sub-second for most queries

Covers the full 6-phase pipeline, including embedding and neural reranking. Statistical filtering adds <100 ms.

Developer Experience

First-class SDKs for every stack

Get started in minutes with our Python, TypeScript, or REST API.

app.py
python
from cortexdb import Cortex

client = Cortex("https://api.cortexdb.ai", api_key="your-key")

# Store a memory
client.remember(
    content="Q3 revenue exceeded $2.4M, up 34% YoY",
    tenant_id="acme-corp",
)

# Retrieve with hybrid search
result = client.recall(
    query="What was Q3 revenue?",
    tenant_id="acme-corp",
)

print(result.context)
# => "Q3 revenue exceeded $2.4M, up 34% YoY"
MCP Server

Works with your favorite IDE

Install the CortexDB MCP server and give any AI-powered IDE persistent long-term memory. One command, every conversation remembers.

terminal
bash
# Install the MCP server
pip install cortexdb-mcp

# Add to Claude Code (one command)
claude mcp add cortexdb cortexdb-mcp \
  -e CORTEXDB_URL=https://api.cortexdb.ai \
  -e CORTEXDB_API_KEY=your_key_here

20 built-in tools

Store, search, forget, explore knowledge graphs, run deployment reviews — all from your IDE.

Zero local storage

The MCP server is a lightweight bridge. All data lives in CortexDB cloud — nothing stored on your machine.

Works everywhere MCP does

Any client that speaks Model Context Protocol gets instant access to CortexDB's full memory system.

View MCP setup guide
53+ Integrations

Connects to everything you use

Drop-in support for 19 agent frameworks, 16 data connectors, 6 orchestration tools, and more. Click any integration for docs.

Ready to give your AI agents perfect memory?

Get started in under 5 minutes. Free tier available — no credit card required.