# Letta Integration

Use CortexDB as a persistent storage backend for Letta (formerly MemGPT) agents. CortexDB replaces Letta's default storage with a persistent, semantically searchable memory system, enabling agents to maintain long-term archival memory across sessions with rich retrieval capabilities.
## Installation

```bash
pip install cortexdb[letta]
```
## Quick Start

```python
from cortexdb import Cortex
from cortexdb_letta import CortexDBStorageConnector, CortexDBArchivalStorage

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

# Use as a storage connector for passages
connector = CortexDBStorageConnector(
    client=client,
    tenant_id="my-app",
    table_name="passages",
)

# Use as archival (long-term) storage
archival = CortexDBArchivalStorage(
    client=client,
    tenant_id="my-app",
    agent_id="agent-001",
)
```
## As a Memory Backend
### Storage Connector

`CortexDBStorageConnector` provides a general-purpose storage interface for Letta's passage and record management:

```python
connector = CortexDBStorageConnector(
    client=client,
    tenant_id="my-app",
    table_name="passages",
)

# Insert records
connector.insert({"text": "The API rate limit is 100 requests per minute."})
connector.insert_many([
    {"text": "Deploy window: Tuesdays 2-4pm EST"},
    {"text": "Staging environment: staging.example.com"},
])

# Query with semantic search
results = connector.query("deployment schedule", limit=5)

# Retrieve all records
all_records = connector.get_all(limit=50, offset=0)

# Delete specific records
connector.delete("record-id-123")
```
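When ingesting a large document set, splitting the records into fixed-size batches keeps individual `insert_many` calls small. A minimal client-side sketch; the `chunked` helper and the batch size of 100 are illustrative, not part of the integration:

```python
from typing import Dict, Iterable, Iterator, List


def chunked(records: Iterable[Dict], size: int = 100) -> Iterator[List[Dict]]:
    """Yield successive fixed-size batches of records for insert_many calls."""
    batch: List[Dict] = []
    for record in records:
        batch.append(record)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:  # emit the final partial batch
        yield batch


# Hypothetical usage with a connector:
# for batch in chunked(documents, 100):
#     connector.insert_many(batch)
```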
### Archival Storage

`CortexDBArchivalStorage` provides Letta's long-term memory interface:

```python
archival = CortexDBArchivalStorage(
    client=client,
    tenant_id="my-app",
    agent_id="agent-001",
    top_k=10,
)

# Store long-term memories
archival.insert("User prefers Python over JavaScript.")
archival.insert(
    "Project deadline is March 30th.",
    metadata={"source": "user", "priority": "high"},
)

# Search archival memory
results = archival.search("programming language preferences")

# Retrieve all memories
all_memories = archival.get_all(limit=100)

# Clear all memories for this agent
archival.clear()
```
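Search results are typically folded back into the agent's prompt, so it helps to render them as a bounded context block. A minimal sketch, assuming each result is a dict with a `content` key (as in the tool examples in this guide); the `format_memories` helper and character budget are illustrative:

```python
from typing import Dict, List


def format_memories(results: List[Dict], max_chars: int = 2000) -> str:
    """Render retrieved memories as a bulleted context block, capped at a character budget."""
    lines: List[str] = []
    used = 0
    for r in results:
        content = r.get("content", "")
        if used + len(content) > max_chars:
            break  # stop once the budget would be exceeded
        lines.append(f"- {content}")
        used += len(content)
    return "\n".join(lines)


# Hypothetical usage:
# context = format_memories(archival.search("programming language preferences"))
```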
## As Agent Tools

Letta agents can use the storage connector directly for tool-based memory access:

```python
from cortexdb import Cortex
from cortexdb_letta import CortexDBArchivalStorage

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")
archival = CortexDBArchivalStorage(client=client, tenant_id="my-app", agent_id="agent-001")

# In your Letta agent's tool functions:
def archival_memory_insert(content: str) -> str:
    archival.insert(content)
    return "Memory archived successfully."

def archival_memory_search(query: str) -> str:
    results = archival.search(query)
    if not results:
        return "No archival memories found."
    return "\n".join(r.get("content", str(r)) for r in results)
```
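To exercise tool functions like these without a running CortexDB server or Letta agent, you can dispatch them by name through a simple registry, the way an agent runtime would route tool calls. This harness is purely illustrative: `StubArchival` is a local stand-in for `CortexDBArchivalStorage`, not part of the integration:

```python
from typing import Callable, Dict, List


class StubArchival:
    """In-memory stand-in for CortexDBArchivalStorage, for local testing only."""

    def __init__(self) -> None:
        self.memories: List[Dict[str, str]] = []

    def insert(self, content: str) -> None:
        self.memories.append({"content": content})

    def search(self, query: str) -> List[Dict[str, str]]:
        # Naive substring match in place of CortexDB's semantic search
        return [m for m in self.memories if query.lower() in m["content"].lower()]


archival = StubArchival()

def archival_memory_insert(content: str) -> str:
    archival.insert(content)
    return "Memory archived successfully."

def archival_memory_search(query: str) -> str:
    results = archival.search(query)
    if not results:
        return "No archival memories found."
    return "\n".join(r["content"] for r in results)

# Registry mapping tool names to callables, as an agent runtime would hold
TOOLS: Dict[str, Callable[[str], str]] = {
    "archival_memory_insert": archival_memory_insert,
    "archival_memory_search": archival_memory_search,
}

# Dispatch tool calls by name
print(TOOLS["archival_memory_insert"]("Deploys happen on Tuesdays."))  # Memory archived successfully.
print(TOOLS["archival_memory_search"]("deploys"))  # Deploys happen on Tuesdays.
```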
## Configuration

| Parameter | Default | Description |
|---|---|---|
| `base_url` | `http://localhost:3141` | CortexDB server URL |
| `api_key` | `None` | CortexDB API key |
| `tenant_id` | `"default"` | Tenant identifier |
| `table_name` | `"passages"` | Logical table name (`CortexDBStorageConnector`) |
| `agent_id` | `"default"` | Agent identifier (`CortexDBArchivalStorage`) |
| `top_k` | `10` | Results per search query |
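In deployments, these settings usually come from the environment rather than being hard-coded. A minimal sketch that resolves the parameters above from environment variables, falling back to the documented defaults; the `CORTEXDB_*` variable names and the `load_cortexdb_config` helper are illustrative, not defined by the integration:

```python
import os
from typing import Dict, Mapping, Optional


def load_cortexdb_config(env: Optional[Mapping[str, str]] = None) -> Dict[str, object]:
    """Resolve connection settings from environment variables, using the documented defaults."""
    env = os.environ if env is None else env
    return {
        "base_url": env.get("CORTEXDB_BASE_URL", "http://localhost:3141"),
        "api_key": env.get("CORTEXDB_API_KEY"),  # None if unset
        "tenant_id": env.get("CORTEXDB_TENANT_ID", "default"),
        "table_name": env.get("CORTEXDB_TABLE_NAME", "passages"),
        "agent_id": env.get("CORTEXDB_AGENT_ID", "default"),
        "top_k": int(env.get("CORTEXDB_TOP_K", "10")),
    }


# Hypothetical usage:
# cfg = load_cortexdb_config()
# client = Cortex(base_url=cfg["base_url"], api_key=cfg["api_key"])
```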
## Complete Example

```python
from cortexdb import Cortex
from cortexdb_letta import CortexDBStorageConnector, CortexDBArchivalStorage

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

# Set up storage for a Letta agent
passage_store = CortexDBStorageConnector(
    client=client,
    tenant_id="my-app",
    table_name="agent_passages",
)
archival = CortexDBArchivalStorage(
    client=client,
    tenant_id="my-app",
    agent_id="research-agent",
    top_k=10,
)

# Ingest documents into passage storage
documents = [
    {"text": "CortexDB supports multi-tenant isolation."},
    {"text": "The API uses gRPC for high-performance communication."},
    {"text": "Events are the source of truth in the system."},
]
passage_store.insert_many(documents)

# Agent archives important findings
archival.insert("The system processes 10,000 events per second at peak load.")
archival.insert("Data retention policy requires 90-day minimum storage.")

# Later, search for relevant context
results = archival.search("performance benchmarks")
for r in results:
    print(r.get("content", ""))

# Check storage size
print(f"Passages stored: {passage_store.size()}")
```