# Letta Integration

Use CortexDB as a persistent storage backend for Letta (formerly MemGPT) agents.

CortexDB integrates with Letta as a storage backend, replacing the default storage with CortexDB's persistent, semantically searchable memory system. This enables Letta agents to maintain long-term archival memory across sessions with rich retrieval capabilities.
## Installation

```bash
pip install "cortexdbai[letta]"
```

(The quotes keep shells like zsh from interpreting the brackets as a glob pattern.)
## Quick Start

```python
from cortexdb import Cortex
from cortexdb_letta import CortexDBStorageConnector, CortexDBArchivalStorage

client = Cortex(base_url="https://api.cortexdb.ai", api_key="your-cortex-api-key")

# Use as a storage connector for passages
connector = CortexDBStorageConnector(
    client=client,
    tenant_id="my-app",
    table_name="passages",
)

# Use as archival (long-term) storage
archival = CortexDBArchivalStorage(
    client=client,
    tenant_id="my-app",
    agent_id="agent-001",
)
```
## As Memory Backend

### Storage Connector
The `CortexDBStorageConnector` provides a general-purpose storage interface for Letta's passage and record management:

```python
connector = CortexDBStorageConnector(
    client=client,
    tenant_id="my-app",
    table_name="passages",
)

# Insert records
connector.insert({"text": "The API rate limit is 100 requests per minute."})
connector.insert_many([
    {"text": "Deploy window: Tuesdays 2-4pm EST"},
    {"text": "Staging environment: staging.example.com"},
])

# Query with semantic search
result = connector.query("deployment schedule")
print(result.context)

# Delete specific records
connector.delete("record-id-123")
```
### Archival Storage

The `CortexDBArchivalStorage` provides Letta's long-term memory interface:

```python
archival = CortexDBArchivalStorage(
    client=client,
    tenant_id="my-app",
    agent_id="agent-001",
)

# Store long-term memories
archival.insert("User prefers Python over JavaScript.")
archival.insert("Project deadline is March 30th.")

# Search archival memory
result = archival.search("programming language preferences")
print(result.context)

# Clear all memories for this agent
archival.clear()
```
## As Agent Tools

Letta agents can use the storage connector directly for tool-based memory access:

```python
from cortexdb import Cortex
from cortexdb_letta import CortexDBArchivalStorage

client = Cortex(base_url="https://api.cortexdb.ai", api_key="your-cortex-api-key")
archival = CortexDBArchivalStorage(client=client, tenant_id="my-app", agent_id="agent-001")

# In your Letta agent's tool functions:
def archival_memory_insert(content: str) -> str:
    result = archival.insert(content)
    return f"Memory archived (event_id={result.event_id})."

def archival_memory_search(query: str) -> str:
    result = archival.search(query)
    return result.context if result.context else "No archival memories found."
```
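One way to wire functions like these into an agent loop is a plain name-to-function registry. The sketch below is illustrative only: it substitutes an in-memory stub for a live `CortexDBArchivalStorage` (the `StubArchival` class, its keyword matching, and the `dispatch` helper are assumptions for the example, not part of the integration's API).

```python
from types import SimpleNamespace

class StubArchival:
    """In-memory stand-in for CortexDBArchivalStorage (illustrative only).

    Mimics the wrapper's shape: insert() returns an object with event_id,
    search() returns an object with context. Real semantic search is
    replaced by naive keyword matching.
    """
    def __init__(self):
        self._memories = []

    def insert(self, content: str):
        self._memories.append(content)
        return SimpleNamespace(event_id=f"evt-{len(self._memories):04d}")

    def search(self, query: str):
        words = query.lower().split()
        hits = [m for m in self._memories if any(w in m.lower() for w in words)]
        return SimpleNamespace(context="\n".join(hits))

archival = StubArchival()

def archival_memory_insert(content: str) -> str:
    result = archival.insert(content)
    return f"Memory archived (event_id={result.event_id})."

def archival_memory_search(query: str) -> str:
    result = archival.search(query)
    return result.context if result.context else "No archival memories found."

# Expose the functions to the agent under stable tool names.
TOOLS = {
    "archival_memory_insert": archival_memory_insert,
    "archival_memory_search": archival_memory_search,
}

def dispatch(tool_name: str, **kwargs) -> str:
    """Route a tool call from the agent to the matching function."""
    return TOOLS[tool_name](**kwargs)
```

In a real deployment you would register the two functions with Letta's own tool mechanism and keep the live `archival` object from the snippet above; the registry pattern stays the same.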
## Configuration

| Parameter | Default | Description |
|---|---|---|
| `base_url` | `https://api.cortexdb.ai` | CortexDB server URL |
| `api_key` | `None` | CortexDB API key |
| `tenant_id` | `"default"` | Tenant identifier |
| `table_name` | `"passages"` | Logical table name (`CortexDBStorageConnector` only) |
| `agent_id` | `"default"` | Agent identifier (`CortexDBArchivalStorage` only) |
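A common pattern is to keep the API key out of source code and fall back to the table's defaults. The environment variable names below (`CORTEXDB_BASE_URL`, `CORTEXDB_API_KEY`, `CORTEXDB_TENANT_ID`) are assumptions for this sketch; the library does not read them itself, so your application passes the resolved values through explicitly.

```python
import os

# Hypothetical environment variable names -- resolved by your
# application, then passed to Cortex(...) and the connectors.
config = {
    "base_url": os.environ.get("CORTEXDB_BASE_URL", "https://api.cortexdb.ai"),
    "api_key": os.environ.get("CORTEXDB_API_KEY"),  # defaults to None, per the table
    "tenant_id": os.environ.get("CORTEXDB_TENANT_ID", "default"),
}
```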
## Under the Hood

The integration wrapper maps to CortexDB's REST API:

```bash
# archival.insert("User prefers Python over JavaScript.")
curl -X POST https://api.cortexdb.ai/v1/remember \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "content": "User prefers Python over JavaScript.",
    "tenant_id": "my-app"
  }'
# Returns: { "event_id": "019d6359-d3cc-7671-9e4c-9151011fa016" }

# archival.search("programming language preferences")
curl -X POST https://api.cortexdb.ai/v1/recall \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "programming language preferences",
    "tenant_id": "my-app"
  }'
# Returns: { "context": "...", "confidence": 0.94, "latency_ms": 9 }
```
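To make the same mapping visible from Python, the sketch below assembles the URL, headers, and JSON body for each call without performing any network I/O. The `build_request` helper is an illustration of the request shape shown in the curl examples, not a function the library exports.

```python
import json

API_BASE = "https://api.cortexdb.ai"

def build_request(endpoint: str, payload: dict, api_key: str) -> dict:
    """Assemble URL, headers, and body for one REST call (no network I/O).

    Hypothetical helper mirroring the curl examples above.
    """
    return {
        "url": f"{API_BASE}/v1/{endpoint}",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

# archival.insert(...) -> POST /v1/remember
remember = build_request(
    "remember",
    {"content": "User prefers Python over JavaScript.", "tenant_id": "my-app"},
    "your-api-key",
)

# archival.search(...) -> POST /v1/recall
recall = build_request(
    "recall",
    {"query": "programming language preferences", "tenant_id": "my-app"},
    "your-api-key",
)
```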
## Complete Example

```python
from cortexdb import Cortex
from cortexdb_letta import CortexDBStorageConnector, CortexDBArchivalStorage

client = Cortex(base_url="https://api.cortexdb.ai", api_key="your-cortex-api-key")

# Set up storage for a Letta agent
passage_store = CortexDBStorageConnector(
    client=client,
    tenant_id="my-app",
    table_name="agent_passages",
)
archival = CortexDBArchivalStorage(
    client=client,
    tenant_id="my-app",
    agent_id="research-agent",
)

# Ingest documents into passage storage
documents = [
    {"text": "CortexDB supports multi-tenant isolation."},
    {"text": "The API uses gRPC for high-performance communication."},
    {"text": "Events are the source of truth in the system."},
]
passage_store.insert_many(documents)

# Agent archives important findings
archival.insert("The system processes 10,000 events per second at peak load.")
archival.insert("Data retention policy requires 90-day minimum storage.")

# Later, search for relevant context
result = archival.search("performance benchmarks")
print(result.context)

# Check storage size
print(f"Passages stored: {passage_store.size()}")
```