# LlamaIndex Integration

CortexDB integrates with LlamaIndex as a memory store, providing persistent hybrid-retrieval memory for agents and query engines.
## Installation

```bash
pip install "cortexdb[llamaindex]"
```
## As a Memory Store

```python
from llama_index.core.agent import ReActAgent
from llama_index.llms.openai import OpenAI

from cortexdb.integrations.llamaindex import CortexMemoryStore

memory = CortexMemoryStore(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
)

agent = ReActAgent.from_tools(
    tools,
    llm=OpenAI(model="gpt-4o"),
    memory=memory,
    verbose=True,
)

response = agent.chat("Remember that our API rate limit is 1000 req/s per tenant.")

# Later...
response = agent.chat("What is our API rate limit?")
```
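Memories can also be scoped more narrowly than a tenant with the `namespace` parameter from the configuration table. A hypothetical sketch, assuming one namespace per end user (`user_id` is a placeholder of ours, not part of the integration):

```python
from cortexdb.integrations.llamaindex import CortexMemoryStore

user_id = "alice"  # placeholder: your application's user handle

memory = CortexMemoryStore(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
    namespace=f"user-{user_id}",  # assumption: recall is scoped to this namespace
)
```

The namespace string format is up to you; the sketch assumes only that equal namespaces share memories and distinct ones do not.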
## As a Retriever

```python
from llama_index.core.query_engine import RetrieverQueryEngine

from cortexdb.integrations.llamaindex import CortexRetriever

retriever = CortexRetriever(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
    top_k=10,
)

query_engine = RetrieverQueryEngine.from_args(retriever)
response = query_engine.query("What architectural decisions have been made?")
```
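Assuming `CortexRetriever` implements LlamaIndex's standard `BaseRetriever` interface (which `RetrieverQueryEngine.from_args` requires), it can also be called directly to inspect what would be recalled for a query; a sketch under that assumption:

```python
from cortexdb.integrations.llamaindex import CortexRetriever

retriever = CortexRetriever(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
    top_k=5,
)

# Assumption: retrieve() follows the BaseRetriever convention and
# returns a list of NodeWithScore objects.
for node_with_score in retriever.retrieve("What architectural decisions have been made?"):
    print(node_with_score.score, node_with_score.text[:80])
```

This is useful for debugging recall quality before wiring the retriever into a query engine.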
## Configuration

| Parameter | Default | Description |
|---|---|---|
| `api_key` | `$CORTEX_API_KEY` | CortexDB API key |
| `tenant_id` | *(required)* | Tenant identifier |
| `namespace` | `None` | Memory namespace |
| `top_k` | `10` | Number of results returned per recall |
| `auto_remember` | `True` | Automatically store conversation turns |