Use CortexDB as a long-term memory provider for LangChain agents.

LangChain Integration

CortexDB integrates with LangChain as a memory backend, replacing in-process classes such as ConversationBufferMemory and ConversationSummaryMemory with persistent, hybrid-retrieval memory that survives process restarts.

Installation

pip install cortexdb[langchain]

Setup

from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_openai_functions_agent
from cortexdb.integrations.langchain import CortexMemory

# Initialize CortexDB memory
memory = CortexMemory(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
    namespace="chat-agent",
    top_k=10,
)

# Create agent with CortexDB memory
# (`tools` and `prompt` are assumed to be defined as usual for an
# OpenAI functions agent)
llm = ChatOpenAI(model="gpt-4o")
agent = create_openai_functions_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, memory=memory)

# Memory persists across sessions
response = executor.invoke({"input": "Remember that I prefer Python over JavaScript."})
# ... later, even in a new session ...
response = executor.invoke({"input": "What programming language do I prefer?"})
# "Based on your previous conversations, you prefer Python over JavaScript."
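In multi-user applications, a common pattern is one memory scope per user. A minimal sketch, assuming `namespace` accepts any string; the `chat-agent/<user_id>` layout and the `memory_kwargs` helper are illustrative conventions, not part of the CortexDB API:

```python
import os

def memory_kwargs(user_id: str) -> dict:
    """Build CortexMemory constructor arguments scoped to a single user."""
    return {
        "api_key": os.environ.get("CORTEX_API_KEY", ""),
        "tenant_id": "my-app",
        "namespace": f"chat-agent/{user_id}",  # one namespace per user
        "top_k": 10,
    }

# Each user then gets an isolated memory scope:
# memory = CortexMemory(**memory_kwargs("user-42"))
```

Because recall is filtered by namespace, one user's agent never surfaces another user's memories.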

As a Retriever

from cortexdb.integrations.langchain import CortexRetriever

retriever = CortexRetriever(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
    top_k=5,
)

# Use in a RetrievalQA chain
from langchain.chains import RetrievalQA

qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4o"),
    retriever=retriever,
)

answer = qa.invoke("What decisions did we make about the database?")
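RetrievalQA is LangChain's legacy chain interface; the same retriever also composes with LCEL-style chains. A sketch, assuming current langchain-core APIs; the prompt wording and `format_docs` helper are illustrative:

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
from cortexdb.integrations.langchain import CortexRetriever

retriever = CortexRetriever(
    api_key="your-cortex-api-key",
    tenant_id="my-app",
    top_k=5,
)

prompt = ChatPromptTemplate.from_template(
    "Answer using the recalled memories below.\n\n"
    "Memories:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    # Join recalled documents into a single context string
    return "\n\n".join(doc.page_content for doc in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | ChatOpenAI(model="gpt-4o")
    | StrOutputParser()
)

answer = chain.invoke("What decisions did we make about the database?")
```

The retriever slots in wherever LangChain expects a `BaseRetriever`, so it works with any chain or agent that consumes one.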

Configuration

| Parameter | Default | Description |
|---|---|---|
| api_key | $CORTEX_API_KEY | CortexDB API key |
| base_url | https://api.cortexdb.io | Server URL |
| tenant_id | Required | Tenant identifier |
| namespace | None | Namespace for memory scope |
| top_k | 10 | Results per recall |
| auto_remember | True | Auto-store conversation turns |
| episode_type | message | Type for auto-stored episodes |
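Putting the parameters together, a fully spelled-out constructor looks like the following config sketch; every value shown matches the documented default, so in practice you only pass what you change:

```python
from cortexdb.integrations.langchain import CortexMemory

memory = CortexMemory(
    # api_key omitted: falls back to the CORTEX_API_KEY environment variable
    base_url="https://api.cortexdb.io",  # default server URL
    tenant_id="my-app",                  # required
    namespace="chat-agent",              # scope for stored memories
    top_k=10,                            # results returned per recall
    auto_remember=True,                  # store each conversation turn automatically
    episode_type="message",              # type tag for auto-stored episodes
)
```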