# BeeAI Integration

Add long-term memory to IBM BeeAI Framework agents.

CortexDB provides persistent long-term memory for IBM's BeeAI Framework agents, enabling them to remember context across sessions and to semantically retrieve past interactions and stored knowledge.
## Installation

```bash
pip install "cortexdb[beeai]"
```

(The quotes keep shells such as zsh from interpreting the square brackets.)
## Quick Start

```python
from cortexdb import Cortex
from cortexdb_beeai import CortexDBMemory

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

memory = CortexDBMemory(
    client=client,
    tenant_id="my-app",
    namespace="bee-agent",
    top_k=10,
)

# Store a memory
memory.add("The deployment cadence is every two weeks on Tuesdays.")

# Search for relevant memories
results = memory.search("deployment schedule")
```
## As a Memory Backend

The `CortexDBMemory` class provides a full memory interface for BeeAI agents:
```python
memory = CortexDBMemory(
    client=client,
    tenant_id="my-app",
    namespace="support-agent",
)

# Store individual memories
memory.add("Customer prefers email communication.")
memory.add("Account tier: Enterprise", metadata={"source": "crm"})

# Save conversation turns automatically
memory.save(
    input_text="What is our refund policy?",
    output_text="Refunds are available within 30 days of purchase.",
)

# Recall context as formatted text (for prompt injection)
context = memory.recall("refund policy")

# Load memory variables (returns a dict with a "context" key)
variables = memory.load("What are the account details?")
print(variables["context"])

# Clear all memories
memory.clear()
```
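The text returned by `recall` is formatted for prompt injection. As a minimal sketch of that step (the `build_prompt` helper and template below are illustrative assumptions, not part of the integration), here is one way to splice recalled context into a prompt:

```python
# Illustrative only: build_prompt is a hypothetical helper, not part of
# cortexdb_beeai. The recalled string stands in for memory.recall(...).

def build_prompt(question: str, context: str) -> str:
    """Prepend retrieved memory context to the user question."""
    if not context:
        return question
    return f"Relevant context:\n{context}\n\nQuestion: {question}"

recalled = "Refunds are available within 30 days of purchase."
prompt = build_prompt("What is our refund policy?", recalled)
print(prompt)
```

When `recall` returns an empty string, the question passes through unchanged, so the template never injects an empty context block.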
## As Agent Tools

Use CortexDB tools to give BeeAI agents explicit memory operations:
```python
from cortexdb import Cortex
from cortexdb_beeai import CortexDBSearchTool, CortexDBStoreTool, CortexDBForgetTool

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

# Create tools
search_tool = CortexDBSearchTool(client=client, tenant_id="my-app", namespace="agent-kb")
store_tool = CortexDBStoreTool(client=client, tenant_id="my-app", namespace="agent-kb")
forget_tool = CortexDBForgetTool(client=client, tenant_id="my-app")

# Use tools directly
result = search_tool.run("previous deployment issues")
store_tool.run("Resolved DNS issue by updating the CNAME record.")
forget_tool.run("outdated server config", reason="Servers migrated to new infrastructure")

# Access tool metadata for agent registration
print(search_tool.name)          # "cortexdb_search"
print(search_tool.description)   # "Search CortexDB for relevant memories..."
print(search_tool.input_schema)  # JSON schema for tool inputs
```
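The `name`/`description`/`input_schema` metadata exists so an agent runtime can resolve a requested tool call by name. A minimal sketch of that dispatch step, using a stand-in stub (the `SimpleTool` class below is illustrative, not a CortexDB or BeeAI class; it only mirrors the `.name`/`.run()` surface documented above):

```python
from dataclasses import dataclass
from typing import Callable

# SimpleTool is an illustrative stand-in exposing the same .name / .run
# surface as the CortexDB tool classes documented above.
@dataclass
class SimpleTool:
    name: str
    run: Callable[[str], str]

# Register tools in a lookup table keyed by their metadata name.
registry = {
    tool.name: tool
    for tool in [
        SimpleTool("cortexdb_search", lambda q: f"results for: {q}"),
        SimpleTool("cortexdb_store", lambda text: f"stored: {text}"),
    ]
}

# An agent that decides to call "cortexdb_search" resolves it by name:
result = registry["cortexdb_search"].run("previous deployment issues")
print(result)  # results for: previous deployment issues
```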
## Configuration

| Parameter | Default | Description |
|---|---|---|
| `base_url` | `http://localhost:3141` | CortexDB server URL |
| `api_key` | `None` | CortexDB API key |
| `tenant_id` | `"default"` | Tenant identifier |
| `namespace` | `None` | Memory namespace |
| `top_k` | `5` | Results per search/recall |
| `limit` | `5` | Default result limit (tools) |
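A common deployment pattern (not part of the integration itself) is to resolve these values from environment variables so the API key stays out of source control. The variable names below are illustrative assumptions, with fallbacks matching the table's defaults:

```python
import os

# Illustrative environment variable names; fall back to documented defaults.
base_url = os.getenv("CORTEXDB_BASE_URL", "http://localhost:3141")
api_key = os.getenv("CORTEXDB_API_KEY")          # None if unset, per the table
tenant_id = os.getenv("CORTEXDB_TENANT_ID", "default")
top_k = int(os.getenv("CORTEXDB_TOP_K", "5"))
```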
## Complete Example

```python
from cortexdb import Cortex
from cortexdb_beeai import CortexDBMemory, CortexDBSearchTool, CortexDBStoreTool

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

# Set up shared memory for a BeeAI agent
memory = CortexDBMemory(
    client=client,
    tenant_id="engineering",
    namespace="incident-response",
    top_k=10,
)

# Pre-load knowledge
memory.add("Runbook: If CPU > 90% for 5 min, scale horizontally.")
memory.add("Runbook: If memory > 85%, check for memory leaks in Java services.")
memory.add("On-call rotation: Mon-Wed Alice, Thu-Fri Bob, Weekends Charlie.")

# Agent processes an incident
user_query = "CPU is spiking on the payment service"
context = memory.recall(user_query)

# Context now contains the relevant runbook entry
print(f"Retrieved context:\n{context}")

# Save the interaction for future reference
memory.save(
    input_text=user_query,
    output_text="Initiating horizontal scaling for payment service per runbook.",
)

# Also available as explicit tools for the agent
search = CortexDBSearchTool(client=client, tenant_id="engineering", namespace="incident-response")
store = CortexDBStoreTool(client=client, tenant_id="engineering", namespace="incident-response")

# Agent can use tools during reasoning
results = search.run("payment service incidents", limit=5)
store.run("Payment service scaled to 6 replicas. CPU normalized at 45%.")
```
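To trace the recall-then-save loop above without a running CortexDB server, here is the same flow against an in-memory stand-in that mimics the documented `add`/`recall`/`save` interface. The `StubMemory` class is purely illustrative; its naive keyword overlap stands in for CortexDB's semantic retrieval:

```python
class StubMemory:
    """Illustrative stand-in for CortexDBMemory; not the real client."""

    def __init__(self):
        self._entries = []

    def add(self, text):
        self._entries.append(text)

    def recall(self, query):
        # Naive keyword overlap, standing in for semantic retrieval.
        terms = set(query.lower().split())
        hits = [e for e in self._entries if terms & set(e.lower().split())]
        return "\n".join(hits)

    def save(self, input_text, output_text):
        self.add(f"Q: {input_text}\nA: {output_text}")

stub = StubMemory()
stub.add("Runbook: If CPU > 90% for 5 min, scale horizontally.")
stub.add("On-call rotation: Mon-Wed Alice, Thu-Fri Bob, Weekends Charlie.")

# Recall surfaces only the entry sharing terms with the query ("CPU").
context = stub.recall("CPU is spiking on the payment service")
print(context)

# Saving a turn makes it retrievable in later sessions.
stub.save("CPU is spiking on the payment service",
          "Initiating horizontal scaling per runbook.")
```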