# AG2 Integration

Add persistent memory to AG2 agents (successor to AutoGen).

CortexDB provides persistent long-term memory for AG2 agents, enabling multi-agent conversations that remember context across sessions. AG2 is the successor to Microsoft AutoGen, featuring an event-driven architecture with improved modularity.
## Installation

```bash
pip install cortexdb[ag2]
```
## Quick Start

```python
from ag2 import UserProxyAgent
from cortexdb import Cortex
from cortexdb_ag2 import CortexDBAgent

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

# Create a memory-augmented assistant
agent = CortexDBAgent(
    name="assistant",
    cortex_client=client,
    tenant_id="my-app",
    namespace="ag2-assistant",
    memory_top_k=10,
    llm_config={"model": "gpt-4o"},
    system_message="You are a helpful assistant with long-term memory.",
)

user = UserProxyAgent(
    name="user",
    human_input_mode="ALWAYS",
)

# Conversations are automatically stored and recalled
user.initiate_chat(agent, message="What do you remember about our project?")
```
## As Memory Backend

`CortexDBAgent` extends AG2's `ConversableAgent` with automatic memory:

- **Auto-recall**: before each response, relevant memories are retrieved and injected as context
- **Auto-store**: after each response, the conversation turn is persisted to CortexDB
```python
# Disable auto behaviors for manual control
agent = CortexDBAgent(
    name="assistant",
    cortex_client=client,
    tenant_id="my-app",
    auto_store=False,
    auto_recall=False,
    llm_config={"model": "gpt-4o"},
)
```
## As Agent Tools

Register CortexDB operations as callable tools that AG2 agents can invoke:

```python
from ag2 import ConversableAgent, UserProxyAgent
from cortexdb import Cortex
from cortexdb_ag2 import register_cortexdb_tools

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

assistant = ConversableAgent("assistant", llm_config={"model": "gpt-4o"})
user_proxy = UserProxyAgent("user_proxy")

# Register search, store, and forget tools
register_cortexdb_tools(assistant, user_proxy, client, tenant_id="my-app")

user_proxy.initiate_chat(
    assistant,
    message="Search your memory for what we discussed about deployment.",
)
```
You can also register tools individually. Each tool needs both registrations: one on the assistant so the LLM can call it, and one on the executor so the call actually runs:

```python
from cortexdb_ag2 import cortexdb_search_fn, cortexdb_store_fn

search = cortexdb_search_fn(client, tenant_id="my-app")
store = cortexdb_store_fn(client, tenant_id="my-app")

assistant.register_for_llm(description="Search long-term memory")(search)
user_proxy.register_for_execution()(search)

assistant.register_for_llm(description="Store a memory")(store)
user_proxy.register_for_execution()(store)
```
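The two-call pattern mirrors AG2's division of labor: `register_for_llm` on the assistant advertises the tool's schema to the model, while `register_for_execution` on the user proxy is what actually runs the function when the model emits a tool call. A toy sketch of that split (the classes below are an illustration of the pattern, not AG2's implementation):

```python
# Toy sketch of AG2's two-sided tool registration: one agent
# advertises tool schemas to the LLM, another executes the calls.

class CallerAgent:
    """Advertises tool descriptions so the LLM can emit tool calls."""

    def __init__(self):
        self.tool_schemas: dict[str, str] = {}

    def register_for_llm(self, description: str):
        def decorator(fn):
            self.tool_schemas[fn.__name__] = description
            return fn
        return decorator


class ExecutorAgent:
    """Holds the callables and runs them when a tool call arrives."""

    def __init__(self):
        self.tools: dict[str, object] = {}

    def register_for_execution(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

    def execute(self, name: str, **kwargs):
        # Dispatch a tool call emitted by the model to the registered callable
        return self.tools[name](**kwargs)
```

Splitting the two roles lets a non-LLM agent (the user proxy) sandbox tool execution while the LLM-backed agent only ever sees the schema.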
## Configuration

| Parameter | Default | Description |
|---|---|---|
| `base_url` | `http://localhost:3141` | CortexDB server URL |
| `api_key` | `None` | CortexDB API key |
| `tenant_id` | `"default"` | Tenant identifier |
| `namespace` | `None` | Optional memory namespace |
| `memory_top_k` | `5` | Number of results per recall |
| `auto_store` | `True` | Auto-store conversation turns |
| `auto_recall` | `True` | Auto-retrieve context before responding |
## Complete Example

```python
from ag2 import UserProxyAgent
from cortexdb import Cortex
from cortexdb_ag2 import CortexDBAgent, register_cortexdb_tools

client = Cortex(base_url="http://localhost:3141", api_key="your-cortex-api-key")

# Memory-augmented agent with tool access
agent = CortexDBAgent(
    name="knowledge_agent",
    cortex_client=client,
    tenant_id="engineering",
    namespace="project-alpha",
    memory_top_k=10,
    llm_config={"model": "gpt-4o"},
    system_message=(
        "You are an engineering assistant with long-term memory. "
        "You remember past conversations and decisions."
    ),
)

user = UserProxyAgent(name="engineer", human_input_mode="ALWAYS")

# Also register explicit tools for manual memory operations
register_cortexdb_tools(agent, user, client, tenant_id="engineering")

# Start conversation — memories persist across sessions
user.initiate_chat(
    agent,
    message="Remember: we decided to use PostgreSQL for the user service.",
)

# In a later session, the agent recalls this decision automatically
user.initiate_chat(
    agent,
    message="What database did we choose for the user service?",
)
```