Run CortexDB with Docker for local development or a straightforward self-hosted setup.
Docker Deployment
Docker is the fastest way to run CortexDB in your own environment.
Use this path when you want a practical self-hosted setup for development, internal evaluation, or a simpler production footprint.
Quick Start
```shell
docker run -d \
  --name cortexdb \
  -p 3141:3141 \
  -v cortexdb-data:/data \
  -e CORTEX_API_KEY=change-me \
  cortexdb/cortexdb:latest
```
After the container starts, verify that the service is available:
```shell
curl http://localhost:3141/v1/admin/health
```
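If you script this check (for example in CI or a provisioning tool), a small retry loop avoids racing the container's startup. This is a sketch: the endpoint path comes from the quick start above, and the only assumption is that it returns HTTP 2xx once the service is ready.

```shell
# Poll a health URL until it returns HTTP 2xx, or fail after N tries.
# Usage: wait_for_health URL [TRIES] [DELAY_SECONDS]
wait_for_health() {
  url="$1"; tries="${2:-30}"; delay="${3:-2}"
  i=1
  while [ "$i" -le "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  return 1
}
```

For the quick-start container above: `wait_for_health "http://localhost:3141/v1/admin/health" 30 2`.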
Docker Compose
For teams that want a reusable deployment file, start with a minimal Compose setup and add environment-specific configuration through your secret manager and deployment tooling.
```yaml
version: "3.8"
services:
  cortexdb:
    image: cortexdb/cortexdb:latest
    ports:
      - "3141:3141"
    environment:
      CORTEX_API_KEY: "${CORTEX_API_KEY}"
    volumes:
      - cortexdb-data:/data
    restart: unless-stopped
volumes:
  cortexdb-data:
```
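To start the stack, supply the API key through the environment (or an `.env` file kept out of version control) and bring Compose up; `docker compose config` is a convenient sanity check that the variable was interpolated. The generated key here is a placeholder — real deployments should pull the secret from a secret manager.

```shell
# Generate a throwaway API key and launch the Compose stack with it.
export CORTEX_API_KEY="$(openssl rand -hex 32)"

docker compose config   # verify the variable was interpolated
docker compose up -d
```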
What to configure next
Most self-hosted teams should review these areas after the container is running:
- authentication and secret handling
- persistent storage and backup policy
- networking and TLS termination
- AI provider credentials, if your deployment uses them
- monitoring, alerting, and log collection
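For the storage and backup item, the named volume from the quick start can be archived with a throwaway container; stopping CortexDB first gives a consistent snapshot. This is the generic Docker volume-backup pattern, not a CortexDB-specific tool.

```shell
# Archive the cortexdb-data volume to a dated tarball in the current directory.
docker stop cortexdb
docker run --rm \
  -v cortexdb-data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf "/backup/cortexdb-data-$(date +%F).tar.gz" -C /data .
docker start cortexdb
```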
Networking and security
For production use, place CortexDB behind a reverse proxy, ingress layer, or load balancer that matches your organization’s networking standards.
You should also:
- keep secrets out of checked-in Compose files
- restrict access to trusted applications and operators
- use durable volumes rather than ephemeral container storage
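One concrete way to apply the first two points: bind the container port to the loopback interface so only a local reverse proxy can reach it, and pass the API key from a root-readable env file rather than the command line. The file path is illustrative; the env file contains `CORTEX_API_KEY=...`.

```shell
# Expose CortexDB only on localhost; the reverse proxy terminates TLS
# and is the sole public entry point. /etc/cortexdb/cortexdb.env is an
# illustrative path and should be readable by root only.
docker run -d \
  --name cortexdb \
  -p 127.0.0.1:3141:3141 \
  --env-file /etc/cortexdb/cortexdb.env \
  -v cortexdb-data:/data \
  cortexdb/cortexdb:latest
```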
Upgrades
When you upgrade a Docker-based deployment, validate the new image in a lower environment first, then roll it out with the same operational controls you use for other stateful services.
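With Compose, a typical rollout after validation in a lower environment looks like the sketch below; the named volume persists across container replacement, so data survives the swap. Pinning a specific image tag instead of `latest` keeps the rollback target explicit.

```shell
# Pull the validated image and replace the running container in place.
docker compose pull cortexdb
docker compose up -d cortexdb

# To roll back, restore the previous image tag in the Compose file
# and run `docker compose up -d` again.
```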