High-level guidance for configuring CortexDB in self-hosted environments.

Configuration

CortexDB can be configured for local development, self-hosted deployment, and larger production environments.

This page is intentionally high-level and focuses on the major areas teams should plan for when deploying CortexDB.

Core areas to configure

Networking

Teams typically configure:

  • how CortexDB is exposed to applications
  • how requests reach the service
  • what should remain private inside the deployment environment
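One way to validate these boundaries is a small reachability probe: confirm that the public endpoint is exposed to application hosts while internal ports stay private. This is a hedged sketch, not part of CortexDB itself; the hosts and ports you would pass in are placeholders for your own deployment.

```python
# Illustrative network-boundary probe. Host names and ports are
# deployment-specific placeholders, not CortexDB defaults.
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Expected posture (example): the API endpoint reachable from app hosts,
# the internal storage port unreachable from outside the environment.
# is_reachable("cortexdb.example.internal", 8080)
```

Running the same probe from inside and outside the deployment environment makes the "public vs. private" split concrete and easy to re-check after changes.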

Authentication and access

For production, define how applications authenticate to CortexDB and how access is governed across tenants, teams, or environments.
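The tenant-and-role shape of such a policy can be sketched as follows. This is an assumption-heavy illustration of the *decision* (who may read or write within a tenant), not CortexDB's actual authorization model; the role names are invented for the example.

```python
# Hypothetical per-tenant access check; role names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Principal:
    tenant: str
    roles: frozenset

def can_access(principal: Principal, tenant: str, action: str) -> bool:
    """Allow reads for any member of the tenant; writes need 'writer'."""
    if principal.tenant != tenant:
        return False
    if action == "read":
        return True
    return "writer" in principal.roles
```

However access is enforced in practice, writing the rules down this explicitly makes it easier to review them per environment before rollout.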

AI providers

If your deployment uses external AI providers, plan for:

  • model and provider selection
  • credentials and secret management
  • operational limits and cost controls
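A minimal sketch of the last two bullets, assuming credentials live in environment variables and spend is tracked per provider; the class, variable names, and provider name are hypothetical, not a CortexDB API:

```python
# Illustrative provider config: secrets from the environment, plus a
# simple monthly budget guard. Names here are placeholders.
import os

class BudgetExceeded(RuntimeError):
    pass

class ProviderConfig:
    def __init__(self, name: str, key_var: str, monthly_budget_usd: float):
        self.name = name
        # Read the credential from the environment; never hard-code secrets.
        self.api_key = os.environ.get(key_var)
        self.monthly_budget_usd = monthly_budget_usd
        self.spent_usd = 0.0

    def record_spend(self, usd: float) -> None:
        """Reject any spend that would exceed the monthly budget."""
        if self.spent_usd + usd > self.monthly_budget_usd:
            raise BudgetExceeded(f"{self.name}: monthly budget exhausted")
        self.spent_usd += usd
```

Whatever mechanism you use, the point is the same: credentials come from a secret store rather than config files, and cost limits fail closed.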

Storage and persistence

Self-hosted teams should decide where CortexDB data lives, how it is backed up, and what operational standards apply to durability and retention.
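Retention is one of those standards worth stating precisely. A hedged sketch of a simple policy, keeping daily backups for a fixed window; CortexDB may offer its own backup tooling, and this only illustrates the decision:

```python
# Illustrative retention policy: keep backups newer than retain_days.
from datetime import date, timedelta

def backups_to_delete(backups, today, retain_days=30):
    """Return backup dates strictly older than the retention window."""
    cutoff = today - timedelta(days=retain_days)
    return sorted(d for d in backups if d < cutoff)
```

Agreeing on the window (and testing restores, not just backups) before production rollout avoids discovering gaps during an incident.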

Observability

Production setups should include logging, monitoring, and alerting appropriate to your environment.
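At minimum that means a consistent log format and level from day one. A small sketch using Python's standard logging module; the logger name is illustrative, and real deployments would forward these logs into their existing monitoring stack:

```python
# Minimal logging setup sketch; logger name is a placeholder.
import logging

logger = logging.getLogger("cortexdb.deploy")

def configure_logging(level=logging.INFO):
    """Attach a stream handler with a timestamped, parseable format."""
    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(name)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(level)
```

Alerting thresholds and dashboards are environment-specific, but a uniform log shape makes them far easier to build later.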

Deployment topology

You may begin with a single-node setup and later move to a multi-node self-hosted deployment as workload, governance, and operational requirements grow.

Recommended approach

For most teams, the easiest path is:

  1. start with the Docker deployment guide
  2. validate networking, auth, and provider settings
  3. review production considerations before wider rollout
  4. move to a broader cluster model only when needed
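The validation in step 2 can be run as an ordered preflight, where each named check is one of your own functions (network probe, auth check, provider credential check, and so on). The runner below is a generic sketch; the check names are hypothetical placeholders:

```python
# Generic preflight runner: run named checks in order, collect failures.
def run_preflight(checks):
    """checks is a list of (name, zero-arg callable) pairs.
    Return the names of checks that returned falsy or raised."""
    failures = []
    for name, check in checks:
        try:
            if not check():
                failures.append(name)
        except Exception:
            failures.append(name)
    return failures

# Example wiring (placeholders):
# run_preflight([("networking", check_network), ("auth", check_auth)])
```

An empty failure list is a reasonable gate before moving from step 2 to step 3.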

What this page intentionally does not cover

This page does not list every internal tuning knob or low-level engine setting. For public documentation, the priority is helping teams understand the major deployment decisions rather than exposing implementation-specific internals.

Next steps