# Docker Deployment Guide

> Deploy the LLM Gateway using pre-built Docker images or build your own.

## Table of Contents

- [Quick Start](#quick-start)
- [Using Pre-Built Images](#using-pre-built-images)
- [Configuration](#configuration)
- [Docker Compose](#docker-compose)
- [Building from Source](#building-from-source)
- [Production Considerations](#production-considerations)
- [Troubleshooting](#troubleshooting)

## Quick Start

Pull and run the latest image:

```bash
docker run -d \
  --name llm-gateway \
  -p 8080:8080 \
  -e OPENAI_API_KEY="sk-your-key" \
  -e ANTHROPIC_API_KEY="sk-ant-your-key" \
  -e GOOGLE_API_KEY="your-key" \
  ghcr.io/yourusername/llm-gateway:latest

# Verify it's running
curl http://localhost:8080/health
```

## Using Pre-Built Images

Images are automatically built and published via GitHub Actions on every release.

### Available Tags

- `latest` - Latest stable release
- `v1.2.3` - Specific version tags
- `main` - Latest commit on main branch (unstable)
- `sha-abc1234` - Specific commit SHA

### Pull from Registry

```bash
# Pull latest stable
docker pull ghcr.io/yourusername/llm-gateway:latest

# Pull specific version
docker pull ghcr.io/yourusername/llm-gateway:v1.2.3

# List local images
docker images | grep llm-gateway
```

### Basic Usage

```bash
docker run -d \
  --name llm-gateway \
  -p 8080:8080 \
  --env-file .env \
  ghcr.io/yourusername/llm-gateway:latest
```

## Configuration

### Environment Variables

Create a `.env` file with your API keys:

```bash
# Required: At least one provider
OPENAI_API_KEY=sk-your-openai-key
ANTHROPIC_API_KEY=sk-ant-your-anthropic-key
GOOGLE_API_KEY=your-google-key

# Optional: Server settings
SERVER_ADDRESS=:8080
LOGGING_LEVEL=info
LOGGING_FORMAT=json

# Optional: Features
ADMIN_ENABLED=true
RATE_LIMIT_ENABLED=true
RATE_LIMIT_REQUESTS_PER_SECOND=10
RATE_LIMIT_BURST=20

# Optional: Auth
AUTH_ENABLED=false
AUTH_ISSUER=https://accounts.google.com
AUTH_AUDIENCE=your-client-id.apps.googleusercontent.com

# Optional: Observability
OBSERVABILITY_ENABLED=false
OBSERVABILITY_METRICS_ENABLED=false
OBSERVABILITY_TRACING_ENABLED=false
```

Run with the environment file:

```bash
docker run -d \
  --name llm-gateway \
  -p 8080:8080 \
  --env-file .env \
  ghcr.io/yourusername/llm-gateway:latest
```

### Using Config File

For more complex configurations, use a YAML config file:

```bash
# Create config from example
cp config.example.yaml config.yaml

# Edit config.yaml with your settings

# Mount config file into container
docker run -d \
  --name llm-gateway \
  -p 8080:8080 \
  -v $(pwd)/config.yaml:/app/config.yaml:ro \
  ghcr.io/yourusername/llm-gateway:latest \
  --config /app/config.yaml
```

### Persistent Storage

For persistent conversation storage with SQLite:

```bash
docker run -d \
  --name llm-gateway \
  -p 8080:8080 \
  -v llm-gateway-data:/app/data \
  -e OPENAI_API_KEY="your-key" \
  -e CONVERSATIONS_STORE=sql \
  -e CONVERSATIONS_DRIVER=sqlite3 \
  -e CONVERSATIONS_DSN=/app/data/conversations.db \
  ghcr.io/yourusername/llm-gateway:latest
```

## Docker Compose

The project includes a production-ready `docker-compose.yaml` file.

### Basic Setup

```bash
# Create .env file with API keys
cat > .env <
```
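The repository's `docker-compose.yaml` is not reproduced here, but a minimal sketch of what such a file typically contains, assuming the image name, port, data volume, and `/health` endpoint shown above (the service and volume names are illustrative, and the healthcheck assumes `curl` is available inside the image):

```yaml
services:
  llm-gateway:
    image: ghcr.io/yourusername/llm-gateway:latest
    ports:
      - "8080:8080"
    # Provider keys and optional settings, as in the Configuration section
    env_file: .env
    # Named volume for SQLite conversation storage
    volumes:
      - llm-gateway-data:/app/data
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3

volumes:
  llm-gateway-data:
```

With a file like this in place, `docker compose up -d` starts the gateway and `docker compose ps` shows its health status.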
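Since the gateway requires at least one provider key, it can be worth verifying the `.env` file before starting the container. The snippet below is a small hypothetical helper, not part of the project; `has_provider_key` is an illustrative name, and the check simply mirrors the "at least one provider" requirement from the Configuration section:

```shell
#!/usr/bin/env bash
# has_provider_key: succeed if the given env file sets at least one
# provider API key (hypothetical helper, not shipped with the project).
has_provider_key() {
  grep -Eq '^(OPENAI_API_KEY|ANTHROPIC_API_KEY|GOOGLE_API_KEY)=.+' "$1" 2>/dev/null
}

# Example: only report success when a local .env defines a provider key.
if [ -f .env ] && has_provider_key .env; then
  echo "ok: provider key found in .env"
fi
```

Run it before `docker run --env-file .env` (or `docker compose up`) to fail fast on an empty or misnamed key instead of debugging a container that starts with no providers configured.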