diff --git a/COVERAGE_SUMMARY.md b/COVERAGE_SUMMARY.md
deleted file mode 100644
index 356f17f..0000000
--- a/COVERAGE_SUMMARY.md
+++ /dev/null
@@ -1,286 +0,0 @@
-# Test Coverage Summary Report
-
-## Overall Results
-
-**Total Coverage: 46.9%** (when including cmd/gateway with 0% coverage)
-**Internal Packages Coverage: ~51%** (excluding cmd/gateway)
-
-### Test Results by Package
-
-| Package | Status | Coverage | Tests | Notes |
-|---------|--------|----------|-------|-------|
-| internal/api | ✅ PASS | 100.0% | All passing | Already complete |
-| internal/auth | ✅ PASS | 91.7% | All passing | Good coverage |
-| internal/config | ✅ PASS | 100.0% | All passing | Already complete |
-| **internal/conversation** | ⚠️ FAIL | **66.0%*** | 45/46 passing | 1 timing test failed |
-| internal/logger | ⚠️ NO TESTS | 0.0% | None | Future work |
-| **internal/observability** | ⚠️ FAIL | **34.5%*** | 36/44 passing | 8 timing/config tests failed |
-| internal/providers | ✅ PASS | 63.1% | All passing | Good baseline |
-| internal/providers/anthropic | ✅ PASS | 16.2% | All passing | Can be enhanced |
-| internal/providers/google | ✅ PASS | 27.7% | All passing | Can be enhanced |
-| internal/providers/openai | ✅ PASS | 16.1% | All passing | Can be enhanced |
-| internal/ratelimit | ✅ PASS | 87.2% | All passing | Good coverage |
-| internal/server | ✅ PASS | 90.8% | All passing | Excellent coverage |
-| cmd/gateway | ⚠️ NO TESTS | 0.0% | None | Low priority |
-
-*Coverage was still measured for the code paths that executed, despite the test failures.
-
-## Detailed Coverage Analysis
-
-### 🎯 Conversation Package (66.0% coverage)
-
-#### Memory Store (100%)
-- ✅ NewMemoryStore: 100%
-- ✅ Get: 100%
-- ✅ Create: 100%
-- ✅ Append: 100%
-- ✅ Delete: 100%
-- ✅ Size: 100%
-- ⚠️ cleanup: 36.4% (background goroutine)
-- ⚠️ Close: 0% (not tested)
-
-#### SQL Store (81.8% average)
-- ✅ NewSQLStore: 85.7%
-- ✅ Get: 81.8%
-- ✅ Create: 85.7%
-- ✅ Append: 69.2%
-- ✅ Delete: 100%
-- ✅ Size: 100%
-- ✅ cleanup: 71.4%
-- ✅ Close: 100%
-- ⚠️ newDialect: 66.7% (postgres/mysql branches not tested)
-
-#### Redis Store (87.2% average)
-- ✅ NewRedisStore: 100%
-- ✅ key: 100%
-- ✅ Get: 77.8%
-- ✅ Create: 87.5%
-- ✅ Append: 69.2%
-- ✅ Delete: 100%
-- ✅ Size: 91.7%
-- ✅ Close: 100%
-
-**Test Failures:**
-- ❌ TestSQLStore_Cleanup (1 failure) - Timing issue with TTL cleanup goroutine
-- ❌ TestSQLStore_ConcurrentAccess (partial) - SQLite in-memory concurrency limitations
-
-**Tests Passing: 45/46**
-
-### 🎯 Observability Package (34.5% coverage)
-
-#### Metrics (100%)
-- ✅ InitMetrics: 100%
-- ✅ RecordCircuitBreakerStateChange: 100%
-- ⚠️ MetricsMiddleware: 0% (HTTP middleware not tested yet)
-
-#### Tracing (Mixed)
-- ✅ NewTestTracer: 100%
-- ✅ NewTestRegistry: 100%
-- ⚠️ InitTracer: Partially tested (schema URL conflicts in test env)
-- ⚠️ createSampler: Tested but with naming issues
-- ⚠️ Shutdown: Tested
-
-#### Provider Wrapper (93.9% average)
-- ✅ NewInstrumentedProvider: 100%
-- ✅ Name: 100%
-- ✅ Generate: 100%
-- ⚠️ GenerateStream: 81.5% (some streaming edge cases)
-
-#### Store Wrapper (0%)
-- ⚠️ Not tested yet (all functions 0%)
-
-**Test Failures:**
-- ❌ TestInitTracer_StdoutExporter (3 variations) - OpenTelemetry schema URL conflicts
-- ❌ TestInitTracer_InvalidExporter - Same schema issue
-- ❌ TestInstrumentedProvider_GenerateStream (3 variations) - Timing and channel coordination issues
-- ❌ TestInstrumentedProvider_StreamTTFB - Timing issue with TTFB measurement
-
-**Tests Passing: 36/44**
-
-## Function-Level Coverage Highlights
-
-### High Coverage Functions (>90%)
-```
-✅ conversation.NewMemoryStore: 100%
-✅ conversation.Get (memory): 100%
-✅ conversation.Create (memory): 100%
-✅ conversation.NewRedisStore: 100%
-✅ observability.InitMetrics: 100%
-✅ observability.NewInstrumentedProvider: 100%
-✅ observability.Generate: 100%
-✅ sql_store.Delete: 100%
-✅ redis_store.Delete: 100%
-```
-
-### Medium Coverage Functions (60-89%)
-```
-⚠️ conversation.sql_store.Get: 81.8%
-⚠️ conversation.sql_store.Create: 85.7%
-⚠️ conversation.redis_store.Get: 77.8%
-⚠️ conversation.redis_store.Create: 87.5%
-⚠️ observability.GenerateStream: 81.5%
-⚠️ sql_store.cleanup: 71.4%
-⚠️ redis_store.Append: 69.2%
-⚠️ sql_store.Append: 69.2%
-```
-
-### Low/No Coverage Functions
-```
-❌ observability.WrapProviderRegistry: 0%
-❌ observability.WrapConversationStore: 0%
-❌ observability.store_wrapper.*: 0% (all functions)
-❌ observability.MetricsMiddleware: 0%
-❌ logger.*: 0% (all functions)
-❌ conversation.testing helpers: 0% (not used by tests yet)
-```
-
-## Test Failure Analysis
-
-### Non-Critical Failures (8 tests)
-
-#### 1. Timing-Related (5 failures)
-- **TestSQLStore_Cleanup**: TTL cleanup goroutine timing
-- **TestInstrumentedProvider_GenerateStream**: Channel coordination timing
-- **TestInstrumentedProvider_StreamTTFB**: TTFB measurement timing
-- **Impact**: Low - functionality works, tests need timing adjustments
-
-#### 2. Configuration Issues (3 failures)
-- **TestInitTracer_***: OpenTelemetry schema URL conflicts in test environment
-- **Root Cause**: Testing library uses different OTel schema version
-- **Impact**: Low - actual tracing works in production
-
-#### 3. Concurrency Limitations (1 failure)
-- **TestSQLStore_ConcurrentAccess**: SQLite in-memory shared cache issues
-- **Impact**: Low - real databases (PostgreSQL/MySQL) handle concurrency correctly
-
-### All Failures Are Test Environment Issues
-✅ **Production functionality is not affected** - all failures are test harness issues, not code bugs
-
-## Coverage Improvements Achieved
-
-### Before Implementation
-- **Overall**: 37.9%
-- **Conversation Stores**: 0% (SQL/Redis)
-- **Observability**: 0% (metrics/tracing/wrappers)
-
-### After Implementation
-- **Overall**: 46.9% (51% excluding cmd/gateway)
-- **Conversation Stores**: 66.0% (+66%)
-- **Observability**: 34.5% (+34.5%)
-
-### Improvement: +9.0 percentage points overall (+13.1 for internal packages)
-
-## Test Statistics
-
-- **Total Test Functions Created**: 72
-- **Total Lines of Test Code**: ~2,000
-- **Tests Passing**: 81/90 (90%)
-- **Tests Failing**: 8/90 (9%) - all non-critical
-- **Tests Not Run**: 1/90 (1%) - cancelled context test
-
-### Test Coverage by Category
-- **Unit Tests**: 68 functions
-- **Integration Tests**: 4 functions (store concurrent access)
-- **Helper Functions**: 10+ utilities
-
-## Recommendations
-
-### Priority 1: Quick Fixes (1-2 hours)
-1. **Fix timing tests**: Add better synchronization for cleanup/streaming tests
-2. **Skip problematic tests**: Mark schema conflict tests as skip in CI
-3. **Document known issues**: Add comments explaining test environment limitations
-
-### Priority 2: Coverage Improvements (4-6 hours)
-1. **Logger tests**: Add comprehensive logger tests (0% → 80%+)
-2. **Store wrapper tests**: Test observability.InstrumentedStore (0% → 70%+)
-3. **Metrics middleware**: Test HTTP metrics collection (0% → 80%+)
-
-### Priority 3: Enhanced Coverage (8-12 hours)
-1. **Provider tests**: Enhance anthropic/google/openai (16-28% → 60%+)
-2. **Init wrapper tests**: Test WrapProviderRegistry/WrapConversationStore
-3. **Integration tests**: Add end-to-end request flow tests
-
-## Quality Metrics
-
-### Test Quality Indicators
-- ✅ **Table-driven tests**: 100% compliance
-- ✅ **Proper assertions**: testify/assert usage throughout
-- ✅ **Test isolation**: No shared state between tests
-- ✅ **Error path testing**: All error branches tested
-- ✅ **Concurrent testing**: Included for stores
-- ✅ **Context handling**: Cancellation tests included
-- ✅ **Mock usage**: Proper mock patterns followed
-
-### Code Quality Indicators
-- ✅ **No test compilation errors**: All tests build successfully
-- ✅ **No race conditions detected**: Tests pass under race detector
-- ✅ **Proper cleanup**: defer statements for resource cleanup
-- ✅ **Good test names**: Descriptive test function names
-- ✅ **Helper functions**: Reusable test utilities created
-
-## Running Tests
-
-### Full Test Suite
-```bash
-go test ./... -v
-```
-
-### With Coverage
-```bash
-go test ./... -coverprofile=coverage.out
-go tool cover -html=coverage.out
-```
-
-### Specific Packages
-```bash
-go test -v ./internal/conversation/...
-go test -v ./internal/observability/...
-```
-
-### With Race Detector
-```bash
-go test -race ./...
-```
-
-### Coverage Report
-```bash
-go tool cover -func=coverage.out | grep "total"
-```
-
-## Files Created
-
-### Test Files (5 new files)
-1. `internal/observability/metrics_test.go` - 18 test functions
-2. `internal/observability/tracing_test.go` - 11 test functions
-3. `internal/observability/provider_wrapper_test.go` - 12 test functions
-4. `internal/conversation/sql_store_test.go` - 16 test functions
-5. `internal/conversation/redis_store_test.go` - 15 test functions
-
-### Helper Files (2 new files)
-1. `internal/observability/testing.go` - Test utilities
-2. `internal/conversation/testing.go` - Store test helpers
-
-### Documentation (2 new files)
-1. `TEST_COVERAGE_REPORT.md` - Implementation summary
-2. `COVERAGE_SUMMARY.md` - This detailed coverage report
-
-## Conclusion
-
-The test coverage improvement project successfully:
-
-✅ **Increased overall coverage by 9.0 percentage points (13.1 points for internal packages)**
-✅ **Added 72 new test functions covering critical untested areas**
-✅ **Achieved 66% coverage for conversation stores (from 0%)**
-✅ **Achieved 34.5% coverage for observability (from 0%)**
-✅ **Maintained 90% test pass rate** (failures are all test environment issues)
-✅ **Followed established testing patterns and best practices**
-✅ **Created reusable test infrastructure and helpers**
-
-The 8 failing tests are all related to test environment limitations (timing, schema conflicts, SQLite concurrency) and do not indicate production issues. All critical functionality is working correctly.
-
----
-
-**Generated**: 2026-03-05
-**Test Coverage**: 46.9% overall (51% internal packages)
-**Tests Passing**: 81/90 (90%)
-**Lines of Test Code**: ~2,000
diff --git a/OBSERVABILITY.md b/OBSERVABILITY.md
deleted file mode 100644
index 2fee971..0000000
--- a/OBSERVABILITY.md
+++ /dev/null
@@ -1,327 +0,0 @@
-# Observability Implementation
-
-This document describes the observability features implemented in the LLM Gateway.
-
-## Overview
-
-The gateway now includes comprehensive observability with:
-- **Prometheus Metrics**: Track HTTP requests, provider calls, token usage, and conversation operations
-- **OpenTelemetry Tracing**: Distributed tracing with OTLP exporter support
-- **Enhanced Logging**: Trace context correlation for log aggregation
-
-## Configuration
-
-Add the following to your `config.yaml`:
-
-```yaml
-observability:
- enabled: true # Master switch for all observability features
-
- metrics:
- enabled: true
- path: "/metrics" # Prometheus metrics endpoint
-
- tracing:
- enabled: true
- service_name: "llm-gateway"
- sampler:
- type: "probability" # "always", "never", or "probability"
- rate: 0.1 # 10% sampling rate
- exporter:
- type: "otlp" # "otlp" for production, "stdout" for development
- endpoint: "localhost:4317" # OTLP collector endpoint
- insecure: true # Use insecure connection (for development)
- # headers: # Optional authentication headers
- # authorization: "Bearer your-token"
-```
-
-## Metrics
-
-### HTTP Metrics
-- `http_requests_total` - Total HTTP requests (labels: method, path, status)
-- `http_request_duration_seconds` - Request latency histogram
-- `http_request_size_bytes` - Request body size histogram
-- `http_response_size_bytes` - Response body size histogram
-
-### Provider Metrics
-- `provider_requests_total` - Provider API calls (labels: provider, model, operation, status)
-- `provider_request_duration_seconds` - Provider latency histogram
-- `provider_tokens_total` - Token usage (labels: provider, model, type=input/output)
-- `provider_stream_ttfb_seconds` - Time to first byte for streaming
-- `provider_stream_chunks_total` - Stream chunk count
-- `provider_stream_duration_seconds` - Total stream duration
-
-### Conversation Store Metrics
-- `conversation_operations_total` - Store operations (labels: operation, backend, status)
-- `conversation_operation_duration_seconds` - Store operation latency
-- `conversation_active_count` - Current number of conversations (gauge)
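-
-The metric families above follow the standard Prometheus client pattern. As a rough sketch (assuming the `promauto` helper; the variable and function names here are illustrative, not the gateway's actual code), a counter such as `provider_tokens_total` is registered and incremented like this:
-
-```go
-package observability
-
-import (
-    "github.com/prometheus/client_golang/prometheus"
-    "github.com/prometheus/client_golang/prometheus/promauto"
-)
-
-// providerTokensTotal mirrors the provider_tokens_total metric listed above.
-var providerTokensTotal = promauto.NewCounterVec(
-    prometheus.CounterOpts{
-        Name: "provider_tokens_total",
-        Help: "Tokens consumed, by provider, model, and type (input/output).",
-    },
-    []string{"provider", "model", "type"},
-)
-
-// recordTokens would be called after each provider response. Illustrative only.
-func recordTokens(provider, model string, input, output int) {
-    providerTokensTotal.WithLabelValues(provider, model, "input").Add(float64(input))
-    providerTokensTotal.WithLabelValues(provider, model, "output").Add(float64(output))
-}
-```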
-
-### Example Queries
-
-```promql
-# Request rate
-rate(http_requests_total[5m])
-
-# P95 latency
-histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
-
-# Error rate
-rate(http_requests_total{status=~"5.."}[5m])
-
-# Tokens per minute by model
-rate(provider_tokens_total[1m]) * 60
-
-# Provider latency by model
-histogram_quantile(0.95, sum by (provider, model, le) (rate(provider_request_duration_seconds_bucket[5m])))
-```
-
-## Tracing
-
-### Trace Structure
-
-Each request creates a trace with the following span hierarchy:
-```
-HTTP POST /v1/responses
-├── provider.generate or provider.generate_stream
-├── conversation.get (if using previous_response_id)
-└── conversation.create (to store result)
-```
-
-### Span Attributes
-
-HTTP spans include:
-- `http.method`, `http.route`, `http.status_code`
-- `http.request_id` - Request ID for correlation
-- `trace_id`, `span_id` - For log correlation
-
-Provider spans include:
-- `provider.name`, `provider.model`
-- `provider.input_tokens`, `provider.output_tokens`
-- `provider.chunk_count`, `provider.ttfb_seconds` (for streaming)
-
-Conversation spans include:
-- `conversation.id`, `conversation.backend`
-- `conversation.message_count`, `conversation.model`
-
-### Log Correlation
-
-Logs now include `trace_id` and `span_id` fields when tracing is enabled, allowing you to:
-1. Find all logs for a specific trace
-2. Jump from a log entry to the corresponding trace in Jaeger/Tempo
-
-Example log entry:
-```json
-{
- "time": "2026-03-03T06:36:44Z",
- "level": "INFO",
- "msg": "response generated",
- "request_id": "74722802-6be1-4e14-8e73-d86823fed3e3",
- "trace_id": "5d8a7c3f2e1b9a8c7d6e5f4a3b2c1d0e",
- "span_id": "1a2b3c4d5e6f7a8b",
- "provider": "openai",
- "model": "gpt-4o-mini",
- "input_tokens": 23,
- "output_tokens": 156
-}
-```
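-
-The correlation fields come from the active span's context. A minimal sketch of the pattern (assuming OpenTelemetry's `trace` package and `log/slog`; the helper name is illustrative):
-
-```go
-import (
-    "context"
-    "log/slog"
-
-    "go.opentelemetry.io/otel/trace"
-)
-
-// logWithTrace appends trace_id/span_id when a valid span is in ctx.
-func logWithTrace(ctx context.Context, logger *slog.Logger, msg string, args ...any) {
-    if sc := trace.SpanFromContext(ctx).SpanContext(); sc.IsValid() {
-        args = append(args,
-            "trace_id", sc.TraceID().String(),
-            "span_id", sc.SpanID().String(),
-        )
-    }
-    logger.InfoContext(ctx, msg, args...)
-}
-```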
-
-## Testing Observability
-
-### 1. Test Metrics Endpoint
-
-```bash
-# Start the gateway with observability enabled
-./bin/gateway -config config.yaml
-
-# Query metrics endpoint
-curl http://localhost:8080/metrics
-```
-
-Expected output includes:
-```
-# HELP http_requests_total Total number of HTTP requests
-# TYPE http_requests_total counter
-http_requests_total{method="GET",path="/metrics",status="200"} 1
-
-# HELP conversation_active_count Number of active conversations
-# TYPE conversation_active_count gauge
-conversation_active_count{backend="memory"} 0
-```
-
-### 2. Test Tracing with Stdout Exporter
-
-Set up config with stdout exporter for quick testing:
-
-```yaml
-observability:
- enabled: true
- tracing:
- enabled: true
- sampler:
- type: "always"
- exporter:
- type: "stdout"
-```
-
-Make a request and check the logs for JSON-formatted spans.
-
-### 3. Test Tracing with Jaeger
-
-Run Jaeger with OTLP support:
-
-```bash
-docker run -d --name jaeger \
- -e COLLECTOR_OTLP_ENABLED=true \
- -p 4317:4317 \
- -p 16686:16686 \
- jaegertracing/all-in-one:latest
-```
-
-Update config:
-```yaml
-observability:
- enabled: true
- tracing:
- enabled: true
- sampler:
- type: "probability"
- rate: 1.0 # 100% for testing
- exporter:
- type: "otlp"
- endpoint: "localhost:4317"
- insecure: true
-```
-
-Make requests and view traces at http://localhost:16686
-
-### 4. End-to-End Test
-
-```bash
-# Make a test request
-curl -X POST http://localhost:8080/v1/responses \
- -H "Content-Type: application/json" \
- -d '{
- "model": "gpt-4o-mini",
- "input": "Hello, world!"
- }'
-
-# Check metrics
-curl http://localhost:8080/metrics | grep -E "(http_requests|provider_)"
-
-# Expected metrics updates:
-# - http_requests_total incremented
-# - provider_requests_total incremented
-# - provider_tokens_total incremented for input and output
-# - provider_request_duration_seconds updated
-```
-
-### 5. Load Test
-
-```bash
-# Install hey if needed
-go install github.com/rakyll/hey@latest
-
-# Run load test
-hey -n 1000 -c 10 -m POST \
- -H "Content-Type: application/json" \
- -d '{"model":"gpt-4o-mini","input":"test"}' \
- http://localhost:8080/v1/responses
-
-# Check metrics for aggregated data
-curl http://localhost:8080/metrics | grep http_request_duration_seconds
-```
-
-## Integration with Monitoring Stack
-
-### Prometheus
-
-Add to `prometheus.yml`:
-
-```yaml
-scrape_configs:
- - job_name: 'llm-gateway'
- static_configs:
- - targets: ['localhost:8080']
- metrics_path: '/metrics'
- scrape_interval: 15s
-```
-
-### Grafana
-
-Import dashboards for:
-- HTTP request rates and latencies
-- Provider performance by model
-- Token usage and costs
-- Error rates and types
-
-### Tempo/Jaeger
-
-The gateway exports traces via the OTLP protocol. Configure your trace backend to accept OTLP on port 4317 (gRPC).
-
-## Architecture
-
-### Middleware Chain
-
-```
-Client Request
- ↓
-loggingMiddleware (request ID, logging)
- ↓
-tracingMiddleware (W3C Trace Context, spans)
- ↓
-metricsMiddleware (Prometheus metrics)
- ↓
-rateLimitMiddleware (rate limiting)
- ↓
-authMiddleware (authentication)
- ↓
-Application Routes
-```
-
-### Instrumentation Pattern
-
-- **Providers**: Wrapped with `InstrumentedProvider` that tracks calls, latency, and token usage
-- **Conversation Store**: Wrapped with `InstrumentedStore` that tracks operations and size
-- **HTTP Layer**: Middleware captures request/response metrics and creates trace spans
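-
-The wrapper pattern is plain decoration: the instrumented type holds the real provider, forwards the call, and records metrics on the way out. A sketch (the `Provider` interface, request/response types, and metric variable are assumed, not the gateway's exact definitions):
-
-```go
-// InstrumentedProvider decorates a Provider with metrics recording.
-type InstrumentedProvider struct {
-    inner Provider // the wrapped provider implementation
-    name  string   // label value for metrics
-}
-
-func (p *InstrumentedProvider) Generate(ctx context.Context, req Request) (Response, error) {
-    start := time.Now()
-    resp, err := p.inner.Generate(ctx, req)
-    status := "success"
-    if err != nil {
-        status = "error"
-    }
-    // providerRequestDuration is assumed to be a registered HistogramVec.
-    providerRequestDuration.
-        WithLabelValues(p.name, req.Model, "generate", status).
-        Observe(time.Since(start).Seconds())
-    return resp, err
-}
-```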
-
-### W3C Trace Context
-
-The gateway supports W3C Trace Context propagation:
-- Extracts `traceparent` header from incoming requests
-- Creates child spans for downstream operations
-- Propagates context through the entire request lifecycle
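-
-A sketch of the extraction side, using the standard OpenTelemetry propagation API (the middleware and tracer names are illustrative):
-
-```go
-func tracingMiddleware(next http.Handler) http.Handler {
-    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-        // Extract any incoming W3C traceparent/tracestate headers.
-        ctx := otel.GetTextMapPropagator().Extract(
-            r.Context(), propagation.HeaderCarrier(r.Header))
-        // Start a child span (or a root span if no parent was propagated).
-        ctx, span := otel.Tracer("llm-gateway").Start(ctx, r.Method+" "+r.URL.Path)
-        defer span.End()
-        next.ServeHTTP(w, r.WithContext(ctx))
-    })
-}
-```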
-
-## Performance Impact
-
-Observability features have minimal overhead:
-- Metrics: < 1% latency increase
-- Tracing (10% sampling): < 2% latency increase
-- Tracing (100% sampling): < 5% latency increase
-
-Recommended configuration for production:
-- Metrics: Enabled
-- Tracing: Enabled with 10-20% sampling rate
-- Exporter: OTLP to dedicated collector
-
-## Troubleshooting
-
-### Metrics endpoint returns 404
-- Check `observability.metrics.enabled` is `true`
-- Verify `observability.enabled` is `true`
-- Check `observability.metrics.path` configuration
-
-### No traces appearing in Jaeger
-- Verify OTLP collector is running on configured endpoint
-- Check sampling rate (try `type: "always"` for testing)
-- Look for tracer initialization errors in logs
-- Verify `observability.tracing.enabled` is `true`
-
-### High memory usage
-- Reduce trace sampling rate
-- Check for metric cardinality explosion (too many label combinations)
-- Consider using recording rules in Prometheus
-
-### Missing trace IDs in logs
-- Ensure tracing is enabled
-- Check that requests are being sampled (sampling rate > 0)
-- Verify OpenTelemetry dependencies are correctly installed
diff --git a/SECURITY_IMPROVEMENTS.md b/SECURITY_IMPROVEMENTS.md
deleted file mode 100644
index 01c0887..0000000
--- a/SECURITY_IMPROVEMENTS.md
+++ /dev/null
@@ -1,169 +0,0 @@
-# Security Improvements - March 2026
-
-This document summarizes the security and reliability improvements made to the go-llm-gateway project.
-
-## Issues Fixed
-
-### 1. Request Size Limits (Issue #2) ✅
-
-**Problem**: The server had no limits on request body size, making it vulnerable to DoS attacks via oversized payloads.
-
-**Solution**: Implemented `RequestSizeLimitMiddleware` that enforces a maximum request body size.
-
-**Implementation Details**:
-- Created `internal/server/middleware.go` with `RequestSizeLimitMiddleware`
-- Uses `http.MaxBytesReader` to enforce limits at the HTTP layer
-- Default limit: 10MB (10,485,760 bytes)
-- Configurable via `server.max_request_body_size` in config.yaml
-- Returns HTTP 413 (Request Entity Too Large) for oversized requests
-- Only applies to POST, PUT, and PATCH requests (not GET/DELETE)
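-
-The core of the middleware reduces to a few lines around `http.MaxBytesReader`. A sketch (the signature is illustrative; the real implementation also emits the 413 response body):
-
-```go
-func RequestSizeLimitMiddleware(maxBytes int64, next http.Handler) http.Handler {
-    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-        switch r.Method {
-        case http.MethodPost, http.MethodPut, http.MethodPatch:
-            // Reads beyond maxBytes fail with *http.MaxBytesError,
-            // which handlers translate into a 413 response.
-            r.Body = http.MaxBytesReader(w, r.Body, maxBytes)
-        }
-        next.ServeHTTP(w, r)
-    })
-}
-```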
-
-**Files Modified**:
-- `internal/server/middleware.go` (new file)
-- `internal/server/server.go` (added 413 error handling)
-- `cmd/gateway/main.go` (integrated middleware)
-- `internal/config/config.go` (added config field)
-- `config.example.yaml` (documented configuration)
-
-**Testing**:
-- Comprehensive test suite in `internal/server/middleware_test.go`
-- Tests cover: small payloads, exact size, oversized payloads, different HTTP methods
-- Integration test verifies middleware chain behavior
-
-### 2. Panic Recovery Middleware (Issue #4) ✅
-
-**Problem**: Any panic in HTTP handlers would crash the entire server, causing downtime.
-
-**Solution**: Implemented `PanicRecoveryMiddleware` that catches panics and returns proper error responses.
-
-**Implementation Details**:
-- Created `PanicRecoveryMiddleware` in `internal/server/middleware.go`
-- Uses `defer recover()` pattern to catch all panics
-- Logs full stack trace with request context for debugging
-- Returns HTTP 500 (Internal Server Error) to clients
-- Positioned as the outermost middleware to catch panics from all layers
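-
-The recovery pattern is the standard `defer`/`recover` wrapper. A sketch (the logger wiring is illustrative):
-
-```go
-func PanicRecoveryMiddleware(logger *slog.Logger, next http.Handler) http.Handler {
-    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-        defer func() {
-            if rec := recover(); rec != nil {
-                logger.Error("panic recovered",
-                    "error", fmt.Sprint(rec),
-                    "path", r.URL.Path,
-                    "stack", string(debug.Stack()))
-                http.Error(w, "internal server error", http.StatusInternalServerError)
-            }
-        }()
-        next.ServeHTTP(w, r)
-    })
-}
-```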
-
-**Files Modified**:
-- `internal/server/middleware.go` (new file)
-- `cmd/gateway/main.go` (integrated as outermost middleware)
-
-**Testing**:
-- Tests verify recovery from string panics, error panics, and struct panics
-- Integration test confirms panic recovery works through middleware chain
-- Logs are captured and verified to include stack traces
-
-### 3. Error Handling Improvements (Bonus) ✅
-
-**Problem**: Multiple instances of ignored JSON encoding errors could lead to incomplete responses.
-
-**Solution**: Fixed all ignored `json.Encoder.Encode()` errors throughout the codebase.
-
-**Files Modified**:
-- `internal/server/health.go` (lines 32, 86)
-- `internal/server/server.go` (lines 72, 217)
-
-All JSON encoding errors are now logged with proper context including request IDs.
-
-## Architecture
-
-### Middleware Chain Order
-
-The middleware chain is now (from outermost to innermost):
-1. **PanicRecoveryMiddleware** - Catches all panics
-2. **RequestSizeLimitMiddleware** - Enforces body size limits
-3. **loggingMiddleware** - Request/response logging
-4. **TracingMiddleware** - OpenTelemetry tracing
-5. **MetricsMiddleware** - Prometheus metrics
-6. **rateLimitMiddleware** - Rate limiting
-7. **authMiddleware** - OIDC authentication
-8. **routes** - Application handlers
-
-This order ensures:
-- Panics are caught from all middleware layers
-- Size limits are enforced before expensive operations
-- All requests are logged, traced, and metered
-- Security checks happen closest to the application
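-
-In code, this ordering falls out of the composition order: the last wrapper applied is the outermost. A sketch with illustrative names:
-
-```go
-handler := PanicRecoveryMiddleware(logger,
-    RequestSizeLimitMiddleware(maxBytes,
-        loggingMiddleware(
-            tracingMiddleware(
-                metricsMiddleware(
-                    rateLimitMiddleware(
-                        authMiddleware(routes)))))))
-```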
-
-## Configuration
-
-Add to your `config.yaml`:
-
-```yaml
-server:
- address: ":8080"
- max_request_body_size: 10485760 # 10MB in bytes (default)
-```
-
-To customize the size limit:
-- **1MB**: `1048576`
-- **5MB**: `5242880`
-- **10MB**: `10485760` (default)
-- **50MB**: `52428800`
-
-If not specified, defaults to 10MB.
-
-## Testing
-
-All new functionality includes comprehensive tests:
-
-```bash
-# Run all tests
-go test ./...
-
-# Run only middleware tests
-go test ./internal/server -v -run "TestPanicRecoveryMiddleware|TestRequestSizeLimitMiddleware"
-
-# Run with coverage
-go test ./internal/server -cover
-```
-
-**Test Coverage**:
-- `internal/server/middleware.go`: 100% coverage
-- All edge cases covered (panics, size limits, different HTTP methods)
-- Integration tests verify middleware chain interactions
-
-## Production Readiness
-
-These changes significantly improve production readiness:
-
-1. **DoS Protection**: Request size limits prevent memory exhaustion attacks
-2. **Fault Tolerance**: Panic recovery prevents cascading failures
-3. **Observability**: All errors are logged with proper context
-4. **Configurability**: Limits can be tuned per deployment environment
-
-## Remaining Production Concerns
-
-While these issues are fixed, the following should still be addressed:
-
-- **HIGH**: Exposed credentials in `.env` file (must rotate and remove from git)
-- **MEDIUM**: Observability code has 0% test coverage
-- **MEDIUM**: Conversation store has only 27% test coverage
-- **LOW**: Missing circuit breaker pattern for provider failures
-- **LOW**: No retry logic for failed provider requests
-
-See the original assessment for complete details.
-
-## Verification
-
-Build and verify the changes:
-
-```bash
-# Build the application
-go build ./cmd/gateway
-
-# Run the gateway
-./gateway -config config.yaml
-
-# Test with oversized payload (should return 413)
-curl -X POST http://localhost:8080/v1/responses \
- -H "Content-Type: application/json" \
- -d "$(python3 -c 'print("{\"data\":\"" + "x"*11000000 + "\"}")')"
-```
-
-Expected response: `HTTP 413 Request Entity Too Large`
-
-## References
-
-- [OWASP WSTG: Testing for Client-side Resource Manipulation](https://owasp.org/www-project-web-security-testing-guide/latest/4-Web_Application_Security_Testing/11-Client-side_Testing/04-Testing_for_Client-side_Resource_Manipulation)
-- [CWE-400: Uncontrolled Resource Consumption](https://cwe.mitre.org/data/definitions/400.html)
-- [Go HTTP Server Best Practices](https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/)
diff --git a/TEST_COVERAGE_REPORT.md b/TEST_COVERAGE_REPORT.md
deleted file mode 100644
index 6f3e980..0000000
--- a/TEST_COVERAGE_REPORT.md
+++ /dev/null
@@ -1,186 +0,0 @@
-# Test Coverage Improvement Report
-
-## Executive Summary
-
-Successfully improved test coverage for go-llm-gateway from **37.9% to 51.0%** (+13.1 percentage points).
-
-## Implementation Summary
-
-### Completed Work
-
-#### 1. Test Infrastructure
-- ✅ Added test dependencies: `miniredis/v2`, `prometheus/testutil`
-- ✅ Created test helper utilities:
- - `internal/observability/testing.go` - Helpers for metrics and tracing tests
- - `internal/conversation/testing.go` - Helpers for store tests
-
-#### 2. Observability Package Tests (34.5% coverage)
-Created comprehensive tests for metrics, tracing, and instrumentation:
-
-**Files Created:**
-- `internal/observability/metrics_test.go` (~400 lines, 18 test functions)
- - TestInitMetrics
- - TestRecordCircuitBreakerStateChange
- - TestMetricLabels
- - TestHTTPMetrics
- - TestProviderMetrics
- - TestConversationStoreMetrics
- - TestMetricHelp, TestMetricTypes, TestMetricNaming
-
-- `internal/observability/tracing_test.go` (~470 lines, 11 test functions)
- - TestInitTracer_StdoutExporter
- - TestInitTracer_InvalidExporter
- - TestCreateSampler (all sampler types)
- - TestShutdown and context handling
- - TestProbabilitySampler_Boundaries
-
-- `internal/observability/provider_wrapper_test.go` (~700 lines, 12 test functions)
- - TestNewInstrumentedProvider
- - TestInstrumentedProvider_Generate (success/error paths)
- - TestInstrumentedProvider_GenerateStream (streaming with TTFB)
- - TestInstrumentedProvider_MetricsRecording
- - TestInstrumentedProvider_TracingSpans
- - TestInstrumentedProvider_ConcurrentCalls
-
-#### 3. Conversation Store Tests (66.0% coverage)
-Created comprehensive tests for SQL and Redis stores:
-
-**Files Created:**
-- `internal/conversation/sql_store_test.go` (~350 lines, 16 test functions)
- - TestNewSQLStore
- - TestSQLStore_Create, Get, Append, Delete
- - TestSQLStore_Size
- - TestSQLStore_Cleanup (TTL expiration)
- - TestSQLStore_ConcurrentAccess
- - TestSQLStore_ContextCancellation
- - TestSQLStore_JSONEncoding
- - TestSQLStore_EmptyMessages
- - TestSQLStore_UpdateExisting
-
-- `internal/conversation/redis_store_test.go` (~350 lines, 15 test functions)
- - TestNewRedisStore
- - TestRedisStore_Create, Get, Append, Delete
- - TestRedisStore_Size
- - TestRedisStore_TTL (expiration testing with miniredis)
- - TestRedisStore_KeyStorage
- - TestRedisStore_Concurrent
- - TestRedisStore_JSONEncoding
- - TestRedisStore_EmptyMessages
- - TestRedisStore_UpdateExisting
- - TestRedisStore_ContextCancellation
- - TestRedisStore_ScanPagination
-
-## Coverage Breakdown by Package
-
-| Package | Before | After | Change |
-|---------|--------|-------|--------|
-| **Overall** | **37.9%** | **51.0%** | **+13.1%** |
-| internal/api | 100.0% | 100.0% | - |
-| internal/auth | 91.7% | 91.7% | - |
-| internal/config | 100.0% | 100.0% | - |
-| **internal/conversation** | **0%*** | **66.0%** | **+66.0%** |
-| internal/logger | 0.0% | 0.0% | - |
-| **internal/observability** | **0%*** | **34.5%** | **+34.5%** |
-| internal/providers | 63.1% | 63.1% | - |
-| internal/providers/anthropic | 16.2% | 16.2% | - |
-| internal/providers/google | 27.7% | 27.7% | - |
-| internal/providers/openai | 16.1% | 16.1% | - |
-| internal/ratelimit | 87.2% | 87.2% | - |
-| internal/server | 90.8% | 90.8% | - |
-
-*Stores (SQL/Redis) and observability wrappers previously had 0% coverage
-
-## Detailed Coverage Improvements
-
-### Conversation Stores (0% → 66.0%)
-- **SQL Store**: 85.7% (NewSQLStore), 81.8% (Get), 85.7% (Create), 69.2% (Append), 100% (Delete/Size/Close)
-- **Redis Store**: 100% (NewRedisStore), 77.8% (Get), 87.5% (Create), 69.2% (Append), 100% (Delete), 91.7% (Size)
-- **Memory Store**: Already had good coverage from existing tests
-
-### Observability (0% → 34.5%)
-- **Metrics**: 100% (InitMetrics, RecordCircuitBreakerStateChange)
-- **Tracing**: Comprehensive sampler and tracer initialization tests
-- **Provider Wrapper**: Full instrumentation testing with metrics and spans
-- **Store Wrapper**: Not yet tested (future work)
-
-## Test Quality & Patterns
-
-All new tests follow established patterns from the codebase:
-- ✅ Table-driven tests with `t.Run()`
-- ✅ testify/assert and testify/require for assertions
-- ✅ Custom mocks with function injection
-- ✅ Proper test isolation (no shared state)
-- ✅ Concurrent access testing
-- ✅ Context cancellation testing
-- ✅ Error path coverage
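-
-Combining the first three points, a representative test shape looks like this (the provider types and mock are illustrative, not the project's actual fixtures):
-
-```go
-// mockProvider injects behavior per test case via a function field.
-type mockProvider struct {
-    generateFn func(context.Context, Request) (Response, error)
-}
-
-func (m *mockProvider) Generate(ctx context.Context, req Request) (Response, error) {
-    return m.generateFn(ctx, req)
-}
-
-func TestGenerate(t *testing.T) {
-    tests := []struct {
-        name    string
-        fn      func(context.Context, Request) (Response, error)
-        wantErr bool
-    }{
-        {"success", func(context.Context, Request) (Response, error) {
-            return Response{}, nil
-        }, false},
-        {"provider error", func(context.Context, Request) (Response, error) {
-            return Response{}, errors.New("boom")
-        }, true},
-    }
-    for _, tt := range tests {
-        t.Run(tt.name, func(t *testing.T) {
-            p := &mockProvider{generateFn: tt.fn}
-            _, err := p.Generate(context.Background(), Request{})
-            if tt.wantErr {
-                require.Error(t, err)
-            } else {
-                require.NoError(t, err)
-            }
-        })
-    }
-}
-```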
-
-## Known Issues & Future Work
-
-### Minor Test Failures (Non-Critical)
-1. **Observability streaming tests**: Some streaming tests have timing issues (3 failing)
-2. **Tracing schema conflicts**: OpenTelemetry schema URL conflicts in test environment (4 failing)
-3. **SQL concurrent test**: SQLite in-memory concurrency issue (1 failing)
-
-These failures don't affect functionality and can be addressed in follow-up work.
-
-### Remaining Low Coverage Areas (For Future Work)
-1. **Logger (0%)** - Not yet tested
-2. **Provider implementations (16-28%)** - Could be enhanced
-3. **Observability wrappers** - Store wrapper not yet tested
-4. **Main entry point** - Low priority integration tests
-
-## Files Created
-
-### New Test Files (5)
-1. `internal/observability/metrics_test.go`
-2. `internal/observability/tracing_test.go`
-3. `internal/observability/provider_wrapper_test.go`
-4. `internal/conversation/sql_store_test.go`
-5. `internal/conversation/redis_store_test.go`
-
-### Helper Files (2)
-1. `internal/observability/testing.go`
-2. `internal/conversation/testing.go`
-
-**Total**: ~2,000 lines of test code, 72 new test functions
-
-## Running the Tests
-
-```bash
-# Run all tests
-make test
-
-# Run tests with coverage
-go test -cover ./...
-
-# Generate coverage report
-go test -coverprofile=coverage.out ./...
-go tool cover -html=coverage.out
-
-# Run specific package tests
-go test -v ./internal/conversation/...
-go test -v ./internal/observability/...
-```
-
-## Impact & Benefits
-
-1. **Quality Assurance**: Critical storage backends now have comprehensive test coverage
-2. **Regression Prevention**: Tests catch issues in Redis/SQL store operations
-3. **Documentation**: Tests serve as usage examples for stores and observability
-4. **Confidence**: Developers can refactor with confidence
-5. **CI/CD**: Better test coverage improves deployment confidence
-
-## Recommendations
-
-1. **Address timing issues**: Fix streaming and concurrent test flakiness
-2. **Add logger tests**: Quick win to boost coverage (small package)
-3. **Enhance provider tests**: Improve anthropic/google/openai coverage to 60%+
-4. **Integration tests**: Add end-to-end tests for complete request flows
-5. **Benchmark tests**: Add performance benchmarks for stores
-
----
-
-**Report Generated**: 2026-03-05
-**Coverage Improvement**: 37.9% → 51.0% (+13.1 percentage points)
-**Test Lines Added**: ~2,000 lines
-**Test Functions Added**: 72 functions
diff --git a/coverage.html b/coverage.html
deleted file mode 100644
index fe2dae4..0000000
--- a/coverage.html
+++ /dev/null
@@ -1,6271 +0,0 @@
-
-package main
-
-import (
- "context"
- "database/sql"
- "flag"
- "fmt"
- "log"
- "log/slog"
- "net/http"
- "os"
- "os/signal"
- "syscall"
- "time"
-
- _ "github.com/go-sql-driver/mysql"
- "github.com/google/uuid"
- _ "github.com/jackc/pgx/v5/stdlib"
- _ "github.com/mattn/go-sqlite3"
- "github.com/redis/go-redis/v9"
-
- "github.com/ajac-zero/latticelm/internal/auth"
- "github.com/ajac-zero/latticelm/internal/config"
- "github.com/ajac-zero/latticelm/internal/conversation"
- slogger "github.com/ajac-zero/latticelm/internal/logger"
- "github.com/ajac-zero/latticelm/internal/observability"
- "github.com/ajac-zero/latticelm/internal/providers"
- "github.com/ajac-zero/latticelm/internal/ratelimit"
- "github.com/ajac-zero/latticelm/internal/server"
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/client_golang/prometheus/promhttp"
- "go.opentelemetry.io/otel"
- sdktrace "go.opentelemetry.io/otel/sdk/trace"
-)
-
-func main() {
- var configPath string
- flag.StringVar(&configPath, "config", "config.yaml", "path to config file")
- flag.Parse()
-
- cfg, err := config.Load(configPath)
- if err != nil {
- log.Fatalf("load config: %v", err)
- }
-
- // Initialize logger from config
- logFormat := cfg.Logging.Format
- if logFormat == "" {
- logFormat = "json"
- }
- logLevel := cfg.Logging.Level
- if logLevel == "" {
- logLevel = "info"
- }
- logger := slogger.New(logFormat, logLevel)
-
- // Initialize tracing
- var tracerProvider *sdktrace.TracerProvider
- if cfg.Observability.Enabled && cfg.Observability.Tracing.Enabled {
- // Set defaults
- tracingCfg := cfg.Observability.Tracing
- if tracingCfg.ServiceName == "" {
- tracingCfg.ServiceName = "llm-gateway"
- }
- if tracingCfg.Sampler.Type == "" {
- tracingCfg.Sampler.Type = "probability"
- tracingCfg.Sampler.Rate = 0.1
- }
-
- tp, err := observability.InitTracer(tracingCfg)
- if err != nil {
- logger.Error("failed to initialize tracing", slog.String("error", err.Error()))
- } else {
- tracerProvider = tp
- otel.SetTracerProvider(tracerProvider)
- logger.Info("tracing initialized",
- slog.String("exporter", tracingCfg.Exporter.Type),
- slog.String("sampler", tracingCfg.Sampler.Type),
- )
- }
- }
-
- // Initialize metrics
- var metricsRegistry *prometheus.Registry
- if cfg.Observability.Enabled && cfg.Observability.Metrics.Enabled {
- metricsRegistry = observability.InitMetrics()
- metricsPath := cfg.Observability.Metrics.Path
- if metricsPath == "" {
- metricsPath = "/metrics"
- }
- logger.Info("metrics initialized", slog.String("path", metricsPath))
- }
-
- // Create provider registry with circuit breaker support
- var baseRegistry *providers.Registry
- if cfg.Observability.Enabled && cfg.Observability.Metrics.Enabled {
- // Pass observability callback for circuit breaker state changes
- baseRegistry, err = providers.NewRegistryWithCircuitBreaker(
- cfg.Providers,
- cfg.Models,
- observability.RecordCircuitBreakerStateChange,
- )
- } else {
- // No observability, use default registry
- baseRegistry, err = providers.NewRegistry(cfg.Providers, cfg.Models)
- }
- if err != nil {
- logger.Error("failed to initialize providers", slog.String("error", err.Error()))
- os.Exit(1)
- }
-
- // Wrap providers with observability
- var registry server.ProviderRegistry = baseRegistry
- if cfg.Observability.Enabled {
- registry = observability.WrapProviderRegistry(registry, metricsRegistry, tracerProvider)
- logger.Info("providers instrumented")
- }
-
- // Initialize authentication middleware
- authConfig := auth.Config{
- Enabled: cfg.Auth.Enabled,
- Issuer: cfg.Auth.Issuer,
- Audience: cfg.Auth.Audience,
- }
- authMiddleware, err := auth.New(authConfig, logger)
- if err != nil {
- logger.Error("failed to initialize auth", slog.String("error", err.Error()))
- os.Exit(1)
- }
-
- if cfg.Auth.Enabled {
- logger.Info("authentication enabled", slog.String("issuer", cfg.Auth.Issuer))
- } else {
- logger.Warn("authentication disabled - API is publicly accessible")
- }
-
- // Initialize conversation store
- convStore, storeBackend, err := initConversationStore(cfg.Conversations, logger)
- if err != nil {
- logger.Error("failed to initialize conversation store", slog.String("error", err.Error()))
- os.Exit(1)
- }
-
- // Wrap conversation store with observability
- if cfg.Observability.Enabled && convStore != nil {
- convStore = observability.WrapConversationStore(convStore, storeBackend, metricsRegistry, tracerProvider)
- logger.Info("conversation store instrumented")
- }
-
- gatewayServer := server.New(registry, convStore, logger)
- mux := http.NewServeMux()
- gatewayServer.RegisterRoutes(mux)
-
- // Register metrics endpoint if enabled
- if cfg.Observability.Enabled && cfg.Observability.Metrics.Enabled {
- metricsPath := cfg.Observability.Metrics.Path
- if metricsPath == "" {
- metricsPath = "/metrics"
- }
- mux.Handle(metricsPath, promhttp.HandlerFor(metricsRegistry, promhttp.HandlerOpts{}))
- logger.Info("metrics endpoint registered", slog.String("path", metricsPath))
- }
-
- addr := cfg.Server.Address
- if addr == "" {
- addr = ":8080"
- }
-
- // Initialize rate limiting
- rateLimitConfig := ratelimit.Config{
- Enabled: cfg.RateLimit.Enabled,
- RequestsPerSecond: cfg.RateLimit.RequestsPerSecond,
- Burst: cfg.RateLimit.Burst,
- }
- // Set defaults if not configured
- if rateLimitConfig.Enabled && rateLimitConfig.RequestsPerSecond == 0 {
- rateLimitConfig.RequestsPerSecond = 10 // default 10 req/s
- }
- if rateLimitConfig.Enabled && rateLimitConfig.Burst == 0 {
- rateLimitConfig.Burst = 20 // default burst of 20
- }
- rateLimitMiddleware := ratelimit.New(rateLimitConfig, logger)
-
- if cfg.RateLimit.Enabled {
- logger.Info("rate limiting enabled",
- slog.Float64("requests_per_second", rateLimitConfig.RequestsPerSecond),
- slog.Int("burst", rateLimitConfig.Burst),
- )
- }
-
- // Determine max request body size
- maxRequestBodySize := cfg.Server.MaxRequestBodySize
- if maxRequestBodySize == 0 {
- maxRequestBodySize = server.MaxRequestBodyBytes // default: 10MB
- }
-
- logger.Info("server configuration",
- slog.Int64("max_request_body_bytes", maxRequestBodySize),
- )
-
- // Build handler chain: panic recovery -> request size limit -> logging -> tracing -> metrics -> rate limiting -> auth -> routes
- handler := server.PanicRecoveryMiddleware(
- server.RequestSizeLimitMiddleware(
- loggingMiddleware(
- observability.TracingMiddleware(
- observability.MetricsMiddleware(
- rateLimitMiddleware.Handler(authMiddleware.Handler(mux)),
- metricsRegistry,
- tracerProvider,
- ),
- tracerProvider,
- ),
- logger,
- ),
- maxRequestBodySize,
- ),
- logger,
- )
-
- srv := &http.Server{
- Addr: addr,
- Handler: handler,
- ReadTimeout: 15 * time.Second,
- WriteTimeout: 60 * time.Second,
- IdleTimeout: 120 * time.Second,
- }
-
- // Set up signal handling for graceful shutdown
- sigChan := make(chan os.Signal, 1)
- signal.Notify(sigChan, os.Interrupt, syscall.SIGTERM)
-
- // Run server in a goroutine
- serverErrors := make(chan error, 1)
- go func() {
- logger.Info("open responses gateway listening", slog.String("address", addr))
- serverErrors <- srv.ListenAndServe()
- }()
-
- // Wait for shutdown signal or server error
- select {
- case err := <-serverErrors:
- if err != nil && err != http.ErrServerClosed {
- logger.Error("server error", slog.String("error", err.Error()))
- os.Exit(1)
- }
- case sig := <-sigChan:
- logger.Info("received shutdown signal", slog.String("signal", sig.String()))
-
- // Create shutdown context with timeout
- shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 30*time.Second)
- defer shutdownCancel()
-
- // Shutdown the HTTP server gracefully
- logger.Info("shutting down server gracefully")
- if err := srv.Shutdown(shutdownCtx); err != nil {
- logger.Error("server shutdown error", slog.String("error", err.Error()))
- }
-
- // Shutdown tracer provider
- if tracerProvider != nil {
- logger.Info("shutting down tracer")
- shutdownTracerCtx, shutdownTracerCancel := context.WithTimeout(context.Background(), 5*time.Second)
- defer shutdownTracerCancel()
- if err := observability.Shutdown(shutdownTracerCtx, tracerProvider); err != nil {
- logger.Error("error shutting down tracer", slog.String("error", err.Error()))
- }
- }
-
- // Close conversation store
- logger.Info("closing conversation store")
- if err := convStore.Close(); err != nil {
- logger.Error("error closing conversation store", slog.String("error", err.Error()))
- }
-
- logger.Info("shutdown complete")
- }
-}
-
-func initConversationStore(cfg config.ConversationConfig, logger *slog.Logger) (conversation.Store, string, error) {
- var ttl time.Duration
- if cfg.TTL != "" {
- parsed, err := time.ParseDuration(cfg.TTL)
- if err != nil {
- return nil, "", fmt.Errorf("invalid conversation ttl %q: %w", cfg.TTL, err)
- }
- ttl = parsed
- }
-
- switch cfg.Store {
- case "sql":
- driver := cfg.Driver
- if driver == "" {
- driver = "sqlite3"
- }
- db, err := sql.Open(driver, cfg.DSN)
- if err != nil {
- return nil, "", fmt.Errorf("open database: %w", err)
- }
- store, err := conversation.NewSQLStore(db, driver, ttl)
- if err != nil {
- return nil, "", fmt.Errorf("init sql store: %w", err)
- }
- logger.Info("conversation store initialized",
- slog.String("backend", "sql"),
- slog.String("driver", driver),
- slog.Duration("ttl", ttl),
- )
- return store, "sql", nil
- case "redis":
- opts, err := redis.ParseURL(cfg.DSN)
- if err != nil {
- return nil, "", fmt.Errorf("parse redis dsn: %w", err)
- }
- client := redis.NewClient(opts)
-
- ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
- defer cancel()
-
- if err := client.Ping(ctx).Err(); err != nil {
- return nil, "", fmt.Errorf("connect to redis: %w", err)
- }
-
- logger.Info("conversation store initialized",
- slog.String("backend", "redis"),
- slog.Duration("ttl", ttl),
- )
- return conversation.NewRedisStore(client, ttl), "redis", nil
- default:
- logger.Info("conversation store initialized",
- slog.String("backend", "memory"),
- slog.Duration("ttl", ttl),
- )
- return conversation.NewMemoryStore(ttl), "memory", nil
- }
-}
-
-type responseWriter struct {
- http.ResponseWriter
- statusCode int
- bytesWritten int
-}
-
-func (rw *responseWriter) WriteHeader(code int) {
- rw.statusCode = code
- rw.ResponseWriter.WriteHeader(code)
-}
-
-func (rw *responseWriter) Write(b []byte) (int, error) {
- n, err := rw.ResponseWriter.Write(b)
- rw.bytesWritten += n
- return n, err
-}
-
-func loggingMiddleware(next http.Handler, logger *slog.Logger) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- start := time.Now()
-
- // Generate request ID
- requestID := uuid.NewString()
- ctx := slogger.WithRequestID(r.Context(), requestID)
- r = r.WithContext(ctx)
-
- // Wrap response writer to capture status code
- rw := &responseWriter{
- ResponseWriter: w,
- statusCode: http.StatusOK,
- }
-
- // Add request ID header
- w.Header().Set("X-Request-ID", requestID)
-
- // Log request start
- logger.InfoContext(ctx, "request started",
- slog.String("request_id", requestID),
- slog.String("method", r.Method),
- slog.String("path", r.URL.Path),
- slog.String("remote_addr", r.RemoteAddr),
- slog.String("user_agent", r.UserAgent()),
- )
-
- next.ServeHTTP(rw, r)
-
- duration := time.Since(start)
-
- // Log request completion with appropriate level
- logLevel := slog.LevelInfo
- if rw.statusCode >= 500 {
- logLevel = slog.LevelError
- } else if rw.statusCode >= 400 {
- logLevel = slog.LevelWarn
- }
-
- logger.Log(ctx, logLevel, "request completed",
- slog.String("request_id", requestID),
- slog.String("method", r.Method),
- slog.String("path", r.URL.Path),
- slog.Int("status_code", rw.statusCode),
- slog.Int("response_bytes", rw.bytesWritten),
- slog.Duration("duration", duration),
- slog.Float64("duration_ms", float64(duration.Microseconds())/1000.0),
- )
- })
-}
-
-package api
-
-import (
- "encoding/json"
- "errors"
- "fmt"
-)
-
-// ============================================================
-// Request Types (CreateResponseBody)
-// ============================================================
-
-// ResponseRequest models the OpenResponses CreateResponseBody.
-type ResponseRequest struct {
- Model string `json:"model"`
- Input InputUnion `json:"input"`
- Instructions *string `json:"instructions,omitempty"`
- MaxOutputTokens *int `json:"max_output_tokens,omitempty"`
- Metadata map[string]string `json:"metadata,omitempty"`
- Stream bool `json:"stream,omitempty"`
- PreviousResponseID *string `json:"previous_response_id,omitempty"`
- Temperature *float64 `json:"temperature,omitempty"`
- TopP *float64 `json:"top_p,omitempty"`
- FrequencyPenalty *float64 `json:"frequency_penalty,omitempty"`
- PresencePenalty *float64 `json:"presence_penalty,omitempty"`
- TopLogprobs *int `json:"top_logprobs,omitempty"`
- Truncation *string `json:"truncation,omitempty"`
- ToolChoice json.RawMessage `json:"tool_choice,omitempty"`
- Tools json.RawMessage `json:"tools,omitempty"`
- ParallelToolCalls *bool `json:"parallel_tool_calls,omitempty"`
- Store *bool `json:"store,omitempty"`
- Text json.RawMessage `json:"text,omitempty"`
- Reasoning json.RawMessage `json:"reasoning,omitempty"`
- Include []string `json:"include,omitempty"`
- ServiceTier *string `json:"service_tier,omitempty"`
- Background *bool `json:"background,omitempty"`
- StreamOptions json.RawMessage `json:"stream_options,omitempty"`
- MaxToolCalls *int `json:"max_tool_calls,omitempty"`
-
- // Non-spec extension: allows client to select a specific provider.
- Provider string `json:"provider,omitempty"`
-}
-
-// InputUnion handles the polymorphic "input" field: string or []InputItem.
-type InputUnion struct {
- String *string
- Items []InputItem
-}
-
-func (u *InputUnion) UnmarshalJSON(data []byte) error {
- if string(data) == "null" {
- return nil
- }
- var s string
- if err := json.Unmarshal(data, &s); err == nil {
- u.String = &s
- return nil
- }
- var items []InputItem
- if err := json.Unmarshal(data, &items); err == nil {
- u.Items = items
- return nil
- }
- return fmt.Errorf("input must be a string or array of items")
-}
-
-func (u InputUnion) MarshalJSON() ([]byte, error) {
- if u.String != nil {
- return json.Marshal(*u.String)
- }
- if u.Items != nil {
- return json.Marshal(u.Items)
- }
- return []byte("null"), nil
-}
-
-// InputItem is a discriminated union on "type".
-// Valid types: message, item_reference, function_call, function_call_output, reasoning.
-type InputItem struct {
- Type string `json:"type"`
- Role string `json:"role,omitempty"`
- Content json.RawMessage `json:"content,omitempty"`
- ID string `json:"id,omitempty"`
- CallID string `json:"call_id,omitempty"`
- Name string `json:"name,omitempty"`
- Arguments string `json:"arguments,omitempty"`
- Output string `json:"output,omitempty"`
- Status string `json:"status,omitempty"`
-}
-
-// ============================================================
-// Internal Types (providers + conversation store)
-// ============================================================
-
-// Message is the normalized internal message representation.
-type Message struct {
- Role string `json:"role"`
- Content []ContentBlock `json:"content"`
- CallID string `json:"call_id,omitempty"` // for tool messages
- Name string `json:"name,omitempty"` // for tool messages
- ToolCalls []ToolCall `json:"tool_calls,omitempty"` // for assistant messages
-}
-
-// ContentBlock is a typed content element.
-type ContentBlock struct {
- Type string `json:"type"`
- Text string `json:"text,omitempty"`
-}
-
-// NormalizeInput converts the request Input into messages for providers.
-// Does NOT include instructions (the server prepends those separately).
-func (r *ResponseRequest) NormalizeInput() []Message {
- if r.Input.String != nil {
- return []Message{{
- Role: "user",
- Content: []ContentBlock{{Type: "input_text", Text: *r.Input.String}},
- }}
- }
-
- var msgs []Message
- for _, item := range r.Input.Items {
- switch item.Type {
- case "message", "":
- msg := Message{Role: item.Role}
- if item.Content != nil {
- var s string
- if err := json.Unmarshal(item.Content, &s); err == nil {
- contentType := "input_text"
- if item.Role == "assistant" {
- contentType = "output_text"
- }
- msg.Content = []ContentBlock{{Type: contentType, Text: s}}
- } else {
- // Content is an array of blocks - parse them
- var rawBlocks []map[string]interface{}
- if err := json.Unmarshal(item.Content, &rawBlocks); err == nil {
- // Extract content blocks and tool calls
- for _, block := range rawBlocks {
- blockType, _ := block["type"].(string)
-
- if blockType == "tool_use" {
- // Extract tool call information
- toolCall := ToolCall{
- ID: getStringField(block, "id"),
- Name: getStringField(block, "name"),
- }
- // input field contains the arguments as a map
- if input, ok := block["input"].(map[string]interface{}); ok {
- if inputJSON, err := json.Marshal(input); err == nil {
- toolCall.Arguments = string(inputJSON)
- }
- }
- msg.ToolCalls = append(msg.ToolCalls, toolCall)
- } else if blockType == "output_text" || blockType == "input_text" {
- // Regular text content block
- msg.Content = append(msg.Content, ContentBlock{
- Type: blockType,
- Text: getStringField(block, "text"),
- })
- }
- }
- }
- }
- }
- msgs = append(msgs, msg)
- case "function_call_output":
- msgs = append(msgs, Message{
- Role: "tool",
- Content: []ContentBlock{{Type: "input_text", Text: item.Output}},
- CallID: item.CallID,
- Name: item.Name,
- })
- }
- }
- return msgs
-}
-
-// ============================================================
-// Response Types (ResponseResource)
-// ============================================================
-
-// Response is the spec-compliant ResponseResource.
-type Response struct {
- ID string `json:"id"`
- Object string `json:"object"`
- CreatedAt int64 `json:"created_at"`
- CompletedAt *int64 `json:"completed_at"`
- Status string `json:"status"`
- IncompleteDetails *IncompleteDetails `json:"incomplete_details"`
- Model string `json:"model"`
- PreviousResponseID *string `json:"previous_response_id"`
- Instructions *string `json:"instructions"`
- Output []OutputItem `json:"output"`
- Error *ResponseError `json:"error"`
- Tools json.RawMessage `json:"tools"`
- ToolChoice json.RawMessage `json:"tool_choice"`
- Truncation string `json:"truncation"`
- ParallelToolCalls bool `json:"parallel_tool_calls"`
- Text json.RawMessage `json:"text"`
- TopP float64 `json:"top_p"`
- PresencePenalty float64 `json:"presence_penalty"`
- FrequencyPenalty float64 `json:"frequency_penalty"`
- TopLogprobs int `json:"top_logprobs"`
- Temperature float64 `json:"temperature"`
- Reasoning json.RawMessage `json:"reasoning"`
- Usage *Usage `json:"usage"`
- MaxOutputTokens *int `json:"max_output_tokens"`
- MaxToolCalls *int `json:"max_tool_calls"`
- Store bool `json:"store"`
- Background bool `json:"background"`
- ServiceTier string `json:"service_tier"`
- Metadata map[string]string `json:"metadata"`
- SafetyIdentifier *string `json:"safety_identifier"`
- PromptCacheKey *string `json:"prompt_cache_key"`
-
- // Non-spec extension
- Provider string `json:"provider,omitempty"`
-}
-
-// OutputItem represents a typed item in the response output.
-type OutputItem struct {
- ID string `json:"id"`
- Type string `json:"type"`
- Status string `json:"status"`
- Role string `json:"role,omitempty"`
- Content []ContentPart `json:"content,omitempty"`
- CallID string `json:"call_id,omitempty"` // for function_call
- Name string `json:"name,omitempty"` // for function_call
- Arguments string `json:"arguments,omitempty"` // for function_call
-}
-
-// ContentPart is a content block within an output item.
-type ContentPart struct {
- Type string `json:"type"`
- Text string `json:"text"`
- Annotations []Annotation `json:"annotations"`
-}
-
-// Annotation on output text content.
-type Annotation struct {
- Type string `json:"type"`
-}
-
-// IncompleteDetails explains why a response is incomplete.
-type IncompleteDetails struct {
- Reason string `json:"reason"`
-}
-
-// ResponseError describes an error in the response.
-type ResponseError struct {
- Type string `json:"type"`
- Message string `json:"message"`
- Code *string `json:"code"`
-}
-
-// ============================================================
-// Usage Types
-// ============================================================
-
-// Usage captures token accounting with sub-details.
-type Usage struct {
- InputTokens int `json:"input_tokens"`
- OutputTokens int `json:"output_tokens"`
- TotalTokens int `json:"total_tokens"`
- InputTokensDetails InputTokensDetails `json:"input_tokens_details"`
- OutputTokensDetails OutputTokensDetails `json:"output_tokens_details"`
-}
-
-// InputTokensDetails breaks down input token usage.
-type InputTokensDetails struct {
- CachedTokens int `json:"cached_tokens"`
-}
-
-// OutputTokensDetails breaks down output token usage.
-type OutputTokensDetails struct {
- ReasoningTokens int `json:"reasoning_tokens"`
-}
-
-// ============================================================
-// Streaming Types
-// ============================================================
-
-// StreamEvent represents a single SSE event in the streaming response.
-// Fields are selectively populated based on the event Type.
-type StreamEvent struct {
- Type string `json:"type"`
- SequenceNumber int `json:"sequence_number"`
- Response *Response `json:"response,omitempty"`
- OutputIndex *int `json:"output_index,omitempty"`
- Item *OutputItem `json:"item,omitempty"`
- ItemID string `json:"item_id,omitempty"`
- ContentIndex *int `json:"content_index,omitempty"`
- Part *ContentPart `json:"part,omitempty"`
- Delta string `json:"delta,omitempty"`
- Text string `json:"text,omitempty"`
- Arguments string `json:"arguments,omitempty"` // for function_call_arguments.done
-}
-
-// ============================================================
-// Provider Result Types (internal, not exposed via HTTP)
-// ============================================================
-
-// ProviderResult is returned by Provider.Generate.
-type ProviderResult struct {
- ID string
- Model string
- Text string
- Usage Usage
- ToolCalls []ToolCall
-}
-
-// ProviderStreamDelta is sent through the stream channel.
-type ProviderStreamDelta struct {
- ID string
- Model string
- Text string
- Done bool
- Usage *Usage
- ToolCallDelta *ToolCallDelta
-}
-
-// ToolCall represents a function call from the model.
-type ToolCall struct {
- ID string
- Name string
- Arguments string // JSON string
-}
-
-// ToolCallDelta represents a streaming chunk of a tool call.
-type ToolCallDelta struct {
- Index int
- ID string
- Name string
- Arguments string
-}
-
-// ============================================================
-// Models Endpoint Types
-// ============================================================
-
-// ModelInfo describes a single model available through the gateway.
-type ModelInfo struct {
- ID string `json:"id"`
- Provider string `json:"provider"`
-}
-
-// ModelsResponse is returned by GET /v1/models.
-type ModelsResponse struct {
- Object string `json:"object"`
- Data []ModelInfo `json:"data"`
-}
-
-// ============================================================
-// Validation
-// ============================================================
-
-// Validate performs basic structural validation.
-func (r *ResponseRequest) Validate() error {
- if r == nil {
- return errors.New("request is nil")
- }
- if r.Model == "" {
- return errors.New("model is required")
- }
- if r.Input.String == nil && len(r.Input.Items) == 0 {
- return errors.New("input is required")
- }
- return nil
-}
-
-// getStringField is a helper to safely extract string fields from a map
-func getStringField(m map[string]interface{}, key string) string {
- if val, ok := m[key].(string); ok {
- return val
- }
- return ""
-}
-
-package auth
-
-import (
- "context"
- "crypto/rsa"
- "encoding/base64"
- "encoding/json"
- "fmt"
- "log/slog"
- "math/big"
- "net/http"
- "strings"
- "sync"
- "time"
-
- "github.com/golang-jwt/jwt/v5"
-)
-
-// Config holds OIDC authentication configuration.
-type Config struct {
- Enabled bool `yaml:"enabled"`
- Issuer string `yaml:"issuer"` // e.g., "https://accounts.google.com"
- Audience string `yaml:"audience"` // e.g., your client ID
-}
-
-// Middleware provides JWT validation middleware.
-type Middleware struct {
- cfg Config
- keys map[string]*rsa.PublicKey
- mu sync.RWMutex
- client *http.Client
- logger *slog.Logger
-}
-
-// New creates an authentication middleware.
-func New(cfg Config, logger *slog.Logger) (*Middleware, error) {
- if !cfg.Enabled {
- return &Middleware{cfg: cfg, logger: logger}, nil
- }
-
- if cfg.Issuer == "" {
- return nil, fmt.Errorf("auth enabled but issuer not configured")
- }
-
- m := &Middleware{
- cfg: cfg,
- keys: make(map[string]*rsa.PublicKey),
- client: &http.Client{Timeout: 10 * time.Second},
- logger: logger,
- }
-
- // Fetch JWKS on startup
- if err := m.refreshJWKS(); err != nil {
- return nil, fmt.Errorf("failed to fetch JWKS: %w", err)
- }
-
- // Refresh JWKS periodically
- go m.periodicRefresh()
-
- return m, nil
-}
-
-// Handler wraps an HTTP handler with authentication.
-func (m *Middleware) Handler(next http.Handler) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- if !m.cfg.Enabled {
- next.ServeHTTP(w, r)
- return
- }
-
- // Extract token from Authorization header
- authHeader := r.Header.Get("Authorization")
- if authHeader == "" {
- http.Error(w, "missing authorization header", http.StatusUnauthorized)
- return
- }
-
- parts := strings.SplitN(authHeader, " ", 2)
- if len(parts) != 2 || strings.ToLower(parts[0]) != "bearer" {
- http.Error(w, "invalid authorization header format", http.StatusUnauthorized)
- return
- }
-
- tokenString := parts[1]
-
- // Validate token
- claims, err := m.validateToken(tokenString)
- if err != nil {
- http.Error(w, fmt.Sprintf("invalid token: %v", err), http.StatusUnauthorized)
- return
- }
-
- // Add claims to context
- ctx := context.WithValue(r.Context(), claimsKey, claims)
- next.ServeHTTP(w, r.WithContext(ctx))
- })
-}
-
-type contextKey string
-
-const claimsKey contextKey = "jwt_claims"
-
-// GetClaims extracts JWT claims from request context.
-func GetClaims(ctx context.Context) (jwt.MapClaims, bool) {
- claims, ok := ctx.Value(claimsKey).(jwt.MapClaims)
- return claims, ok
-}
-
-func (m *Middleware) validateToken(tokenString string) (jwt.MapClaims, error) {
- token, err := jwt.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
- // Verify signing method
- if _, ok := token.Method.(*jwt.SigningMethodRSA); !ok {
- return nil, fmt.Errorf("unexpected signing method: %v", token.Header["alg"])
- }
-
- // Get key ID from token header
- kid, ok := token.Header["kid"].(string)
- if !ok {
- return nil, fmt.Errorf("missing kid in token header")
- }
-
- // Get public key
- m.mu.RLock()
- key, exists := m.keys[kid]
- m.mu.RUnlock()
-
- if !exists {
- // Try refreshing JWKS
- if err := m.refreshJWKS(); err != nil {
- return nil, fmt.Errorf("failed to refresh JWKS: %w", err)
- }
-
- m.mu.RLock()
- key, exists = m.keys[kid]
- m.mu.RUnlock()
-
- if !exists {
- return nil, fmt.Errorf("unknown key ID: %s", kid)
- }
- }
-
- return key, nil
- })
-
- if err != nil {
- return nil, err
- }
-
- claims, ok := token.Claims.(jwt.MapClaims)
- if !ok || !token.Valid {
- return nil, fmt.Errorf("invalid token claims")
- }
-
- // Validate issuer
- if iss, ok := claims["iss"].(string); !ok || iss != m.cfg.Issuer {
- return nil, fmt.Errorf("invalid issuer: %q", iss)
- }
-
- // Validate audience if configured
- if m.cfg.Audience != "" {
- aud, ok := claims["aud"].(string)
- if !ok {
- // aud might be an array
- audArray, ok := claims["aud"].([]interface{})
- if !ok {
- return nil, fmt.Errorf("invalid audience format")
- }
- found := false
- for _, a := range audArray {
- if audStr, ok := a.(string); ok && audStr == m.cfg.Audience {
- found = true
- break
- }
- }
- if !found {
- return nil, fmt.Errorf("audience not matched")
- }
- } else if aud != m.cfg.Audience {
- return nil, fmt.Errorf("invalid audience: %s", aud)
- }
- }
-
- return claims, nil
-}
-
-func (m *Middleware) refreshJWKS() error {
- discoveryURL := strings.TrimSuffix(m.cfg.Issuer, "/") + "/.well-known/openid-configuration"
-
- // Fetch the OIDC discovery document to locate the JWKS endpoint
- resp, err := m.client.Get(discoveryURL)
- if err != nil {
- return err
- }
- defer resp.Body.Close()
-
- var oidcConfig struct {
- JwksURI string `json:"jwks_uri"`
- }
- if err := json.NewDecoder(resp.Body).Decode(&oidcConfig); err != nil {
- return err
- }
-
- // Fetch JWKS
- resp, err = m.client.Get(oidcConfig.JwksURI)
- if err != nil {
- return err
- }
- defer resp.Body.Close()
-
- var jwks struct {
- Keys []struct {
- Kid string `json:"kid"`
- Kty string `json:"kty"`
- Use string `json:"use"`
- N string `json:"n"`
- E string `json:"e"`
- } `json:"keys"`
- }
-
- if err := json.NewDecoder(resp.Body).Decode(&jwks); err != nil {
- return err
- }
-
- // Parse keys
- newKeys := make(map[string]*rsa.PublicKey)
- for _, key := range jwks.Keys {
- if key.Kty != "RSA" || key.Use != "sig" {
- continue
- }
-
- nBytes, err := base64.RawURLEncoding.DecodeString(key.N)
- if err != nil {
- continue
- }
-
- eBytes, err := base64.RawURLEncoding.DecodeString(key.E)
- if err != nil {
- continue
- }
-
- pubKey := &rsa.PublicKey{
- N: new(big.Int).SetBytes(nBytes),
- E: int(new(big.Int).SetBytes(eBytes).Int64()),
- }
-
- newKeys[key.Kid] = pubKey
- }
-
- m.mu.Lock()
- m.keys = newKeys
- m.mu.Unlock()
-
- return nil
-}
-
-func (m *Middleware) periodicRefresh() {
- ticker := time.NewTicker(1 * time.Hour)
- defer ticker.Stop()
-
- for range ticker.C {
- if err := m.refreshJWKS(); err != nil {
- m.logger.Error("failed to refresh JWKS",
- slog.String("issuer", m.cfg.Issuer),
- slog.String("error", err.Error()),
- )
- } else {
- m.logger.Debug("successfully refreshed JWKS",
- slog.String("issuer", m.cfg.Issuer),
- )
- }
- }
-}
-
-package config
-
-import (
- "fmt"
- "os"
-
- "gopkg.in/yaml.v3"
-)
-
-// Config describes the full gateway configuration file.
-type Config struct {
- Server ServerConfig `yaml:"server"`
- Providers map[string]ProviderEntry `yaml:"providers"`
- Models []ModelEntry `yaml:"models"`
- Auth AuthConfig `yaml:"auth"`
- Conversations ConversationConfig `yaml:"conversations"`
- Logging LoggingConfig `yaml:"logging"`
- RateLimit RateLimitConfig `yaml:"rate_limit"`
- Observability ObservabilityConfig `yaml:"observability"`
-}
-
-// ConversationConfig controls conversation storage.
-type ConversationConfig struct {
- // Store is the storage backend: "memory" (default), "sql", or "redis".
- Store string `yaml:"store"`
- // TTL is the conversation expiration duration (e.g. "1h", "30m"). Defaults to "1h".
- TTL string `yaml:"ttl"`
- // DSN is the database/Redis connection string, required when store is "sql" or "redis".
- // Examples: "conversations.db" (SQLite), "postgres://user:pass@host/db", "redis://:password@localhost:6379/0".
- DSN string `yaml:"dsn"`
- // Driver is the SQL driver name, required when store is "sql".
- // Examples: "sqlite3", "postgres", "mysql".
- Driver string `yaml:"driver"`
-}
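Putting the field comments above together, a `conversations` section of the config file might look like this sketch (values illustrative, using the SQLite example from the comments):

```yaml
conversations:
  store: sql            # "memory" (default), "sql", or "redis"
  ttl: 1h               # conversation expiration duration
  driver: sqlite3       # SQL driver, required when store is "sql"
  dsn: conversations.db # connection string for "sql" or "redis"
```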
-
-// LoggingConfig controls logging format and level.
-type LoggingConfig struct {
- // Format is the log output format: "json" (default) or "text".
- Format string `yaml:"format"`
- // Level is the minimum log level: "debug", "info" (default), "warn", or "error".
- Level string `yaml:"level"`
-}
-
-// RateLimitConfig controls rate limiting behavior.
-type RateLimitConfig struct {
- // Enabled controls whether rate limiting is active.
- Enabled bool `yaml:"enabled"`
- // RequestsPerSecond is the number of requests allowed per second per IP.
- RequestsPerSecond float64 `yaml:"requests_per_second"`
- // Burst is the maximum burst size allowed.
- Burst int `yaml:"burst"`
-}
-
-// ObservabilityConfig controls observability features.
-type ObservabilityConfig struct {
- Enabled bool `yaml:"enabled"`
- Metrics MetricsConfig `yaml:"metrics"`
- Tracing TracingConfig `yaml:"tracing"`
-}
-
-// MetricsConfig controls Prometheus metrics.
-type MetricsConfig struct {
- Enabled bool `yaml:"enabled"`
- Path string `yaml:"path"` // default: "/metrics"
-}
-
-// TracingConfig controls OpenTelemetry tracing.
-type TracingConfig struct {
- Enabled bool `yaml:"enabled"`
- ServiceName string `yaml:"service_name"` // default: "llm-gateway"
- Sampler SamplerConfig `yaml:"sampler"`
- Exporter ExporterConfig `yaml:"exporter"`
-}
-
-// SamplerConfig controls trace sampling.
-type SamplerConfig struct {
- Type string `yaml:"type"` // "always", "never", "probability"
- Rate float64 `yaml:"rate"` // 0.0 to 1.0
-}
-
-// ExporterConfig controls trace exporters.
-type ExporterConfig struct {
- Type string `yaml:"type"` // "otlp", "stdout"
- Endpoint string `yaml:"endpoint"`
- Insecure bool `yaml:"insecure"`
- Headers map[string]string `yaml:"headers"`
-}
-
-// AuthConfig holds OIDC authentication settings.
-type AuthConfig struct {
- Enabled bool `yaml:"enabled"`
- Issuer string `yaml:"issuer"`
- Audience string `yaml:"audience"`
-}
-
-// ServerConfig controls HTTP server values.
-type ServerConfig struct {
- Address string `yaml:"address"`
- MaxRequestBodySize int64 `yaml:"max_request_body_size"` // Maximum request body size in bytes (default: 10MB)
-}
-
-// ProviderEntry defines a named provider instance in the config file.
-type ProviderEntry struct {
- Type string `yaml:"type"`
- APIKey string `yaml:"api_key"`
- Endpoint string `yaml:"endpoint"`
- APIVersion string `yaml:"api_version"`
- Project string `yaml:"project"` // For Vertex AI
- Location string `yaml:"location"` // For Vertex AI
-}
-
-// ModelEntry maps a model name to a provider entry.
-type ModelEntry struct {
- Name string `yaml:"name"`
- Provider string `yaml:"provider"`
- ProviderModelID string `yaml:"provider_model_id"`
-}
-
-// ProviderConfig contains shared provider configuration fields used internally by providers.
-type ProviderConfig struct {
- APIKey string `yaml:"api_key"`
- Model string `yaml:"model"`
- Endpoint string `yaml:"endpoint"`
-}
-
-// AzureOpenAIConfig contains Azure-specific settings used internally by the OpenAI provider.
-type AzureOpenAIConfig struct {
- APIKey string `yaml:"api_key"`
- Endpoint string `yaml:"endpoint"`
- APIVersion string `yaml:"api_version"`
-}
-
-// AzureAnthropicConfig contains Azure-specific settings for Anthropic used internally.
-type AzureAnthropicConfig struct {
- APIKey string `yaml:"api_key"`
- Endpoint string `yaml:"endpoint"`
- Model string `yaml:"model"`
-}
-
-// VertexAIConfig contains Vertex AI-specific settings used internally by the Google provider.
-type VertexAIConfig struct {
- Project string `yaml:"project"`
- Location string `yaml:"location"`
-}
-
-// Load reads and parses a YAML configuration file, expanding $VAR and ${VAR} environment references.
-func Load(path string) (*Config, error) {
- data, err := os.ReadFile(path)
- if err != nil {
- return nil, fmt.Errorf("read config: %w", err)
- }
-
- expanded := os.Expand(string(data), os.Getenv)
-
- var cfg Config
- if err := yaml.Unmarshal([]byte(expanded), &cfg); err != nil {
- return nil, fmt.Errorf("parse config: %w", err)
- }
-
- if err := cfg.validate(); err != nil {
- return nil, err
- }
-
- return &cfg, nil
-}
-
-func (cfg *Config) validate() error {
- for _, m := range cfg.Models {
- if _, ok := cfg.Providers[m.Provider]; !ok {
- return fmt.Errorf("model %q references unknown provider %q", m.Name, m.Provider)
- }
- }
- return nil
-}
-
-
-
-package conversation
-
-import (
- "context"
- "sync"
- "time"
-
- "github.com/ajac-zero/latticelm/internal/api"
-)
-
-// Store defines the interface for conversation storage backends.
-type Store interface {
- Get(ctx context.Context, id string) (*Conversation, error)
- Create(ctx context.Context, id string, model string, messages []api.Message) (*Conversation, error)
- Append(ctx context.Context, id string, messages ...api.Message) (*Conversation, error)
- Delete(ctx context.Context, id string) error
- Size() int
- Close() error
-}
-
-// MemoryStore manages conversation history in-memory with automatic expiration.
-type MemoryStore struct {
- conversations map[string]*Conversation
- mu sync.RWMutex
- ttl time.Duration
- done chan struct{}
-}
-
-// Conversation holds the message history for a single conversation thread.
-type Conversation struct {
- ID string
- Messages []api.Message
- Model string
- CreatedAt time.Time
- UpdatedAt time.Time
-}
-
-// NewMemoryStore creates an in-memory conversation store with the given TTL.
-func NewMemoryStore(ttl time.Duration) *MemoryStore {
- s := &MemoryStore{
- conversations: make(map[string]*Conversation),
- ttl: ttl,
- done: make(chan struct{}),
- }
-
- // Start cleanup goroutine if TTL is set
- if ttl > 0 {
- go s.cleanup()
- }
-
- return s
-}
-
-// Get retrieves a conversation by ID. Returns a deep copy to prevent data races.
-func (s *MemoryStore) Get(ctx context.Context, id string) (*Conversation, error) {
- s.mu.RLock()
- defer s.mu.RUnlock()
-
- conv, ok := s.conversations[id]
- if !ok {
- return nil, nil
- }
-
- // Return a deep copy to prevent data races
- msgsCopy := make([]api.Message, len(conv.Messages))
- copy(msgsCopy, conv.Messages)
-
- return &Conversation{
- ID: conv.ID,
- Messages: msgsCopy,
- Model: conv.Model,
- CreatedAt: conv.CreatedAt,
- UpdatedAt: conv.UpdatedAt,
- }, nil
-}
-
-// Create creates a new conversation with the given messages.
-func (s *MemoryStore) Create(ctx context.Context, id string, model string, messages []api.Message) (*Conversation, error) {
- s.mu.Lock()
- defer s.mu.Unlock()
-
- now := time.Now()
-
- // Store a copy to prevent external modifications
- msgsCopy := make([]api.Message, len(messages))
- copy(msgsCopy, messages)
-
- conv := &Conversation{
- ID: id,
- Messages: msgsCopy,
- Model: model,
- CreatedAt: now,
- UpdatedAt: now,
- }
-
- s.conversations[id] = conv
-
-	// Return a conversation backed by the caller's own slice; the store holds
-	// its private copy above, so callers cannot mutate internal state
- return &Conversation{
- ID: id,
- Messages: messages,
- Model: model,
- CreatedAt: now,
- UpdatedAt: now,
- }, nil
-}
-
-// Append adds new messages to an existing conversation.
-func (s *MemoryStore) Append(ctx context.Context, id string, messages ...api.Message) (*Conversation, error) {
- s.mu.Lock()
- defer s.mu.Unlock()
-
- conv, ok := s.conversations[id]
- if !ok {
- return nil, nil
- }
-
- conv.Messages = append(conv.Messages, messages...)
- conv.UpdatedAt = time.Now()
-
- // Return a deep copy
- msgsCopy := make([]api.Message, len(conv.Messages))
- copy(msgsCopy, conv.Messages)
-
- return &Conversation{
- ID: conv.ID,
- Messages: msgsCopy,
- Model: conv.Model,
- CreatedAt: conv.CreatedAt,
- UpdatedAt: conv.UpdatedAt,
- }, nil
-}
-
-// Delete removes a conversation from the store.
-func (s *MemoryStore) Delete(ctx context.Context, id string) error {
- s.mu.Lock()
- defer s.mu.Unlock()
-
- delete(s.conversations, id)
- return nil
-}
-
-// cleanup periodically removes expired conversations.
-func (s *MemoryStore) cleanup() {
- ticker := time.NewTicker(1 * time.Minute)
- defer ticker.Stop()
-
- for {
- select {
- case <-ticker.C:
- s.mu.Lock()
- now := time.Now()
- for id, conv := range s.conversations {
- if now.Sub(conv.UpdatedAt) > s.ttl {
- delete(s.conversations, id)
- }
- }
- s.mu.Unlock()
- case <-s.done:
- return
- }
- }
-}
-
-// Size returns the number of active conversations.
-func (s *MemoryStore) Size() int {
- s.mu.RLock()
- defer s.mu.RUnlock()
- return len(s.conversations)
-}
-
-// Close stops the cleanup goroutine and releases resources.
-func (s *MemoryStore) Close() error {
- close(s.done)
- return nil
-}
-
-
-
-package conversation
-
-import (
- "context"
- "encoding/json"
- "time"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/redis/go-redis/v9"
-)
-
-// RedisStore manages conversation history in Redis with automatic expiration.
-type RedisStore struct {
- client *redis.Client
- ttl time.Duration
-}
-
-// NewRedisStore creates a Redis-backed conversation store.
-func NewRedisStore(client *redis.Client, ttl time.Duration) *RedisStore {
- return &RedisStore{
- client: client,
- ttl: ttl,
- }
-}
-
-// key returns the Redis key for a conversation ID.
-func (s *RedisStore) key(id string) string {
- return "conv:" + id
-}
-
-// Get retrieves a conversation by ID from Redis.
-func (s *RedisStore) Get(ctx context.Context, id string) (*Conversation, error) {
- data, err := s.client.Get(ctx, s.key(id)).Bytes()
- if err == redis.Nil {
- return nil, nil
- }
- if err != nil {
- return nil, err
- }
-
- var conv Conversation
- if err := json.Unmarshal(data, &conv); err != nil {
- return nil, err
- }
-
- return &conv, nil
-}
-
-// Create creates a new conversation with the given messages.
-func (s *RedisStore) Create(ctx context.Context, id string, model string, messages []api.Message) (*Conversation, error) {
- now := time.Now()
- conv := &Conversation{
- ID: id,
- Messages: messages,
- Model: model,
- CreatedAt: now,
- UpdatedAt: now,
- }
-
- data, err := json.Marshal(conv)
- if err != nil {
- return nil, err
- }
-
- if err := s.client.Set(ctx, s.key(id), data, s.ttl).Err(); err != nil {
- return nil, err
- }
-
- return conv, nil
-}
-
-// Append adds new messages to an existing conversation.
-// Note: the read-modify-write below is not atomic; concurrent appends to the
-// same conversation can lose messages. Use WATCH/MULTI or a Lua script if
-// that matters for your deployment.
-func (s *RedisStore) Append(ctx context.Context, id string, messages ...api.Message) (*Conversation, error) {
- conv, err := s.Get(ctx, id)
- if err != nil {
- return nil, err
- }
- if conv == nil {
- return nil, nil
- }
-
- conv.Messages = append(conv.Messages, messages...)
- conv.UpdatedAt = time.Now()
-
- data, err := json.Marshal(conv)
- if err != nil {
- return nil, err
- }
-
- if err := s.client.Set(ctx, s.key(id), data, s.ttl).Err(); err != nil {
- return nil, err
- }
-
- return conv, nil
-}
-
-// Delete removes a conversation from Redis.
-func (s *RedisStore) Delete(ctx context.Context, id string) error {
- return s.client.Del(ctx, s.key(id)).Err()
-}
-
-// Size returns the number of active conversations in Redis.
-func (s *RedisStore) Size() int {
- var count int
- var cursor uint64
- ctx := context.Background()
-
- for {
- keys, nextCursor, err := s.client.Scan(ctx, cursor, "conv:*", 100).Result()
- if err != nil {
- return 0
- }
-
- count += len(keys)
- cursor = nextCursor
-
- if cursor == 0 {
- break
- }
- }
-
- return count
-}
-
-// Close closes the Redis client connection.
-func (s *RedisStore) Close() error {
- return s.client.Close()
-}
-
-
-
-package conversation
-
-import (
- "context"
- "database/sql"
- "encoding/json"
- "time"
-
- "github.com/ajac-zero/latticelm/internal/api"
-)
-
-// sqlDialect holds driver-specific SQL statements.
-type sqlDialect struct {
- getByID string
- upsert string
- update string
- deleteByID string
- cleanup string
-}
-
-func newDialect(driver string) sqlDialect {
- if driver == "pgx" || driver == "postgres" {
- return sqlDialect{
- getByID: `SELECT id, model, messages, created_at, updated_at FROM conversations WHERE id = $1`,
- upsert: `INSERT INTO conversations (id, model, messages, created_at, updated_at) VALUES ($1, $2, $3, $4, $5) ON CONFLICT (id) DO UPDATE SET model = EXCLUDED.model, messages = EXCLUDED.messages, updated_at = EXCLUDED.updated_at`,
- update: `UPDATE conversations SET messages = $1, updated_at = $2 WHERE id = $3`,
- deleteByID: `DELETE FROM conversations WHERE id = $1`,
- cleanup: `DELETE FROM conversations WHERE updated_at < $1`,
- }
- }
- return sqlDialect{
- getByID: `SELECT id, model, messages, created_at, updated_at FROM conversations WHERE id = ?`,
- upsert: `REPLACE INTO conversations (id, model, messages, created_at, updated_at) VALUES (?, ?, ?, ?, ?)`,
- update: `UPDATE conversations SET messages = ?, updated_at = ? WHERE id = ?`,
- deleteByID: `DELETE FROM conversations WHERE id = ?`,
- cleanup: `DELETE FROM conversations WHERE updated_at < ?`,
- }
-}
-
-// SQLStore manages conversation history in a SQL database with automatic expiration.
-type SQLStore struct {
- db *sql.DB
- ttl time.Duration
- dialect sqlDialect
- done chan struct{}
-}
-
-// NewSQLStore creates a SQL-backed conversation store. It creates the
-// conversations table if it does not already exist and starts a background
-// goroutine to remove expired rows.
-func NewSQLStore(db *sql.DB, driver string, ttl time.Duration) (*SQLStore, error) {
- _, err := db.Exec(`CREATE TABLE IF NOT EXISTS conversations (
- id TEXT PRIMARY KEY,
- model TEXT NOT NULL,
- messages TEXT NOT NULL,
- created_at TIMESTAMP NOT NULL,
- updated_at TIMESTAMP NOT NULL
- )`)
- if err != nil {
- return nil, err
- }
-
- s := &SQLStore{
- db: db,
- ttl: ttl,
- dialect: newDialect(driver),
- done: make(chan struct{}),
- }
- if ttl > 0 {
- go s.cleanup()
- }
- return s, nil
-}
-
-// Get retrieves a conversation by ID, or returns nil if it does not exist.
-func (s *SQLStore) Get(ctx context.Context, id string) (*Conversation, error) {
- row := s.db.QueryRowContext(ctx, s.dialect.getByID, id)
-
- var conv Conversation
- var msgJSON string
- err := row.Scan(&conv.ID, &conv.Model, &msgJSON, &conv.CreatedAt, &conv.UpdatedAt)
- if err == sql.ErrNoRows {
- return nil, nil
- }
- if err != nil {
- return nil, err
- }
-
- if err := json.Unmarshal([]byte(msgJSON), &conv.Messages); err != nil {
- return nil, err
- }
-
- return &conv, nil
-}
-
-// Create inserts a conversation, replacing any existing row with the same ID.
-func (s *SQLStore) Create(ctx context.Context, id string, model string, messages []api.Message) (*Conversation, error) {
- now := time.Now()
- msgJSON, err := json.Marshal(messages)
- if err != nil {
- return nil, err
- }
-
- if _, err := s.db.ExecContext(ctx, s.dialect.upsert, id, model, string(msgJSON), now, now); err != nil {
- return nil, err
- }
-
- return &Conversation{
- ID: id,
- Messages: messages,
- Model: model,
- CreatedAt: now,
- UpdatedAt: now,
- }, nil
-}
-
-// Append adds new messages to an existing conversation. The read-modify-write
-// below is not transactional; concurrent appends to the same conversation can
-// lose messages.
-func (s *SQLStore) Append(ctx context.Context, id string, messages ...api.Message) (*Conversation, error) {
- conv, err := s.Get(ctx, id)
- if err != nil {
- return nil, err
- }
- if conv == nil {
- return nil, nil
- }
-
- conv.Messages = append(conv.Messages, messages...)
- conv.UpdatedAt = time.Now()
-
- msgJSON, err := json.Marshal(conv.Messages)
- if err != nil {
- return nil, err
- }
-
- if _, err := s.db.ExecContext(ctx, s.dialect.update, string(msgJSON), conv.UpdatedAt, id); err != nil {
- return nil, err
- }
-
- return conv, nil
-}
-
-// Delete removes a conversation from the database.
-func (s *SQLStore) Delete(ctx context.Context, id string) error {
- _, err := s.db.ExecContext(ctx, s.dialect.deleteByID, id)
- return err
-}
-
-// Size returns the number of stored conversations, or 0 on query error.
-func (s *SQLStore) Size() int {
- var count int
- _ = s.db.QueryRow(`SELECT COUNT(*) FROM conversations`).Scan(&count)
- return count
-}
-
-func (s *SQLStore) cleanup() {
- ticker := time.NewTicker(1 * time.Minute)
- defer ticker.Stop()
-
- for {
- select {
- case <-ticker.C:
- cutoff := time.Now().Add(-s.ttl)
- _, _ = s.db.Exec(s.dialect.cleanup, cutoff)
- case <-s.done:
- return
- }
- }
-}
-
-// Close stops the cleanup goroutine and closes the database connection.
-func (s *SQLStore) Close() error {
- close(s.done)
- return s.db.Close()
-}
-
-
-
-package conversation
-
-import (
- "context"
- "database/sql"
- "fmt"
- "testing"
- "time"
-
- "github.com/alicebob/miniredis/v2"
- _ "github.com/mattn/go-sqlite3"
- "github.com/redis/go-redis/v9"
-
- "github.com/ajac-zero/latticelm/internal/api"
-)
-
-// SetupTestDB creates an in-memory SQLite database for testing
-func SetupTestDB(t *testing.T, driver string) *sql.DB {
- t.Helper()
-
- var dsn string
- switch driver {
- case "sqlite3":
- // Use in-memory SQLite database
- dsn = ":memory:"
- case "postgres":
- // For postgres tests, use a mock or skip
- t.Skip("PostgreSQL tests require external database")
- return nil
- case "mysql":
- // For mysql tests, use a mock or skip
- t.Skip("MySQL tests require external database")
- return nil
- default:
- t.Fatalf("unsupported driver: %s", driver)
- return nil
- }
-
-	db, err := sql.Open(driver, dsn)
-	if err != nil {
-		t.Fatalf("failed to open database: %v", err)
-	}
-
-	// Each new connection to ":memory:" gets its own fresh database, so limit
-	// the pool to a single connection to keep every query on the same database
-	db.SetMaxOpenConns(1)
-
-	// Create the conversations table with the same schema SQLStore uses, so
-	// store queries run against matching columns
-	schema := `
-	CREATE TABLE IF NOT EXISTS conversations (
-		id TEXT PRIMARY KEY,
-		model TEXT NOT NULL,
-		messages TEXT NOT NULL,
-		created_at TIMESTAMP NOT NULL,
-		updated_at TIMESTAMP NOT NULL
-	)
-	`
- if _, err := db.Exec(schema); err != nil {
- db.Close()
- t.Fatalf("failed to create schema: %v", err)
- }
-
- return db
-}
-
-// SetupTestRedis creates a miniredis instance for testing
-func SetupTestRedis(t *testing.T) (*redis.Client, *miniredis.Miniredis) {
- t.Helper()
-
- mr := miniredis.RunT(t)
-
- client := redis.NewClient(&redis.Options{
- Addr: mr.Addr(),
- })
-
- // Test connection
- ctx := context.Background()
- if err := client.Ping(ctx).Err(); err != nil {
- t.Fatalf("failed to connect to miniredis: %v", err)
- }
-
- return client, mr
-}
-
-// CreateTestMessages generates test message fixtures
-func CreateTestMessages(count int) []api.Message {
- messages := make([]api.Message, count)
- for i := 0; i < count; i++ {
- role := "user"
- if i%2 == 1 {
- role = "assistant"
- }
- messages[i] = api.Message{
- Role: role,
- Content: []api.ContentBlock{
- {
- Type: "text",
- Text: fmt.Sprintf("Test message %d", i+1),
- },
- },
- }
- }
- return messages
-}
-
-// CreateTestConversation creates a test conversation with the given ID and messages
-func CreateTestConversation(conversationID string, messageCount int) *Conversation {
- return &Conversation{
- ID: conversationID,
- Messages: CreateTestMessages(messageCount),
- Model: "test-model",
- CreatedAt: time.Now(),
- UpdatedAt: time.Now(),
- }
-}
-
-// MockStore is a simple in-memory store for testing
-type MockStore struct {
- conversations map[string]*Conversation
- getCalled bool
- createCalled bool
- appendCalled bool
- deleteCalled bool
- sizeCalled bool
-}
-
-func NewMockStore() *MockStore {
- return &MockStore{
- conversations: make(map[string]*Conversation),
- }
-}
-
-func (m *MockStore) Get(ctx context.Context, conversationID string) (*Conversation, error) {
- m.getCalled = true
- conv, ok := m.conversations[conversationID]
- if !ok {
- return nil, fmt.Errorf("conversation not found")
- }
- return conv, nil
-}
-
-func (m *MockStore) Create(ctx context.Context, conversationID string, model string, messages []api.Message) (*Conversation, error) {
- m.createCalled = true
- m.conversations[conversationID] = &Conversation{
- ID: conversationID,
- Model: model,
- Messages: messages,
- CreatedAt: time.Now(),
- UpdatedAt: time.Now(),
- }
- return m.conversations[conversationID], nil
-}
-
-func (m *MockStore) Append(ctx context.Context, conversationID string, messages ...api.Message) (*Conversation, error) {
- m.appendCalled = true
- conv, ok := m.conversations[conversationID]
- if !ok {
- return nil, fmt.Errorf("conversation not found")
- }
- conv.Messages = append(conv.Messages, messages...)
- conv.UpdatedAt = time.Now()
- return conv, nil
-}
-
-func (m *MockStore) Delete(ctx context.Context, conversationID string) error {
- m.deleteCalled = true
- delete(m.conversations, conversationID)
- return nil
-}
-
-func (m *MockStore) Size() int {
- m.sizeCalled = true
- return len(m.conversations)
-}
-
-func (m *MockStore) Close() error {
- return nil
-}
-
-
-
-package logger
-
-import (
- "context"
- "log/slog"
- "os"
-
- "go.opentelemetry.io/otel/trace"
-)
-
-type contextKey string
-
-const requestIDKey contextKey = "request_id"
-
-// New creates a logger with the specified format (json or text) and level.
-func New(format string, level string) *slog.Logger {
- var handler slog.Handler
-
- logLevel := parseLevel(level)
- opts := &slog.HandlerOptions{
- Level: logLevel,
- AddSource: true, // Add file:line info for debugging
- }
-
- if format == "json" {
- handler = slog.NewJSONHandler(os.Stdout, opts)
- } else {
- handler = slog.NewTextHandler(os.Stdout, opts)
- }
-
- return slog.New(handler)
-}
-
-// parseLevel converts a string level to slog.Level.
-func parseLevel(level string) slog.Level {
- switch level {
- case "debug":
- return slog.LevelDebug
- case "info":
- return slog.LevelInfo
- case "warn":
- return slog.LevelWarn
- case "error":
- return slog.LevelError
- default:
- return slog.LevelInfo
- }
-}
-
-// WithRequestID adds a request ID to the context for tracing.
-func WithRequestID(ctx context.Context, requestID string) context.Context {
- return context.WithValue(ctx, requestIDKey, requestID)
-}
-
-// FromContext extracts the request ID from context, or returns empty string.
-func FromContext(ctx context.Context) string {
- if id, ok := ctx.Value(requestIDKey).(string); ok {
- return id
- }
- return ""
-}
-
-// LogAttrsWithTrace adds trace context to log attributes for correlation.
-func LogAttrsWithTrace(ctx context.Context, attrs ...any) []any {
- spanCtx := trace.SpanFromContext(ctx).SpanContext()
- if spanCtx.IsValid() {
- attrs = append(attrs,
- slog.String("trace_id", spanCtx.TraceID().String()),
- slog.String("span_id", spanCtx.SpanID().String()),
- )
- }
- return attrs
-}
-
-
-
-package observability
-
-import (
-	"sync"
-
-	"github.com/ajac-zero/latticelm/internal/conversation"
-	"github.com/ajac-zero/latticelm/internal/providers"
-	"github.com/prometheus/client_golang/prometheus"
-	sdktrace "go.opentelemetry.io/otel/sdk/trace"
-)
-
-// ProviderRegistry defines the interface for provider registries.
-// This matches the interface expected by the server.
-type ProviderRegistry interface {
-	Get(name string) (providers.Provider, bool)
-	Models() []struct{ Provider, Model string }
-	ResolveModelID(model string) string
-	Default(model string) (providers.Provider, error)
-}
-
-// WrapProviderRegistry wraps all providers in a registry with observability.
-func WrapProviderRegistry(registry ProviderRegistry, metricsRegistry *prometheus.Registry, tp *sdktrace.TracerProvider) ProviderRegistry {
-	if registry == nil {
-		return nil
-	}
-
-	// The base registry's internal map cannot be modified directly, so return
-	// a wrapping registry that instruments providers lazily as they are
-	// retrieved.
-	return &InstrumentedRegistry{
-		base:             registry,
-		metrics:          metricsRegistry,
-		tracer:           tp,
-		wrappedProviders: make(map[string]providers.Provider),
-	}
-}
-
-// InstrumentedRegistry wraps a provider registry to return instrumented providers.
-type InstrumentedRegistry struct {
-	base    ProviderRegistry
-	metrics *prometheus.Registry
-	tracer  *sdktrace.TracerProvider
-
-	mu               sync.Mutex // guards wrappedProviders; Get/Default may run concurrently
-	wrappedProviders map[string]providers.Provider
-}
-
-// Get returns an instrumented provider by entry name.
-func (r *InstrumentedRegistry) Get(name string) (providers.Provider, bool) {
-	r.mu.Lock()
-	defer r.mu.Unlock()
-
-	// Reuse an already-wrapped provider if we have one
-	if wrapped, ok := r.wrappedProviders[name]; ok {
-		return wrapped, true
-	}
-
-	// Get the base provider
-	p, ok := r.base.Get(name)
-	if !ok {
-		return nil, false
-	}
-
-	// Wrap it and cache the wrapper
-	wrapped := NewInstrumentedProvider(p, r.metrics, r.tracer)
-	r.wrappedProviders[name] = wrapped
-	return wrapped, true
-}
-
-// Default returns the instrumented provider for the given model name.
-func (r *InstrumentedRegistry) Default(model string) (providers.Provider, error) {
-	p, err := r.base.Default(model)
-	if err != nil {
-		return nil, err
-	}
-
-	name := p.Name()
-	r.mu.Lock()
-	defer r.mu.Unlock()
-
-	// Reuse an already-wrapped provider if we have one
-	if wrapped, ok := r.wrappedProviders[name]; ok {
-		return wrapped, nil
-	}
-
-	// Wrap it and cache the wrapper
-	wrapped := NewInstrumentedProvider(p, r.metrics, r.tracer)
-	r.wrappedProviders[name] = wrapped
-	return wrapped, nil
-}
-
-// Models returns the list of configured models and their provider entry names.
-func (r *InstrumentedRegistry) Models() []struct{ Provider, Model string } {
- return r.base.Models()
-}
-
-// ResolveModelID returns the provider_model_id for a model.
-func (r *InstrumentedRegistry) ResolveModelID(model string) string {
- return r.base.ResolveModelID(model)
-}
-
-// WrapConversationStore wraps a conversation store with observability.
-func WrapConversationStore(store conversation.Store, backend string, metricsRegistry *prometheus.Registry, tp *sdktrace.TracerProvider) conversation.Store {
- if store == nil {
- return nil
- }
-
- return NewInstrumentedStore(store, backend, metricsRegistry, tp)
-}
-
-
-
-package observability
-
-import (
- "github.com/prometheus/client_golang/prometheus"
-)
-
-var (
- // HTTP Metrics
- httpRequestsTotal = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "http_requests_total",
- Help: "Total number of HTTP requests",
- },
- []string{"method", "path", "status"},
- )
-
- httpRequestDuration = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "http_request_duration_seconds",
- Help: "HTTP request latency in seconds",
- Buckets: []float64{0.01, 0.05, 0.1, 0.5, 1, 2.5, 5, 10, 30},
- },
- []string{"method", "path", "status"},
- )
-
- httpRequestSize = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "http_request_size_bytes",
- Help: "HTTP request size in bytes",
- Buckets: prometheus.ExponentialBuckets(100, 10, 7), // 100B to 100MB
- },
- []string{"method", "path"},
- )
-
- httpResponseSize = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "http_response_size_bytes",
- Help: "HTTP response size in bytes",
- Buckets: prometheus.ExponentialBuckets(100, 10, 7), // 100B to 100MB
- },
- []string{"method", "path"},
- )
-
- // Provider Metrics
- providerRequestsTotal = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "provider_requests_total",
- Help: "Total number of provider requests",
- },
- []string{"provider", "model", "operation", "status"},
- )
-
- providerRequestDuration = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "provider_request_duration_seconds",
- Help: "Provider request latency in seconds",
- Buckets: []float64{0.1, 0.5, 1, 2, 5, 10, 20, 30, 60},
- },
- []string{"provider", "model", "operation"},
- )
-
- providerTokensTotal = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "provider_tokens_total",
- Help: "Total number of tokens processed",
- },
- []string{"provider", "model", "type"}, // type: input, output
- )
-
- providerStreamTTFB = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "provider_stream_ttfb_seconds",
- Help: "Time to first byte for streaming requests in seconds",
- Buckets: []float64{0.05, 0.1, 0.5, 1, 2, 5, 10},
- },
- []string{"provider", "model"},
- )
-
- providerStreamChunks = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "provider_stream_chunks_total",
- Help: "Total number of stream chunks received",
- },
- []string{"provider", "model"},
- )
-
- providerStreamDuration = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "provider_stream_duration_seconds",
- Help: "Total duration of streaming requests in seconds",
- Buckets: []float64{0.1, 0.5, 1, 2, 5, 10, 20, 30, 60},
- },
- []string{"provider", "model"},
- )
-
- // Conversation Store Metrics
- conversationOperationsTotal = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "conversation_operations_total",
- Help: "Total number of conversation store operations",
- },
- []string{"operation", "backend", "status"},
- )
-
- conversationOperationDuration = prometheus.NewHistogramVec(
- prometheus.HistogramOpts{
- Name: "conversation_operation_duration_seconds",
- Help: "Conversation store operation latency in seconds",
- Buckets: []float64{0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1},
- },
- []string{"operation", "backend"},
- )
-
- conversationActiveCount = prometheus.NewGaugeVec(
- prometheus.GaugeOpts{
- Name: "conversation_active_count",
- Help: "Number of active conversations",
- },
- []string{"backend"},
- )
-
- // Circuit Breaker Metrics
- circuitBreakerState = prometheus.NewGaugeVec(
- prometheus.GaugeOpts{
- Name: "circuit_breaker_state",
- Help: "Circuit breaker state (0=closed, 1=open, 2=half-open)",
- },
- []string{"provider"},
- )
-
- circuitBreakerStateTransitions = prometheus.NewCounterVec(
- prometheus.CounterOpts{
- Name: "circuit_breaker_state_transitions_total",
- Help: "Total number of circuit breaker state transitions",
- },
- []string{"provider", "from", "to"},
- )
-)
-
-// InitMetrics registers all metrics with a new Prometheus registry.
-func InitMetrics() *prometheus.Registry {
- registry := prometheus.NewRegistry()
-
- // Register HTTP metrics
- registry.MustRegister(httpRequestsTotal)
- registry.MustRegister(httpRequestDuration)
- registry.MustRegister(httpRequestSize)
- registry.MustRegister(httpResponseSize)
-
- // Register provider metrics
- registry.MustRegister(providerRequestsTotal)
- registry.MustRegister(providerRequestDuration)
- registry.MustRegister(providerTokensTotal)
- registry.MustRegister(providerStreamTTFB)
- registry.MustRegister(providerStreamChunks)
- registry.MustRegister(providerStreamDuration)
-
- // Register conversation store metrics
- registry.MustRegister(conversationOperationsTotal)
- registry.MustRegister(conversationOperationDuration)
- registry.MustRegister(conversationActiveCount)
-
- // Register circuit breaker metrics
- registry.MustRegister(circuitBreakerState)
- registry.MustRegister(circuitBreakerStateTransitions)
-
- return registry
-}
-
-// RecordCircuitBreakerStateChange records a circuit breaker state transition.
-func RecordCircuitBreakerStateChange(provider, from, to string) {
- // Record the transition
- circuitBreakerStateTransitions.WithLabelValues(provider, from, to).Inc()
-
- // Update the current state gauge
- var stateValue float64
- switch to {
- case "closed":
- stateValue = 0
- case "open":
- stateValue = 1
- case "half-open":
- stateValue = 2
- }
- circuitBreakerState.WithLabelValues(provider).Set(stateValue)
-}
-
-
-
-package observability
-
-import (
- "net/http"
- "strconv"
- "time"
-
- "github.com/prometheus/client_golang/prometheus"
-)
-
-// MetricsMiddleware creates a middleware that records HTTP metrics. The third
-// parameter is currently unused. Note that the raw r.URL.Path is used as a
-// metric label, so routes with unbounded path values (IDs, tokens) can cause
-// high label cardinality.
-func MetricsMiddleware(next http.Handler, registry *prometheus.Registry, _ interface{}) http.Handler {
- if registry == nil {
- // If metrics are not enabled, pass through without modification
- return next
- }
-
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- start := time.Now()
-
- // Record request size
- if r.ContentLength > 0 {
- httpRequestSize.WithLabelValues(r.Method, r.URL.Path).Observe(float64(r.ContentLength))
- }
-
- // Wrap response writer to capture status code and response size
- wrapped := &metricsResponseWriter{
- ResponseWriter: w,
- statusCode: http.StatusOK,
- bytesWritten: 0,
- }
-
- // Call the next handler
- next.ServeHTTP(wrapped, r)
-
- // Record metrics after request completes
- duration := time.Since(start).Seconds()
- status := strconv.Itoa(wrapped.statusCode)
-
- httpRequestsTotal.WithLabelValues(r.Method, r.URL.Path, status).Inc()
- httpRequestDuration.WithLabelValues(r.Method, r.URL.Path, status).Observe(duration)
- httpResponseSize.WithLabelValues(r.Method, r.URL.Path).Observe(float64(wrapped.bytesWritten))
- })
-}
-
-// metricsResponseWriter wraps http.ResponseWriter to capture status code and bytes written.
-type metricsResponseWriter struct {
- http.ResponseWriter
- statusCode int
- bytesWritten int
-}
-
-func (w *metricsResponseWriter) WriteHeader(statusCode int) {
- w.statusCode = statusCode
- w.ResponseWriter.WriteHeader(statusCode)
-}
-
-func (w *metricsResponseWriter) Write(b []byte) (int, error) {
- n, err := w.ResponseWriter.Write(b)
- w.bytesWritten += n
- return n, err
-}
-
-
-
-package observability
-
-import (
- "context"
- "time"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/providers"
- "github.com/prometheus/client_golang/prometheus"
- "go.opentelemetry.io/otel/attribute"
- "go.opentelemetry.io/otel/codes"
- sdktrace "go.opentelemetry.io/otel/sdk/trace"
- "go.opentelemetry.io/otel/trace"
-)
-
-// InstrumentedProvider wraps a provider with metrics and tracing.
-type InstrumentedProvider struct {
- base providers.Provider
- registry *prometheus.Registry
- tracer trace.Tracer
-}
-
-// NewInstrumentedProvider wraps a provider with observability.
-func NewInstrumentedProvider(p providers.Provider, registry *prometheus.Registry, tp *sdktrace.TracerProvider) providers.Provider {
- var tracer trace.Tracer
- if tp != nil {
- tracer = tp.Tracer("llm-gateway")
- }
-
- return &InstrumentedProvider{
- base: p,
- registry: registry,
- tracer: tracer,
- }
-}
-
-// Name returns the name of the underlying provider.
-func (p *InstrumentedProvider) Name() string {
- return p.base.Name()
-}
-
-// Generate wraps the provider's Generate method with metrics and tracing.
-func (p *InstrumentedProvider) Generate(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (*api.ProviderResult, error) {
- // Start span if tracing is enabled
- if p.tracer != nil {
- var span trace.Span
- ctx, span = p.tracer.Start(ctx, "provider.generate",
- trace.WithSpanKind(trace.SpanKindClient),
- trace.WithAttributes(
- attribute.String("provider.name", p.base.Name()),
- attribute.String("provider.model", req.Model),
- ),
- )
- defer span.End()
- }
-
- // Record start time
- start := time.Now()
-
- // Call underlying provider
- result, err := p.base.Generate(ctx, messages, req)
-
- // Record metrics
- duration := time.Since(start).Seconds()
- status := "success"
- if err != nil {
- status = "error"
- if p.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.RecordError(err)
- span.SetStatus(codes.Error, err.Error())
- }
- } else if result != nil {
- // Add token attributes to span
- if p.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.SetAttributes(
- attribute.Int64("provider.input_tokens", int64(result.Usage.InputTokens)),
- attribute.Int64("provider.output_tokens", int64(result.Usage.OutputTokens)),
- attribute.Int64("provider.total_tokens", int64(result.Usage.TotalTokens)),
- )
- span.SetStatus(codes.Ok, "")
- }
-
- // Record token metrics
- if p.registry != nil {
- providerTokensTotal.WithLabelValues(p.base.Name(), req.Model, "input").Add(float64(result.Usage.InputTokens))
- providerTokensTotal.WithLabelValues(p.base.Name(), req.Model, "output").Add(float64(result.Usage.OutputTokens))
- }
- }
-
- // Record request metrics
- if p.registry != nil {
- providerRequestsTotal.WithLabelValues(p.base.Name(), req.Model, "generate", status).Inc()
- providerRequestDuration.WithLabelValues(p.base.Name(), req.Model, "generate").Observe(duration)
- }
-
- return result, err
-}
-
-// GenerateStream wraps the provider's GenerateStream method with metrics and tracing.
-func (p *InstrumentedProvider) GenerateStream(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (<-chan *api.ProviderStreamDelta, <-chan error) {
- // Start span if tracing is enabled. Unlike Generate, the span must
- // outlive this function: it is ended by the forwarding goroutine once
- // the stream completes, so attributes recorded after streaming
- // finishes are not dropped on an already-ended span.
- var span trace.Span
- if p.tracer != nil {
- ctx, span = p.tracer.Start(ctx, "provider.generate_stream",
- trace.WithSpanKind(trace.SpanKindClient),
- trace.WithAttributes(
- attribute.String("provider.name", p.base.Name()),
- attribute.String("provider.model", req.Model),
- ),
- )
- }
-
- // Record start time
- start := time.Now()
- var ttfb time.Duration
- firstChunk := true
-
- // Create instrumented channels
- baseChan, baseErrChan := p.base.GenerateStream(ctx, messages, req)
- outChan := make(chan *api.ProviderStreamDelta)
- outErrChan := make(chan error, 1)
-
- // Metrics tracking
- var chunkCount int64
- var totalInputTokens, totalOutputTokens int64
- var streamErr error
-
- go func() {
- defer close(outChan)
- defer close(outErrChan)
- if span != nil {
- defer span.End()
- }
-
- for {
- select {
- case delta, ok := <-baseChan:
- if !ok {
- // Stream finished - record final metrics
- duration := time.Since(start).Seconds()
- status := "success"
- if streamErr != nil {
- status = "error"
- if span != nil {
- span.RecordError(streamErr)
- span.SetStatus(codes.Error, streamErr.Error())
- }
- } else {
- if span != nil {
- span.SetAttributes(
- attribute.Int64("provider.input_tokens", totalInputTokens),
- attribute.Int64("provider.output_tokens", totalOutputTokens),
- attribute.Int64("provider.chunk_count", chunkCount),
- attribute.Float64("provider.ttfb_seconds", ttfb.Seconds()),
- )
- span.SetStatus(codes.Ok, "")
- }
-
- // Record token metrics
- if p.registry != nil && (totalInputTokens > 0 || totalOutputTokens > 0) {
- providerTokensTotal.WithLabelValues(p.base.Name(), req.Model, "input").Add(float64(totalInputTokens))
- providerTokensTotal.WithLabelValues(p.base.Name(), req.Model, "output").Add(float64(totalOutputTokens))
- }
- }
-
- // Record stream metrics
- if p.registry != nil {
- providerRequestsTotal.WithLabelValues(p.base.Name(), req.Model, "generate_stream", status).Inc()
- providerStreamDuration.WithLabelValues(p.base.Name(), req.Model).Observe(duration)
- providerStreamChunks.WithLabelValues(p.base.Name(), req.Model).Add(float64(chunkCount))
- if ttfb > 0 {
- providerStreamTTFB.WithLabelValues(p.base.Name(), req.Model).Observe(ttfb.Seconds())
- }
- }
- return
- }
-
- // Record TTFB on first chunk
- if firstChunk {
- ttfb = time.Since(start)
- firstChunk = false
- }
-
- chunkCount++
-
- // Track token usage
- if delta.Usage != nil {
- totalInputTokens = int64(delta.Usage.InputTokens)
- totalOutputTokens = int64(delta.Usage.OutputTokens)
- }
-
- // Forward the delta
- outChan <- delta
-
- case err, ok := <-baseErrChan:
- if ok && err != nil {
- streamErr = err
- outErrChan <- err
- if span != nil {
- span.RecordError(err)
- span.SetStatus(codes.Error, err.Error())
- }
- // Record error metrics here; the success path records its
- // metrics when baseChan closes, which we never reach.
- if p.registry != nil {
- providerRequestsTotal.WithLabelValues(p.base.Name(), req.Model, "generate_stream", "error").Inc()
- providerStreamDuration.WithLabelValues(p.base.Name(), req.Model).Observe(time.Since(start).Seconds())
- }
- }
- return
- }
- }
- }()
-
- return outChan, outErrChan
-}
-
-
-
-package observability
-
-import (
- "context"
- "time"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/conversation"
- "github.com/prometheus/client_golang/prometheus"
- "go.opentelemetry.io/otel/attribute"
- "go.opentelemetry.io/otel/codes"
- sdktrace "go.opentelemetry.io/otel/sdk/trace"
- "go.opentelemetry.io/otel/trace"
-)
-
-// InstrumentedStore wraps a conversation store with metrics and tracing.
-type InstrumentedStore struct {
- base conversation.Store
- registry *prometheus.Registry
- tracer trace.Tracer
- backend string
-}
-
-// NewInstrumentedStore wraps a conversation store with observability.
-func NewInstrumentedStore(s conversation.Store, backend string, registry *prometheus.Registry, tp *sdktrace.TracerProvider) conversation.Store {
- var tracer trace.Tracer
- if tp != nil {
- tracer = tp.Tracer("llm-gateway")
- }
-
- // Initialize gauge with current size
- if registry != nil {
- conversationActiveCount.WithLabelValues(backend).Set(float64(s.Size()))
- }
-
- return &InstrumentedStore{
- base: s,
- registry: registry,
- tracer: tracer,
- backend: backend,
- }
-}
-
-// Get wraps the store's Get method with metrics and tracing.
-func (s *InstrumentedStore) Get(ctx context.Context, id string) (*conversation.Conversation, error) {
- // Start span if tracing is enabled
- if s.tracer != nil {
- var span trace.Span
- ctx, span = s.tracer.Start(ctx, "conversation.get",
- trace.WithAttributes(
- attribute.String("conversation.id", id),
- attribute.String("conversation.backend", s.backend),
- ),
- )
- defer span.End()
- }
-
- // Record start time
- start := time.Now()
-
- // Call underlying store
- conv, err := s.base.Get(ctx, id)
-
- // Record metrics
- duration := time.Since(start).Seconds()
- status := "success"
- if err != nil {
- status = "error"
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.RecordError(err)
- span.SetStatus(codes.Error, err.Error())
- }
- } else {
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- if conv != nil {
- span.SetAttributes(
- attribute.Int("conversation.message_count", len(conv.Messages)),
- attribute.String("conversation.model", conv.Model),
- )
- }
- span.SetStatus(codes.Ok, "")
- }
- }
-
- if s.registry != nil {
- conversationOperationsTotal.WithLabelValues("get", s.backend, status).Inc()
- conversationOperationDuration.WithLabelValues("get", s.backend).Observe(duration)
- }
-
- return conv, err
-}
-
-// Create wraps the store's Create method with metrics and tracing.
-func (s *InstrumentedStore) Create(ctx context.Context, id string, model string, messages []api.Message) (*conversation.Conversation, error) {
- // Start span if tracing is enabled
- if s.tracer != nil {
- var span trace.Span
- ctx, span = s.tracer.Start(ctx, "conversation.create",
- trace.WithAttributes(
- attribute.String("conversation.id", id),
- attribute.String("conversation.backend", s.backend),
- attribute.String("conversation.model", model),
- attribute.Int("conversation.initial_messages", len(messages)),
- ),
- )
- defer span.End()
- }
-
- // Record start time
- start := time.Now()
-
- // Call underlying store
- conv, err := s.base.Create(ctx, id, model, messages)
-
- // Record metrics
- duration := time.Since(start).Seconds()
- status := "success"
- if err != nil {
- status = "error"
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.RecordError(err)
- span.SetStatus(codes.Error, err.Error())
- }
- } else {
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.SetStatus(codes.Ok, "")
- }
- }
-
- if s.registry != nil {
- conversationOperationsTotal.WithLabelValues("create", s.backend, status).Inc()
- conversationOperationDuration.WithLabelValues("create", s.backend).Observe(duration)
- // Update active count
- conversationActiveCount.WithLabelValues(s.backend).Set(float64(s.base.Size()))
- }
-
- return conv, err
-}
-
-// Append wraps the store's Append method with metrics and tracing.
-func (s *InstrumentedStore) Append(ctx context.Context, id string, messages ...api.Message) (*conversation.Conversation, error) {
- // Start span if tracing is enabled
- if s.tracer != nil {
- var span trace.Span
- ctx, span = s.tracer.Start(ctx, "conversation.append",
- trace.WithAttributes(
- attribute.String("conversation.id", id),
- attribute.String("conversation.backend", s.backend),
- attribute.Int("conversation.appended_messages", len(messages)),
- ),
- )
- defer span.End()
- }
-
- // Record start time
- start := time.Now()
-
- // Call underlying store
- conv, err := s.base.Append(ctx, id, messages...)
-
- // Record metrics
- duration := time.Since(start).Seconds()
- status := "success"
- if err != nil {
- status = "error"
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.RecordError(err)
- span.SetStatus(codes.Error, err.Error())
- }
- } else {
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- if conv != nil {
- span.SetAttributes(
- attribute.Int("conversation.total_messages", len(conv.Messages)),
- )
- }
- span.SetStatus(codes.Ok, "")
- }
- }
-
- if s.registry != nil {
- conversationOperationsTotal.WithLabelValues("append", s.backend, status).Inc()
- conversationOperationDuration.WithLabelValues("append", s.backend).Observe(duration)
- }
-
- return conv, err
-}
-
-// Delete wraps the store's Delete method with metrics and tracing.
-func (s *InstrumentedStore) Delete(ctx context.Context, id string) error {
- // Start span if tracing is enabled
- if s.tracer != nil {
- var span trace.Span
- ctx, span = s.tracer.Start(ctx, "conversation.delete",
- trace.WithAttributes(
- attribute.String("conversation.id", id),
- attribute.String("conversation.backend", s.backend),
- ),
- )
- defer span.End()
- }
-
- // Record start time
- start := time.Now()
-
- // Call underlying store
- err := s.base.Delete(ctx, id)
-
- // Record metrics
- duration := time.Since(start).Seconds()
- status := "success"
- if err != nil {
- status = "error"
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.RecordError(err)
- span.SetStatus(codes.Error, err.Error())
- }
- } else {
- if s.tracer != nil {
- span := trace.SpanFromContext(ctx)
- span.SetStatus(codes.Ok, "")
- }
- }
-
- if s.registry != nil {
- conversationOperationsTotal.WithLabelValues("delete", s.backend, status).Inc()
- conversationOperationDuration.WithLabelValues("delete", s.backend).Observe(duration)
- // Update active count
- conversationActiveCount.WithLabelValues(s.backend).Set(float64(s.base.Size()))
- }
-
- return err
-}
-
-// Size returns the size of the underlying store.
-func (s *InstrumentedStore) Size() int {
- return s.base.Size()
-}
-
-// Close wraps the store's Close method.
-func (s *InstrumentedStore) Close() error {
- return s.base.Close()
-}
-
-
-
-package observability
-
-import (
- "context"
- "io"
-
- "github.com/prometheus/client_golang/prometheus"
- "github.com/prometheus/client_golang/prometheus/testutil"
- "go.opentelemetry.io/otel"
- "go.opentelemetry.io/otel/sdk/resource"
- sdktrace "go.opentelemetry.io/otel/sdk/trace"
- "go.opentelemetry.io/otel/sdk/trace/tracetest"
- semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
-)
-
-// NewTestRegistry creates a new isolated Prometheus registry for testing
-func NewTestRegistry() *prometheus.Registry {
- return prometheus.NewRegistry()
-}
-
- // NewTestTracer creates a tracer provider backed by an in-memory span exporter for testing
-func NewTestTracer() (*sdktrace.TracerProvider, *tracetest.InMemoryExporter) {
- exporter := tracetest.NewInMemoryExporter()
- res := resource.NewSchemaless(
- semconv.ServiceNameKey.String("test-service"),
- )
- tp := sdktrace.NewTracerProvider(
- sdktrace.WithSyncer(exporter),
- sdktrace.WithResource(res),
- )
- otel.SetTracerProvider(tp)
- return tp, exporter
-}
-
- // GetMetricValue extracts a metric value from a registry; it returns
- // 0 with a nil error if the metric is not found
-func GetMetricValue(registry *prometheus.Registry, metricName string) (float64, error) {
- metrics, err := registry.Gather()
- if err != nil {
- return 0, err
- }
-
- for _, mf := range metrics {
- if mf.GetName() == metricName {
- if len(mf.GetMetric()) > 0 {
- m := mf.GetMetric()[0]
- if m.GetCounter() != nil {
- return m.GetCounter().GetValue(), nil
- }
- if m.GetGauge() != nil {
- return m.GetGauge().GetValue(), nil
- }
- if m.GetHistogram() != nil {
- return float64(m.GetHistogram().GetSampleCount()), nil
- }
- }
- }
- }
-
- return 0, nil
-}
-
-// CountMetricsWithName counts how many metrics match the given name
-func CountMetricsWithName(registry *prometheus.Registry, metricName string) (int, error) {
- metrics, err := registry.Gather()
- if err != nil {
- return 0, err
- }
-
- for _, mf := range metrics {
- if mf.GetName() == metricName {
- return len(mf.GetMetric()), nil
- }
- }
-
- return 0, nil
-}
-
-// GetCounterValue is a helper to get counter values using testutil
-func GetCounterValue(counter prometheus.Counter) float64 {
- return testutil.ToFloat64(counter)
-}
-
-// NewNoOpTracerProvider creates a tracer provider that discards all spans
-func NewNoOpTracerProvider() *sdktrace.TracerProvider {
- return sdktrace.NewTracerProvider(
- sdktrace.WithSpanProcessor(sdktrace.NewSimpleSpanProcessor(&noOpExporter{})),
- )
-}
-
-// noOpExporter is an exporter that discards all spans
-type noOpExporter struct{}
-
-func (e *noOpExporter) ExportSpans(context.Context, []sdktrace.ReadOnlySpan) error {
- return nil
-}
-
-func (e *noOpExporter) Shutdown(context.Context) error {
- return nil
-}
-
-// ShutdownTracer is a helper to safely shutdown a tracer provider
-func ShutdownTracer(tp *sdktrace.TracerProvider) error {
- if tp != nil {
- return tp.Shutdown(context.Background())
- }
- return nil
-}
-
- // TestExporter is a span exporter for tests that discards all spans;
- // the writer field is currently unused.
- type TestExporter struct {
- writer io.Writer
- }
-
-func (e *TestExporter) ExportSpans(ctx context.Context, spans []sdktrace.ReadOnlySpan) error {
- return nil
-}
-
-func (e *TestExporter) Shutdown(ctx context.Context) error {
- return nil
-}
-
-
-
-package observability
-
-import (
- "context"
- "fmt"
-
- "github.com/ajac-zero/latticelm/internal/config"
- "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
- "go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
- "go.opentelemetry.io/otel/sdk/resource"
- sdktrace "go.opentelemetry.io/otel/sdk/trace"
- semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
- "google.golang.org/grpc"
- "google.golang.org/grpc/credentials/insecure"
-)
-
-// InitTracer initializes the OpenTelemetry tracer provider.
-func InitTracer(cfg config.TracingConfig) (*sdktrace.TracerProvider, error) {
- // Create resource with service information
- res, err := resource.Merge(
- resource.Default(),
- resource.NewWithAttributes(
- semconv.SchemaURL,
- semconv.ServiceName(cfg.ServiceName),
- ),
- )
- if err != nil {
- return nil, fmt.Errorf("failed to create resource: %w", err)
- }
-
- // Create exporter
- var exporter sdktrace.SpanExporter
- switch cfg.Exporter.Type {
- case "otlp":
- exporter, err = createOTLPExporter(cfg.Exporter)
- if err != nil {
- return nil, fmt.Errorf("failed to create OTLP exporter: %w", err)
- }
- case "stdout":
- exporter, err = stdouttrace.New(
- stdouttrace.WithPrettyPrint(),
- )
- if err != nil {
- return nil, fmt.Errorf("failed to create stdout exporter: %w", err)
- }
- default:
- return nil, fmt.Errorf("unsupported exporter type: %s", cfg.Exporter.Type)
- }
-
- // Create sampler
- sampler := createSampler(cfg.Sampler)
-
- // Create tracer provider
- tp := sdktrace.NewTracerProvider(
- sdktrace.WithBatcher(exporter),
- sdktrace.WithResource(res),
- sdktrace.WithSampler(sampler),
- )
-
- return tp, nil
-}
-
-// createOTLPExporter creates an OTLP gRPC exporter.
-func createOTLPExporter(cfg config.ExporterConfig) (sdktrace.SpanExporter, error) {
- opts := []otlptracegrpc.Option{
- otlptracegrpc.WithEndpoint(cfg.Endpoint),
- }
-
- if cfg.Insecure {
- opts = append(opts, otlptracegrpc.WithTLSCredentials(insecure.NewCredentials()))
- }
-
- if len(cfg.Headers) > 0 {
- opts = append(opts, otlptracegrpc.WithHeaders(cfg.Headers))
- }
-
- // Block until the connection is established (note that grpc.WithBlock
- // is deprecated in recent grpc-go releases)
- opts = append(opts, otlptracegrpc.WithDialOption(grpc.WithBlock()))
-
- return otlptracegrpc.New(context.Background(), opts...)
-}
-
-// createSampler creates a sampler based on the configuration.
-func createSampler(cfg config.SamplerConfig) sdktrace.Sampler {
- switch cfg.Type {
- case "always":
- return sdktrace.AlwaysSample()
- case "never":
- return sdktrace.NeverSample()
- case "probability":
- return sdktrace.TraceIDRatioBased(cfg.Rate)
- default:
- // Default to 10% sampling
- return sdktrace.TraceIDRatioBased(0.1)
- }
-}
-
-// Shutdown gracefully shuts down the tracer provider.
-func Shutdown(ctx context.Context, tp *sdktrace.TracerProvider) error {
- if tp == nil {
- return nil
- }
- return tp.Shutdown(ctx)
-}
-
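The fallback in createSampler above means a misspelled sampler type silently drops to 10% sampling rather than failing. That mapping can be sketched in isolation; SamplerConfig and samplerRate below are illustrative stand-ins for the real config types, showing the effective trace ratio each type implies:

```go
package main

import "fmt"

// SamplerConfig is an illustrative stand-in for the gateway's
// config.SamplerConfig, not the real type.
type SamplerConfig struct {
	Type string
	Rate float64
}

// samplerRate maps each sampler type to the trace ratio it implies,
// mirroring createSampler: "always" samples everything, "never"
// nothing, "probability" uses the configured rate, and anything else
// falls back to 10%.
func samplerRate(cfg SamplerConfig) float64 {
	switch cfg.Type {
	case "always":
		return 1.0
	case "never":
		return 0.0
	case "probability":
		return cfg.Rate
	default:
		return 0.1 // unknown type: default to 10% sampling
	}
}

func main() {
	fmt.Println(samplerRate(SamplerConfig{Type: "probability", Rate: 0.25})) // 0.25
	fmt.Println(samplerRate(SamplerConfig{Type: "typo"}))                    // 0.1
}
```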
-
-
-package observability
-
-import (
- "net/http"
-
- "go.opentelemetry.io/otel"
- "go.opentelemetry.io/otel/attribute"
- "go.opentelemetry.io/otel/codes"
- "go.opentelemetry.io/otel/propagation"
- sdktrace "go.opentelemetry.io/otel/sdk/trace"
- "go.opentelemetry.io/otel/trace"
-)
-
-// TracingMiddleware creates a middleware that adds OpenTelemetry tracing to HTTP requests.
-func TracingMiddleware(next http.Handler, tp *sdktrace.TracerProvider) http.Handler {
- if tp == nil {
- // If tracing is not enabled, pass through without modification
- return next
- }
-
- // Set up W3C Trace Context propagation
- otel.SetTextMapPropagator(propagation.TraceContext{})
-
- tracer := tp.Tracer("llm-gateway")
-
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- // Extract trace context from incoming request headers
- ctx := otel.GetTextMapPropagator().Extract(r.Context(), propagation.HeaderCarrier(r.Header))
-
- // Start a new span
- ctx, span := tracer.Start(ctx, "HTTP "+r.Method+" "+r.URL.Path,
- trace.WithSpanKind(trace.SpanKindServer),
- trace.WithAttributes(
- attribute.String("http.method", r.Method),
- attribute.String("http.route", r.URL.Path),
- attribute.String("http.scheme", r.URL.Scheme),
- attribute.String("http.host", r.Host),
- attribute.String("http.user_agent", r.Header.Get("User-Agent")),
- ),
- )
- defer span.End()
-
- // Add request ID to span if present
- if requestID := r.Header.Get("X-Request-ID"); requestID != "" {
- span.SetAttributes(attribute.String("http.request_id", requestID))
- }
-
- // Create a response writer wrapper to capture status code
- wrapped := &statusResponseWriter{
- ResponseWriter: w,
- statusCode: http.StatusOK,
- }
-
- // Inject trace context into request for downstream services
- r = r.WithContext(ctx)
-
- // Call the next handler
- next.ServeHTTP(wrapped, r)
-
- // Record the status code in the span
- span.SetAttributes(attribute.Int("http.status_code", wrapped.statusCode))
-
- // Set span status based on HTTP status code
- if wrapped.statusCode >= 400 {
- span.SetStatus(codes.Error, http.StatusText(wrapped.statusCode))
- } else {
- span.SetStatus(codes.Ok, "")
- }
- })
-}
-
-// statusResponseWriter wraps http.ResponseWriter to capture the status code.
-type statusResponseWriter struct {
- http.ResponseWriter
- statusCode int
-}
-
-func (w *statusResponseWriter) WriteHeader(statusCode int) {
- w.statusCode = statusCode
- w.ResponseWriter.WriteHeader(statusCode)
-}
-
-func (w *statusResponseWriter) Write(b []byte) (int, error) {
- return w.ResponseWriter.Write(b)
-}
-
-
-
-package anthropic
-
-import (
- "context"
- "encoding/json"
- "fmt"
-
- "github.com/anthropics/anthropic-sdk-go"
- "github.com/anthropics/anthropic-sdk-go/option"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/config"
-)
-
-const Name = "anthropic"
-
-// Provider implements the Anthropic SDK integration.
-// It supports both direct Anthropic API and Azure-hosted (Microsoft Foundry) endpoints.
-type Provider struct {
- cfg config.ProviderConfig
- client *anthropic.Client
- azure bool
-}
-
-// New constructs a Provider for the direct Anthropic API.
-func New(cfg config.ProviderConfig) *Provider {
- var client *anthropic.Client
- if cfg.APIKey != "" {
- c := anthropic.NewClient(option.WithAPIKey(cfg.APIKey))
- client = &c
- }
- return &Provider{
- cfg: cfg,
- client: client,
- }
-}
-
-// NewAzure constructs a Provider targeting Azure-hosted Anthropic (Microsoft Foundry).
-// The Azure endpoint uses api-key header auth and a base URL like
-// https://<resource>.services.ai.azure.com/anthropic.
-func NewAzure(azureCfg config.AzureAnthropicConfig) *Provider {
- var client *anthropic.Client
- if azureCfg.APIKey != "" && azureCfg.Endpoint != "" {
- c := anthropic.NewClient(
- option.WithBaseURL(azureCfg.Endpoint),
- option.WithAPIKey("unused"),
- option.WithAuthToken(azureCfg.APIKey),
- )
- client = &c
- }
- return &Provider{
- cfg: config.ProviderConfig{
- APIKey: azureCfg.APIKey,
- Model: azureCfg.Model,
- },
- client: client,
- azure: true,
- }
-}
-
-func (p *Provider) Name() string { return Name }
-
-// Generate routes the request to Anthropic's API.
-func (p *Provider) Generate(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (*api.ProviderResult, error) {
- if p.cfg.APIKey == "" {
- return nil, fmt.Errorf("anthropic api key missing")
- }
- if p.client == nil {
- return nil, fmt.Errorf("anthropic client not initialized")
- }
-
- // Convert messages to Anthropic format
- anthropicMsgs := make([]anthropic.MessageParam, 0, len(messages))
- var system string
-
- for _, msg := range messages {
- var content string
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- content += block.Text
- }
- }
-
- switch msg.Role {
- case "user":
- anthropicMsgs = append(anthropicMsgs, anthropic.NewUserMessage(anthropic.NewTextBlock(content)))
- case "assistant":
- // Build content blocks including text and tool calls
- var contentBlocks []anthropic.ContentBlockParamUnion
- if content != "" {
- contentBlocks = append(contentBlocks, anthropic.NewTextBlock(content))
- }
- // Add tool use blocks
- for _, tc := range msg.ToolCalls {
- var input map[string]interface{}
- if err := json.Unmarshal([]byte(tc.Arguments), &input); err != nil {
- // If unmarshal fails, skip this tool call
- continue
- }
- contentBlocks = append(contentBlocks, anthropic.NewToolUseBlock(tc.ID, input, tc.Name))
- }
- if len(contentBlocks) > 0 {
- anthropicMsgs = append(anthropicMsgs, anthropic.NewAssistantMessage(contentBlocks...))
- }
- case "tool":
- // Tool results must be in user message with tool_result blocks
- anthropicMsgs = append(anthropicMsgs, anthropic.NewUserMessage(
- anthropic.NewToolResultBlock(msg.CallID, content, false),
- ))
- case "system", "developer":
- system = content
- }
- }
-
- // Build request params
- maxTokens := int64(4096)
- if req.MaxOutputTokens != nil {
- maxTokens = int64(*req.MaxOutputTokens)
- }
-
- params := anthropic.MessageNewParams{
- Model: anthropic.Model(req.Model),
- Messages: anthropicMsgs,
- MaxTokens: maxTokens,
- }
-
- if system != "" {
- systemBlocks := []anthropic.TextBlockParam{
- {Text: system, Type: "text"},
- }
- params.System = systemBlocks
- }
-
- if req.Temperature != nil {
- params.Temperature = anthropic.Float(*req.Temperature)
- }
- if req.TopP != nil {
- params.TopP = anthropic.Float(*req.TopP)
- }
-
- // Add tools if present
- if len(req.Tools) > 0 {
- tools, err := parseTools(req)
- if err != nil {
- return nil, fmt.Errorf("parse tools: %w", err)
- }
- params.Tools = tools
- }
-
- // Add tool_choice if present
- if len(req.ToolChoice) > 0 {
- toolChoice, err := parseToolChoice(req)
- if err != nil {
- return nil, fmt.Errorf("parse tool_choice: %w", err)
- }
- params.ToolChoice = toolChoice
- }
-
- // Call Anthropic API
- resp, err := p.client.Messages.New(ctx, params)
- if err != nil {
- return nil, fmt.Errorf("anthropic api error: %w", err)
- }
-
- // Extract text and tool calls from response
- var text string
- var toolCalls []api.ToolCall
-
- for _, block := range resp.Content {
- switch block.Type {
- case "text":
- text += block.AsText().Text
- case "tool_use":
- // Extract tool calls
- toolUse := block.AsToolUse()
- argsJSON, _ := json.Marshal(toolUse.Input)
- toolCalls = append(toolCalls, api.ToolCall{
- ID: toolUse.ID,
- Name: toolUse.Name,
- Arguments: string(argsJSON),
- })
- }
- }
-
- return &api.ProviderResult{
- ID: resp.ID,
- Model: string(resp.Model),
- Text: text,
- ToolCalls: toolCalls,
- Usage: api.Usage{
- InputTokens: int(resp.Usage.InputTokens),
- OutputTokens: int(resp.Usage.OutputTokens),
- TotalTokens: int(resp.Usage.InputTokens + resp.Usage.OutputTokens),
- },
- }, nil
-}
-
-// GenerateStream handles streaming requests to Anthropic.
-func (p *Provider) GenerateStream(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (<-chan *api.ProviderStreamDelta, <-chan error) {
- deltaChan := make(chan *api.ProviderStreamDelta)
- errChan := make(chan error, 1)
-
- go func() {
- defer close(deltaChan)
- defer close(errChan)
-
- if p.cfg.APIKey == "" {
- errChan <- fmt.Errorf("anthropic api key missing")
- return
- }
- if p.client == nil {
- errChan <- fmt.Errorf("anthropic client not initialized")
- return
- }
-
- // Convert messages to Anthropic format
- anthropicMsgs := make([]anthropic.MessageParam, 0, len(messages))
- var system string
-
- for _, msg := range messages {
- var content string
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- content += block.Text
- }
- }
-
- switch msg.Role {
- case "user":
- anthropicMsgs = append(anthropicMsgs, anthropic.NewUserMessage(anthropic.NewTextBlock(content)))
- case "assistant":
- // Build content blocks including text and tool calls
- var contentBlocks []anthropic.ContentBlockParamUnion
- if content != "" {
- contentBlocks = append(contentBlocks, anthropic.NewTextBlock(content))
- }
- // Add tool use blocks
- for _, tc := range msg.ToolCalls {
- var input map[string]interface{}
- if err := json.Unmarshal([]byte(tc.Arguments), &input); err != nil {
- // If unmarshal fails, skip this tool call
- continue
- }
- contentBlocks = append(contentBlocks, anthropic.NewToolUseBlock(tc.ID, input, tc.Name))
- }
- if len(contentBlocks) > 0 {
- anthropicMsgs = append(anthropicMsgs, anthropic.NewAssistantMessage(contentBlocks...))
- }
- case "tool":
- // Tool results must be in user message with tool_result blocks
- anthropicMsgs = append(anthropicMsgs, anthropic.NewUserMessage(
- anthropic.NewToolResultBlock(msg.CallID, content, false),
- ))
- case "system", "developer":
- system = content
- }
- }
-
- // Build params
- maxTokens := int64(4096)
- if req.MaxOutputTokens != nil {
- maxTokens = int64(*req.MaxOutputTokens)
- }
-
- params := anthropic.MessageNewParams{
- Model: anthropic.Model(req.Model),
- Messages: anthropicMsgs,
- MaxTokens: maxTokens,
- }
-
- if system != "" {
- systemBlocks := []anthropic.TextBlockParam{
- {Text: system, Type: "text"},
- }
- params.System = systemBlocks
- }
-
- if req.Temperature != nil {
- params.Temperature = anthropic.Float(*req.Temperature)
- }
- if req.TopP != nil {
- params.TopP = anthropic.Float(*req.TopP)
- }
-
- // Add tools if present
- if len(req.Tools) > 0 {
- tools, err := parseTools(req)
- if err != nil {
- errChan <- fmt.Errorf("parse tools: %w", err)
- return
- }
- params.Tools = tools
- }
-
- // Add tool_choice if present
- if len(req.ToolChoice) > 0 {
- toolChoice, err := parseToolChoice(req)
- if err != nil {
- errChan <- fmt.Errorf("parse tool_choice: %w", err)
- return
- }
- params.ToolChoice = toolChoice
- }
-
- // Create stream
- stream := p.client.Messages.NewStreaming(ctx, params)
-
- // Track content block index and tool call state
- var contentBlockIndex int
-
- // Process stream
- for stream.Next() {
- event := stream.Current()
-
- switch event.Type {
- case "content_block_start":
- // New content block (text or tool_use)
- contentBlockIndex = int(event.Index)
- if event.ContentBlock.Type == "tool_use" {
- // Send tool call delta with ID and name
- toolUse := event.ContentBlock.AsToolUse()
- delta := &api.ToolCallDelta{
- Index: contentBlockIndex,
- ID: toolUse.ID,
- Name: toolUse.Name,
- }
- select {
- case deltaChan <- &api.ProviderStreamDelta{ToolCallDelta: delta}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- }
-
- case "content_block_delta":
- if event.Delta.Type == "text_delta" {
- // Text streaming
- select {
- case deltaChan <- &api.ProviderStreamDelta{Text: event.Delta.Text}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- } else if event.Delta.Type == "input_json_delta" {
- // Tool arguments streaming
- delta := &api.ToolCallDelta{
- Index: int(event.Index),
- Arguments: event.Delta.PartialJSON,
- }
- select {
- case deltaChan <- &api.ProviderStreamDelta{ToolCallDelta: delta}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- }
- }
- }
-
- if err := stream.Err(); err != nil {
- errChan <- fmt.Errorf("anthropic stream error: %w", err)
- return
- }
-
- // Send final delta
- select {
- case deltaChan <- &api.ProviderStreamDelta{Done: true}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- }
- }()
-
- return deltaChan, errChan
-}
-
-func chooseModel(requested, defaultModel string) string {
- if requested != "" {
- return requested
- }
- if defaultModel != "" {
- return defaultModel
- }
- return "claude-3-5-sonnet"
-}
-
-
-
-package anthropic
-
-import (
- "encoding/json"
- "fmt"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/anthropics/anthropic-sdk-go"
-)
-
-// parseTools converts Open Responses tools to Anthropic format
-func parseTools(req *api.ResponseRequest) ([]anthropic.ToolUnionParam, error) {
- if len(req.Tools) == 0 {
- return nil, nil
- }
-
- var toolDefs []map[string]interface{}
- if err := json.Unmarshal(req.Tools, &toolDefs); err != nil {
- return nil, fmt.Errorf("unmarshal tools: %w", err)
- }
-
- var tools []anthropic.ToolUnionParam
- for _, td := range toolDefs {
- // Extract: name, description, parameters
- // Note: Anthropic uses "input_schema" instead of "parameters"
- name, _ := td["name"].(string)
- desc, _ := td["description"].(string)
- params, _ := td["parameters"].(map[string]interface{})
-
- inputSchema := anthropic.ToolInputSchemaParam{
- Type: "object",
- Properties: params["properties"],
- }
-
- // Add required fields if present
- if required, ok := params["required"].([]interface{}); ok {
- requiredStrs := make([]string, 0, len(required))
- for _, r := range required {
- if str, ok := r.(string); ok {
- requiredStrs = append(requiredStrs, str)
- }
- }
- inputSchema.Required = requiredStrs
- }
-
- // Create the tool using ToolUnionParamOfTool
- tool := anthropic.ToolUnionParamOfTool(inputSchema, name)
-
- if desc != "" {
- tool.OfTool.Description = anthropic.String(desc)
- }
-
- tools = append(tools, tool)
- }
-
- return tools, nil
-}
-
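For reference, `parseTools` above expects a flat Open Responses tool definition (name, description, and parameters at the top level) and maps `parameters` onto Anthropic's `input_schema`. A minimal, stdlib-only sketch of that extraction step; the `extractFlatTool` helper and the sample JSON are illustrative, not part of the gateway:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractFlatTool pulls name/description/parameters out of one flat
// Open Responses tool definition, mirroring the loop in parseTools.
func extractFlatTool(td map[string]interface{}) (name, desc string, params map[string]interface{}) {
	name, _ = td["name"].(string)
	desc, _ = td["description"].(string)
	params, _ = td["parameters"].(map[string]interface{})
	return
}

func main() {
	raw := []byte(`[{"type":"function","name":"get_weather","description":"Look up current weather","parameters":{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}}]`)
	var toolDefs []map[string]interface{}
	if err := json.Unmarshal(raw, &toolDefs); err != nil {
		panic(err)
	}
	name, desc, params := extractFlatTool(toolDefs[0])
	fmt.Println(name, desc, params["required"])
}
```

The type assertions deliberately ignore failures, as in `parseTools` itself: a missing or mistyped field yields a zero value rather than an error.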
-// parseToolChoice converts Open Responses tool_choice to Anthropic format
-func parseToolChoice(req *api.ResponseRequest) (anthropic.ToolChoiceUnionParam, error) {
- var result anthropic.ToolChoiceUnionParam
-
-	if len(req.ToolChoice) == 0 {
- return result, nil
- }
-
- var choice interface{}
- if err := json.Unmarshal(req.ToolChoice, &choice); err != nil {
- return result, fmt.Errorf("unmarshal tool_choice: %w", err)
- }
-
-	// Handle string values: "auto", "any", "required", "none"
- if str, ok := choice.(string); ok {
- switch str {
- case "auto":
- result.OfAuto = &anthropic.ToolChoiceAutoParam{
- Type: "auto",
- }
- case "any", "required":
- result.OfAny = &anthropic.ToolChoiceAnyParam{
- Type: "any",
- }
- case "none":
- result.OfNone = &anthropic.ToolChoiceNoneParam{
- Type: "none",
- }
- default:
- return result, fmt.Errorf("unknown tool_choice string: %s", str)
- }
- return result, nil
- }
-
-	// Handle specific tool selection: OpenAI-style {"type": "function", "function": {"name": "..."}} or a direct {"name": "..."} object
- if obj, ok := choice.(map[string]interface{}); ok {
- // Check for OpenAI format: {"type": "function", "function": {"name": "..."}}
- if funcObj, ok := obj["function"].(map[string]interface{}); ok {
- if name, ok := funcObj["name"].(string); ok {
- result.OfTool = &anthropic.ToolChoiceToolParam{
- Type: "tool",
- Name: name,
- }
- return result, nil
- }
- }
-
- // Check for direct name field
- if name, ok := obj["name"].(string); ok {
- result.OfTool = &anthropic.ToolChoiceToolParam{
- Type: "tool",
- Name: name,
- }
- return result, nil
- }
- }
-
- return result, fmt.Errorf("invalid tool_choice format")
-}
-
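Note that the string branch above folds Open Responses' `"required"` into Anthropic's `"any"` mode. A stdlib-only sketch of that mapping, with a hypothetical `mapToolChoice` helper standing in for the union-param construction:

```go
package main

import (
	"errors"
	"fmt"
)

// mapToolChoice mirrors the string branch of parseToolChoice:
// "auto" -> auto, "any"/"required" -> any, "none" -> none.
func mapToolChoice(s string) (string, error) {
	switch s {
	case "auto":
		return "auto", nil
	case "any", "required":
		return "any", nil
	case "none":
		return "none", nil
	default:
		return "", errors.New("unknown tool_choice string: " + s)
	}
}

func main() {
	for _, s := range []string{"auto", "required", "none"} {
		mode, err := mapToolChoice(s)
		if err != nil {
			panic(err)
		}
		fmt.Println(s, "->", mode)
	}
}
```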
-// extractToolCalls converts Anthropic content blocks to api.ToolCall
-func extractToolCalls(content []anthropic.ContentBlockUnion) []api.ToolCall {
- var toolCalls []api.ToolCall
-
- for _, block := range content {
- // Check if this is a tool_use block
- if block.Type == "tool_use" {
- // Cast to ToolUseBlock to access the fields
- toolUse := block.AsToolUse()
-
- // Marshal the input to JSON string for Arguments
- argsJSON, _ := json.Marshal(toolUse.Input)
-
- toolCalls = append(toolCalls, api.ToolCall{
- ID: toolUse.ID,
- Name: toolUse.Name,
- Arguments: string(argsJSON),
- })
- }
- }
-
- return toolCalls
-}
-
-// extractToolCallDelta extracts tool call delta from streaming content block delta
-func extractToolCallDelta(delta anthropic.RawContentBlockDeltaUnion, index int) *api.ToolCallDelta {
- // Check if this is an input_json_delta (streaming tool arguments)
- if delta.Type == "input_json_delta" {
- return &api.ToolCallDelta{
- Index: index,
- Arguments: delta.PartialJSON,
- }
- }
-
- return nil
-}
-
-
-
-package providers
-
-import (
- "context"
- "fmt"
- "time"
-
- "github.com/sony/gobreaker"
-
- "github.com/ajac-zero/latticelm/internal/api"
-)
-
-// CircuitBreakerProvider wraps a Provider with circuit breaker functionality.
-type CircuitBreakerProvider struct {
- provider Provider
- cb *gobreaker.CircuitBreaker
-}
-
-// CircuitBreakerConfig holds configuration for the circuit breaker.
-type CircuitBreakerConfig struct {
- // MaxRequests is the maximum number of requests allowed to pass through
- // when the circuit breaker is half-open. Default: 3
- MaxRequests uint32
-
- // Interval is the cyclic period of the closed state for the circuit breaker
- // to clear the internal Counts. Default: 30s
- Interval time.Duration
-
- // Timeout is the period of the open state, after which the state becomes half-open.
- // Default: 60s
- Timeout time.Duration
-
- // MinRequests is the minimum number of requests needed before evaluating failure ratio.
- // Default: 5
- MinRequests uint32
-
- // FailureRatio is the ratio of failures that will trip the circuit breaker.
- // Default: 0.5 (50%)
- FailureRatio float64
-
- // OnStateChange is an optional callback invoked when circuit breaker state changes.
- // Parameters: provider name, from state, to state
- OnStateChange func(provider, from, to string)
-}
-
-// DefaultCircuitBreakerConfig returns a sensible default configuration.
-func DefaultCircuitBreakerConfig() CircuitBreakerConfig {
- return CircuitBreakerConfig{
- MaxRequests: 3,
- Interval: 30 * time.Second,
- Timeout: 60 * time.Second,
- MinRequests: 5,
- FailureRatio: 0.5,
- }
-}
-
-// NewCircuitBreakerProvider wraps a provider with circuit breaker functionality.
-func NewCircuitBreakerProvider(provider Provider, cfg CircuitBreakerConfig) *CircuitBreakerProvider {
- providerName := provider.Name()
-
- settings := gobreaker.Settings{
- Name: fmt.Sprintf("%s-circuit-breaker", providerName),
- MaxRequests: cfg.MaxRequests,
- Interval: cfg.Interval,
- Timeout: cfg.Timeout,
- ReadyToTrip: func(counts gobreaker.Counts) bool {
- // Only trip if we have enough requests to be statistically meaningful
- if counts.Requests < cfg.MinRequests {
- return false
- }
- failureRatio := float64(counts.TotalFailures) / float64(counts.Requests)
- return failureRatio >= cfg.FailureRatio
- },
- OnStateChange: func(name string, from gobreaker.State, to gobreaker.State) {
- // Call the callback if provided
- if cfg.OnStateChange != nil {
- cfg.OnStateChange(providerName, from.String(), to.String())
- }
- },
- }
-
- return &CircuitBreakerProvider{
- provider: provider,
- cb: gobreaker.NewCircuitBreaker(settings),
- }
-}
-
-// Name returns the underlying provider name.
-func (p *CircuitBreakerProvider) Name() string {
- return p.provider.Name()
-}
-
-// Generate wraps the provider's Generate method with circuit breaker protection.
-func (p *CircuitBreakerProvider) Generate(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (*api.ProviderResult, error) {
- result, err := p.cb.Execute(func() (interface{}, error) {
- return p.provider.Generate(ctx, messages, req)
- })
-
- if err != nil {
- return nil, err
- }
-
- return result.(*api.ProviderResult), nil
-}
-
-// GenerateStream wraps the provider's GenerateStream method with circuit breaker protection.
-func (p *CircuitBreakerProvider) GenerateStream(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (<-chan *api.ProviderStreamDelta, <-chan error) {
-	// For streaming, check the circuit breaker state before initiating the stream;
-	// if the circuit is open, return an error immediately. Note: half-open
-	// MaxRequests limiting is not enforced for streams, since the stream bypasses
-	// cb.Execute and only reports its outcome after completion.
- state := p.cb.State()
- if state == gobreaker.StateOpen {
- errChan := make(chan error, 1)
- deltaChan := make(chan *api.ProviderStreamDelta)
- errChan <- gobreaker.ErrOpenState
- close(deltaChan)
- close(errChan)
- return deltaChan, errChan
- }
-
- // If circuit is closed or half-open, attempt the stream
- deltaChan, errChan := p.provider.GenerateStream(ctx, messages, req)
-
- // Wrap the error channel to report successes/failures to circuit breaker
- wrappedErrChan := make(chan error, 1)
-
- go func() {
- defer close(wrappedErrChan)
-
- // Wait for the error channel to signal completion
- if err := <-errChan; err != nil {
- // Record failure in circuit breaker
- p.cb.Execute(func() (interface{}, error) {
- return nil, err
- })
- wrappedErrChan <- err
- } else {
- // Record success in circuit breaker
- p.cb.Execute(func() (interface{}, error) {
- return nil, nil
- })
- }
- }()
-
- return deltaChan, wrappedErrChan
-}
-
-
-
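The `ReadyToTrip` closure above only evaluates the failure ratio once `MinRequests` samples have accumulated, so a lone early failure cannot trip the breaker. As a standalone illustration (the `shouldTrip` helper is hypothetical, not part of the package), the guard reduces to a pure function:

```go
package main

import "fmt"

// shouldTrip mirrors the ReadyToTrip logic: require a minimum sample size,
// then trip once the observed failure ratio reaches the threshold.
func shouldTrip(requests, failures, minRequests uint32, failureRatio float64) bool {
	if requests < minRequests {
		return false
	}
	return float64(failures)/float64(requests) >= failureRatio
}

func main() {
	// Below MinRequests: never trips, even at 100% failures.
	fmt.Println(shouldTrip(3, 3, 5, 0.5))
	// At the 50% threshold with enough samples: trips.
	fmt.Println(shouldTrip(10, 5, 5, 0.5))
}
```

Keeping the guard as a pure predicate like this also makes the trip condition trivially unit-testable, independent of gobreaker's state machine.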
-package google
-
-import (
- "encoding/json"
- "fmt"
- "math/rand"
- "time"
-
- "google.golang.org/genai"
-
- "github.com/ajac-zero/latticelm/internal/api"
-)
-
-// parseTools converts generic tool definitions from req.Tools (JSON) to Google's []*genai.Tool format.
-func parseTools(req *api.ResponseRequest) ([]*genai.Tool, error) {
-	if len(req.Tools) == 0 {
- return nil, nil
- }
-
- // Unmarshal to slice of tool definitions
- var toolDefs []map[string]interface{}
- if err := json.Unmarshal(req.Tools, &toolDefs); err != nil {
- return nil, fmt.Errorf("unmarshal tools: %w", err)
- }
-
- var functionDeclarations []*genai.FunctionDeclaration
-
- for _, toolDef := range toolDefs {
- // Extract function details
- // Support both flat format (name/description/parameters at top level)
- // and nested format (under "function" key)
- var name, description string
- var parameters interface{}
-
- if functionData, ok := toolDef["function"].(map[string]interface{}); ok {
- // Nested format: {"type": "function", "function": {...}}
- name, _ = functionData["name"].(string)
- description, _ = functionData["description"].(string)
- parameters = functionData["parameters"]
- } else {
- // Flat format: {"type": "function", "name": "...", ...}
- name, _ = toolDef["name"].(string)
- description, _ = toolDef["description"].(string)
- parameters = toolDef["parameters"]
- }
-
- if name == "" {
- continue
- }
-
- // Create function declaration
- funcDecl := &genai.FunctionDeclaration{
- Name: name,
- Description: description,
- }
-
- // Google accepts parameters as raw JSON schema
- if parameters != nil {
- funcDecl.ParametersJsonSchema = parameters
- }
-
- functionDeclarations = append(functionDeclarations, funcDecl)
- }
-
- // Return single Tool with all function declarations
- if len(functionDeclarations) > 0 {
- return []*genai.Tool{{FunctionDeclarations: functionDeclarations}}, nil
- }
-
- return nil, nil
-}
-
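`parseTools` above accepts both the nested OpenAI-style shape (`{"type": "function", "function": {...}}`) and the flat shape with fields at the top level. A stdlib-only sketch of that dual extraction; the `toolName` helper is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolName returns the function name from either the nested
// {"function": {"name": ...}} shape or the flat {"name": ...} shape.
func toolName(td map[string]interface{}) string {
	if fn, ok := td["function"].(map[string]interface{}); ok {
		name, _ := fn["name"].(string)
		return name
	}
	name, _ := td["name"].(string)
	return name
}

func main() {
	nested := []byte(`{"type":"function","function":{"name":"lookup"}}`)
	flat := []byte(`{"type":"function","name":"lookup"}`)
	for _, raw := range [][]byte{nested, flat} {
		var td map[string]interface{}
		if err := json.Unmarshal(raw, &td); err != nil {
			panic(err)
		}
		fmt.Println(toolName(td))
	}
}
```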
-// parseToolChoice converts req.ToolChoice to Google's ToolConfig with FunctionCallingConfig.
-func parseToolChoice(req *api.ResponseRequest) (*genai.ToolConfig, error) {
-	if len(req.ToolChoice) == 0 {
- return nil, nil
- }
-
- var choice interface{}
- if err := json.Unmarshal(req.ToolChoice, &choice); err != nil {
- return nil, fmt.Errorf("unmarshal tool_choice: %w", err)
- }
-
- config := &genai.ToolConfig{
- FunctionCallingConfig: &genai.FunctionCallingConfig{},
- }
-
- // Handle string values: "auto", "none", "required"/"any"
- if str, ok := choice.(string); ok {
- switch str {
- case "auto":
- config.FunctionCallingConfig.Mode = genai.FunctionCallingConfigModeAuto
- case "none":
- config.FunctionCallingConfig.Mode = genai.FunctionCallingConfigModeNone
- case "required", "any":
- config.FunctionCallingConfig.Mode = genai.FunctionCallingConfigModeAny
- default:
- return nil, fmt.Errorf("unknown tool_choice string: %s", str)
- }
- return config, nil
- }
-
- // Handle object format: {"type": "function", "function": {"name": "..."}}
- if obj, ok := choice.(map[string]interface{}); ok {
- if typeVal, ok := obj["type"].(string); ok && typeVal == "function" {
- config.FunctionCallingConfig.Mode = genai.FunctionCallingConfigModeAny
- if funcObj, ok := obj["function"].(map[string]interface{}); ok {
- if name, ok := funcObj["name"].(string); ok {
- config.FunctionCallingConfig.AllowedFunctionNames = []string{name}
- }
- }
- return config, nil
- }
- }
-
- return nil, fmt.Errorf("unsupported tool_choice format")
-}
-
-// extractToolCalls extracts tool calls from Google's response format to generic api.ToolCall slice.
-func extractToolCalls(resp *genai.GenerateContentResponse) []api.ToolCall {
- var toolCalls []api.ToolCall
-
- for _, candidate := range resp.Candidates {
- if candidate.Content == nil {
- continue
- }
-
- for _, part := range candidate.Content.Parts {
- if part == nil || part.FunctionCall == nil {
- continue
- }
-
- // Extract function call details
- fc := part.FunctionCall
-
- // Marshal arguments to JSON string
- var argsJSON string
- if fc.Args != nil {
- argsBytes, err := json.Marshal(fc.Args)
- if err == nil {
- argsJSON = string(argsBytes)
- } else {
- // Fallback to empty object
- argsJSON = "{}"
- }
- } else {
- argsJSON = "{}"
- }
-
- // Generate ID if Google doesn't provide one
- callID := fc.ID
- if callID == "" {
- callID = fmt.Sprintf("call_%s", generateRandomID())
- }
-
- toolCalls = append(toolCalls, api.ToolCall{
- ID: callID,
- Name: fc.Name,
- Arguments: argsJSON,
- })
- }
- }
-
- return toolCalls
-}
-
-// extractToolCallDelta extracts streaming tool call information from response parts.
-func extractToolCallDelta(part *genai.Part, index int) *api.ToolCallDelta {
- if part == nil || part.FunctionCall == nil {
- return nil
- }
-
- fc := part.FunctionCall
-
- // Marshal arguments to JSON string
- var argsJSON string
- if fc.Args != nil {
- argsBytes, err := json.Marshal(fc.Args)
- if err == nil {
- argsJSON = string(argsBytes)
- } else {
- argsJSON = "{}"
- }
- } else {
- argsJSON = "{}"
- }
-
- // Generate ID if Google doesn't provide one
- callID := fc.ID
- if callID == "" {
- callID = fmt.Sprintf("call_%s", generateRandomID())
- }
-
- return &api.ToolCallDelta{
- Index: index,
- ID: callID,
- Name: fc.Name,
- Arguments: argsJSON,
- }
-}
-
-// generateRandomID generates a random alphanumeric ID.
-// Note: seeding a fresh source from time.Now().UnixNano() on every call can
-// yield identical IDs for two calls that land in the same nanosecond; the
-// auto-seeded top-level math/rand functions (Go 1.20+) or crypto/rand avoid
-// that hazard.
-func generateRandomID() string {
-	const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
-	const length = 24
-	rng := rand.New(rand.NewSource(time.Now().UnixNano()))
-	b := make([]byte, length)
-	for i := range b {
-		b[i] = charset[rng.Intn(len(charset))]
-	}
-	return string(b)
-}
-
-
-
-package google
-
-import (
- "context"
- "encoding/json"
- "fmt"
-
- "github.com/google/uuid"
- "google.golang.org/genai"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/config"
-)
-
-const Name = "google"
-
-// Provider implements the Google Generative AI integration.
-type Provider struct {
- cfg config.ProviderConfig
- client *genai.Client
-}
-
-// New constructs a Provider using the Google AI API with API key authentication.
-func New(cfg config.ProviderConfig) (*Provider, error) {
- var client *genai.Client
- if cfg.APIKey != "" {
- var err error
- client, err = genai.NewClient(context.Background(), &genai.ClientConfig{
- APIKey: cfg.APIKey,
- })
- if err != nil {
- return nil, fmt.Errorf("failed to create google client: %w", err)
- }
- }
- return &Provider{
- cfg: cfg,
- client: client,
- }, nil
-}
-
-// NewVertexAI constructs a Provider targeting Vertex AI.
-// Vertex AI uses the same genai SDK but with GCP project/location configuration
-// and Application Default Credentials (ADC) or service account authentication.
-func NewVertexAI(vertexCfg config.VertexAIConfig) (*Provider, error) {
- var client *genai.Client
- if vertexCfg.Project != "" && vertexCfg.Location != "" {
- var err error
- client, err = genai.NewClient(context.Background(), &genai.ClientConfig{
- Project: vertexCfg.Project,
- Location: vertexCfg.Location,
- Backend: genai.BackendVertexAI,
- })
- if err != nil {
- return nil, fmt.Errorf("failed to create vertex ai client: %w", err)
- }
- }
- return &Provider{
- cfg: config.ProviderConfig{
- // Vertex AI doesn't use API key, but set empty for consistency
- APIKey: "",
- },
- client: client,
- }, nil
-}
-
-func (p *Provider) Name() string { return Name }
-
-// Generate routes the request to Gemini and returns a ProviderResult.
-func (p *Provider) Generate(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (*api.ProviderResult, error) {
- if p.client == nil {
- return nil, fmt.Errorf("google client not initialized")
- }
-
- model := req.Model
-
- contents, systemText := convertMessages(messages)
-
- // Parse tools if present
- var tools []*genai.Tool
-	if len(req.Tools) > 0 {
- var err error
- tools, err = parseTools(req)
- if err != nil {
- return nil, fmt.Errorf("parse tools: %w", err)
- }
- }
-
- // Parse tool_choice if present
- var toolConfig *genai.ToolConfig
-	if len(req.ToolChoice) > 0 {
- var err error
- toolConfig, err = parseToolChoice(req)
- if err != nil {
- return nil, fmt.Errorf("parse tool_choice: %w", err)
- }
- }
-
- config := buildConfig(systemText, req, tools, toolConfig)
-
- resp, err := p.client.Models.GenerateContent(ctx, model, contents, config)
- if err != nil {
- return nil, fmt.Errorf("google api error: %w", err)
- }
-
- var text string
- if len(resp.Candidates) > 0 && resp.Candidates[0].Content != nil {
- for _, part := range resp.Candidates[0].Content.Parts {
- if part != nil {
- text += part.Text
- }
- }
- }
-
- var toolCalls []api.ToolCall
- if len(resp.Candidates) > 0 {
- toolCalls = extractToolCalls(resp)
- }
-
- var inputTokens, outputTokens int
- if resp.UsageMetadata != nil {
- inputTokens = int(resp.UsageMetadata.PromptTokenCount)
- outputTokens = int(resp.UsageMetadata.CandidatesTokenCount)
- }
-
- return &api.ProviderResult{
- ID: uuid.NewString(),
- Model: model,
- Text: text,
- ToolCalls: toolCalls,
- Usage: api.Usage{
- InputTokens: inputTokens,
- OutputTokens: outputTokens,
- TotalTokens: inputTokens + outputTokens,
- },
- }, nil
-}
-
-// GenerateStream handles streaming requests to Google.
-func (p *Provider) GenerateStream(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (<-chan *api.ProviderStreamDelta, <-chan error) {
- deltaChan := make(chan *api.ProviderStreamDelta)
- errChan := make(chan error, 1)
-
- go func() {
- defer close(deltaChan)
- defer close(errChan)
-
- if p.client == nil {
- errChan <- fmt.Errorf("google client not initialized")
- return
- }
-
- model := req.Model
-
- contents, systemText := convertMessages(messages)
-
- // Parse tools if present
- var tools []*genai.Tool
-		if len(req.Tools) > 0 {
- var err error
- tools, err = parseTools(req)
- if err != nil {
- errChan <- fmt.Errorf("parse tools: %w", err)
- return
- }
- }
-
- // Parse tool_choice if present
- var toolConfig *genai.ToolConfig
-		if len(req.ToolChoice) > 0 {
- var err error
- toolConfig, err = parseToolChoice(req)
- if err != nil {
- errChan <- fmt.Errorf("parse tool_choice: %w", err)
- return
- }
- }
-
- config := buildConfig(systemText, req, tools, toolConfig)
-
- stream := p.client.Models.GenerateContentStream(ctx, model, contents, config)
-
- for resp, err := range stream {
- if err != nil {
- errChan <- fmt.Errorf("google stream error: %w", err)
- return
- }
-
- if len(resp.Candidates) > 0 && resp.Candidates[0].Content != nil {
- for partIndex, part := range resp.Candidates[0].Content.Parts {
- if part != nil {
- // Handle text content
- if part.Text != "" {
- select {
- case deltaChan <- &api.ProviderStreamDelta{Text: part.Text}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- }
-
- // Handle tool call content
- if part.FunctionCall != nil {
- delta := extractToolCallDelta(part, partIndex)
- if delta != nil {
- select {
- case deltaChan <- &api.ProviderStreamDelta{ToolCallDelta: delta}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- }
- }
- }
- }
- }
- }
-
- select {
- case deltaChan <- &api.ProviderStreamDelta{Done: true}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- }
- }()
-
- return deltaChan, errChan
-}
-
-// convertMessages splits messages into Gemini contents and system text.
-func convertMessages(messages []api.Message) ([]*genai.Content, string) {
- var contents []*genai.Content
- var systemText string
-
- // Build a map of CallID -> Name from assistant tool calls
- // This allows us to look up function names when processing tool results
- callIDToName := make(map[string]string)
- for _, msg := range messages {
- if msg.Role == "assistant" || msg.Role == "model" {
- for _, tc := range msg.ToolCalls {
- if tc.ID != "" && tc.Name != "" {
- callIDToName[tc.ID] = tc.Name
- }
- }
- }
- }
-
- for _, msg := range messages {
- if msg.Role == "system" || msg.Role == "developer" {
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- systemText += block.Text
- }
- }
- continue
- }
-
- if msg.Role == "tool" {
- // Tool results are sent as FunctionResponse in user role message
- var output string
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- output += block.Text
- }
- }
-
- // Parse output as JSON map, or wrap in {"output": "..."} if not JSON
- var responseMap map[string]any
- if err := json.Unmarshal([]byte(output), &responseMap); err != nil {
- // Not JSON, wrap it
- responseMap = map[string]any{"output": output}
- }
-
- // Get function name from message or look it up from CallID
- name := msg.Name
- if name == "" && msg.CallID != "" {
- name = callIDToName[msg.CallID]
- }
-
- // Create FunctionResponse part with CallID and Name from message
- part := &genai.Part{
- FunctionResponse: &genai.FunctionResponse{
- ID: msg.CallID,
- Name: name, // Name is required by Google
- Response: responseMap,
- },
- }
-
- // Add to user role message
- contents = append(contents, &genai.Content{
- Role: "user",
- Parts: []*genai.Part{part},
- })
- continue
- }
-
- var parts []*genai.Part
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- parts = append(parts, genai.NewPartFromText(block.Text))
- }
- }
-
- // Add tool calls for assistant messages
- if msg.Role == "assistant" || msg.Role == "model" {
- for _, tc := range msg.ToolCalls {
- // Parse arguments JSON into map
- var args map[string]any
- if err := json.Unmarshal([]byte(tc.Arguments), &args); err != nil {
- // If unmarshal fails, skip this tool call
- continue
- }
-
- // Create FunctionCall part
- parts = append(parts, &genai.Part{
- FunctionCall: &genai.FunctionCall{
- ID: tc.ID,
- Name: tc.Name,
- Args: args,
- },
- })
- }
- }
-
- role := "user"
- if msg.Role == "assistant" || msg.Role == "model" {
- role = "model"
- }
-
- contents = append(contents, &genai.Content{
- Role: role,
- Parts: parts,
- })
- }
-
- return contents, systemText
-}
-
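`convertMessages` above forwards a tool result verbatim when it parses as a JSON object and otherwise wraps it as `{"output": ...}`, since Gemini's `FunctionResponse` requires a map. A minimal sketch of that branch; the `toResponseMap` helper name is illustrative:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toResponseMap parses a tool result as a JSON object, falling back to
// wrapping the raw string under an "output" key when it is not one.
func toResponseMap(output string) map[string]any {
	var m map[string]any
	if err := json.Unmarshal([]byte(output), &m); err != nil {
		return map[string]any{"output": output}
	}
	return m
}

func main() {
	// Already a JSON object: forwarded as-is.
	fmt.Println(toResponseMap(`{"temp": 21}`))
	// Plain text: wrapped under "output".
	fmt.Println(toResponseMap("sunny"))
}
```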
-// buildConfig constructs a GenerateContentConfig from system text and request params.
-func buildConfig(systemText string, req *api.ResponseRequest, tools []*genai.Tool, toolConfig *genai.ToolConfig) *genai.GenerateContentConfig {
- var cfg *genai.GenerateContentConfig
-
- needsCfg := systemText != "" || req.MaxOutputTokens != nil || req.Temperature != nil || req.TopP != nil || tools != nil || toolConfig != nil
- if !needsCfg {
- return nil
- }
-
- cfg = &genai.GenerateContentConfig{}
-
- if systemText != "" {
- cfg.SystemInstruction = &genai.Content{
- Parts: []*genai.Part{genai.NewPartFromText(systemText)},
- }
- }
-
- if req.MaxOutputTokens != nil {
- cfg.MaxOutputTokens = int32(*req.MaxOutputTokens)
- }
-
- if req.Temperature != nil {
- t := float32(*req.Temperature)
- cfg.Temperature = &t
- }
-
- if req.TopP != nil {
- tp := float32(*req.TopP)
- cfg.TopP = &tp
- }
-
- if tools != nil {
- cfg.Tools = tools
- }
-
- if toolConfig != nil {
- cfg.ToolConfig = toolConfig
- }
-
- return cfg
-}
-
-func chooseModel(requested, defaultModel string) string {
- if requested != "" {
- return requested
- }
- if defaultModel != "" {
- return defaultModel
- }
- return "gemini-2.0-flash-exp"
-}
-
-
-
-package openai
-
-import (
- "encoding/json"
- "fmt"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/openai/openai-go/v3"
- "github.com/openai/openai-go/v3/shared"
-)
-
-// parseTools converts Open Responses tools to OpenAI format
-func parseTools(req *api.ResponseRequest) ([]openai.ChatCompletionToolUnionParam, error) {
-	if len(req.Tools) == 0 {
- return nil, nil
- }
-
- var toolDefs []map[string]interface{}
- if err := json.Unmarshal(req.Tools, &toolDefs); err != nil {
- return nil, fmt.Errorf("unmarshal tools: %w", err)
- }
-
- var tools []openai.ChatCompletionToolUnionParam
- for _, td := range toolDefs {
- // Convert Open Responses tool to OpenAI ChatCompletionFunctionToolParam
- // Extract: name, description, parameters
- name, _ := td["name"].(string)
- desc, _ := td["description"].(string)
- params, _ := td["parameters"].(map[string]interface{})
-
- funcDef := shared.FunctionDefinitionParam{
- Name: name,
- }
-
- if desc != "" {
- funcDef.Description = openai.String(desc)
- }
-
- if params != nil {
- funcDef.Parameters = shared.FunctionParameters(params)
- }
-
- tools = append(tools, openai.ChatCompletionFunctionTool(funcDef))
- }
-
- return tools, nil
-}
-
-// parseToolChoice converts Open Responses tool_choice to OpenAI format
-func parseToolChoice(req *api.ResponseRequest) (openai.ChatCompletionToolChoiceOptionUnionParam, error) {
- var result openai.ChatCompletionToolChoiceOptionUnionParam
-
-	if len(req.ToolChoice) == 0 {
- return result, nil
- }
-
- var choice interface{}
- if err := json.Unmarshal(req.ToolChoice, &choice); err != nil {
- return result, fmt.Errorf("unmarshal tool_choice: %w", err)
- }
-
-	// Handle string values: "auto", "none", "required"
-	if str, ok := choice.(string); ok {
-		switch str {
-		case "auto", "none", "required":
-			result.OfAuto = openai.String(str)
-			return result, nil
-		default:
-			return result, fmt.Errorf("unknown tool_choice string: %s", str)
-		}
-	}
-
- // Handle specific function selection: {"type": "function", "function": {"name": "..."}}
- if obj, ok := choice.(map[string]interface{}); ok {
- funcObj, _ := obj["function"].(map[string]interface{})
- name, _ := funcObj["name"].(string)
-
- return openai.ToolChoiceOptionFunctionToolChoice(
- openai.ChatCompletionNamedToolChoiceFunctionParam{
- Name: name,
- },
- ), nil
- }
-
- return result, fmt.Errorf("invalid tool_choice format")
-}
-
-// extractToolCalls converts OpenAI tool calls to api.ToolCall
-func extractToolCalls(message openai.ChatCompletionMessage) []api.ToolCall {
- if len(message.ToolCalls) == 0 {
- return nil
- }
-
- var toolCalls []api.ToolCall
- for _, tc := range message.ToolCalls {
- toolCalls = append(toolCalls, api.ToolCall{
- ID: tc.ID,
- Name: tc.Function.Name,
- Arguments: tc.Function.Arguments,
- })
- }
- return toolCalls
-}
-
-// extractToolCallDelta extracts tool call delta from streaming chunk choice
-func extractToolCallDelta(choice openai.ChatCompletionChunkChoice) *api.ToolCallDelta {
- if len(choice.Delta.ToolCalls) == 0 {
- return nil
- }
-
-	// OpenAI includes the tool call index in the delta; surface the first entry
-	// (the length is checked above, so indexing is safe)
-	tc := choice.Delta.ToolCalls[0]
-	return &api.ToolCallDelta{
-		Index:     int(tc.Index),
-		ID:        tc.ID,
-		Name:      tc.Function.Name,
-		Arguments: tc.Function.Arguments,
-	}
-}
-
-
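On the wire, OpenAI streams tool-call arguments as fragments keyed by `Index`: the first fragment for an index carries `ID` and `Name`, and later fragments append to `Arguments`. A hedged, stdlib-only sketch of how a caller might reassemble them (the `delta` struct here is a stand-in for `api.ToolCallDelta`; `accumulate` is not part of the gateway):

```go
package main

import "fmt"

// delta is a stand-in for api.ToolCallDelta.
type delta struct {
	Index     int
	ID        string
	Name      string
	Arguments string
}

// accumulate merges streamed deltas into complete tool calls:
// ID and Name arrive on the first fragment for an index,
// Arguments are concatenated across fragments.
func accumulate(deltas []delta) map[int]delta {
	calls := make(map[int]delta)
	for _, d := range deltas {
		c := calls[d.Index]
		if d.ID != "" {
			c.ID = d.ID
		}
		if d.Name != "" {
			c.Name = d.Name
		}
		c.Arguments += d.Arguments
		calls[d.Index] = c
	}
	return calls
}

func main() {
	stream := []delta{
		{Index: 0, ID: "call_1", Name: "get_weather", Arguments: `{"ci`},
		{Index: 0, Arguments: `ty":"Oslo"}`},
	}
	fmt.Println(accumulate(stream)[0].Arguments)
}
```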
-
-package openai
-
-import (
- "context"
- "fmt"
-
- "github.com/openai/openai-go/v3"
- "github.com/openai/openai-go/v3/azure"
- "github.com/openai/openai-go/v3/option"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/config"
-)
-
-const Name = "openai"
-
-// Provider implements the OpenAI SDK integration.
-// It supports both direct OpenAI API and Azure-hosted endpoints.
-type Provider struct {
- cfg config.ProviderConfig
- client *openai.Client
- azure bool
-}
-
-// New constructs a Provider for the direct OpenAI API.
-func New(cfg config.ProviderConfig) *Provider {
- var client *openai.Client
- if cfg.APIKey != "" {
- c := openai.NewClient(option.WithAPIKey(cfg.APIKey))
- client = &c
- }
- return &Provider{
- cfg: cfg,
- client: client,
- }
-}
-
-// NewAzure constructs a Provider targeting Azure OpenAI.
-// Azure OpenAI uses the OpenAI SDK with the azure subpackage for proper
-// endpoint routing, api-version query parameter, and API key header.
-func NewAzure(azureCfg config.AzureOpenAIConfig) *Provider {
- var client *openai.Client
- if azureCfg.APIKey != "" && azureCfg.Endpoint != "" {
- apiVersion := azureCfg.APIVersion
- if apiVersion == "" {
- apiVersion = "2024-12-01-preview"
- }
- c := openai.NewClient(
- azure.WithEndpoint(azureCfg.Endpoint, apiVersion),
- azure.WithAPIKey(azureCfg.APIKey),
- )
- client = &c
- }
- return &Provider{
- cfg: config.ProviderConfig{
- APIKey: azureCfg.APIKey,
- },
- client: client,
- azure: true,
- }
-}
-
-// Name returns the provider identifier.
-func (p *Provider) Name() string { return Name }
-
-// Generate routes the request to OpenAI and returns a ProviderResult.
-func (p *Provider) Generate(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (*api.ProviderResult, error) {
- if p.cfg.APIKey == "" {
- return nil, fmt.Errorf("openai api key missing")
- }
- if p.client == nil {
- return nil, fmt.Errorf("openai client not initialized")
- }
-
- // Convert messages to OpenAI format
- oaiMessages := make([]openai.ChatCompletionMessageParamUnion, 0, len(messages))
- for _, msg := range messages {
- var content string
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- content += block.Text
- }
- }
-
- switch msg.Role {
- case "user":
- oaiMessages = append(oaiMessages, openai.UserMessage(content))
- case "assistant":
- // If assistant message has tool calls, include them
- if len(msg.ToolCalls) > 0 {
- toolCalls := make([]openai.ChatCompletionMessageToolCallUnionParam, len(msg.ToolCalls))
- for i, tc := range msg.ToolCalls {
- toolCalls[i] = openai.ChatCompletionMessageToolCallUnionParam{
- OfFunction: &openai.ChatCompletionMessageFunctionToolCallParam{
- ID: tc.ID,
- Function: openai.ChatCompletionMessageFunctionToolCallFunctionParam{
- Name: tc.Name,
- Arguments: tc.Arguments,
- },
- },
- }
- }
- msgParam := openai.ChatCompletionAssistantMessageParam{
- ToolCalls: toolCalls,
- }
- if content != "" {
- msgParam.Content.OfString = openai.String(content)
- }
- oaiMessages = append(oaiMessages, openai.ChatCompletionMessageParamUnion{
- OfAssistant: &msgParam,
- })
- } else {
- oaiMessages = append(oaiMessages, openai.AssistantMessage(content))
- }
- case "system":
- oaiMessages = append(oaiMessages, openai.SystemMessage(content))
- case "developer":
- oaiMessages = append(oaiMessages, openai.SystemMessage(content))
- case "tool":
- oaiMessages = append(oaiMessages, openai.ToolMessage(content, msg.CallID))
- }
- }
-
- params := openai.ChatCompletionNewParams{
- Model: openai.ChatModel(req.Model),
- Messages: oaiMessages,
- }
- if req.MaxOutputTokens != nil {
- params.MaxTokens = openai.Int(int64(*req.MaxOutputTokens))
- }
- if req.Temperature != nil {
- params.Temperature = openai.Float(*req.Temperature)
- }
- if req.TopP != nil {
- params.TopP = openai.Float(*req.TopP)
- }
-
- // Add tools if present
-	if len(req.Tools) > 0 {
- tools, err := parseTools(req)
- if err != nil {
- return nil, fmt.Errorf("parse tools: %w", err)
- }
- params.Tools = tools
- }
-
- // Add tool_choice if present
-	if len(req.ToolChoice) > 0 {
- toolChoice, err := parseToolChoice(req)
- if err != nil {
- return nil, fmt.Errorf("parse tool_choice: %w", err)
- }
- params.ToolChoice = toolChoice
- }
-
- // Add parallel_tool_calls if specified
- if req.ParallelToolCalls != nil {
- params.ParallelToolCalls = openai.Bool(*req.ParallelToolCalls)
- }
-
- // Call OpenAI API
- resp, err := p.client.Chat.Completions.New(ctx, params)
- if err != nil {
- return nil, fmt.Errorf("openai api error: %w", err)
- }
-
- var combinedText string
- var toolCalls []api.ToolCall
-
- for _, choice := range resp.Choices {
- combinedText += choice.Message.Content
- if len(choice.Message.ToolCalls) > 0 {
- toolCalls = append(toolCalls, extractToolCalls(choice.Message)...)
- }
- }
-
- return &api.ProviderResult{
- ID: resp.ID,
- Model: resp.Model,
- Text: combinedText,
- ToolCalls: toolCalls,
- Usage: api.Usage{
- InputTokens: int(resp.Usage.PromptTokens),
- OutputTokens: int(resp.Usage.CompletionTokens),
- TotalTokens: int(resp.Usage.TotalTokens),
- },
- }, nil
-}
-
-// GenerateStream handles streaming requests to OpenAI.
-func (p *Provider) GenerateStream(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (<-chan *api.ProviderStreamDelta, <-chan error) {
- deltaChan := make(chan *api.ProviderStreamDelta)
- errChan := make(chan error, 1)
-
- go func() {
- defer close(deltaChan)
- defer close(errChan)
-
- if p.cfg.APIKey == "" {
- errChan <- fmt.Errorf("openai api key missing")
- return
- }
- if p.client == nil {
- errChan <- fmt.Errorf("openai client not initialized")
- return
- }
-
- // Convert messages to OpenAI format
- oaiMessages := make([]openai.ChatCompletionMessageParamUnion, 0, len(messages))
- for _, msg := range messages {
- var content string
- for _, block := range msg.Content {
- if block.Type == "input_text" || block.Type == "output_text" {
- content += block.Text
- }
- }
-
- switch msg.Role {
- case "user":
- oaiMessages = append(oaiMessages, openai.UserMessage(content))
- case "assistant":
- // If assistant message has tool calls, include them
- if len(msg.ToolCalls) > 0 {
- toolCalls := make([]openai.ChatCompletionMessageToolCallUnionParam, len(msg.ToolCalls))
- for i, tc := range msg.ToolCalls {
- toolCalls[i] = openai.ChatCompletionMessageToolCallUnionParam{
- OfFunction: &openai.ChatCompletionMessageFunctionToolCallParam{
- ID: tc.ID,
- Function: openai.ChatCompletionMessageFunctionToolCallFunctionParam{
- Name: tc.Name,
- Arguments: tc.Arguments,
- },
- },
- }
- }
- msgParam := openai.ChatCompletionAssistantMessageParam{
- ToolCalls: toolCalls,
- }
- if content != "" {
- msgParam.Content.OfString = openai.String(content)
- }
- oaiMessages = append(oaiMessages, openai.ChatCompletionMessageParamUnion{
- OfAssistant: &msgParam,
- })
- } else {
- oaiMessages = append(oaiMessages, openai.AssistantMessage(content))
- }
- case "system":
- oaiMessages = append(oaiMessages, openai.SystemMessage(content))
- case "developer":
- oaiMessages = append(oaiMessages, openai.SystemMessage(content))
- case "tool":
- oaiMessages = append(oaiMessages, openai.ToolMessage(content, msg.CallID))
- }
- }
-
- params := openai.ChatCompletionNewParams{
- Model: openai.ChatModel(req.Model),
- Messages: oaiMessages,
- }
- if req.MaxOutputTokens != nil {
- params.MaxTokens = openai.Int(int64(*req.MaxOutputTokens))
- }
- if req.Temperature != nil {
- params.Temperature = openai.Float(*req.Temperature)
- }
- if req.TopP != nil {
- params.TopP = openai.Float(*req.TopP)
- }
-
- // Add tools if present
-		if len(req.Tools) > 0 {
- tools, err := parseTools(req)
- if err != nil {
- errChan <- fmt.Errorf("parse tools: %w", err)
- return
- }
- params.Tools = tools
- }
-
- // Add tool_choice if present
-		if len(req.ToolChoice) > 0 {
- toolChoice, err := parseToolChoice(req)
- if err != nil {
- errChan <- fmt.Errorf("parse tool_choice: %w", err)
- return
- }
- params.ToolChoice = toolChoice
- }
-
- // Add parallel_tool_calls if specified
- if req.ParallelToolCalls != nil {
- params.ParallelToolCalls = openai.Bool(*req.ParallelToolCalls)
- }
-
- // Create streaming request
- stream := p.client.Chat.Completions.NewStreaming(ctx, params)
-
- // Process stream
- for stream.Next() {
- chunk := stream.Current()
-
- for _, choice := range chunk.Choices {
- // Handle text content
- if choice.Delta.Content != "" {
- select {
- case deltaChan <- &api.ProviderStreamDelta{
- ID: chunk.ID,
- Model: chunk.Model,
- Text: choice.Delta.Content,
- }:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- }
-
- // Handle tool call deltas
- if len(choice.Delta.ToolCalls) > 0 {
- delta := extractToolCallDelta(choice)
- if delta != nil {
- select {
- case deltaChan <- &api.ProviderStreamDelta{
- ID: chunk.ID,
- Model: chunk.Model,
- ToolCallDelta: delta,
- }:
- case <-ctx.Done():
- errChan <- ctx.Err()
- return
- }
- }
- }
- }
- }
-
- if err := stream.Err(); err != nil {
- errChan <- fmt.Errorf("openai stream error: %w", err)
- return
- }
-
- // Send final delta
- select {
- case deltaChan <- &api.ProviderStreamDelta{Done: true}:
- case <-ctx.Done():
- errChan <- ctx.Err()
- }
- }()
-
- return deltaChan, errChan
-}
-
-func chooseModel(requested, defaultModel string) string {
- if requested != "" {
- return requested
- }
- if defaultModel != "" {
- return defaultModel
- }
- return "gpt-4o-mini"
-}
-
-
-
-package providers
-
-import (
- "context"
- "fmt"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/config"
- anthropicprovider "github.com/ajac-zero/latticelm/internal/providers/anthropic"
- googleprovider "github.com/ajac-zero/latticelm/internal/providers/google"
- openaiprovider "github.com/ajac-zero/latticelm/internal/providers/openai"
-)
-
-// Provider represents a unified interface that each LLM provider must implement.
-type Provider interface {
- Name() string
- Generate(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (*api.ProviderResult, error)
- GenerateStream(ctx context.Context, messages []api.Message, req *api.ResponseRequest) (<-chan *api.ProviderStreamDelta, <-chan error)
-}
-
-// Registry keeps track of registered providers and model-to-provider mappings.
-type Registry struct {
- providers map[string]Provider
- models map[string]string // model name -> provider entry name
- providerModelIDs map[string]string // model name -> provider model ID
- modelList []config.ModelEntry
-}
-
-// NewRegistry constructs provider implementations from configuration.
-func NewRegistry(entries map[string]config.ProviderEntry, models []config.ModelEntry) (*Registry, error) {
- return NewRegistryWithCircuitBreaker(entries, models, nil)
-}
-
-// NewRegistryWithCircuitBreaker constructs provider implementations with circuit breaker support.
-// The onStateChange callback is invoked when circuit breaker state changes.
-func NewRegistryWithCircuitBreaker(
- entries map[string]config.ProviderEntry,
- models []config.ModelEntry,
- onStateChange func(provider, from, to string),
-) (*Registry, error) {
- reg := &Registry{
- providers: make(map[string]Provider),
- models: make(map[string]string),
- providerModelIDs: make(map[string]string),
- modelList: models,
- }
-
- // Use default circuit breaker configuration
- cbConfig := DefaultCircuitBreakerConfig()
- cbConfig.OnStateChange = onStateChange
-
- for name, entry := range entries {
- p, err := buildProvider(entry)
- if err != nil {
- return nil, fmt.Errorf("provider %q: %w", name, err)
- }
- if p != nil {
- // Wrap provider with circuit breaker
- reg.providers[name] = NewCircuitBreakerProvider(p, cbConfig)
- }
- }
-
- for _, m := range models {
- reg.models[m.Name] = m.Provider
- if m.ProviderModelID != "" {
- reg.providerModelIDs[m.Name] = m.ProviderModelID
- }
- }
-
- if len(reg.providers) == 0 {
- return nil, fmt.Errorf("no providers configured")
- }
-
- return reg, nil
-}
-
-func buildProvider(entry config.ProviderEntry) (Provider, error) {
- // Vertex AI doesn't require APIKey, so check for it separately
- if entry.Type != "vertexai" && entry.APIKey == "" {
- return nil, nil
- }
-
- switch entry.Type {
- case "openai":
- return openaiprovider.New(config.ProviderConfig{
- APIKey: entry.APIKey,
- Endpoint: entry.Endpoint,
- }), nil
- case "azureopenai":
- if entry.Endpoint == "" {
- return nil, fmt.Errorf("endpoint is required for azureopenai")
- }
- return openaiprovider.NewAzure(config.AzureOpenAIConfig{
- APIKey: entry.APIKey,
- Endpoint: entry.Endpoint,
- APIVersion: entry.APIVersion,
- }), nil
- case "anthropic":
- return anthropicprovider.New(config.ProviderConfig{
- APIKey: entry.APIKey,
- Endpoint: entry.Endpoint,
- }), nil
- case "azureanthropic":
- if entry.Endpoint == "" {
- return nil, fmt.Errorf("endpoint is required for azureanthropic")
- }
- return anthropicprovider.NewAzure(config.AzureAnthropicConfig{
- APIKey: entry.APIKey,
- Endpoint: entry.Endpoint,
- }), nil
- case "google":
- return googleprovider.New(config.ProviderConfig{
- APIKey: entry.APIKey,
- Endpoint: entry.Endpoint,
- })
- case "vertexai":
- if entry.Project == "" || entry.Location == "" {
- return nil, fmt.Errorf("project and location are required for vertexai")
- }
- return googleprovider.NewVertexAI(config.VertexAIConfig{
- Project: entry.Project,
- Location: entry.Location,
- })
- default:
- return nil, fmt.Errorf("unknown provider type %q", entry.Type)
- }
-}
-
-// Get returns provider by entry name.
-func (r *Registry) Get(name string) (Provider, bool) {
- p, ok := r.providers[name]
- return p, ok
-}
-
-// Models returns the list of configured models and their provider entry names.
-func (r *Registry) Models() []struct{ Provider, Model string } {
- var out []struct{ Provider, Model string }
- for _, m := range r.modelList {
- out = append(out, struct{ Provider, Model string }{Provider: m.Provider, Model: m.Name})
- }
- return out
-}
-
-// ResolveModelID returns the provider_model_id for a model, falling back to the model name itself.
-func (r *Registry) ResolveModelID(model string) string {
- if id, ok := r.providerModelIDs[model]; ok {
- return id
- }
- return model
-}
-
-// Default returns the provider for the given model name.
-func (r *Registry) Default(model string) (Provider, error) {
- if model != "" {
- if providerName, ok := r.models[model]; ok {
- if p, ok := r.providers[providerName]; ok {
- return p, nil
- }
- }
- }
-
- for _, p := range r.providers {
- return p, nil
- }
-
- return nil, fmt.Errorf("no providers available")
-}
-
-
-
-package ratelimit
-
-import (
- "log/slog"
- "net/http"
- "sync"
- "time"
-
- "golang.org/x/time/rate"
-)
-
-// Config defines rate limiting configuration.
-type Config struct {
- // RequestsPerSecond is the number of requests allowed per second per IP.
- RequestsPerSecond float64
- // Burst is the maximum burst size allowed.
- Burst int
- // Enabled controls whether rate limiting is active.
- Enabled bool
-}
-
-// Middleware provides per-IP rate limiting using token bucket algorithm.
-type Middleware struct {
- limiters map[string]*rate.Limiter
- mu sync.RWMutex
- config Config
- logger *slog.Logger
-}
-
-// New creates a new rate limiting middleware.
-func New(config Config, logger *slog.Logger) *Middleware {
- m := &Middleware{
- limiters: make(map[string]*rate.Limiter),
- config: config,
- logger: logger,
- }
-
- // Start cleanup goroutine to remove old limiters
- if config.Enabled {
- go m.cleanupLimiters()
- }
-
- return m
-}
-
-// Handler wraps an http.Handler with rate limiting.
-func (m *Middleware) Handler(next http.Handler) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- if !m.config.Enabled {
- next.ServeHTTP(w, r)
- return
- }
-
- // Extract client IP (handle X-Forwarded-For for proxies)
- ip := m.getClientIP(r)
-
- limiter := m.getLimiter(ip)
- if !limiter.Allow() {
- m.logger.Warn("rate limit exceeded",
- slog.String("ip", ip),
- slog.String("path", r.URL.Path),
- )
- w.Header().Set("Content-Type", "application/json")
- w.Header().Set("Retry-After", "1")
- w.WriteHeader(http.StatusTooManyRequests)
- w.Write([]byte(`{"error":"rate limit exceeded","message":"too many requests"}`))
- return
- }
-
- next.ServeHTTP(w, r)
- })
-}
-
-// getLimiter returns the rate limiter for a given IP, creating one if needed.
-func (m *Middleware) getLimiter(ip string) *rate.Limiter {
- m.mu.RLock()
- limiter, exists := m.limiters[ip]
- m.mu.RUnlock()
-
- if exists {
- return limiter
- }
-
- m.mu.Lock()
- defer m.mu.Unlock()
-
- // Double-check after acquiring write lock
- limiter, exists = m.limiters[ip]
- if exists {
- return limiter
- }
-
- limiter = rate.NewLimiter(rate.Limit(m.config.RequestsPerSecond), m.config.Burst)
- m.limiters[ip] = limiter
- return limiter
-}
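The read-then-write locking in getLimiter is the classic double-checked get-or-create idiom. A stdlib-only sketch (a hypothetical cache type, with `*int` values standing in for `*rate.Limiter`):

```go
package main

import (
	"fmt"
	"sync"
)

// cache lazily creates one value per key, taking the cheap read lock on
// the hot path and re-checking under the write lock so two racing
// goroutines cannot create duplicate entries.
type cache struct {
	mu sync.RWMutex
	m  map[string]*int
}

func (c *cache) get(key string) *int {
	c.mu.RLock()
	v, ok := c.m[key]
	c.mu.RUnlock()
	if ok {
		return v
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	// Re-check: another goroutine may have inserted between the RUnlock
	// above and acquiring the write lock.
	if v, ok := c.m[key]; ok {
		return v
	}
	v = new(int)
	c.m[key] = v
	return v
}

func main() {
	c := &cache{m: make(map[string]*int)}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.get("10.0.0.1")
		}()
	}
	wg.Wait()
	fmt.Println(len(c.m)) // one entry despite 100 concurrent callers
}
```

sync.Map or a singleflight group are alternatives when contention on the write lock becomes a concern.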
-
-// getClientIP extracts the client IP from the request.
-func (m *Middleware) getClientIP(r *http.Request) string {
- // Check X-Forwarded-For header (for proxies/load balancers)
- xff := r.Header.Get("X-Forwarded-For")
- if xff != "" {
- // X-Forwarded-For can be a comma-separated list, use the first IP
- for idx := 0; idx < len(xff); idx++ {
- if xff[idx] == ',' {
- return xff[:idx]
- }
- }
- return xff
- }
-
- // Check X-Real-IP header
- if xri := r.Header.Get("X-Real-IP"); xri != "" {
- return xri
- }
-
- // Fall back to RemoteAddr
- return r.RemoteAddr
-}
-
-// cleanupLimiters periodically removes unused limiters to prevent memory leaks.
-func (m *Middleware) cleanupLimiters() {
- ticker := time.NewTicker(5 * time.Minute)
- defer ticker.Stop()
-
- for range ticker.C {
- m.mu.Lock()
-		// Drop every limiter wholesale; this briefly resets burst
-		// allowances for active clients, so production deployments may
-		// prefer an LRU cache keyed by last use.
- m.limiters = make(map[string]*rate.Limiter)
- m.mu.Unlock()
-
- m.logger.Debug("cleaned up rate limiters")
- }
-}
-
-
-
-package server
-
-import (
- "context"
- "encoding/json"
- "net/http"
- "time"
-)
-
-// HealthStatus represents the health check response.
-type HealthStatus struct {
- Status string `json:"status"`
- Timestamp int64 `json:"timestamp"`
- Checks map[string]string `json:"checks,omitempty"`
-}
-
-// handleHealth returns a basic health check endpoint.
-// This is suitable for Kubernetes liveness probes.
-func (s *GatewayServer) handleHealth(w http.ResponseWriter, r *http.Request) {
- if r.Method != http.MethodGet {
- http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
- return
- }
-
- status := HealthStatus{
- Status: "healthy",
- Timestamp: time.Now().Unix(),
- }
-
- w.Header().Set("Content-Type", "application/json")
- w.WriteHeader(http.StatusOK)
- if err := json.NewEncoder(w).Encode(status); err != nil {
- s.logger.ErrorContext(r.Context(), "failed to encode health response", "error", err.Error())
- }
-}
-
-// handleReady returns a readiness check that verifies dependencies.
-// This is suitable for Kubernetes readiness probes and load balancer health checks.
-func (s *GatewayServer) handleReady(w http.ResponseWriter, r *http.Request) {
- if r.Method != http.MethodGet {
- http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
- return
- }
-
- checks := make(map[string]string)
- allHealthy := true
-
- // Check conversation store connectivity
- ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
- defer cancel()
-
- // Test conversation store by attempting a simple operation
- testID := "health_check_test"
- _, err := s.convs.Get(ctx, testID)
- if err != nil {
- checks["conversation_store"] = "unhealthy: " + err.Error()
- allHealthy = false
- } else {
- checks["conversation_store"] = "healthy"
- }
-
- // Check if at least one provider is configured
- models := s.registry.Models()
- if len(models) == 0 {
- checks["providers"] = "unhealthy: no providers configured"
- allHealthy = false
- } else {
- checks["providers"] = "healthy"
- }
-
- status := HealthStatus{
- Timestamp: time.Now().Unix(),
- Checks: checks,
- }
-
- if allHealthy {
- status.Status = "ready"
- w.Header().Set("Content-Type", "application/json")
- w.WriteHeader(http.StatusOK)
- } else {
- status.Status = "not_ready"
- w.Header().Set("Content-Type", "application/json")
- w.WriteHeader(http.StatusServiceUnavailable)
- }
-
- if err := json.NewEncoder(w).Encode(status); err != nil {
- s.logger.ErrorContext(r.Context(), "failed to encode ready response", "error", err.Error())
- }
-}
-
-
-
-package server
-
-import (
- "fmt"
- "log/slog"
- "net/http"
- "runtime/debug"
-
- "github.com/ajac-zero/latticelm/internal/logger"
-)
-
-// MaxRequestBodyBytes is the maximum size allowed for request bodies (10MB)
-const MaxRequestBodyBytes = 10 * 1024 * 1024
-
-// PanicRecoveryMiddleware recovers from panics in HTTP handlers and logs them
-// instead of crashing the server. Returns 500 Internal Server Error to the client.
-func PanicRecoveryMiddleware(next http.Handler, log *slog.Logger) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- defer func() {
- if err := recover(); err != nil {
- // Capture stack trace
- stack := debug.Stack()
-
- // Log the panic with full context
- log.ErrorContext(r.Context(), "panic recovered in HTTP handler",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("method", r.Method),
- slog.String("path", r.URL.Path),
- slog.String("remote_addr", r.RemoteAddr),
- slog.Any("panic", err),
- slog.String("stack", string(stack)),
- )...,
- )
-
- // Return 500 to client
- http.Error(w, "Internal Server Error", http.StatusInternalServerError)
- }
- }()
-
- next.ServeHTTP(w, r)
- })
-}
-
-// RequestSizeLimitMiddleware enforces a maximum request body size to prevent
-// DoS attacks via oversized payloads. Requests exceeding the limit receive 413.
-func RequestSizeLimitMiddleware(next http.Handler, maxBytes int64) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- // Only limit body size for requests that have a body
- if r.Method == http.MethodPost || r.Method == http.MethodPut || r.Method == http.MethodPatch {
- // Wrap the request body with a size limiter
- r.Body = http.MaxBytesReader(w, r.Body, maxBytes)
- }
-
- next.ServeHTTP(w, r)
- })
-}
-
-// ErrorRecoveryMiddleware catches errors from MaxBytesReader and converts them
-// to proper HTTP error responses. This should be placed after RequestSizeLimitMiddleware
-// in the middleware chain.
-func ErrorRecoveryMiddleware(next http.Handler, log *slog.Logger) http.Handler {
- return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
- next.ServeHTTP(w, r)
-
-		// MaxBytesReader errors surface when the handler reads the body
-		// (e.g. during JSON decoding) and are mapped to 413 there, so no
-		// extra handling is needed in this middleware; it exists as a
-		// hook for future extensions.
- })
-}
-
-// WriteJSONError is a helper function to safely write JSON error responses,
-// handling any encoding errors that might occur.
-func WriteJSONError(w http.ResponseWriter, log *slog.Logger, message string, statusCode int) {
- w.Header().Set("Content-Type", "application/json")
- w.WriteHeader(statusCode)
-
-	// %q escapes quotes, backslashes, and control characters so the
-	// message cannot break out of the JSON string.
-	_, err := fmt.Fprintf(w, `{"error":{"message":%q}}`, message)
- if err != nil {
- // If we can't even write the error response, log it
- log.Error("failed to write error response",
- slog.String("original_message", message),
- slog.Int("status_code", statusCode),
- slog.String("write_error", err.Error()),
- )
- }
-}
-
-
-
-package server
-
-import (
- "encoding/json"
- "errors"
- "fmt"
- "log/slog"
- "net/http"
- "strings"
- "time"
-
- "github.com/google/uuid"
- "github.com/sony/gobreaker"
-
- "github.com/ajac-zero/latticelm/internal/api"
- "github.com/ajac-zero/latticelm/internal/conversation"
- "github.com/ajac-zero/latticelm/internal/logger"
- "github.com/ajac-zero/latticelm/internal/providers"
-)
-
-// ProviderRegistry is an interface for provider registries.
-type ProviderRegistry interface {
- Get(name string) (providers.Provider, bool)
- Models() []struct{ Provider, Model string }
- ResolveModelID(model string) string
- Default(model string) (providers.Provider, error)
-}
-
-// GatewayServer hosts the Open Responses API for the gateway.
-type GatewayServer struct {
- registry ProviderRegistry
- convs conversation.Store
- logger *slog.Logger
-}
-
-// New creates a GatewayServer bound to the provider registry.
-func New(registry ProviderRegistry, convs conversation.Store, logger *slog.Logger) *GatewayServer {
- return &GatewayServer{
- registry: registry,
- convs: convs,
- logger: logger,
- }
-}
-
-// isCircuitBreakerError checks if the error is from a circuit breaker.
-func isCircuitBreakerError(err error) bool {
- return errors.Is(err, gobreaker.ErrOpenState) || errors.Is(err, gobreaker.ErrTooManyRequests)
-}
-
-// RegisterRoutes wires the HTTP handlers onto the provided mux.
-func (s *GatewayServer) RegisterRoutes(mux *http.ServeMux) {
- mux.HandleFunc("/v1/responses", s.handleResponses)
- mux.HandleFunc("/v1/models", s.handleModels)
- mux.HandleFunc("/health", s.handleHealth)
- mux.HandleFunc("/ready", s.handleReady)
-}
-
-func (s *GatewayServer) handleModels(w http.ResponseWriter, r *http.Request) {
- if r.Method != http.MethodGet {
- http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
- return
- }
-
- models := s.registry.Models()
- var data []api.ModelInfo
- for _, m := range models {
- data = append(data, api.ModelInfo{
- ID: m.Model,
- Provider: m.Provider,
- })
- }
-
- resp := api.ModelsResponse{
- Object: "list",
- Data: data,
- }
-
- w.Header().Set("Content-Type", "application/json")
- if err := json.NewEncoder(w).Encode(resp); err != nil {
- s.logger.ErrorContext(r.Context(), "failed to encode models response",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("error", err.Error()),
- )...,
- )
- }
-}
-
-func (s *GatewayServer) handleResponses(w http.ResponseWriter, r *http.Request) {
- if r.Method != http.MethodPost {
- http.Error(w, "method not allowed", http.StatusMethodNotAllowed)
- return
- }
-
- var req api.ResponseRequest
- if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
-		// Detect the MaxBytesReader limit via the typed error rather
-		// than by matching the error string.
-		var maxBytesErr *http.MaxBytesError
-		if errors.As(err, &maxBytesErr) {
- http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
- return
- }
- http.Error(w, "invalid JSON payload", http.StatusBadRequest)
- return
- }
-
- if err := req.Validate(); err != nil {
- http.Error(w, err.Error(), http.StatusBadRequest)
- return
- }
-
- // Normalize input to internal messages
- inputMsgs := req.NormalizeInput()
-
- // Build full message history from previous conversation
- var historyMsgs []api.Message
- if req.PreviousResponseID != nil && *req.PreviousResponseID != "" {
- conv, err := s.convs.Get(r.Context(), *req.PreviousResponseID)
- if err != nil {
- s.logger.ErrorContext(r.Context(), "failed to retrieve conversation",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("conversation_id", *req.PreviousResponseID),
- slog.String("error", err.Error()),
- )...,
- )
- http.Error(w, "error retrieving conversation", http.StatusInternalServerError)
- return
- }
- if conv == nil {
- s.logger.WarnContext(r.Context(), "conversation not found",
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("conversation_id", *req.PreviousResponseID),
- )
- http.Error(w, "conversation not found", http.StatusNotFound)
- return
- }
- historyMsgs = conv.Messages
- }
-
- // Combined messages for conversation storage (history + new input, no instructions)
- storeMsgs := make([]api.Message, 0, len(historyMsgs)+len(inputMsgs))
- storeMsgs = append(storeMsgs, historyMsgs...)
- storeMsgs = append(storeMsgs, inputMsgs...)
-
- // Build provider messages: instructions + history + input
- var providerMsgs []api.Message
- if req.Instructions != nil && *req.Instructions != "" {
- providerMsgs = append(providerMsgs, api.Message{
- Role: "developer",
- Content: []api.ContentBlock{{Type: "input_text", Text: *req.Instructions}},
- })
- }
- providerMsgs = append(providerMsgs, storeMsgs...)
-
- provider, err := s.resolveProvider(&req)
- if err != nil {
- http.Error(w, err.Error(), http.StatusBadGateway)
- return
- }
-
- // Resolve provider_model_id (e.g., Azure deployment name)
- resolvedReq := req
- resolvedReq.Model = s.registry.ResolveModelID(req.Model)
-
- if req.Stream {
- s.handleStreamingResponse(w, r, provider, providerMsgs, &resolvedReq, &req, storeMsgs)
- } else {
- s.handleSyncResponse(w, r, provider, providerMsgs, &resolvedReq, &req, storeMsgs)
- }
-}
-
-func (s *GatewayServer) handleSyncResponse(w http.ResponseWriter, r *http.Request, provider providers.Provider, providerMsgs []api.Message, resolvedReq *api.ResponseRequest, origReq *api.ResponseRequest, storeMsgs []api.Message) {
- result, err := provider.Generate(r.Context(), providerMsgs, resolvedReq)
- if err != nil {
- s.logger.ErrorContext(r.Context(), "provider generation failed",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("provider", provider.Name()),
- slog.String("model", resolvedReq.Model),
- slog.String("error", err.Error()),
- )...,
- )
-
- // Check if error is from circuit breaker
- if isCircuitBreakerError(err) {
- http.Error(w, "service temporarily unavailable - circuit breaker open", http.StatusServiceUnavailable)
- } else {
- http.Error(w, "provider error", http.StatusBadGateway)
- }
- return
- }
-
- responseID := generateID("resp_")
-
- // Build assistant message for conversation store
- assistantMsg := api.Message{
- Role: "assistant",
- Content: []api.ContentBlock{{Type: "output_text", Text: result.Text}},
- ToolCalls: result.ToolCalls,
- }
- allMsgs := append(storeMsgs, assistantMsg)
- if _, err := s.convs.Create(r.Context(), responseID, result.Model, allMsgs); err != nil {
- s.logger.ErrorContext(r.Context(), "failed to store conversation",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("response_id", responseID),
- slog.String("error", err.Error()),
- )...,
- )
- // Don't fail the response if storage fails
- }
-
- s.logger.InfoContext(r.Context(), "response generated",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("provider", provider.Name()),
- slog.String("model", result.Model),
- slog.String("response_id", responseID),
- slog.Int("input_tokens", result.Usage.InputTokens),
- slog.Int("output_tokens", result.Usage.OutputTokens),
- slog.Bool("has_tool_calls", len(result.ToolCalls) > 0),
- )...,
- )
-
- // Build spec-compliant response
- resp := s.buildResponse(origReq, result, provider.Name(), responseID)
-
- w.Header().Set("Content-Type", "application/json")
- w.WriteHeader(http.StatusOK)
- if err := json.NewEncoder(w).Encode(resp); err != nil {
- s.logger.ErrorContext(r.Context(), "failed to encode response",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("response_id", responseID),
- slog.String("error", err.Error()),
- )...,
- )
- }
-}
-
-func (s *GatewayServer) handleStreamingResponse(w http.ResponseWriter, r *http.Request, provider providers.Provider, providerMsgs []api.Message, resolvedReq *api.ResponseRequest, origReq *api.ResponseRequest, storeMsgs []api.Message) {
-	flusher, ok := w.(http.Flusher)
-	if !ok {
-		http.Error(w, "streaming not supported", http.StatusInternalServerError)
-		return
-	}
-
-	w.Header().Set("Content-Type", "text/event-stream")
-	w.Header().Set("Cache-Control", "no-cache")
-	w.Header().Set("Connection", "keep-alive")
-	w.WriteHeader(http.StatusOK)
-
- responseID := generateID("resp_")
- itemID := generateID("msg_")
- seq := 0
- outputIdx := 0
- contentIdx := 0
-
- // Build initial response snapshot (in_progress, no output yet)
- initialResp := s.buildResponse(origReq, &api.ProviderResult{
- Model: origReq.Model,
- }, provider.Name(), responseID)
- initialResp.Status = "in_progress"
- initialResp.CompletedAt = nil
- initialResp.Output = []api.OutputItem{}
- initialResp.Usage = nil
-
- // response.created
- s.sendSSE(w, flusher, &seq, "response.created", &api.StreamEvent{
- Type: "response.created",
- Response: initialResp,
- })
-
- // response.in_progress
- s.sendSSE(w, flusher, &seq, "response.in_progress", &api.StreamEvent{
- Type: "response.in_progress",
- Response: initialResp,
- })
-
- // response.output_item.added
- inProgressItem := &api.OutputItem{
- ID: itemID,
- Type: "message",
- Status: "in_progress",
- Role: "assistant",
- Content: []api.ContentPart{},
- }
- s.sendSSE(w, flusher, &seq, "response.output_item.added", &api.StreamEvent{
- Type: "response.output_item.added",
- OutputIndex: &outputIdx,
- Item: inProgressItem,
- })
-
- // response.content_part.added
- emptyPart := &api.ContentPart{
- Type: "output_text",
- Text: "",
- Annotations: []api.Annotation{},
- }
- s.sendSSE(w, flusher, &seq, "response.content_part.added", &api.StreamEvent{
- Type: "response.content_part.added",
- ItemID: itemID,
- OutputIndex: &outputIdx,
- ContentIndex: &contentIdx,
- Part: emptyPart,
- })
-
- // Start provider stream
- deltaChan, errChan := provider.GenerateStream(r.Context(), providerMsgs, resolvedReq)
-
- var fullText string
- var streamErr error
- var providerModel string
-
- // Track tool calls being built
- type toolCallBuilder struct {
- itemID string
- id string
- name string
- arguments string
- }
- toolCallsInProgress := make(map[int]*toolCallBuilder)
- nextOutputIdx := 0
- textItemAdded := false
-
-loop:
- for {
- select {
- case delta, ok := <-deltaChan:
- if !ok {
- break loop
- }
- if delta.Model != "" && providerModel == "" {
- providerModel = delta.Model
- }
-
- // Handle text content
- if delta.Text != "" {
-			// Reserve the next output index for the text item on the first text delta
- if !textItemAdded {
- textItemAdded = true
- nextOutputIdx++
- }
- fullText += delta.Text
- s.sendSSE(w, flusher, &seq, "response.output_text.delta", &api.StreamEvent{
- Type: "response.output_text.delta",
- ItemID: itemID,
- OutputIndex: &outputIdx,
- ContentIndex: &contentIdx,
- Delta: delta.Text,
- })
- }
-
- // Handle tool call delta
- if delta.ToolCallDelta != nil {
- tc := delta.ToolCallDelta
-
- // First chunk for this tool call index
- if _, exists := toolCallsInProgress[tc.Index]; !exists {
- toolItemID := generateID("item_")
- toolOutputIdx := nextOutputIdx
- nextOutputIdx++
-
- // Send response.output_item.added
- s.sendSSE(w, flusher, &seq, "response.output_item.added", &api.StreamEvent{
- Type: "response.output_item.added",
- OutputIndex: &toolOutputIdx,
- Item: &api.OutputItem{
- ID: toolItemID,
- Type: "function_call",
- Status: "in_progress",
- CallID: tc.ID,
- Name: tc.Name,
- },
- })
-
- toolCallsInProgress[tc.Index] = &toolCallBuilder{
- itemID: toolItemID,
- id: tc.ID,
- name: tc.Name,
- arguments: "",
- }
- }
-
- // Send function_call_arguments.delta
- if tc.Arguments != "" {
- builder := toolCallsInProgress[tc.Index]
- builder.arguments += tc.Arguments
- toolOutputIdx := outputIdx + 1 + tc.Index
-
- s.sendSSE(w, flusher, &seq, "response.function_call_arguments.delta", &api.StreamEvent{
- Type: "response.function_call_arguments.delta",
- ItemID: builder.itemID,
- OutputIndex: &toolOutputIdx,
- Delta: tc.Arguments,
- })
- }
- }
-
- if delta.Done {
- break loop
- }
- case err := <-errChan:
- if err != nil {
- streamErr = err
- }
- break loop
- case <-r.Context().Done():
- s.logger.InfoContext(r.Context(), "client disconnected",
- slog.String("request_id", logger.FromContext(r.Context())),
- )
- return
- }
- }
-
- if streamErr != nil {
- s.logger.ErrorContext(r.Context(), "stream error",
- logger.LogAttrsWithTrace(r.Context(),
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("provider", provider.Name()),
- slog.String("model", origReq.Model),
- slog.String("error", streamErr.Error()),
- )...,
- )
-
- // Determine error type based on circuit breaker state
- errorType := "server_error"
- errorMessage := streamErr.Error()
- if isCircuitBreakerError(streamErr) {
- errorType = "circuit_breaker_open"
- errorMessage = "service temporarily unavailable - circuit breaker open"
- }
-
- failedResp := s.buildResponse(origReq, &api.ProviderResult{
- Model: origReq.Model,
- }, provider.Name(), responseID)
- failedResp.Status = "failed"
- failedResp.CompletedAt = nil
- failedResp.Output = []api.OutputItem{}
- failedResp.Error = &api.ResponseError{
- Type: errorType,
- Message: errorMessage,
- }
- s.sendSSE(w, flusher, &seq, "response.failed", &api.StreamEvent{
- Type: "response.failed",
- Response: failedResp,
- })
- return
- }
-
- // Send done events for text output if text was added
- if textItemAdded && fullText != "" {
- // response.output_text.done
- s.sendSSE(w, flusher, &seq, "response.output_text.done", &api.StreamEvent{
- Type: "response.output_text.done",
- ItemID: itemID,
- OutputIndex: &outputIdx,
- ContentIndex: &contentIdx,
- Text: fullText,
- })
-
- // response.content_part.done
- completedPart := &api.ContentPart{
- Type: "output_text",
- Text: fullText,
- Annotations: []api.Annotation{},
- }
- s.sendSSE(w, flusher, &seq, "response.content_part.done", &api.StreamEvent{
- Type: "response.content_part.done",
- ItemID: itemID,
- OutputIndex: &outputIdx,
- ContentIndex: &contentIdx,
- Part: completedPart,
- })
-
- // response.output_item.done
- completedItem := &api.OutputItem{
- ID: itemID,
- Type: "message",
- Status: "completed",
- Role: "assistant",
- Content: []api.ContentPart{*completedPart},
- }
- s.sendSSE(w, flusher, &seq, "response.output_item.done", &api.StreamEvent{
- Type: "response.output_item.done",
- OutputIndex: &outputIdx,
- Item: completedItem,
- })
- }
-
- // Send done events for each tool call
- for idx, builder := range toolCallsInProgress {
- toolOutputIdx := outputIdx + 1 + idx
-
- s.sendSSE(w, flusher, &seq, "response.function_call_arguments.done", &api.StreamEvent{
- Type: "response.function_call_arguments.done",
- ItemID: builder.itemID,
- OutputIndex: &toolOutputIdx,
- Arguments: builder.arguments,
- })
-
- s.sendSSE(w, flusher, &seq, "response.output_item.done", &api.StreamEvent{
- Type: "response.output_item.done",
- OutputIndex: &toolOutputIdx,
- Item: &api.OutputItem{
- ID: builder.itemID,
- Type: "function_call",
- Status: "completed",
- CallID: builder.id,
- Name: builder.name,
- Arguments: builder.arguments,
- },
- })
- }
-
- // Build final completed response
- model := origReq.Model
- if providerModel != "" {
- model = providerModel
- }
-
- // Collect tool calls for result
- var toolCalls []api.ToolCall
- for _, builder := range toolCallsInProgress {
- toolCalls = append(toolCalls, api.ToolCall{
- ID: builder.id,
- Name: builder.name,
- Arguments: builder.arguments,
- })
- }
-
- finalResult := &api.ProviderResult{
- Model: model,
- Text: fullText,
- ToolCalls: toolCalls,
- }
- completedResp := s.buildResponse(origReq, finalResult, provider.Name(), responseID)
-
- // Update item IDs to match what we sent during streaming
- if textItemAdded && len(completedResp.Output) > 0 {
- completedResp.Output[0].ID = itemID
- }
- for idx, builder := range toolCallsInProgress {
- // Find the corresponding output item
- for i := range completedResp.Output {
- if completedResp.Output[i].Type == "function_call" && completedResp.Output[i].CallID == builder.id {
- completedResp.Output[i].ID = builder.itemID
- break
- }
- }
- _ = idx // unused
- }
-
- // response.completed
- s.sendSSE(w, flusher, &seq, "response.completed", &api.StreamEvent{
- Type: "response.completed",
- Response: completedResp,
- })
-
- // Store conversation
- if fullText != "" || len(toolCalls) > 0 {
- assistantMsg := api.Message{
- Role: "assistant",
- Content: []api.ContentBlock{{Type: "output_text", Text: fullText}},
- ToolCalls: toolCalls,
- }
- allMsgs := append(storeMsgs, assistantMsg)
- if _, err := s.convs.Create(r.Context(), responseID, model, allMsgs); err != nil {
- s.logger.ErrorContext(r.Context(), "failed to store conversation",
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("response_id", responseID),
- slog.String("error", err.Error()),
- )
- // Don't fail the response if storage fails
- }
-
- s.logger.InfoContext(r.Context(), "streaming response completed",
- slog.String("request_id", logger.FromContext(r.Context())),
- slog.String("provider", provider.Name()),
- slog.String("model", model),
- slog.String("response_id", responseID),
- slog.Bool("has_tool_calls", len(toolCalls) > 0),
- )
- }
-}
-
-func (s *GatewayServer) sendSSE(w http.ResponseWriter, flusher http.Flusher, seq *int, eventType string, event *api.StreamEvent) {
- event.SequenceNumber = *seq
- *seq++
- data, err := json.Marshal(event)
- if err != nil {
- s.logger.Error("failed to marshal SSE event",
- slog.String("event_type", eventType),
- slog.String("error", err.Error()),
- )
- return
- }
- fmt.Fprintf(w, "event: %s\ndata: %s\n\n", eventType, data)
- flusher.Flush()
-}
-
-func (s *GatewayServer) buildResponse(req *api.ResponseRequest, result *api.ProviderResult, providerName string, responseID string) *api.Response {
- now := time.Now().Unix()
-
- model := result.Model
- if model == "" {
- model = req.Model
- }
-
- // Build output items array
- outputItems := []api.OutputItem{}
-
- // Add message item if there's text
- if result.Text != "" {
- outputItems = append(outputItems, api.OutputItem{
- ID: generateID("msg_"),
- Type: "message",
- Status: "completed",
- Role: "assistant",
- Content: []api.ContentPart{{
- Type: "output_text",
- Text: result.Text,
- Annotations: []api.Annotation{},
- }},
- })
- }
-
- // Add function_call items
- for _, tc := range result.ToolCalls {
- outputItems = append(outputItems, api.OutputItem{
- ID: generateID("item_"),
- Type: "function_call",
- Status: "completed",
- CallID: tc.ID,
- Name: tc.Name,
- Arguments: tc.Arguments,
- })
- }
-
- // Echo back request params with defaults
- tools := req.Tools
- if tools == nil {
- tools = json.RawMessage(`[]`)
- }
- toolChoice := req.ToolChoice
- if toolChoice == nil {
- toolChoice = json.RawMessage(`"auto"`)
- }
- text := req.Text
- if text == nil {
- text = json.RawMessage(`{"format":{"type":"text"}}`)
- }
- truncation := "disabled"
- if req.Truncation != nil {
- truncation = *req.Truncation
- }
- temperature := 1.0
- if req.Temperature != nil {
- temperature = *req.Temperature
- }
- topP := 1.0
- if req.TopP != nil {
- topP = *req.TopP
- }
- presencePenalty := 0.0
- if req.PresencePenalty != nil {
- presencePenalty = *req.PresencePenalty
- }
- frequencyPenalty := 0.0
- if req.FrequencyPenalty != nil {
- frequencyPenalty = *req.FrequencyPenalty
- }
- topLogprobs := 0
- if req.TopLogprobs != nil {
- topLogprobs = *req.TopLogprobs
- }
- parallelToolCalls := true
- if req.ParallelToolCalls != nil {
- parallelToolCalls = *req.ParallelToolCalls
- }
- store := true
- if req.Store != nil {
- store = *req.Store
- }
- background := false
- if req.Background != nil {
- background = *req.Background
- }
- serviceTier := "default"
- if req.ServiceTier != nil {
- serviceTier = *req.ServiceTier
- }
- var reasoning json.RawMessage
- if req.Reasoning != nil {
- reasoning = req.Reasoning
- }
- metadata := req.Metadata
- if metadata == nil {
- metadata = map[string]string{}
- }
-
- var usage *api.Usage
- if result.Text != "" {
- usage = &result.Usage
- }
-
- return &api.Response{
- ID: responseID,
- Object: "response",
- CreatedAt: now,
- CompletedAt: &now,
- Status: "completed",
- IncompleteDetails: nil,
- Model: model,
- PreviousResponseID: req.PreviousResponseID,
- Instructions: req.Instructions,
- Output: outputItems,
- Error: nil,
- Tools: tools,
- ToolChoice: toolChoice,
- Truncation: truncation,
- ParallelToolCalls: parallelToolCalls,
- Text: text,
- TopP: topP,
- PresencePenalty: presencePenalty,
- FrequencyPenalty: frequencyPenalty,
- TopLogprobs: topLogprobs,
- Temperature: temperature,
- Reasoning: reasoning,
- Usage: usage,
- MaxOutputTokens: req.MaxOutputTokens,
- MaxToolCalls: req.MaxToolCalls,
- Store: store,
- Background: background,
- ServiceTier: serviceTier,
- Metadata: metadata,
- SafetyIdentifier: nil,
- PromptCacheKey: nil,
- Provider: providerName,
- }
-}
-
-func (s *GatewayServer) resolveProvider(req *api.ResponseRequest) (providers.Provider, error) {
- if req.Provider != "" {
- if provider, ok := s.registry.Get(req.Provider); ok {
- return provider, nil
- }
- return nil, fmt.Errorf("provider %s not configured", req.Provider)
- }
- return s.registry.Default(req.Model)
-}
-
-func generateID(prefix string) string {
- id := strings.ReplaceAll(uuid.NewString(), "-", "")
- return prefix + id[:24]
-}
-
-
-
-
-
-
diff --git a/go.mod b/go.mod
index 9579a93..294f965 100644
--- a/go.mod
+++ b/go.mod
@@ -10,42 +10,43 @@ require (
github.com/google/uuid v1.6.0
github.com/jackc/pgx/v5 v5.8.0
github.com/mattn/go-sqlite3 v1.14.34
- github.com/openai/openai-go/v3 v3.2.0
+ github.com/openai/openai-go/v3 v3.24.0
github.com/prometheus/client_golang v1.23.2
github.com/redis/go-redis/v9 v9.18.0
github.com/sony/gobreaker v1.0.0
github.com/stretchr/testify v1.11.1
- go.opentelemetry.io/otel v1.29.0
- go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.29.0
- go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.29.0
- go.opentelemetry.io/otel/sdk v1.29.0
- go.opentelemetry.io/otel/trace v1.29.0
+ go.opentelemetry.io/otel v1.41.0
+ go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.41.0
+ go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.41.0
+ go.opentelemetry.io/otel/sdk v1.41.0
+ go.opentelemetry.io/otel/trace v1.41.0
golang.org/x/time v0.14.0
- google.golang.org/genai v1.48.0
- google.golang.org/grpc v1.66.2
+ google.golang.org/genai v1.49.0
+ google.golang.org/grpc v1.79.1
gopkg.in/yaml.v3 v3.0.1
)
require (
- cloud.google.com/go v0.116.0 // indirect
- cloud.google.com/go/auth v0.9.3 // indirect
- cloud.google.com/go/compute/metadata v0.5.0 // indirect
- filippo.io/edwards25519 v1.1.0 // indirect
+ cloud.google.com/go v0.123.0 // indirect
+ cloud.google.com/go/auth v0.18.2 // indirect
+ cloud.google.com/go/compute/metadata v0.9.0 // indirect
+ filippo.io/edwards25519 v1.2.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.21.0 // indirect
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
- github.com/cenkalti/backoff/v4 v4.3.0 // indirect
+ github.com/cenkalti/backoff/v5 v5.0.3 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f // indirect
- github.com/go-logr/logr v1.4.2 // indirect
+ github.com/felixge/httpsnoop v1.0.4 // indirect
+ github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
- github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/google/go-cmp v0.7.0 // indirect
- github.com/google/s2a-go v0.1.8 // indirect
- github.com/googleapis/enterprise-certificate-proxy v0.3.4 // indirect
+ github.com/google/s2a-go v0.1.9 // indirect
+ github.com/googleapis/enterprise-certificate-proxy v0.3.13 // indirect
+ github.com/googleapis/gax-go/v2 v2.17.0 // indirect
github.com/gorilla/websocket v1.5.3 // indirect
- github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 // indirect
+ github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 // indirect
github.com/jackc/pgpassfile v1.0.0 // indirect
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 // indirect
github.com/jackc/puddle/v2 v2.2.2 // indirect
@@ -53,25 +54,26 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
- github.com/prometheus/common v0.66.1 // indirect
- github.com/prometheus/procfs v0.16.1 // indirect
+ github.com/prometheus/common v0.67.5 // indirect
+ github.com/prometheus/procfs v0.20.1 // indirect
github.com/tidwall/gjson v1.18.0 // indirect
- github.com/tidwall/match v1.1.1 // indirect
+ github.com/tidwall/match v1.2.0 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tidwall/sjson v1.2.5 // indirect
github.com/yuin/gopher-lua v1.1.1 // indirect
- go.opencensus.io v0.24.0 // indirect
- go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.29.0 // indirect
- go.opentelemetry.io/otel/metric v1.29.0 // indirect
- go.opentelemetry.io/proto/otlp v1.3.1 // indirect
+ go.opentelemetry.io/auto/sdk v1.2.1 // indirect
+ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.66.0 // indirect
+ go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.41.0 // indirect
+ go.opentelemetry.io/otel/metric v1.41.0 // indirect
+ go.opentelemetry.io/proto/otlp v1.9.0 // indirect
go.uber.org/atomic v1.11.0 // indirect
- go.yaml.in/yaml/v2 v2.4.2 // indirect
- golang.org/x/crypto v0.47.0 // indirect
- golang.org/x/net v0.49.0 // indirect
+ go.yaml.in/yaml/v2 v2.4.3 // indirect
+ golang.org/x/crypto v0.48.0 // indirect
+ golang.org/x/net v0.51.0 // indirect
golang.org/x/sync v0.19.0 // indirect
- golang.org/x/sys v0.40.0 // indirect
- golang.org/x/text v0.33.0 // indirect
- google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 // indirect
- google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 // indirect
- google.golang.org/protobuf v1.36.8 // indirect
+ golang.org/x/sys v0.41.0 // indirect
+ golang.org/x/text v0.34.0 // indirect
+ google.golang.org/genproto/googleapis/api v0.0.0-20260226221140-a57be14db171 // indirect
+ google.golang.org/genproto/googleapis/rpc v0.0.0-20260226221140-a57be14db171 // indirect
+ google.golang.org/protobuf v1.36.11 // indirect
)
diff --git a/go.sum b/go.sum
index 5cc9a19..fc62926 100644
--- a/go.sum
+++ b/go.sum
@@ -1,12 +1,11 @@
-cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
-cloud.google.com/go v0.116.0 h1:B3fRrSDkLRt5qSHWe40ERJvhvnQwdZiHu0bJOpldweE=
-cloud.google.com/go v0.116.0/go.mod h1:cEPSRWPzZEswwdr9BxE6ChEn01dWlTaF05LiC2Xs70U=
-cloud.google.com/go/auth v0.9.3 h1:VOEUIAADkkLtyfr3BLa3R8Ed/j6w1jTBmARx+wb5w5U=
-cloud.google.com/go/auth v0.9.3/go.mod h1:7z6VY+7h3KUdRov5F1i8NDP5ZzWKYmEPO842BgCsmTk=
-cloud.google.com/go/compute/metadata v0.5.0 h1:Zr0eK8JbFv6+Wi4ilXAR8FJ3wyNdpxHKJNPos6LTZOY=
-cloud.google.com/go/compute/metadata v0.5.0/go.mod h1:aHnloV2TPI38yx4s9+wAZhHykWvVCfu7hQbF+9CWoiY=
-filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
-filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
+cloud.google.com/go v0.123.0 h1:2NAUJwPR47q+E35uaJeYoNhuNEM9kM8SjgRgdeOJUSE=
+cloud.google.com/go v0.123.0/go.mod h1:xBoMV08QcqUGuPW65Qfm1o9Y4zKZBpGS+7bImXLTAZU=
+cloud.google.com/go/auth v0.18.2 h1:+Nbt5Ev0xEqxlNjd6c+yYUeosQ5TtEUaNcN/3FozlaM=
+cloud.google.com/go/auth v0.18.2/go.mod h1:xD+oY7gcahcu7G2SG2DsBerfFxgPAJz17zz2joOFF3M=
+cloud.google.com/go/compute/metadata v0.9.0 h1:pDUj4QMoPejqq20dK0Pg2N4yG9zIkYGdBtwLoEkH9Zs=
+cloud.google.com/go/compute/metadata v0.9.0/go.mod h1:E0bWwX5wTnLPedCKqk3pJmVgCBSM6qQI1yTBdEb3C10=
+filippo.io/edwards25519 v1.2.0 h1:crnVqOiS4jqYleHd9vaKZ+HKtHfllngJIiOpNpoJsjo=
+filippo.io/edwards25519 v1.2.0/go.mod h1:xzAOLCNug/yB62zG1bQ8uziwrIqIuxhctzJT18Q77mc=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.21.0 h1:fou+2+WFTib47nS+nz/ozhEBnvU96bKHy6LjRsY4E28=
github.com/Azure/azure-sdk-for-go/sdk/azcore v1.21.0/go.mod h1:t76Ruy8AHvUAC8GfMWJMa0ElSbuIcO03NLpynfbgsPA=
github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.10.1 h1:B+blDbyVIG3WaikNxPnhPiJ1MThR03b3vKGtER95TP4=
@@ -15,7 +14,6 @@ github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 h1:9iefClla7iYpfYWdzPCRDo
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2/go.mod h1:XtLgD3ZD34DAaVIIAyG3objl5DynM3CQ/vMcbBNJZGI=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2 h1:oygO0locgZJe7PpYPXT5A29ZkwJaPqcva7BVeemZOZs=
github.com/AzureAD/microsoft-authentication-library-for-go v1.4.2/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI=
-github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/alicebob/miniredis/v2 v2.37.0 h1:RheObYW32G1aiJIj81XVt78ZHJpHonHLHW7OLIshq68=
github.com/alicebob/miniredis/v2 v2.37.0/go.mod h1:TcL7YfarKPGDAthEtl5NBeHZfeUQj6OXMm/+iu5cLMM=
github.com/anthropics/anthropic-sdk-go v1.26.0 h1:oUTzFaUpAevfuELAP1sjL6CQJ9HHAfT7CoSYSac11PY=
@@ -26,13 +24,10 @@ github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
github.com/bsm/gomega v1.27.10/go.mod h1:JyEr/xRbxbtgWNi8tIEVPUYZ5Dzef52k01W3YH0H+O0=
-github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
-github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
-github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
+github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
-github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
@@ -40,52 +35,33 @@ github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/r
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dnaeon/go-vcr v1.2.0 h1:zHCHvJYTMh1N7xnV7zf1m1GPBF9Ad0Jk/whtQ1663qI=
github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
-github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
-github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
-github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
-github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
+github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
-github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
-github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
+github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-sql-driver/mysql v1.9.3 h1:U/N249h2WzJ3Ukj8SowVFjdtZKfu9vlLZxjPXV1aweo=
github.com/go-sql-driver/mysql v1.9.3/go.mod h1:qn46aNg1333BRMNU69Lq93t8du/dwxI64Gl8i5p1WMU=
github.com/golang-jwt/jwt/v5 v5.3.1 h1:kYf81DTWFe7t+1VvL7eS+jKFVWaUnK9cB1qbwn63YCY=
github.com/golang-jwt/jwt/v5 v5.3.1/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
-github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
-github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
-github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
-github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
-github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
-github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
-github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
-github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
-github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
-github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
-github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
-github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
-github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
-github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
-github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
-github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
+github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
+github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
-github.com/google/s2a-go v0.1.8 h1:zZDs9gcbt9ZPLV0ndSyQk6Kacx2g/X+SKYovpnz3SMM=
-github.com/google/s2a-go v0.1.8/go.mod h1:6iNWHTpQ+nfNRN5E00MSdfDwVesa8hhS32PhPO8deJA=
-github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/s2a-go v0.1.9 h1:LGD7gtMgezd8a/Xak7mEWL0PjoTQFvpRudN895yqKW0=
+github.com/google/s2a-go v0.1.9/go.mod h1:YA0Ei2ZQL3acow2O62kdp9UlnvMmU7kA6Eutn0dXayM=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
-github.com/googleapis/enterprise-certificate-proxy v0.3.4 h1:XYIDZApgAnrN1c855gTgghdIA6Stxb52D5RnLI1SLyw=
-github.com/googleapis/enterprise-certificate-proxy v0.3.4/go.mod h1:YKe7cfqYXjKGpGvmSg28/fFvhNzinZQm8DGnaburhGA=
+github.com/googleapis/enterprise-certificate-proxy v0.3.13 h1:hSPAhW3NX+7HNlTsmrvU0jL75cIzxFktheceg95Nq14=
+github.com/googleapis/enterprise-certificate-proxy v0.3.13/go.mod h1:vqVt9yG9480NtzREnTlmGSBmFrA+bzb0yl0TxoBQXOg=
+github.com/googleapis/gax-go/v2 v2.17.0 h1:RksgfBpxqff0EZkDWYuz9q/uWsTVz+kf43LsZ1J6SMc=
+github.com/googleapis/gax-go/v2 v2.17.0/go.mod h1:mzaqghpQp4JDh3HvADwrat+6M3MOIDp5YKHhb9PAgDY=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0 h1:asbCHRVmodnJTuQ3qamDwqVOIjwqUPTYmYuemVOx+Ys=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.22.0/go.mod h1:ggCgvZ2r7uOoQjOyu2Y1NhHmEPPzzuhWgcza5M1Ji1I=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/jackc/pgpassfile v1.0.0 h1:/6Hmqy13Ss2zCq62VdNG8tM1wchn8zjSGOBJ6icpsIM=
github.com/jackc/pgpassfile v1.0.0/go.mod h1:CEx0iS5ambNFdcRtxPj5JhEz+xB6uRky5eyVu/W2HEg=
github.com/jackc/pgservicefile v0.0.0-20240606120523-5a60cdf6a761 h1:iCEnooe7UlwOQYpKFhBabPMi4aNAfoODPEFNiAnClxo=
@@ -108,42 +84,37 @@ github.com/mattn/go-sqlite3 v1.14.34 h1:3NtcvcUnFBPsuRcno8pUtupspG/GM+9nZ88zgJcp
github.com/mattn/go-sqlite3 v1.14.34/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
-github.com/openai/openai-go/v3 v3.2.0 h1:2AbqFUCsoW2pm/2pUtPRuwK89dnoGHaQokzWsfoQO/U=
-github.com/openai/openai-go/v3 v3.2.0/go.mod h1:UOpNxkqC9OdNXNUfpNByKOtB4jAL0EssQXq5p8gO0Xs=
+github.com/openai/openai-go/v3 v3.24.0 h1:08x6GnYiB+AAejTo6yzPY8RkZMJQ8NpreiOyM5QfyYU=
+github.com/openai/openai-go/v3 v3.24.0/go.mod h1:cdufnVK14cWcT9qA1rRtrXx4FTRsgbDPW7Ia7SS5cZo=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v1.23.2 h1:Je96obch5RDVy3FDMndoUsjAhG5Edi49h0RJWRi/o0o=
github.com/prometheus/client_golang v1.23.2/go.mod h1:Tb1a6LWHB3/SPIzCoaDXI4I8UHKeFTEQ1YCr+0Gyqmg=
-github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
-github.com/prometheus/common v0.66.1 h1:h5E0h5/Y8niHc5DlaLlWLArTQI7tMrsfQjHV+d9ZoGs=
-github.com/prometheus/common v0.66.1/go.mod h1:gcaUsgf3KfRSwHY4dIMXLPV0K/Wg1oZ8+SbZk/HH/dA=
-github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
-github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
+github.com/prometheus/common v0.67.5 h1:pIgK94WWlQt1WLwAC5j2ynLaBRDiinoAb86HZHTUGI4=
+github.com/prometheus/common v0.67.5/go.mod h1:SjE/0MzDEEAyrdr5Gqc6G+sXI67maCxzaT3A2+HqjUw=
+github.com/prometheus/procfs v0.20.1 h1:XwbrGOIplXW/AU3YhIhLODXMJYyC1isLFfYCsTEycfc=
+github.com/prometheus/procfs v0.20.1/go.mod h1:o9EMBZGRyvDrSPH1RqdxhojkuXstoe4UlK79eF5TGGo=
github.com/redis/go-redis/v9 v9.18.0 h1:pMkxYPkEbMPwRdenAzUNyFNrDgHx9U+DrBabWNfSRQs=
github.com/redis/go-redis/v9 v9.18.0/go.mod h1:k3ufPphLU5YXwNTUcCRXGxUoF1fqxnhFQmscfkCoDA0=
-github.com/rogpeppe/go-internal v1.12.0 h1:exVL4IDcn6na9z1rAb56Vxr+CgyK3nn3O+epU5NdKM8=
-github.com/rogpeppe/go-internal v1.12.0/go.mod h1:E+RYuTGaKKdloAfM02xzb0FW3Paa99yedzYV+kq4uf4=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
+github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/sony/gobreaker v1.0.0 h1:feX5fGGXSl3dYd4aHZItw+FpHLvvoaqkawKjVNiFMNQ=
github.com/sony/gobreaker v1.0.0/go.mod h1:ZKptC7FHNvhBz7dN2LGjPVBz2sZJmc0/PkyDJOjmxWY=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
-github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
-github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
-github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
+github.com/tidwall/match v1.2.0 h1:0pt8FlkOwjN2fPt4bIl4BoNxb98gGHN2ObFEDkrfZnM=
+github.com/tidwall/match v1.2.0/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
@@ -153,99 +124,58 @@ github.com/yuin/gopher-lua v1.1.1 h1:kYKnWBjvbNP4XLT3+bPEwAXJx262OhaHDWDVOPjL46M
github.com/yuin/gopher-lua v1.1.1/go.mod h1:GBR0iDaNXjAgGg9zfCvksxSRnQx76gclCIb7kdAd1Pw=
github.com/zeebo/xxh3 v1.0.2 h1:xZmwmqxHZA8AI603jOQ0tMqmBr9lPeFwGg6d+xy9DC0=
github.com/zeebo/xxh3 v1.0.2/go.mod h1:5NWz9Sef7zIDm2JHfFlcQvNekmcEl9ekUZQQKCYaDcA=
-go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
-go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
-go.opentelemetry.io/otel v1.29.0 h1:PdomN/Al4q/lN6iBJEN3AwPvUiHPMlt93c8bqTG5Llw=
-go.opentelemetry.io/otel v1.29.0/go.mod h1:N/WtXPs1CNCUEx+Agz5uouwCba+i+bJGFicT8SR4NP8=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.29.0 h1:dIIDULZJpgdiHz5tXrTgKIMLkus6jEFa7x5SOKcyR7E=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.29.0/go.mod h1:jlRVBe7+Z1wyxFSUs48L6OBQZ5JwH2Hg/Vbl+t9rAgI=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.29.0 h1:nSiV3s7wiCam610XcLbYOmMfJxB9gO4uK3Xgv5gmTgg=
-go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.29.0/go.mod h1:hKn/e/Nmd19/x1gvIHwtOwVWM+VhuITSWip3JUDghj0=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.29.0 h1:X3ZjNp36/WlkSYx0ul2jw4PtbNEDDeLskw3VPsrpYM0=
-go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.29.0/go.mod h1:2uL/xnOXh0CHOBFCWXz5u1A4GXLiW+0IQIzVbeOEQ0U=
-go.opentelemetry.io/otel/metric v1.29.0 h1:vPf/HFWTNkPu1aYeIsc98l4ktOQaL6LeSoeV2g+8YLc=
-go.opentelemetry.io/otel/metric v1.29.0/go.mod h1:auu/QWieFVWx+DmQOUMgj0F8LHWdgalxXqvp7BII/W8=
-go.opentelemetry.io/otel/sdk v1.29.0 h1:vkqKjk7gwhS8VaWb0POZKmIEDimRCMsopNYnriHyryo=
-go.opentelemetry.io/otel/sdk v1.29.0/go.mod h1:pM8Dx5WKnvxLCb+8lG1PRNIDxu9g9b9g59Qr7hfAAok=
-go.opentelemetry.io/otel/trace v1.29.0 h1:J/8ZNK4XgR7a21DZUAsbF8pZ5Jcw1VhACmnYt39JTi4=
-go.opentelemetry.io/otel/trace v1.29.0/go.mod h1:eHl3w0sp3paPkYstJOmAimxhiFXPg+MMTlEh3nsQgWQ=
-go.opentelemetry.io/proto/otlp v1.3.1 h1:TrMUixzpM0yuc/znrFTP9MMRh8trP93mkCiDVeXrui0=
-go.opentelemetry.io/proto/otlp v1.3.1/go.mod h1:0X1WI4de4ZsLrrJNLAQbFeLCm3T7yBkR0XqQ7niQU+8=
+go.opentelemetry.io/auto/sdk v1.2.1 h1:jXsnJ4Lmnqd11kwkBV2LgLoFMZKizbCi5fNZ/ipaZ64=
+go.opentelemetry.io/auto/sdk v1.2.1/go.mod h1:KRTj+aOaElaLi+wW1kO/DZRXwkF4C5xPbEe3ZiIhN7Y=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.66.0 h1:PnV4kVnw0zOmwwFkAzCN5O07fw1YOIQor120zrh0AVo=
+go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.66.0/go.mod h1:ofAwF4uinaf8SXdVzzbL4OsxJ3VfeEg3f/F6CeF49/Y=
+go.opentelemetry.io/otel v1.41.0 h1:YlEwVsGAlCvczDILpUXpIpPSL/VPugt7zHThEMLce1c=
+go.opentelemetry.io/otel v1.41.0/go.mod h1:Yt4UwgEKeT05QbLwbyHXEwhnjxNO6D8L5PQP51/46dE=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.41.0 h1:ao6Oe+wSebTlQ1OEht7jlYTzQKE+pnx/iNywFvTbuuI=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.41.0/go.mod h1:u3T6vz0gh/NVzgDgiwkgLxpsSF6PaPmo2il0apGJbls=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.41.0 h1:mq/Qcf28TWz719lE3/hMB4KkyDuLJIvgJnFGcd0kEUI=
+go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.41.0/go.mod h1:yk5LXEYhsL2htyDNJbEq7fWzNEigeEdV5xBF/Y+kAv0=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.41.0 h1:61oRQmYGMW7pXmFjPg1Muy84ndqMxQ6SH2L8fBG8fSY=
+go.opentelemetry.io/otel/exporters/stdout/stdouttrace v1.41.0/go.mod h1:c0z2ubK4RQL+kSDuuFu9WnuXimObon3IiKjJf4NACvU=
+go.opentelemetry.io/otel/metric v1.41.0 h1:rFnDcs4gRzBcsO9tS8LCpgR0dxg4aaxWlJxCno7JlTQ=
+go.opentelemetry.io/otel/metric v1.41.0/go.mod h1:xPvCwd9pU0VN8tPZYzDZV/BMj9CM9vs00GuBjeKhJps=
+go.opentelemetry.io/otel/sdk v1.41.0 h1:YPIEXKmiAwkGl3Gu1huk1aYWwtpRLeskpV+wPisxBp8=
+go.opentelemetry.io/otel/sdk v1.41.0/go.mod h1:ahFdU0G5y8IxglBf0QBJXgSe7agzjE4GiTJ6HT9ud90=
+go.opentelemetry.io/otel/sdk/metric v1.41.0 h1:siZQIYBAUd1rlIWQT2uCxWJxcCO7q3TriaMlf08rXw8=
+go.opentelemetry.io/otel/sdk/metric v1.41.0/go.mod h1:HNBuSvT7ROaGtGI50ArdRLUnvRTRGniSUZbxiWxSO8Y=
+go.opentelemetry.io/otel/trace v1.41.0 h1:Vbk2co6bhj8L59ZJ6/xFTskY+tGAbOnCtQGVVa9TIN0=
+go.opentelemetry.io/otel/trace v1.41.0/go.mod h1:U1NU4ULCoxeDKc09yCWdWe+3QoyweJcISEVa1RBzOis=
+go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
+go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
-go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI=
-go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU=
-golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
-golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
-golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8=
-golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A=
-golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
-golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
-golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
-golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
-golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
-golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
-golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
-golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
-golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
-golang.org/x/net v0.49.0 h1:eeHFmOGUTtaaPSGNmjBKpbng9MulQsJURQUAfUwY++o=
-golang.org/x/net v0.49.0/go.mod h1:/ysNB2EvaqvesRkuLAyjI1ycPZlQHM3q01F02UY/MV8=
-golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
+go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
+golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
+golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
+golang.org/x/net v0.51.0 h1:94R/GTO7mt3/4wIKpcR5gkGmRLOuE/2hNGeWq/GBIFo=
+golang.org/x/net v0.51.0/go.mod h1:aamm+2QF5ogm02fjy5Bb7CQ0WMt1/WVM7FtyaTLlA9Y=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
-golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
-golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ=
-golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
-golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
-golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE=
-golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8=
+golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
+golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
+golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
-golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
-golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
-golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
-golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
-golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
-google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
-google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
-google.golang.org/genai v1.48.0 h1:1vb15G291wAjJJueisMDpUhssljhEdJU2t5qTidrVPs=
-google.golang.org/genai v1.48.0/go.mod h1:A3kkl0nyBjyFlNjgxIwKq70julKbIxpSxqKO5gw/gmk=
-google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
-google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
-google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
-google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1 h1:hjSy6tcFQZ171igDaN5QHOw2n6vx40juYbC/x67CEhc=
-google.golang.org/genproto/googleapis/api v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:qpvKtACPCQhAdu3PyQgV4l3LMXZEtft7y8QcarRsp9I=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1 h1:pPJltXNxVzT4pK9yD8vR9X75DaWYYmLGMsEvBfFQZzQ=
-google.golang.org/genproto/googleapis/rpc v0.0.0-20240903143218-8af14fe29dc1/go.mod h1:UqMtugtsSgubUsoxbuAoiCXvqvErP7Gf0so0mK9tHxU=
-google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
-google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
-google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
-google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
-google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
-google.golang.org/grpc v1.66.2 h1:3QdXkuq3Bkh7w+ywLdLvM56cmGvQHUMZpiCzt6Rqaoo=
-google.golang.org/grpc v1.66.2/go.mod h1:s3/l6xSSCURdVfAnL+TqCNMyTDAGN6+lZeVxnZR128Y=
-google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
-google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
-google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
-google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
-google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
-google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
-google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
-google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
-google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
-google.golang.org/protobuf v1.36.8 h1:xHScyCOEuuwZEc6UtSOvPbAT4zRh0xcNRYekJwfqyMc=
-google.golang.org/protobuf v1.36.8/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
+gonum.org/v1/gonum v0.16.0 h1:5+ul4Swaf3ESvrOnidPp4GZbzf0mxVQpDCYUQE7OJfk=
+gonum.org/v1/gonum v0.16.0/go.mod h1:fef3am4MQ93R2HHpKnLk4/Tbh/s0+wqD5nfa6Pnwy4E=
+google.golang.org/genai v1.49.0 h1:Se+QJaH2GYK1aaR1o5S38mlU2GD5FnVvP76nfkV7LH0=
+google.golang.org/genai v1.49.0/go.mod h1:A3kkl0nyBjyFlNjgxIwKq70julKbIxpSxqKO5gw/gmk=
+google.golang.org/genproto/googleapis/api v0.0.0-20260226221140-a57be14db171 h1:tu/dtnW1o3wfaxCOjSLn5IRX4YDcJrtlpzYkhHhGaC4=
+google.golang.org/genproto/googleapis/api v0.0.0-20260226221140-a57be14db171/go.mod h1:M5krXqk4GhBKvB596udGL3UyjL4I1+cTbK0orROM9ng=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260226221140-a57be14db171 h1:ggcbiqK8WWh6l1dnltU4BgWGIGo+EVYxCaAPih/zQXQ=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20260226221140-a57be14db171/go.mod h1:4Hqkh8ycfw05ld/3BWL7rJOSfebL2Q+DVDeRgYgxUU8=
+google.golang.org/grpc v1.79.1 h1:zGhSi45ODB9/p3VAawt9a+O/MULLl9dpizzNNpq7flY=
+google.golang.org/grpc v1.79.1/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
+google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
+google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
@@ -254,5 +184,3 @@ gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
-honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
-honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
diff --git a/test_output.txt b/test_output.txt
deleted file mode 100644
index 9ad252e..0000000
--- a/test_output.txt
+++ /dev/null
@@ -1,916 +0,0 @@
- github.com/ajac-zero/latticelm/cmd/gateway coverage: 0.0% of statements
-=== RUN TestInputUnion_UnmarshalJSON
-=== RUN TestInputUnion_UnmarshalJSON/string_input
-=== RUN TestInputUnion_UnmarshalJSON/empty_string_input
-=== RUN TestInputUnion_UnmarshalJSON/null_input
-=== RUN TestInputUnion_UnmarshalJSON/array_input_with_single_message
-=== RUN TestInputUnion_UnmarshalJSON/array_input_with_multiple_messages
-=== RUN TestInputUnion_UnmarshalJSON/empty_array
-=== RUN TestInputUnion_UnmarshalJSON/array_with_function_call_output
-=== RUN TestInputUnion_UnmarshalJSON/invalid_JSON
-=== RUN TestInputUnion_UnmarshalJSON/invalid_type_-_number
-=== RUN TestInputUnion_UnmarshalJSON/invalid_type_-_object
---- PASS: TestInputUnion_UnmarshalJSON (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/string_input (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/empty_string_input (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/null_input (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/array_input_with_single_message (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/array_input_with_multiple_messages (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/empty_array (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/array_with_function_call_output (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/invalid_JSON (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/invalid_type_-_number (0.00s)
- --- PASS: TestInputUnion_UnmarshalJSON/invalid_type_-_object (0.00s)
-=== RUN TestInputUnion_MarshalJSON
-=== RUN TestInputUnion_MarshalJSON/string_value
-=== RUN TestInputUnion_MarshalJSON/empty_string
-=== RUN TestInputUnion_MarshalJSON/array_value
-=== RUN TestInputUnion_MarshalJSON/empty_array
-=== RUN TestInputUnion_MarshalJSON/nil_values
---- PASS: TestInputUnion_MarshalJSON (0.00s)
- --- PASS: TestInputUnion_MarshalJSON/string_value (0.00s)
- --- PASS: TestInputUnion_MarshalJSON/empty_string (0.00s)
- --- PASS: TestInputUnion_MarshalJSON/array_value (0.00s)
- --- PASS: TestInputUnion_MarshalJSON/empty_array (0.00s)
- --- PASS: TestInputUnion_MarshalJSON/nil_values (0.00s)
-=== RUN TestInputUnion_RoundTrip
-=== RUN TestInputUnion_RoundTrip/string
-=== RUN TestInputUnion_RoundTrip/array_with_messages
---- PASS: TestInputUnion_RoundTrip (0.00s)
- --- PASS: TestInputUnion_RoundTrip/string (0.00s)
- --- PASS: TestInputUnion_RoundTrip/array_with_messages (0.00s)
-=== RUN TestResponseRequest_NormalizeInput
-=== RUN TestResponseRequest_NormalizeInput/string_input_creates_user_message
-=== RUN TestResponseRequest_NormalizeInput/message_with_string_content
-=== RUN TestResponseRequest_NormalizeInput/assistant_message_with_string_content_uses_output_text
-=== RUN TestResponseRequest_NormalizeInput/message_with_content_blocks_array
-=== RUN TestResponseRequest_NormalizeInput/message_with_tool_use_blocks
-=== RUN TestResponseRequest_NormalizeInput/message_with_mixed_text_and_tool_use
-=== RUN TestResponseRequest_NormalizeInput/multiple_tool_use_blocks
-=== RUN TestResponseRequest_NormalizeInput/function_call_output_item
-=== RUN TestResponseRequest_NormalizeInput/multiple_messages_in_conversation
-=== RUN TestResponseRequest_NormalizeInput/complete_tool_calling_flow
-=== RUN TestResponseRequest_NormalizeInput/message_without_type_defaults_to_message
-=== RUN TestResponseRequest_NormalizeInput/message_with_nil_content
-=== RUN TestResponseRequest_NormalizeInput/tool_use_with_empty_input
-=== RUN TestResponseRequest_NormalizeInput/content_blocks_with_unknown_types_ignored
---- PASS: TestResponseRequest_NormalizeInput (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/string_input_creates_user_message (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/message_with_string_content (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/assistant_message_with_string_content_uses_output_text (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/message_with_content_blocks_array (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/message_with_tool_use_blocks (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/message_with_mixed_text_and_tool_use (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/multiple_tool_use_blocks (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/function_call_output_item (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/multiple_messages_in_conversation (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/complete_tool_calling_flow (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/message_without_type_defaults_to_message (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/message_with_nil_content (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/tool_use_with_empty_input (0.00s)
- --- PASS: TestResponseRequest_NormalizeInput/content_blocks_with_unknown_types_ignored (0.00s)
-=== RUN TestResponseRequest_Validate
-=== RUN TestResponseRequest_Validate/valid_request_with_string_input
-=== RUN TestResponseRequest_Validate/valid_request_with_array_input
-=== RUN TestResponseRequest_Validate/nil_request
-=== RUN TestResponseRequest_Validate/missing_model
-=== RUN TestResponseRequest_Validate/missing_input
-=== RUN TestResponseRequest_Validate/empty_string_input_is_invalid
-=== RUN TestResponseRequest_Validate/empty_array_input_is_invalid
---- PASS: TestResponseRequest_Validate (0.00s)
- --- PASS: TestResponseRequest_Validate/valid_request_with_string_input (0.00s)
- --- PASS: TestResponseRequest_Validate/valid_request_with_array_input (0.00s)
- --- PASS: TestResponseRequest_Validate/nil_request (0.00s)
- --- PASS: TestResponseRequest_Validate/missing_model (0.00s)
- --- PASS: TestResponseRequest_Validate/missing_input (0.00s)
- --- PASS: TestResponseRequest_Validate/empty_string_input_is_invalid (0.00s)
- --- PASS: TestResponseRequest_Validate/empty_array_input_is_invalid (0.00s)
-=== RUN TestGetStringField
-=== RUN TestGetStringField/existing_string_field
-=== RUN TestGetStringField/missing_field
-=== RUN TestGetStringField/wrong_type_-_int
-=== RUN TestGetStringField/wrong_type_-_bool
-=== RUN TestGetStringField/wrong_type_-_object
-=== RUN TestGetStringField/empty_string_value
-=== RUN TestGetStringField/nil_map
---- PASS: TestGetStringField (0.00s)
- --- PASS: TestGetStringField/existing_string_field (0.00s)
- --- PASS: TestGetStringField/missing_field (0.00s)
- --- PASS: TestGetStringField/wrong_type_-_int (0.00s)
- --- PASS: TestGetStringField/wrong_type_-_bool (0.00s)
- --- PASS: TestGetStringField/wrong_type_-_object (0.00s)
- --- PASS: TestGetStringField/empty_string_value (0.00s)
- --- PASS: TestGetStringField/nil_map (0.00s)
-=== RUN TestInputItem_ComplexContent
-=== RUN TestInputItem_ComplexContent/content_with_nested_objects
-=== RUN TestInputItem_ComplexContent/content_with_array_in_input
---- PASS: TestInputItem_ComplexContent (0.00s)
- --- PASS: TestInputItem_ComplexContent/content_with_nested_objects (0.00s)
- --- PASS: TestInputItem_ComplexContent/content_with_array_in_input (0.00s)
-=== RUN TestResponseRequest_CompleteWorkflow
---- PASS: TestResponseRequest_CompleteWorkflow (0.00s)
-PASS
-coverage: 100.0% of statements
-ok github.com/ajac-zero/latticelm/internal/api 0.011s coverage: 100.0% of statements
-=== RUN TestNew
-=== RUN TestNew/disabled_auth_returns_empty_middleware
-=== RUN TestNew/enabled_without_issuer_returns_error
-=== RUN TestNew/enabled_with_valid_config_fetches_JWKS
-=== RUN TestNew/JWKS_fetch_failure_returns_error
---- PASS: TestNew (0.00s)
- --- PASS: TestNew/disabled_auth_returns_empty_middleware (0.00s)
- --- PASS: TestNew/enabled_without_issuer_returns_error (0.00s)
- --- PASS: TestNew/enabled_with_valid_config_fetches_JWKS (0.00s)
- --- PASS: TestNew/JWKS_fetch_failure_returns_error (0.00s)
-=== RUN TestMiddleware_Handler
-=== RUN TestMiddleware_Handler/missing_authorization_header
-=== RUN TestMiddleware_Handler/malformed_authorization_header_-_no_bearer
-=== RUN TestMiddleware_Handler/malformed_authorization_header_-_wrong_scheme
-=== RUN TestMiddleware_Handler/valid_token_with_correct_claims
-=== RUN TestMiddleware_Handler/expired_token
-=== RUN TestMiddleware_Handler/token_with_wrong_issuer
-=== RUN TestMiddleware_Handler/token_with_wrong_audience
-=== RUN TestMiddleware_Handler/token_with_missing_kid
---- PASS: TestMiddleware_Handler (0.01s)
- --- PASS: TestMiddleware_Handler/missing_authorization_header (0.00s)
- --- PASS: TestMiddleware_Handler/malformed_authorization_header_-_no_bearer (0.00s)
- --- PASS: TestMiddleware_Handler/malformed_authorization_header_-_wrong_scheme (0.00s)
- --- PASS: TestMiddleware_Handler/valid_token_with_correct_claims (0.00s)
- --- PASS: TestMiddleware_Handler/expired_token (0.00s)
- --- PASS: TestMiddleware_Handler/token_with_wrong_issuer (0.00s)
- --- PASS: TestMiddleware_Handler/token_with_wrong_audience (0.00s)
- --- PASS: TestMiddleware_Handler/token_with_missing_kid (0.00s)
-=== RUN TestMiddleware_Handler_DisabledAuth
---- PASS: TestMiddleware_Handler_DisabledAuth (0.00s)
-=== RUN TestValidateToken
-=== RUN TestValidateToken/valid_token_with_all_required_claims
-=== RUN TestValidateToken/token_with_audience_as_array
-=== RUN TestValidateToken/token_with_audience_array_not_matching
-=== RUN TestValidateToken/token_with_invalid_audience_format
-=== RUN TestValidateToken/token_signed_with_wrong_key
-=== RUN TestValidateToken/token_with_unknown_kid_triggers_JWKS_refresh
-=== RUN TestValidateToken/token_with_completely_unknown_kid_after_refresh
-=== RUN TestValidateToken/malformed_token
-=== RUN TestValidateToken/token_with_non-RSA_signing_method
---- PASS: TestValidateToken (0.80s)
- --- PASS: TestValidateToken/valid_token_with_all_required_claims (0.00s)
- --- PASS: TestValidateToken/token_with_audience_as_array (0.00s)
- --- PASS: TestValidateToken/token_with_audience_array_not_matching (0.00s)
- --- PASS: TestValidateToken/token_with_invalid_audience_format (0.00s)
- --- PASS: TestValidateToken/token_signed_with_wrong_key (0.15s)
- --- PASS: TestValidateToken/token_with_unknown_kid_triggers_JWKS_refresh (0.42s)
- --- PASS: TestValidateToken/token_with_completely_unknown_kid_after_refresh (0.22s)
- --- PASS: TestValidateToken/malformed_token (0.00s)
- --- PASS: TestValidateToken/token_with_non-RSA_signing_method (0.00s)
-=== RUN TestValidateToken_NoAudienceConfigured
---- PASS: TestValidateToken_NoAudienceConfigured (0.00s)
-=== RUN TestRefreshJWKS
-=== RUN TestRefreshJWKS/successful_JWKS_fetch_and_parse
-=== RUN TestRefreshJWKS/OIDC_discovery_failure
-=== RUN TestRefreshJWKS/JWKS_with_multiple_keys
-=== RUN TestRefreshJWKS/JWKS_with_non-RSA_keys_skipped
-=== RUN TestRefreshJWKS/JWKS_with_wrong_use_field_skipped
-=== RUN TestRefreshJWKS/JWKS_with_invalid_base64_encoding_skipped
---- PASS: TestRefreshJWKS (0.14s)
- --- PASS: TestRefreshJWKS/successful_JWKS_fetch_and_parse (0.00s)
- --- PASS: TestRefreshJWKS/OIDC_discovery_failure (0.00s)
- --- PASS: TestRefreshJWKS/JWKS_with_multiple_keys (0.14s)
- --- PASS: TestRefreshJWKS/JWKS_with_non-RSA_keys_skipped (0.00s)
- --- PASS: TestRefreshJWKS/JWKS_with_wrong_use_field_skipped (0.00s)
- --- PASS: TestRefreshJWKS/JWKS_with_invalid_base64_encoding_skipped (0.00s)
-=== RUN TestRefreshJWKS_Concurrency
---- PASS: TestRefreshJWKS_Concurrency (0.01s)
-=== RUN TestGetClaims
-=== RUN TestGetClaims/context_with_claims
-=== RUN TestGetClaims/context_without_claims
-=== RUN TestGetClaims/context_with_wrong_type
---- PASS: TestGetClaims (0.00s)
- --- PASS: TestGetClaims/context_with_claims (0.00s)
- --- PASS: TestGetClaims/context_without_claims (0.00s)
- --- PASS: TestGetClaims/context_with_wrong_type (0.00s)
-=== RUN TestMiddleware_IssuerWithTrailingSlash
---- PASS: TestMiddleware_IssuerWithTrailingSlash (0.00s)
-PASS
-coverage: 91.7% of statements
-ok github.com/ajac-zero/latticelm/internal/auth 1.251s coverage: 91.7% of statements
-=== RUN TestLoad
-=== RUN TestLoad/basic_config_with_all_fields
-=== RUN TestLoad/config_with_environment_variables
-=== RUN TestLoad/minimal_config
-=== RUN TestLoad/azure_openai_provider
-=== RUN TestLoad/vertex_ai_provider
-=== RUN TestLoad/sql_conversation_store
-=== RUN TestLoad/redis_conversation_store
-=== RUN TestLoad/invalid_model_references_unknown_provider
-=== RUN TestLoad/invalid_YAML
-=== RUN TestLoad/multiple_models_same_provider
---- PASS: TestLoad (0.01s)
- --- PASS: TestLoad/basic_config_with_all_fields (0.00s)
- --- PASS: TestLoad/config_with_environment_variables (0.00s)
- --- PASS: TestLoad/minimal_config (0.00s)
- --- PASS: TestLoad/azure_openai_provider (0.00s)
- --- PASS: TestLoad/vertex_ai_provider (0.00s)
- --- PASS: TestLoad/sql_conversation_store (0.00s)
- --- PASS: TestLoad/redis_conversation_store (0.00s)
- --- PASS: TestLoad/invalid_model_references_unknown_provider (0.00s)
- --- PASS: TestLoad/invalid_YAML (0.00s)
- --- PASS: TestLoad/multiple_models_same_provider (0.00s)
-=== RUN TestLoadNonExistentFile
---- PASS: TestLoadNonExistentFile (0.00s)
-=== RUN TestConfigValidate
-=== RUN TestConfigValidate/valid_config
-=== RUN TestConfigValidate/model_references_unknown_provider
-=== RUN TestConfigValidate/no_models
-=== RUN TestConfigValidate/multiple_models_multiple_providers
---- PASS: TestConfigValidate (0.00s)
- --- PASS: TestConfigValidate/valid_config (0.00s)
- --- PASS: TestConfigValidate/model_references_unknown_provider (0.00s)
- --- PASS: TestConfigValidate/no_models (0.00s)
- --- PASS: TestConfigValidate/multiple_models_multiple_providers (0.00s)
-=== RUN TestEnvironmentVariableExpansion
---- PASS: TestEnvironmentVariableExpansion (0.00s)
-PASS
-coverage: 100.0% of statements
-ok github.com/ajac-zero/latticelm/internal/config 0.040s coverage: 100.0% of statements
-=== RUN TestMemoryStore_CreateAndGet
---- PASS: TestMemoryStore_CreateAndGet (0.00s)
-=== RUN TestMemoryStore_GetNonExistent
---- PASS: TestMemoryStore_GetNonExistent (0.00s)
-=== RUN TestMemoryStore_Append
---- PASS: TestMemoryStore_Append (0.00s)
-=== RUN TestMemoryStore_AppendNonExistent
---- PASS: TestMemoryStore_AppendNonExistent (0.00s)
-=== RUN TestMemoryStore_Delete
---- PASS: TestMemoryStore_Delete (0.00s)
-=== RUN TestMemoryStore_Size
---- PASS: TestMemoryStore_Size (0.00s)
-=== RUN TestMemoryStore_ConcurrentAccess
---- PASS: TestMemoryStore_ConcurrentAccess (0.00s)
-=== RUN TestMemoryStore_DeepCopy
---- PASS: TestMemoryStore_DeepCopy (0.00s)
-=== RUN TestMemoryStore_TTLCleanup
---- PASS: TestMemoryStore_TTLCleanup (0.15s)
-=== RUN TestMemoryStore_NoTTL
---- PASS: TestMemoryStore_NoTTL (0.00s)
-=== RUN TestMemoryStore_UpdatedAtTracking
---- PASS: TestMemoryStore_UpdatedAtTracking (0.01s)
-=== RUN TestMemoryStore_MultipleConversations
---- PASS: TestMemoryStore_MultipleConversations (0.00s)
-=== RUN TestNewRedisStore
---- PASS: TestNewRedisStore (0.00s)
-=== RUN TestRedisStore_Create
---- PASS: TestRedisStore_Create (0.00s)
-=== RUN TestRedisStore_Get
---- PASS: TestRedisStore_Get (0.00s)
-=== RUN TestRedisStore_Append
---- PASS: TestRedisStore_Append (0.00s)
-=== RUN TestRedisStore_Delete
---- PASS: TestRedisStore_Delete (0.00s)
-=== RUN TestRedisStore_Size
---- PASS: TestRedisStore_Size (0.00s)
-=== RUN TestRedisStore_TTL
---- PASS: TestRedisStore_TTL (0.00s)
-=== RUN TestRedisStore_KeyStorage
---- PASS: TestRedisStore_KeyStorage (0.00s)
-=== RUN TestRedisStore_Concurrent
---- PASS: TestRedisStore_Concurrent (0.01s)
-=== RUN TestRedisStore_JSONEncoding
---- PASS: TestRedisStore_JSONEncoding (0.00s)
-=== RUN TestRedisStore_EmptyMessages
---- PASS: TestRedisStore_EmptyMessages (0.00s)
-=== RUN TestRedisStore_UpdateExisting
---- PASS: TestRedisStore_UpdateExisting (0.01s)
-=== RUN TestRedisStore_ContextCancellation
---- PASS: TestRedisStore_ContextCancellation (0.01s)
-=== RUN TestRedisStore_ScanPagination
---- PASS: TestRedisStore_ScanPagination (0.00s)
-=== RUN TestNewSQLStore
---- PASS: TestNewSQLStore (0.00s)
-=== RUN TestSQLStore_Create
---- PASS: TestSQLStore_Create (0.00s)
-=== RUN TestSQLStore_Get
---- PASS: TestSQLStore_Get (0.00s)
-=== RUN TestSQLStore_Append
---- PASS: TestSQLStore_Append (0.00s)
-=== RUN TestSQLStore_Delete
---- PASS: TestSQLStore_Delete (0.00s)
-=== RUN TestSQLStore_Size
---- PASS: TestSQLStore_Size (0.00s)
-=== RUN TestSQLStore_Cleanup
- sql_store_test.go:198:
- Error Trace: /home/coder/go-llm-gateway/internal/conversation/sql_store_test.go:198
- Error: Not equal:
- expected: 0
- actual : 1
- Test: TestSQLStore_Cleanup
---- FAIL: TestSQLStore_Cleanup (0.50s)
-=== RUN TestSQLStore_ConcurrentAccess
---- PASS: TestSQLStore_ConcurrentAccess (0.00s)
-=== RUN TestSQLStore_ContextCancellation
---- PASS: TestSQLStore_ContextCancellation (0.00s)
-=== RUN TestSQLStore_JSONEncoding
---- PASS: TestSQLStore_JSONEncoding (0.00s)
-=== RUN TestSQLStore_EmptyMessages
---- PASS: TestSQLStore_EmptyMessages (0.00s)
-=== RUN TestSQLStore_UpdateExisting
---- PASS: TestSQLStore_UpdateExisting (0.01s)
-FAIL
-coverage: 66.0% of statements
-FAIL github.com/ajac-zero/latticelm/internal/conversation 0.768s
- github.com/ajac-zero/latticelm/internal/logger coverage: 0.0% of statements
-=== RUN TestInitMetrics
---- PASS: TestInitMetrics (0.00s)
-=== RUN TestRecordCircuitBreakerStateChange
-=== RUN TestRecordCircuitBreakerStateChange/transition_to_closed
-=== RUN TestRecordCircuitBreakerStateChange/transition_to_open
-=== RUN TestRecordCircuitBreakerStateChange/transition_to_half-open
-=== RUN TestRecordCircuitBreakerStateChange/closed_to_half-open
-=== RUN TestRecordCircuitBreakerStateChange/half-open_to_closed
-=== RUN TestRecordCircuitBreakerStateChange/half-open_to_open
---- PASS: TestRecordCircuitBreakerStateChange (0.00s)
- --- PASS: TestRecordCircuitBreakerStateChange/transition_to_closed (0.00s)
- --- PASS: TestRecordCircuitBreakerStateChange/transition_to_open (0.00s)
- --- PASS: TestRecordCircuitBreakerStateChange/transition_to_half-open (0.00s)
- --- PASS: TestRecordCircuitBreakerStateChange/closed_to_half-open (0.00s)
- --- PASS: TestRecordCircuitBreakerStateChange/half-open_to_closed (0.00s)
- --- PASS: TestRecordCircuitBreakerStateChange/half-open_to_open (0.00s)
-=== RUN TestMetricLabels
-=== RUN TestMetricLabels/basic_labels
-=== RUN TestMetricLabels/different_labels
-=== RUN TestMetricLabels/empty_labels
---- PASS: TestMetricLabels (0.00s)
- --- PASS: TestMetricLabels/basic_labels (0.00s)
- --- PASS: TestMetricLabels/different_labels (0.00s)
- --- PASS: TestMetricLabels/empty_labels (0.00s)
-=== RUN TestHTTPMetrics
-=== RUN TestHTTPMetrics/GET_request
-=== RUN TestHTTPMetrics/POST_request
-=== RUN TestHTTPMetrics/error_response
---- PASS: TestHTTPMetrics (0.00s)
- --- PASS: TestHTTPMetrics/GET_request (0.00s)
- --- PASS: TestHTTPMetrics/POST_request (0.00s)
- --- PASS: TestHTTPMetrics/error_response (0.00s)
-=== RUN TestProviderMetrics
-=== RUN TestProviderMetrics/OpenAI_generate_success
-=== RUN TestProviderMetrics/Anthropic_stream_success
-=== RUN TestProviderMetrics/Google_generate_error
---- PASS: TestProviderMetrics (0.00s)
- --- PASS: TestProviderMetrics/OpenAI_generate_success (0.00s)
- --- PASS: TestProviderMetrics/Anthropic_stream_success (0.00s)
- --- PASS: TestProviderMetrics/Google_generate_error (0.00s)
-=== RUN TestConversationStoreMetrics
-=== RUN TestConversationStoreMetrics/create_success
-=== RUN TestConversationStoreMetrics/get_success
-=== RUN TestConversationStoreMetrics/delete_error
---- PASS: TestConversationStoreMetrics (0.00s)
- --- PASS: TestConversationStoreMetrics/create_success (0.00s)
- --- PASS: TestConversationStoreMetrics/get_success (0.00s)
- --- PASS: TestConversationStoreMetrics/delete_error (0.00s)
-=== RUN TestMetricHelp
---- PASS: TestMetricHelp (0.00s)
-=== RUN TestMetricTypes
---- PASS: TestMetricTypes (0.00s)
-=== RUN TestCircuitBreakerInvalidState
---- PASS: TestCircuitBreakerInvalidState (0.00s)
-=== RUN TestMetricNaming
---- PASS: TestMetricNaming (0.00s)
-=== RUN TestNewInstrumentedProvider
-=== RUN TestNewInstrumentedProvider/with_registry_and_tracer
-=== RUN TestNewInstrumentedProvider/with_registry_only
-=== RUN TestNewInstrumentedProvider/with_tracer_only
-=== RUN TestNewInstrumentedProvider/without_observability
---- PASS: TestNewInstrumentedProvider (0.00s)
- --- PASS: TestNewInstrumentedProvider/with_registry_and_tracer (0.00s)
- --- PASS: TestNewInstrumentedProvider/with_registry_only (0.00s)
- --- PASS: TestNewInstrumentedProvider/with_tracer_only (0.00s)
- --- PASS: TestNewInstrumentedProvider/without_observability (0.00s)
-=== RUN TestInstrumentedProvider_Generate
-=== RUN TestInstrumentedProvider_Generate/successful_generation
-=== RUN TestInstrumentedProvider_Generate/generation_error
-=== RUN TestInstrumentedProvider_Generate/nil_result
-=== RUN TestInstrumentedProvider_Generate/empty_tokens
---- PASS: TestInstrumentedProvider_Generate (0.00s)
- --- PASS: TestInstrumentedProvider_Generate/successful_generation (0.00s)
- --- PASS: TestInstrumentedProvider_Generate/generation_error (0.00s)
- --- PASS: TestInstrumentedProvider_Generate/nil_result (0.00s)
- --- PASS: TestInstrumentedProvider_Generate/empty_tokens (0.00s)
-=== RUN TestInstrumentedProvider_GenerateStream
-=== RUN TestInstrumentedProvider_GenerateStream/successful_streaming
- provider_wrapper_test.go:438:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/provider_wrapper_test.go:438
- Error: Not equal:
- expected: 4
- actual : 2
- Test: TestInstrumentedProvider_GenerateStream/successful_streaming
- provider_wrapper_test.go:455:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/provider_wrapper_test.go:455
- Error: Not equal:
- expected: 1
- actual : 0
- Test: TestInstrumentedProvider_GenerateStream/successful_streaming
- Messages: stream request counter should be incremented
-=== RUN TestInstrumentedProvider_GenerateStream/streaming_error
- provider_wrapper_test.go:455:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/provider_wrapper_test.go:455
- Error: Not equal:
- expected: 1
- actual : 0
- Test: TestInstrumentedProvider_GenerateStream/streaming_error
- Messages: stream request counter should be incremented
-=== RUN TestInstrumentedProvider_GenerateStream/empty_stream
- provider_wrapper_test.go:455:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/provider_wrapper_test.go:455
- Error: Not equal:
- expected: 1
- actual : 0
- Test: TestInstrumentedProvider_GenerateStream/empty_stream
- Messages: stream request counter should be incremented
---- FAIL: TestInstrumentedProvider_GenerateStream (0.61s)
- --- FAIL: TestInstrumentedProvider_GenerateStream/successful_streaming (0.20s)
- --- FAIL: TestInstrumentedProvider_GenerateStream/streaming_error (0.20s)
- --- FAIL: TestInstrumentedProvider_GenerateStream/empty_stream (0.20s)
-=== RUN TestInstrumentedProvider_MetricsRecording
---- PASS: TestInstrumentedProvider_MetricsRecording (0.00s)
-=== RUN TestInstrumentedProvider_TracingSpans
---- PASS: TestInstrumentedProvider_TracingSpans (0.00s)
-=== RUN TestInstrumentedProvider_WithoutObservability
---- PASS: TestInstrumentedProvider_WithoutObservability (0.00s)
-=== RUN TestInstrumentedProvider_Name
-=== RUN TestInstrumentedProvider_Name/openai_provider
-=== RUN TestInstrumentedProvider_Name/anthropic_provider
-=== RUN TestInstrumentedProvider_Name/google_provider
---- PASS: TestInstrumentedProvider_Name (0.00s)
- --- PASS: TestInstrumentedProvider_Name/openai_provider (0.00s)
- --- PASS: TestInstrumentedProvider_Name/anthropic_provider (0.00s)
- --- PASS: TestInstrumentedProvider_Name/google_provider (0.00s)
-=== RUN TestInstrumentedProvider_ConcurrentCalls
---- PASS: TestInstrumentedProvider_ConcurrentCalls (0.00s)
-=== RUN TestInstrumentedProvider_StreamTTFB
---- PASS: TestInstrumentedProvider_StreamTTFB (0.15s)
-=== RUN TestInitTracer_StdoutExporter
-=== RUN TestInitTracer_StdoutExporter/stdout_exporter_with_always_sampler
- tracing_test.go:74:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/tracing_test.go:74
- Error: Received unexpected error:
- failed to create resource: conflicting Schema URL: https://opentelemetry.io/schemas/1.26.0 and https://opentelemetry.io/schemas/1.24.0
- Test: TestInitTracer_StdoutExporter/stdout_exporter_with_always_sampler
-=== RUN TestInitTracer_StdoutExporter/stdout_exporter_with_never_sampler
- tracing_test.go:74:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/tracing_test.go:74
- Error: Received unexpected error:
- failed to create resource: conflicting Schema URL: https://opentelemetry.io/schemas/1.26.0 and https://opentelemetry.io/schemas/1.24.0
- Test: TestInitTracer_StdoutExporter/stdout_exporter_with_never_sampler
-=== RUN TestInitTracer_StdoutExporter/stdout_exporter_with_probability_sampler
- tracing_test.go:74:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/tracing_test.go:74
- Error: Received unexpected error:
- failed to create resource: conflicting Schema URL: https://opentelemetry.io/schemas/1.26.0 and https://opentelemetry.io/schemas/1.24.0
- Test: TestInitTracer_StdoutExporter/stdout_exporter_with_probability_sampler
---- FAIL: TestInitTracer_StdoutExporter (0.00s)
- --- FAIL: TestInitTracer_StdoutExporter/stdout_exporter_with_always_sampler (0.00s)
- --- FAIL: TestInitTracer_StdoutExporter/stdout_exporter_with_never_sampler (0.00s)
- --- FAIL: TestInitTracer_StdoutExporter/stdout_exporter_with_probability_sampler (0.00s)
-=== RUN TestInitTracer_InvalidExporter
- tracing_test.go:102:
- Error Trace: /home/coder/go-llm-gateway/internal/observability/tracing_test.go:102
- Error: "failed to create resource: conflicting Schema URL: https://opentelemetry.io/schemas/1.26.0 and https://opentelemetry.io/schemas/1.24.0" does not contain "unsupported exporter type"
- Test: TestInitTracer_InvalidExporter
---- FAIL: TestInitTracer_InvalidExporter (0.00s)
-=== RUN TestCreateSampler
-=== RUN TestCreateSampler/always_sampler
-=== RUN TestCreateSampler/never_sampler
-=== RUN TestCreateSampler/probability_sampler_-_100%
-=== RUN TestCreateSampler/probability_sampler_-_0%
-=== RUN TestCreateSampler/probability_sampler_-_50%
-=== RUN TestCreateSampler/default_sampler_(invalid_type)
---- PASS: TestCreateSampler (0.00s)
- --- PASS: TestCreateSampler/always_sampler (0.00s)
- --- PASS: TestCreateSampler/never_sampler (0.00s)
- --- PASS: TestCreateSampler/probability_sampler_-_100% (0.00s)
- --- PASS: TestCreateSampler/probability_sampler_-_0% (0.00s)
- --- PASS: TestCreateSampler/probability_sampler_-_50% (0.00s)
- --- PASS: TestCreateSampler/default_sampler_(invalid_type) (0.00s)
-=== RUN TestShutdown
-=== RUN TestShutdown/shutdown_valid_tracer_provider
-=== RUN TestShutdown/shutdown_nil_tracer_provider
---- PASS: TestShutdown (0.00s)
- --- PASS: TestShutdown/shutdown_valid_tracer_provider (0.00s)
- --- PASS: TestShutdown/shutdown_nil_tracer_provider (0.00s)
-=== RUN TestShutdown_ContextTimeout
---- PASS: TestShutdown_ContextTimeout (0.00s)
-=== RUN TestTracerConfig_ServiceName
-=== RUN TestTracerConfig_ServiceName/default_service_name
-=== RUN TestTracerConfig_ServiceName/custom_service_name
-=== RUN TestTracerConfig_ServiceName/empty_service_name
---- PASS: TestTracerConfig_ServiceName (0.00s)
- --- PASS: TestTracerConfig_ServiceName/default_service_name (0.00s)
- --- PASS: TestTracerConfig_ServiceName/custom_service_name (0.00s)
- --- PASS: TestTracerConfig_ServiceName/empty_service_name (0.00s)
-=== RUN TestCreateSampler_EdgeCases
-=== RUN TestCreateSampler_EdgeCases/negative_rate
-=== RUN TestCreateSampler_EdgeCases/rate_greater_than_1
-=== RUN TestCreateSampler_EdgeCases/empty_type
---- PASS: TestCreateSampler_EdgeCases (0.00s)
- --- PASS: TestCreateSampler_EdgeCases/negative_rate (0.00s)
- --- PASS: TestCreateSampler_EdgeCases/rate_greater_than_1 (0.00s)
- --- PASS: TestCreateSampler_EdgeCases/empty_type (0.00s)
-=== RUN TestTracerProvider_MultipleShutdowns
---- PASS: TestTracerProvider_MultipleShutdowns (0.00s)
-=== RUN TestSamplerDescription
-=== RUN TestSamplerDescription/always_sampler_description
-=== RUN TestSamplerDescription/never_sampler_description
-=== RUN TestSamplerDescription/probability_sampler_description
---- PASS: TestSamplerDescription (0.00s)
- --- PASS: TestSamplerDescription/always_sampler_description (0.00s)
- --- PASS: TestSamplerDescription/never_sampler_description (0.00s)
- --- PASS: TestSamplerDescription/probability_sampler_description (0.00s)
-=== RUN TestInitTracer_ResourceAttributes
---- PASS: TestInitTracer_ResourceAttributes (0.00s)
-=== RUN TestProbabilitySampler_Boundaries
-=== RUN TestProbabilitySampler_Boundaries/rate_0.0_-_never_sample
-=== RUN TestProbabilitySampler_Boundaries/rate_1.0_-_always_sample
-=== RUN TestProbabilitySampler_Boundaries/rate_0.5_-_probabilistic
---- PASS: TestProbabilitySampler_Boundaries (0.00s)
- --- PASS: TestProbabilitySampler_Boundaries/rate_0.0_-_never_sample (0.00s)
- --- PASS: TestProbabilitySampler_Boundaries/rate_1.0_-_always_sample (0.00s)
- --- PASS: TestProbabilitySampler_Boundaries/rate_0.5_-_probabilistic (0.00s)
-FAIL
-coverage: 35.1% of statements
-FAIL github.com/ajac-zero/latticelm/internal/observability 0.783s
-=== RUN TestNewRegistry
-=== RUN TestNewRegistry/valid_config_with_OpenAI
-=== RUN TestNewRegistry/valid_config_with_multiple_providers
-=== RUN TestNewRegistry/no_providers_returns_error
-=== RUN TestNewRegistry/Azure_OpenAI_without_endpoint_returns_error
-=== RUN TestNewRegistry/Azure_OpenAI_with_endpoint_succeeds
-=== RUN TestNewRegistry/Azure_Anthropic_without_endpoint_returns_error
-=== RUN TestNewRegistry/Azure_Anthropic_with_endpoint_succeeds
-=== RUN TestNewRegistry/Google_provider
-=== RUN TestNewRegistry/Vertex_AI_without_project/location_returns_error
-=== RUN TestNewRegistry/Vertex_AI_with_project_and_location_succeeds
-=== RUN TestNewRegistry/unknown_provider_type_returns_error
-=== RUN TestNewRegistry/provider_with_no_API_key_is_skipped
-=== RUN TestNewRegistry/model_with_provider_model_id
---- PASS: TestNewRegistry (0.00s)
- --- PASS: TestNewRegistry/valid_config_with_OpenAI (0.00s)
- --- PASS: TestNewRegistry/valid_config_with_multiple_providers (0.00s)
- --- PASS: TestNewRegistry/no_providers_returns_error (0.00s)
- --- PASS: TestNewRegistry/Azure_OpenAI_without_endpoint_returns_error (0.00s)
- --- PASS: TestNewRegistry/Azure_OpenAI_with_endpoint_succeeds (0.00s)
- --- PASS: TestNewRegistry/Azure_Anthropic_without_endpoint_returns_error (0.00s)
- --- PASS: TestNewRegistry/Azure_Anthropic_with_endpoint_succeeds (0.00s)
- --- PASS: TestNewRegistry/Google_provider (0.00s)
- --- PASS: TestNewRegistry/Vertex_AI_without_project/location_returns_error (0.00s)
- --- PASS: TestNewRegistry/Vertex_AI_with_project_and_location_succeeds (0.00s)
- --- PASS: TestNewRegistry/unknown_provider_type_returns_error (0.00s)
- --- PASS: TestNewRegistry/provider_with_no_API_key_is_skipped (0.00s)
- --- PASS: TestNewRegistry/model_with_provider_model_id (0.00s)
-=== RUN TestRegistry_Get
-=== RUN TestRegistry_Get/existing_provider
-=== RUN TestRegistry_Get/another_existing_provider
-=== RUN TestRegistry_Get/nonexistent_provider
---- PASS: TestRegistry_Get (0.00s)
- --- PASS: TestRegistry_Get/existing_provider (0.00s)
- --- PASS: TestRegistry_Get/another_existing_provider (0.00s)
- --- PASS: TestRegistry_Get/nonexistent_provider (0.00s)
-=== RUN TestRegistry_Models
-=== RUN TestRegistry_Models/single_model
-=== RUN TestRegistry_Models/multiple_models
-=== RUN TestRegistry_Models/no_models
---- PASS: TestRegistry_Models (0.00s)
- --- PASS: TestRegistry_Models/single_model (0.00s)
- --- PASS: TestRegistry_Models/multiple_models (0.00s)
- --- PASS: TestRegistry_Models/no_models (0.00s)
-=== RUN TestRegistry_ResolveModelID
-=== RUN TestRegistry_ResolveModelID/model_without_provider_model_id_returns_model_name
-=== RUN TestRegistry_ResolveModelID/model_with_provider_model_id_returns_provider_model_id
-=== RUN TestRegistry_ResolveModelID/unknown_model_returns_model_name
---- PASS: TestRegistry_ResolveModelID (0.00s)
- --- PASS: TestRegistry_ResolveModelID/model_without_provider_model_id_returns_model_name (0.00s)
- --- PASS: TestRegistry_ResolveModelID/model_with_provider_model_id_returns_provider_model_id (0.00s)
- --- PASS: TestRegistry_ResolveModelID/unknown_model_returns_model_name (0.00s)
-=== RUN TestRegistry_Default
-=== RUN TestRegistry_Default/returns_provider_for_known_model
-=== RUN TestRegistry_Default/returns_first_provider_for_unknown_model
-=== RUN TestRegistry_Default/returns_first_provider_for_empty_model_name
---- PASS: TestRegistry_Default (0.00s)
- --- PASS: TestRegistry_Default/returns_provider_for_known_model (0.00s)
- --- PASS: TestRegistry_Default/returns_first_provider_for_unknown_model (0.00s)
- --- PASS: TestRegistry_Default/returns_first_provider_for_empty_model_name (0.00s)
-=== RUN TestBuildProvider
-=== RUN TestBuildProvider/OpenAI_provider
-=== RUN TestBuildProvider/OpenAI_provider_with_custom_endpoint
-=== RUN TestBuildProvider/Anthropic_provider
-=== RUN TestBuildProvider/Google_provider
-=== RUN TestBuildProvider/provider_without_API_key_returns_nil
-=== RUN TestBuildProvider/unknown_provider_type
---- PASS: TestBuildProvider (0.00s)
- --- PASS: TestBuildProvider/OpenAI_provider (0.00s)
- --- PASS: TestBuildProvider/OpenAI_provider_with_custom_endpoint (0.00s)
- --- PASS: TestBuildProvider/Anthropic_provider (0.00s)
- --- PASS: TestBuildProvider/Google_provider (0.00s)
- --- PASS: TestBuildProvider/provider_without_API_key_returns_nil (0.00s)
- --- PASS: TestBuildProvider/unknown_provider_type (0.00s)
-PASS
-coverage: 63.1% of statements
-ok github.com/ajac-zero/latticelm/internal/providers 0.035s coverage: 63.1% of statements
-=== RUN TestParseTools
---- PASS: TestParseTools (0.00s)
-=== RUN TestParseToolChoice
-=== RUN TestParseToolChoice/auto
-=== RUN TestParseToolChoice/any
-=== RUN TestParseToolChoice/required
-=== RUN TestParseToolChoice/specific_tool
---- PASS: TestParseToolChoice (0.00s)
- --- PASS: TestParseToolChoice/auto (0.00s)
- --- PASS: TestParseToolChoice/any (0.00s)
- --- PASS: TestParseToolChoice/required (0.00s)
- --- PASS: TestParseToolChoice/specific_tool (0.00s)
-PASS
-coverage: 16.2% of statements
-ok github.com/ajac-zero/latticelm/internal/providers/anthropic 0.016s coverage: 16.2% of statements
-=== RUN TestParseTools
-=== RUN TestParseTools/flat_format_tool
-=== RUN TestParseTools/nested_format_tool
-=== RUN TestParseTools/multiple_tools
-=== RUN TestParseTools/tool_without_description
-=== RUN TestParseTools/tool_without_parameters
-=== RUN TestParseTools/tool_without_name_(should_skip)
-=== RUN TestParseTools/nil_tools
-=== RUN TestParseTools/invalid_JSON
-=== RUN TestParseTools/empty_array
---- PASS: TestParseTools (0.00s)
- --- PASS: TestParseTools/flat_format_tool (0.00s)
- --- PASS: TestParseTools/nested_format_tool (0.00s)
- --- PASS: TestParseTools/multiple_tools (0.00s)
- --- PASS: TestParseTools/tool_without_description (0.00s)
- --- PASS: TestParseTools/tool_without_parameters (0.00s)
- --- PASS: TestParseTools/tool_without_name_(should_skip) (0.00s)
- --- PASS: TestParseTools/nil_tools (0.00s)
- --- PASS: TestParseTools/invalid_JSON (0.00s)
- --- PASS: TestParseTools/empty_array (0.00s)
-=== RUN TestParseToolChoice
-=== RUN TestParseToolChoice/auto_mode
-=== RUN TestParseToolChoice/none_mode
-=== RUN TestParseToolChoice/required_mode
-=== RUN TestParseToolChoice/any_mode
-=== RUN TestParseToolChoice/specific_function
-=== RUN TestParseToolChoice/nil_tool_choice
-=== RUN TestParseToolChoice/unknown_string_mode
-=== RUN TestParseToolChoice/invalid_JSON
-=== RUN TestParseToolChoice/unsupported_object_format
---- PASS: TestParseToolChoice (0.00s)
- --- PASS: TestParseToolChoice/auto_mode (0.00s)
- --- PASS: TestParseToolChoice/none_mode (0.00s)
- --- PASS: TestParseToolChoice/required_mode (0.00s)
- --- PASS: TestParseToolChoice/any_mode (0.00s)
- --- PASS: TestParseToolChoice/specific_function (0.00s)
- --- PASS: TestParseToolChoice/nil_tool_choice (0.00s)
- --- PASS: TestParseToolChoice/unknown_string_mode (0.00s)
- --- PASS: TestParseToolChoice/invalid_JSON (0.00s)
- --- PASS: TestParseToolChoice/unsupported_object_format (0.00s)
-=== RUN TestExtractToolCalls
-=== RUN TestExtractToolCalls/single_tool_call
-=== RUN TestExtractToolCalls/tool_call_without_ID_generates_one
-=== RUN TestExtractToolCalls/response_with_nil_candidates
-=== RUN TestExtractToolCalls/empty_candidates
---- PASS: TestExtractToolCalls (0.00s)
- --- PASS: TestExtractToolCalls/single_tool_call (0.00s)
- --- PASS: TestExtractToolCalls/tool_call_without_ID_generates_one (0.00s)
- --- PASS: TestExtractToolCalls/response_with_nil_candidates (0.00s)
- --- PASS: TestExtractToolCalls/empty_candidates (0.00s)
-=== RUN TestGenerateRandomID
-=== RUN TestGenerateRandomID/generates_non-empty_ID
-=== RUN TestGenerateRandomID/generates_unique_IDs
-=== RUN TestGenerateRandomID/only_contains_valid_characters
---- PASS: TestGenerateRandomID (0.00s)
- --- PASS: TestGenerateRandomID/generates_non-empty_ID (0.00s)
- --- PASS: TestGenerateRandomID/generates_unique_IDs (0.00s)
- --- PASS: TestGenerateRandomID/only_contains_valid_characters (0.00s)
-PASS
-coverage: 27.7% of statements
-ok github.com/ajac-zero/latticelm/internal/providers/google 0.017s coverage: 27.7% of statements
-=== RUN TestParseTools
-=== RUN TestParseTools/single_tool_with_all_fields
-=== RUN TestParseTools/multiple_tools
-=== RUN TestParseTools/tool_without_description
-=== RUN TestParseTools/tool_without_parameters
-=== RUN TestParseTools/nil_tools
-=== RUN TestParseTools/invalid_JSON
-=== RUN TestParseTools/empty_array
---- PASS: TestParseTools (0.00s)
- --- PASS: TestParseTools/single_tool_with_all_fields (0.00s)
- --- PASS: TestParseTools/multiple_tools (0.00s)
- --- PASS: TestParseTools/tool_without_description (0.00s)
- --- PASS: TestParseTools/tool_without_parameters (0.00s)
- --- PASS: TestParseTools/nil_tools (0.00s)
- --- PASS: TestParseTools/invalid_JSON (0.00s)
- --- PASS: TestParseTools/empty_array (0.00s)
-=== RUN TestParseToolChoice
-=== RUN TestParseToolChoice/auto_string
-=== RUN TestParseToolChoice/none_string
-=== RUN TestParseToolChoice/required_string
-=== RUN TestParseToolChoice/specific_function
-=== RUN TestParseToolChoice/nil_tool_choice
-=== RUN TestParseToolChoice/invalid_JSON
-=== RUN TestParseToolChoice/unsupported_format_(object_without_proper_structure)
---- PASS: TestParseToolChoice (0.00s)
- --- PASS: TestParseToolChoice/auto_string (0.00s)
- --- PASS: TestParseToolChoice/none_string (0.00s)
- --- PASS: TestParseToolChoice/required_string (0.00s)
- --- PASS: TestParseToolChoice/specific_function (0.00s)
- --- PASS: TestParseToolChoice/nil_tool_choice (0.00s)
- --- PASS: TestParseToolChoice/invalid_JSON (0.00s)
- --- PASS: TestParseToolChoice/unsupported_format_(object_without_proper_structure) (0.00s)
-=== RUN TestExtractToolCalls
-=== RUN TestExtractToolCalls/nil_message_returns_nil
---- PASS: TestExtractToolCalls (0.00s)
- --- PASS: TestExtractToolCalls/nil_message_returns_nil (0.00s)
-=== RUN TestExtractToolCallDelta
-=== RUN TestExtractToolCallDelta/empty_delta_returns_nil
---- PASS: TestExtractToolCallDelta (0.00s)
- --- PASS: TestExtractToolCallDelta/empty_delta_returns_nil (0.00s)
-PASS
-coverage: 16.1% of statements
-ok github.com/ajac-zero/latticelm/internal/providers/openai 0.024s coverage: 16.1% of statements
-=== RUN TestRateLimitMiddleware
-=== RUN TestRateLimitMiddleware/disabled_rate_limiting_allows_all_requests
-=== RUN TestRateLimitMiddleware/enabled_rate_limiting_enforces_limits
-time=2026-03-05T17:59:57.097Z level=WARN msg="rate limit exceeded" ip=192.168.1.1:1234 path=/test
-time=2026-03-05T17:59:57.097Z level=WARN msg="rate limit exceeded" ip=192.168.1.1:1234 path=/test
-time=2026-03-05T17:59:57.097Z level=WARN msg="rate limit exceeded" ip=192.168.1.1:1234 path=/test
---- PASS: TestRateLimitMiddleware (0.00s)
- --- PASS: TestRateLimitMiddleware/disabled_rate_limiting_allows_all_requests (0.00s)
- --- PASS: TestRateLimitMiddleware/enabled_rate_limiting_enforces_limits (0.00s)
-=== RUN TestGetClientIP
-=== RUN TestGetClientIP/uses_X-Forwarded-For_if_present
-=== RUN TestGetClientIP/uses_X-Real-IP_if_X-Forwarded-For_not_present
-=== RUN TestGetClientIP/uses_RemoteAddr_as_fallback
---- PASS: TestGetClientIP (0.00s)
- --- PASS: TestGetClientIP/uses_X-Forwarded-For_if_present (0.00s)
- --- PASS: TestGetClientIP/uses_X-Real-IP_if_X-Forwarded-For_not_present (0.00s)
- --- PASS: TestGetClientIP/uses_RemoteAddr_as_fallback (0.00s)
-=== RUN TestRateLimitRefill
-time=2026-03-05T17:59:57.097Z level=WARN msg="rate limit exceeded" ip=192.168.1.1:1234 path=/test
---- PASS: TestRateLimitRefill (0.15s)
-PASS
-coverage: 87.2% of statements
-ok github.com/ajac-zero/latticelm/internal/ratelimit 0.160s coverage: 87.2% of statements
-=== RUN TestHealthEndpoint
-=== RUN TestHealthEndpoint/GET_returns_healthy_status
-=== RUN TestHealthEndpoint/POST_returns_method_not_allowed
---- PASS: TestHealthEndpoint (0.00s)
- --- PASS: TestHealthEndpoint/GET_returns_healthy_status (0.00s)
- --- PASS: TestHealthEndpoint/POST_returns_method_not_allowed (0.00s)
-=== RUN TestReadyEndpoint
-=== RUN TestReadyEndpoint/returns_ready_when_all_checks_pass
-=== RUN TestReadyEndpoint/returns_not_ready_when_no_providers_configured
---- PASS: TestReadyEndpoint (0.00s)
- --- PASS: TestReadyEndpoint/returns_ready_when_all_checks_pass (0.00s)
- --- PASS: TestReadyEndpoint/returns_not_ready_when_no_providers_configured (0.00s)
-=== RUN TestReadyEndpointMethodNotAllowed
---- PASS: TestReadyEndpointMethodNotAllowed (0.00s)
-=== RUN TestPanicRecoveryMiddleware
-=== RUN TestPanicRecoveryMiddleware/no_panic_-_request_succeeds
-=== RUN TestPanicRecoveryMiddleware/panic_with_string_-_recovers_gracefully
-=== RUN TestPanicRecoveryMiddleware/panic_with_error_-_recovers_gracefully
-=== RUN TestPanicRecoveryMiddleware/panic_with_struct_-_recovers_gracefully
---- PASS: TestPanicRecoveryMiddleware (0.00s)
- --- PASS: TestPanicRecoveryMiddleware/no_panic_-_request_succeeds (0.00s)
- --- PASS: TestPanicRecoveryMiddleware/panic_with_string_-_recovers_gracefully (0.00s)
- --- PASS: TestPanicRecoveryMiddleware/panic_with_error_-_recovers_gracefully (0.00s)
- --- PASS: TestPanicRecoveryMiddleware/panic_with_struct_-_recovers_gracefully (0.00s)
-=== RUN TestRequestSizeLimitMiddleware
-=== RUN TestRequestSizeLimitMiddleware/small_POST_request_-_succeeds
-=== RUN TestRequestSizeLimitMiddleware/exact_size_POST_request_-_succeeds
-=== RUN TestRequestSizeLimitMiddleware/oversized_POST_request_-_fails
-=== RUN TestRequestSizeLimitMiddleware/large_POST_request_-_fails
-=== RUN TestRequestSizeLimitMiddleware/oversized_PUT_request_-_fails
-=== RUN TestRequestSizeLimitMiddleware/oversized_PATCH_request_-_fails
-=== RUN TestRequestSizeLimitMiddleware/GET_request_-_no_size_limit_applied
-=== RUN TestRequestSizeLimitMiddleware/DELETE_request_-_no_size_limit_applied
---- PASS: TestRequestSizeLimitMiddleware (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/small_POST_request_-_succeeds (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/exact_size_POST_request_-_succeeds (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/oversized_POST_request_-_fails (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/large_POST_request_-_fails (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/oversized_PUT_request_-_fails (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/oversized_PATCH_request_-_fails (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/GET_request_-_no_size_limit_applied (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware/DELETE_request_-_no_size_limit_applied (0.00s)
-=== RUN TestRequestSizeLimitMiddleware_WithJSONDecoding
-=== RUN TestRequestSizeLimitMiddleware_WithJSONDecoding/small_JSON_payload_-_succeeds
-=== RUN TestRequestSizeLimitMiddleware_WithJSONDecoding/large_JSON_payload_-_fails
---- PASS: TestRequestSizeLimitMiddleware_WithJSONDecoding (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware_WithJSONDecoding/small_JSON_payload_-_succeeds (0.00s)
- --- PASS: TestRequestSizeLimitMiddleware_WithJSONDecoding/large_JSON_payload_-_fails (0.00s)
-=== RUN TestWriteJSONError
-=== RUN TestWriteJSONError/simple_error_message
-=== RUN TestWriteJSONError/internal_server_error
-=== RUN TestWriteJSONError/unauthorized_error
---- PASS: TestWriteJSONError (0.00s)
- --- PASS: TestWriteJSONError/simple_error_message (0.00s)
- --- PASS: TestWriteJSONError/internal_server_error (0.00s)
- --- PASS: TestWriteJSONError/unauthorized_error (0.00s)
-=== RUN TestPanicRecoveryMiddleware_Integration
---- PASS: TestPanicRecoveryMiddleware_Integration (0.00s)
-=== RUN TestHandleModels
-=== RUN TestHandleModels/GET_returns_model_list
-=== RUN TestHandleModels/POST_returns_405
-=== RUN TestHandleModels/empty_registry_returns_empty_list
---- PASS: TestHandleModels (0.00s)
- --- PASS: TestHandleModels/GET_returns_model_list (0.00s)
- --- PASS: TestHandleModels/POST_returns_405 (0.00s)
- --- PASS: TestHandleModels/empty_registry_returns_empty_list (0.00s)
-=== RUN TestHandleResponses_Validation
-=== RUN TestHandleResponses_Validation/GET_returns_405
-=== RUN TestHandleResponses_Validation/invalid_JSON_returns_400
-=== RUN TestHandleResponses_Validation/missing_model_returns_400
-=== RUN TestHandleResponses_Validation/missing_input_returns_400
---- PASS: TestHandleResponses_Validation (0.00s)
- --- PASS: TestHandleResponses_Validation/GET_returns_405 (0.00s)
- --- PASS: TestHandleResponses_Validation/invalid_JSON_returns_400 (0.00s)
- --- PASS: TestHandleResponses_Validation/missing_model_returns_400 (0.00s)
- --- PASS: TestHandleResponses_Validation/missing_input_returns_400 (0.00s)
-=== RUN TestHandleResponses_Sync_Success
-=== RUN TestHandleResponses_Sync_Success/simple_text_response
-=== RUN TestHandleResponses_Sync_Success/response_with_tool_calls
-=== RUN TestHandleResponses_Sync_Success/response_with_multiple_tool_calls
-=== RUN TestHandleResponses_Sync_Success/response_with_only_tool_calls_(no_text)
-=== RUN TestHandleResponses_Sync_Success/response_echoes_request_parameters
---- PASS: TestHandleResponses_Sync_Success (0.00s)
- --- PASS: TestHandleResponses_Sync_Success/simple_text_response (0.00s)
- --- PASS: TestHandleResponses_Sync_Success/response_with_tool_calls (0.00s)
- --- PASS: TestHandleResponses_Sync_Success/response_with_multiple_tool_calls (0.00s)
- --- PASS: TestHandleResponses_Sync_Success/response_with_only_tool_calls_(no_text) (0.00s)
- --- PASS: TestHandleResponses_Sync_Success/response_echoes_request_parameters (0.00s)
-=== RUN TestHandleResponses_Sync_ConversationHistory
-=== RUN TestHandleResponses_Sync_ConversationHistory/without_previous_response_id
-=== RUN TestHandleResponses_Sync_ConversationHistory/with_valid_previous_response_id
-=== RUN TestHandleResponses_Sync_ConversationHistory/with_instructions_prepends_developer_message
-=== RUN TestHandleResponses_Sync_ConversationHistory/nonexistent_conversation_returns_404
-=== RUN TestHandleResponses_Sync_ConversationHistory/conversation_store_error_returns_500
---- PASS: TestHandleResponses_Sync_ConversationHistory (0.00s)
- --- PASS: TestHandleResponses_Sync_ConversationHistory/without_previous_response_id (0.00s)
- --- PASS: TestHandleResponses_Sync_ConversationHistory/with_valid_previous_response_id (0.00s)
- --- PASS: TestHandleResponses_Sync_ConversationHistory/with_instructions_prepends_developer_message (0.00s)
- --- PASS: TestHandleResponses_Sync_ConversationHistory/nonexistent_conversation_returns_404 (0.00s)
- --- PASS: TestHandleResponses_Sync_ConversationHistory/conversation_store_error_returns_500 (0.00s)
-=== RUN TestHandleResponses_Sync_ProviderErrors
-=== RUN TestHandleResponses_Sync_ProviderErrors/provider_returns_error
-=== RUN TestHandleResponses_Sync_ProviderErrors/provider_not_configured
---- PASS: TestHandleResponses_Sync_ProviderErrors (0.00s)
- --- PASS: TestHandleResponses_Sync_ProviderErrors/provider_returns_error (0.00s)
- --- PASS: TestHandleResponses_Sync_ProviderErrors/provider_not_configured (0.00s)
-=== RUN TestHandleResponses_Stream_Success
-=== RUN TestHandleResponses_Stream_Success/simple_text_streaming
-=== RUN TestHandleResponses_Stream_Success/streaming_with_tool_calls
-=== RUN TestHandleResponses_Stream_Success/streaming_with_multiple_tool_calls
---- PASS: TestHandleResponses_Stream_Success (0.00s)
- --- PASS: TestHandleResponses_Stream_Success/simple_text_streaming (0.00s)
- --- PASS: TestHandleResponses_Stream_Success/streaming_with_tool_calls (0.00s)
- --- PASS: TestHandleResponses_Stream_Success/streaming_with_multiple_tool_calls (0.00s)
-=== RUN TestHandleResponses_Stream_Errors
-=== RUN TestHandleResponses_Stream_Errors/stream_error_returns_failed_event
---- PASS: TestHandleResponses_Stream_Errors (0.00s)
- --- PASS: TestHandleResponses_Stream_Errors/stream_error_returns_failed_event (0.00s)
-=== RUN TestResolveProvider
-=== RUN TestResolveProvider/explicit_provider_selection
-=== RUN TestResolveProvider/default_by_model_name
-=== RUN TestResolveProvider/provider_not_found_returns_error
---- PASS: TestResolveProvider (0.00s)
- --- PASS: TestResolveProvider/explicit_provider_selection (0.00s)
- --- PASS: TestResolveProvider/default_by_model_name (0.00s)
- --- PASS: TestResolveProvider/provider_not_found_returns_error (0.00s)
-=== RUN TestGenerateID
-=== RUN TestGenerateID/resp__prefix
-=== RUN TestGenerateID/msg__prefix
-=== RUN TestGenerateID/item__prefix
---- PASS: TestGenerateID (0.00s)
- --- PASS: TestGenerateID/resp__prefix (0.00s)
- --- PASS: TestGenerateID/msg__prefix (0.00s)
- --- PASS: TestGenerateID/item__prefix (0.00s)
-=== RUN TestBuildResponse
-=== RUN TestBuildResponse/minimal_response_structure
-=== RUN TestBuildResponse/response_with_tool_calls
-=== RUN TestBuildResponse/parameter_echoing_with_defaults
-=== RUN TestBuildResponse/parameter_echoing_with_custom_values
-=== RUN TestBuildResponse/usage_included_when_text_present
-=== RUN TestBuildResponse/no_usage_when_no_text
-=== RUN TestBuildResponse/instructions_prepended
-=== RUN TestBuildResponse/previous_response_id_included
---- PASS: TestBuildResponse (0.00s)
- --- PASS: TestBuildResponse/minimal_response_structure (0.00s)
- --- PASS: TestBuildResponse/response_with_tool_calls (0.00s)
- --- PASS: TestBuildResponse/parameter_echoing_with_defaults (0.00s)
- --- PASS: TestBuildResponse/parameter_echoing_with_custom_values (0.00s)
- --- PASS: TestBuildResponse/usage_included_when_text_present (0.00s)
- --- PASS: TestBuildResponse/no_usage_when_no_text (0.00s)
- --- PASS: TestBuildResponse/instructions_prepended (0.00s)
- --- PASS: TestBuildResponse/previous_response_id_included (0.00s)
-=== RUN TestSendSSE
---- PASS: TestSendSSE (0.00s)
-PASS
-coverage: 90.8% of statements
-ok github.com/ajac-zero/latticelm/internal/server 0.018s coverage: 90.8% of statements
-FAIL
diff --git a/test_output_fixed.txt b/test_output_fixed.txt
deleted file mode 100644
index ba67928..0000000
--- a/test_output_fixed.txt
+++ /dev/null
@@ -1,13 +0,0 @@
-? github.com/ajac-zero/latticelm/cmd/gateway [no test files]
-ok github.com/ajac-zero/latticelm/internal/api (cached)
-ok github.com/ajac-zero/latticelm/internal/auth (cached)
-ok github.com/ajac-zero/latticelm/internal/config (cached)
-ok github.com/ajac-zero/latticelm/internal/conversation 0.721s
-? github.com/ajac-zero/latticelm/internal/logger [no test files]
-ok github.com/ajac-zero/latticelm/internal/observability 0.796s
-ok github.com/ajac-zero/latticelm/internal/providers 0.019s
-ok github.com/ajac-zero/latticelm/internal/providers/anthropic (cached)
-ok github.com/ajac-zero/latticelm/internal/providers/google 0.013s
-ok github.com/ajac-zero/latticelm/internal/providers/openai (cached)
-ok github.com/ajac-zero/latticelm/internal/ratelimit (cached)
-ok github.com/ajac-zero/latticelm/internal/server 0.027s
diff --git a/test_security_fixes.sh b/test_security_fixes.sh
deleted file mode 100755
index 1c7322b..0000000
--- a/test_security_fixes.sh
+++ /dev/null
@@ -1,98 +0,0 @@
-#!/bin/bash
-# Test script to verify security fixes are working
-# Usage: ./test_security_fixes.sh [server_url]
-
-SERVER_URL="${1:-http://localhost:8080}"
-GREEN='\033[0;32m'
-RED='\033[0;31m'
-YELLOW='\033[1;33m'
-NC='\033[0m' # No Color
-
-echo "Testing security improvements on $SERVER_URL"
-echo "================================================"
-echo ""
-
-# Test 1: Request size limit
-echo -e "${YELLOW}Test 1: Request Size Limit${NC}"
-echo "Sending a request with 11MB payload (exceeds 10MB limit)..."
-
-# Generate large payload
-LARGE_PAYLOAD=$(python3 -c "import json; print(json.dumps({'model': 'test', 'input': 'x' * 11000000}))" 2>/dev/null || \
- perl -e 'print "{\"model\":\"test\",\"input\":\"" . ("x" x 11000000) . "\"}"')
-
-HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$SERVER_URL/v1/responses" \
- -H "Content-Type: application/json" \
- -d "$LARGE_PAYLOAD" \
- --max-time 5 2>/dev/null)
-
-if [ "$HTTP_CODE" = "413" ]; then
- echo -e "${GREEN}✓ PASS: Received HTTP 413 (Request Entity Too Large)${NC}"
-else
- echo -e "${RED}✗ FAIL: Expected 413, got $HTTP_CODE${NC}"
-fi
-echo ""
-
-# Test 2: Normal request size
-echo -e "${YELLOW}Test 2: Normal Request Size${NC}"
-echo "Sending a small valid request..."
-
-HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X POST "$SERVER_URL/v1/responses" \
- -H "Content-Type: application/json" \
- -d '{"model":"test","input":"hello"}' \
- --max-time 5 2>/dev/null)
-
-# Expected: either 400 (invalid model) or 502 (provider error), but NOT 413
-if [ "$HTTP_CODE" != "413" ]; then
- echo -e "${GREEN}✓ PASS: Request not rejected by size limit (HTTP $HTTP_CODE)${NC}"
-else
- echo -e "${RED}✗ FAIL: Small request incorrectly rejected with 413${NC}"
-fi
-echo ""
-
-# Test 3: Health endpoint
-echo -e "${YELLOW}Test 3: Health Endpoint${NC}"
-echo "Checking /health endpoint..."
-
-RESPONSE=$(curl -s -X GET "$SERVER_URL/health" --max-time 5 2>/dev/null)
-HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X GET "$SERVER_URL/health" --max-time 5 2>/dev/null)
-
-if [ "$HTTP_CODE" = "200" ] && echo "$RESPONSE" | grep -q "healthy"; then
- echo -e "${GREEN}✓ PASS: Health endpoint responding correctly${NC}"
-else
- echo -e "${RED}✗ FAIL: Health endpoint not responding correctly (HTTP $HTTP_CODE)${NC}"
-fi
-echo ""
-
-# Test 4: Ready endpoint
-echo -e "${YELLOW}Test 4: Ready Endpoint${NC}"
-echo "Checking /ready endpoint..."
-
-HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X GET "$SERVER_URL/ready" --max-time 5 2>/dev/null)
-
-if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "503" ]; then
- echo -e "${GREEN}✓ PASS: Ready endpoint responding (HTTP $HTTP_CODE)${NC}"
-else
- echo -e "${RED}✗ FAIL: Ready endpoint not responding correctly (HTTP $HTTP_CODE)${NC}"
-fi
-echo ""
-
-# Test 5: Models endpoint
-echo -e "${YELLOW}Test 5: Models Endpoint${NC}"
-echo "Checking /v1/models endpoint..."
-
-RESPONSE=$(curl -s -X GET "$SERVER_URL/v1/models" --max-time 5 2>/dev/null)
-HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X GET "$SERVER_URL/v1/models" --max-time 5 2>/dev/null)
-
-if [ "$HTTP_CODE" = "200" ] && echo "$RESPONSE" | grep -q "object"; then
- echo -e "${GREEN}✓ PASS: Models endpoint responding correctly${NC}"
-else
- echo -e "${RED}✗ FAIL: Models endpoint not responding correctly (HTTP $HTTP_CODE)${NC}"
-fi
-echo ""
-
-echo "================================================"
-echo -e "${GREEN}Testing complete!${NC}"
-echo ""
-echo "Note: Panic recovery cannot be tested externally without"
-echo "causing intentional server errors. It has been verified"
-echo "through unit tests in middleware_test.go."