
Good defaults

GOModel follows a good-defaults philosophy: the default settings are enough to run it as is, with no configuration required.

How do I override the default settings?

GOModel uses a three-layer configuration pipeline: environment variables, an optional `.env` file, and an optional YAML file. Every setting has a sensible default, so you can start the server with zero configuration. Because GOModel works out of the box with no configuration files, you can try it in a minute. Start here: Quick Start.
GOModel also automatically discovers providers from well-known environment variables.
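For example, a zero-config start needs nothing beyond a provider key. This is a sketch; the binary name `gomodel` is an assumption, so adjust it to your install:

```shell
export OPENAI_API_KEY="sk-..."   # provider is auto-discovered from this variable
./gomodel                        # serves on the default port 8080
```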

Configuration Methods

1. Environment Variables

The most common way to configure GOModel. Set any of the variables below to override defaults.

Server

| Variable | Description | Default |
| --- | --- | --- |
| `PORT` | HTTP server port | `8080` |
| `GOMODEL_MASTER_KEY` | Authentication key for securing the gateway | (empty, unsafe mode) |
| `BODY_SIZE_LIMIT` | Max request body size (e.g., `10M`, `1024K`, `500KB`) | (no limit) |
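For instance, a hardened deployment would set a master key and cap request bodies. The `Authorization: Bearer` header shown for the client is an assumption based on the usual OpenAI-compatible convention; check the authentication docs for the exact header GOModel expects:

```shell
export GOMODEL_MASTER_KEY="change-me"   # without this, the gateway runs in unsafe mode
export BODY_SIZE_LIMIT=10M              # reject request bodies larger than 10 megabytes
# Clients then authenticate with the key (header name assumed):
curl -H "Authorization: Bearer change-me" http://localhost:8080/v1/models
```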

Cache

| Variable | Description | Default |
| --- | --- | --- |
| `CACHE_TYPE` | Cache backend: `local` or `redis` | `local` |
| `GOMODEL_CACHE_DIR` | Directory for local cache files | `.cache` |
| `REDIS_URL` | Redis connection URL | (empty) |
| `REDIS_KEY` | Redis key for model cache | `gomodel:models` |
| `REDIS_TTL` | Cache TTL in seconds | `86400` (24h) |
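To share the model cache between several instances, switch the backend to Redis. A sketch, with a placeholder Redis URL:

```shell
export CACHE_TYPE=redis
export REDIS_URL="redis://localhost:6379"
export REDIS_TTL=3600   # refresh the cached model list hourly instead of daily
```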

Storage

Storage is shared by audit logging, usage tracking, and future features like IAM.
| Variable | Description | Default |
| --- | --- | --- |
| `STORAGE_TYPE` | Backend: `sqlite`, `postgresql`, or `mongodb` | `sqlite` |
| `SQLITE_PATH` | SQLite database file path | `data/gomodel.db` |
| `POSTGRES_URL` | PostgreSQL connection string | (empty) |
| `POSTGRES_MAX_CONNS` | PostgreSQL connection pool size | `10` |
| `MONGODB_URL` | MongoDB connection string | (empty) |
| `MONGODB_DATABASE` | MongoDB database name | `gomodel` |
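A sketch of switching storage to PostgreSQL; the connection string is a placeholder for your own database:

```shell
export STORAGE_TYPE=postgresql
export POSTGRES_URL="postgres://gomodel:secret@localhost:5432/gomodel?sslmode=disable"
export POSTGRES_MAX_CONNS=25    # raise the pool size for heavier traffic
```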

Audit Logging

| Variable | Description | Default |
| --- | --- | --- |
| `LOGGING_ENABLED` | Enable audit logging | `false` |
| `LOGGING_LOG_BODIES` | Log request/response bodies | `true` |
| `LOGGING_LOG_HEADERS` | Log headers (sensitive ones auto-redacted) | `true` |
| `LOGGING_ONLY_MODEL_INTERACTIONS` | Only log AI model endpoints | `true` |
| `LOGGING_BUFFER_SIZE` | In-memory buffer before flush | `1000` |
| `LOGGING_FLUSH_INTERVAL` | Flush interval in seconds | `5` |
| `LOGGING_RETENTION_DAYS` | Auto-delete after N days (`0` = forever) | `30` |
When LOGGING_LOG_BODIES is enabled, request and response bodies are stored in full. These may contain sensitive data such as PII or API keys embedded in prompts.
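Given that warning, a privacy-conscious setup might enable auditing while keeping bodies out of storage. One possible combination:

```shell
export LOGGING_ENABLED=true
export LOGGING_LOG_BODIES=false   # keep prompts and completions out of the audit log
export LOGGING_RETENTION_DAYS=7   # purge audit records after a week
```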

Token Usage Tracking

| Variable | Description | Default |
| --- | --- | --- |
| `USAGE_ENABLED` | Enable token usage tracking | `true` |
| `ENFORCE_RETURNING_USAGE_DATA` | Auto-add `include_usage` to streaming requests | `true` |
| `USAGE_BUFFER_SIZE` | In-memory buffer before flush | `1000` |
| `USAGE_FLUSH_INTERVAL` | Flush interval in seconds | `5` |
| `USAGE_RETENTION_DAYS` | Auto-delete after N days (`0` = forever) | `90` |
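The buffer and flush settings trade freshness for write volume: records accumulate in memory until the buffer fills or the interval elapses. One possible tuning, as a sketch:

```shell
export USAGE_FLUSH_INTERVAL=1     # flush every second for near-real-time reporting
export USAGE_RETENTION_DAYS=365   # keep a full year of usage records
```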

Metrics

| Variable | Description | Default |
| --- | --- | --- |
| `METRICS_ENABLED` | Enable Prometheus metrics | `false` |
| `METRICS_ENDPOINT` | HTTP path for metrics | `/metrics` |
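Once `METRICS_ENABLED=true`, any Prometheus server can scrape the endpoint. A minimal scrape job, assuming GOModel runs on its default port:

```yaml
scrape_configs:
  - job_name: gomodel
    metrics_path: /metrics        # matches the METRICS_ENDPOINT default
    static_configs:
      - targets: ["localhost:8080"]
```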

Admin

| Variable | Description | Default |
| --- | --- | --- |
| `ADMIN_ENDPOINTS_ENABLED` | Enable the admin REST API | `true` |
| `ADMIN_UI_ENABLED` | Enable the admin dashboard UI | `true` |
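Both default to on; for an internet-facing gateway you may prefer to turn them off:

```shell
export ADMIN_ENDPOINTS_ENABLED=false
export ADMIN_UI_ENABLED=false
```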

HTTP Client

These control timeouts for upstream API requests to LLM providers.
| Variable | Description | Default |
| --- | --- | --- |
| `HTTP_TIMEOUT` | Overall request timeout in seconds | `600` (10 min) |
| `HTTP_RESPONSE_HEADER_TIMEOUT` | Time to wait for response headers in seconds | `600` (10 min) |
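For example, long streaming completions may need a generous overall timeout while still failing fast when a provider never responds. A sketch:

```shell
export HTTP_TIMEOUT=1800                  # allow streams to run up to 30 minutes
export HTTP_RESPONSE_HEADER_TIMEOUT=120   # but give up if no headers arrive within 2 minutes
```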

Provider API Keys

Set these to automatically register providers. No YAML configuration required.
| Variable | Provider |
| --- | --- |
| `OPENAI_API_KEY` | OpenAI |
| `ANTHROPIC_API_KEY` | Anthropic |
| `GEMINI_API_KEY` | Google Gemini |
| `XAI_API_KEY` | xAI (Grok) |
| `GROQ_API_KEY` | Groq |
| `OLLAMA_BASE_URL` | Ollama (no API key needed) |
You can also set a custom base URL for any provider using `<PROVIDER>_BASE_URL` (e.g., `OPENAI_BASE_URL`).
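For example, to route Groq traffic through a proxy, following that naming pattern (the proxy URL is a placeholder):

```shell
export GROQ_API_KEY="gsk_..."
export GROQ_BASE_URL="https://my-proxy.example.com/openai/v1"
```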

2. .env File

GOModel automatically loads a `.env` file from the working directory at startup. This is convenient for local development.

```shell
# .env
PORT=3000
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

Copy `.env.template` to `.env` and uncomment the values you need:

```shell
cp .env.template .env
```
Real environment variables always override values from the .env file. The .env file is only loaded if it exists — missing it is not an error.
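To illustrate the precedence, assuming a binary named `gomodel`:

```shell
# .env contains PORT=3000, but a real environment variable wins:
PORT=9090 ./gomodel   # the server listens on 9090
```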

3. Configuration File (YAML)

For more complex setups, you can use an optional YAML configuration file. GOModel looks for it in two locations (in order):
  1. `config/config.yaml`
  2. `config.yaml`
To get started, copy the example:

```shell
cp config/config.example.yaml config/config.yaml
```
Then uncomment and edit the settings you want to change:
```yaml
server:
  port: "3000"
  master_key: "my-secret-key"

cache:
  type: redis
  redis:
    url: "redis://my-redis:6379"

providers:
  openai:
    type: openai
    api_key: "sk-..."

  anthropic:
    type: anthropic
    api_key: "sk-ant-..."

  # Custom OpenAI-compatible provider
  my-custom-llm:
    type: openai
    base_url: "https://api.example.com/v1"
    api_key: "..."
```
The YAML file supports environment variable expansion using `${VAR}` and `${VAR:-default}` syntax:
```yaml
server:
  port: "${PORT:-8080}"

providers:
  openai:
    type: openai
    api_key: "${OPENAI_API_KEY}"
```
The YAML file is entirely optional. Any setting you can put in YAML can also be set via environment variables. Use YAML when you need to configure custom providers or prefer a structured config file.
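The `${VAR:-default}` form presumably mirrors POSIX shell parameter expansion, where the default applies when the variable is unset or empty. You can check that behavior in any POSIX shell:

```shell
unset PORT
echo "port=${PORT:-8080}"   # variable unset: prints port=8080
PORT=3000
echo "port=${PORT:-8080}"   # variable set: prints port=3000
```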

Provider Configuration

Auto-Discovery from Environment Variables

The simplest way to add providers is auto-discovery: GOModel checks for well-known API key environment variables and automatically registers the corresponding providers:

```shell
export OPENAI_API_KEY="sk-..."        # registers the "openai" provider
export ANTHROPIC_API_KEY="sk-ant-..." # registers the "anthropic" provider
export GEMINI_API_KEY="..."           # registers the "gemini" provider
```

YAML Provider Blocks

For more control (custom base URLs, model restrictions, or custom provider names), use the YAML file:
```yaml
providers:
  # Override OpenAI base URL
  openai:
    type: openai
    api_key: "sk-..."
    base_url: "https://my-proxy.example.com/v1"

  # Add a second OpenAI-compatible endpoint
  azure-openai:
    type: openai
    base_url: "https://my-resource.openai.azure.com/openai/deployments/gpt-4"
    api_key: "..."

  # Restrict to specific models
  gemini:
    type: gemini
    api_key: "..."
    models:
      - gemini-2.0-flash
      - gemini-1.5-pro
```

Ollama (Local Models)

Ollama does not require an API key. Set the base URL to enable it:
```shell
export OLLAMA_BASE_URL="http://localhost:11434/v1"
```
Or in YAML:
```yaml
providers:
  ollama:
    type: ollama
    base_url: "http://localhost:11434/v1"
```
Providers with missing or unresolved API keys are automatically filtered out at startup. Ollama is the only exception — it only requires a base URL.
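For example, a provider whose key fails to resolve is simply filtered out. A sketch of the pattern:

```yaml
providers:
  openai:
    type: openai
    api_key: "${OPENAI_API_KEY}"   # if unset at startup, this provider is filtered out
```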