
Overview

GOModel is an OpenAI-compatible AI gateway for OpenClaw. It gives you one stable API endpoint while routing requests to OpenAI and other providers, which keeps OpenClaw configuration simple and makes model switching easier.

Flow: OpenClaw -> GOModel -> OpenAI/Anthropic/Gemini/...

These OpenAI model IDs are currently good defaults for OpenClaw:
  • gpt-5-mini: Default choice for cost/performance balance
  • gpt-5.2: Higher quality for complex coding and agentic tasks
  • gpt-5.2-chat-latest: Chat-tuned behavior aligned with ChatGPT
  • gpt-5.2-codex: Advanced coding workflows in Codex-like environments (Responses API oriented)
Verify availability in your GOModel instance first with GET /v1/models; model availability depends on your OpenAI account tier and API surface.

1. Run GOModel

Start GOModel with at least one provider and a master key:
docker run --rm -p 8080:8080 \
  -e GOMODEL_MASTER_KEY="change-me" \
  -e OPENAI_API_KEY="sk-..." \
  enterpilot/gomodel
Confirm your model list:
curl -s http://localhost:8080/v1/models \
  -H "Authorization: Bearer change-me"
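Optionally, send a minimal chat completion through the gateway before touching OpenClaw. This sketch assumes GOModel exposes the standard OpenAI-compatible /v1/chat/completions endpoint:

```shell
# Minimal OpenAI-compatible chat request through the gateway.
# "model" and "messages" are the only required fields; use a model ID
# actually returned by your /v1/models response.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer change-me" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5-mini",
    "messages": [{"role": "user", "content": "Reply with OK"}]
  }'
```

A 200 response containing a `choices` array confirms the gateway can reach the upstream provider end to end.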

2. Add a GOModel provider in OpenClaw

In your OpenClaw config, add a custom provider that uses OpenAI-compatible requests:
{
  "env": {
    "GOMODEL_MASTER_KEY": "change-me"
  },
  "models": {
    "mode": "merge",
    "providers": {
      "gomodel": {
        "baseUrl": "http://localhost:8080/v1",
        "apiKey": "${GOMODEL_MASTER_KEY}",
        "api": "openai-completions",
        "models": [
          {
            "id": "gpt-5-mini",
            "name": "GOModel gpt-5-mini",
            "reasoning": false,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 128000,
            "maxTokens": 8192
          },
          {
            "id": "gpt-5.2",
            "name": "GOModel gpt-5.2",
            "reasoning": true,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "gomodel/gpt-5-mini"
      }
    }
  }
}
Replace model IDs with values returned by GOModel at /v1/models.
gpt-5.2-codex is highly relevant for coding agents, but OpenAI currently positions it as Responses API oriented. If your OpenClaw setup uses openai-completions, prefer gpt-5-mini or gpt-5.2.
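If /v1/models also lists gpt-5.2-chat-latest, it can be exposed the same way by appending another entry to the provider's models array. The contextWindow and maxTokens values below simply mirror the earlier entries; adjust them to your instance's actual limits:

```json
{
  "id": "gpt-5.2-chat-latest",
  "name": "GOModel gpt-5.2-chat-latest",
  "reasoning": false,
  "input": ["text"],
  "cost": {
    "input": 0,
    "output": 0,
    "cacheRead": 0,
    "cacheWrite": 0
  },
  "contextWindow": 128000,
  "maxTokens": 8192
}
```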

3. Validate from OpenClaw

After reloading OpenClaw, send a test prompt with your configured default model. If you get 401 Unauthorized, verify OpenClaw is sending the same value as GOMODEL_MASTER_KEY.
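To check whether a 401 originates from GOModel rather than from OpenClaw, replay the request by hand with the key you configured. If this curl returns 200 while OpenClaw fails, the mismatch is in OpenClaw's env block:

```shell
# Send the same Authorization header OpenClaw would send and print only
# the HTTP status code: 200 means the master key is valid, 401 means
# the key itself is wrong at the gateway.
curl -s -o /dev/null -w "%{http_code}\n" \
  http://localhost:8080/v1/models \
  -H "Authorization: Bearer change-me"
```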

Notes

  • If GOModel is running in Docker and OpenClaw is not, http://localhost:8080 is usually correct.
  • If both run in different containers, use a shared Docker network and container hostname instead of localhost.
  • You can expose multiple GOModel-backed models by adding more entries under models.providers.gomodel.models.
  • GOModel as an AI gateway also gives you centralized auth, audit logs, usage analytics, and provider abstraction with one OpenAI-compatible endpoint.
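For the two-container case above, a minimal sketch looks like this (the network name llm-net and container name gomodel are illustrative):

```shell
# Create a shared user-defined network and start GOModel on it.
# OpenClaw's container, attached to the same network, can then reach
# the gateway at http://gomodel:8080 instead of localhost.
docker network create llm-net
docker run -d --name gomodel --network llm-net \
  -e GOMODEL_MASTER_KEY="change-me" \
  -e OPENAI_API_KEY="sk-..." \
  enterpilot/gomodel
```

With this setup, set "baseUrl" to "http://gomodel:8080/v1" in the OpenClaw provider config.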