## Overview
GOModel is an OpenAI-compatible AI gateway for OpenClaw. It gives you one stable API endpoint while routing requests to OpenAI and other providers, which keeps your OpenClaw configuration simple and makes switching models easier.

Flow: `OpenClaw -> GOModel -> OpenAI / Anthropic / Gemini / ...`
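Because the gateway is OpenAI-compatible, a client only ever targets the gateway's base URL; switching models (or the provider behind them) is then a one-string change. A minimal sketch of what such a request looks like (the base URL and key below are placeholder assumptions, not defaults GOModel documents):

```python
import json

GATEWAY_BASE = "http://localhost:8080/v1"  # assumed GOModel address

def chat_request(model: str, prompt: str, api_key: str):
    """Build an OpenAI-style chat completion request for the gateway.

    Returns (url, headers, body) without sending anything, to show that
    only the `model` string changes when you switch models/providers.
    """
    url = f"{GATEWAY_BASE}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body

url, headers, body = chat_request("gpt-5-mini", "hello", "sk-master-1")
print(url)  # http://localhost:8080/v1/chat/completions
```

The same `chat_request` works unchanged for any model ID the gateway exposes.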
## Recommended OpenAI Models (as of February 28, 2026)
These OpenAI model IDs are currently good defaults for OpenClaw:

| Model ID | Best for |
|---|---|
| `gpt-5-mini` | Default choice for cost/performance balance |
| `gpt-5.2` | Higher quality for complex coding and agentic tasks |
| `gpt-5.2-chat-latest` | Chat-tuned behavior aligned with ChatGPT |
| `gpt-5.2-codex` | Advanced coding workflows in Codex-like environments (Responses API oriented) |
Verify availability in your GOModel instance first with `GET /v1/models`; model availability depends on your OpenAI account tier and API surface.

- OpenAI GPT-5.2 release: OpenAI Changelog
- OpenAI model docs: Models Overview
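`GET /v1/models` returns the OpenAI-style list shape. A short sketch of pulling the usable model IDs out of such a response (the sample payload below is illustrative, not output from a real instance):

```python
import json

# Illustrative response in the OpenAI-compatible /v1/models list format.
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "gpt-5-mini", "object": "model", "owned_by": "openai"},
    {"id": "gpt-5.2", "object": "model", "owned_by": "openai"}
  ]
}
""")

def model_ids(listing: dict) -> list[str]:
    """Extract model IDs from an OpenAI-compatible model listing."""
    return [m["id"] for m in listing.get("data", [])]

print(model_ids(sample))  # ['gpt-5-mini', 'gpt-5.2']
```

Any ID you configure in OpenClaw should appear in this list first.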
## 1. Run GOModel
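One common way to launch a gateway like this is via Docker. The image name, port, and flags below are illustrative assumptions, not GOModel's documented CLI; `GOMODEL_MASTER_KEY` is the master key your clients must present:

```shell
# Hypothetical launch; image name and port are assumptions for illustration.
docker run --rm -p 8080:8080 \
  -e GOMODEL_MASTER_KEY=sk-master-change-me \
  -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  gomodel/gomodel:latest
```

Adapt the image name and environment variables to however your GOModel build is packaged.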
Start GOModel with at least one provider configured and a master key set.

## 2. Add a GOModel provider in OpenClaw
In your OpenClaw config, add a custom provider that talks to GOModel using OpenAI-compatible requests, with the base URL pointing at the same server that answers `GET /v1/models`.
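A sketch of what that provider entry might look like, following the `models.providers.gomodel.models` path mentioned in the notes below; the exact key names depend on your OpenClaw version and are otherwise assumptions:

```json
{
  "models": {
    "providers": {
      "gomodel": {
        "baseUrl": "http://localhost:8080/v1",
        "apiKey": "sk-master-change-me",
        "models": [
          { "id": "gpt-5-mini" },
          { "id": "gpt-5.2" }
        ]
      }
    }
  }
}
```

The `apiKey` here must match the `GOMODEL_MASTER_KEY` value the gateway was started with.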
## 3. Validate from OpenClaw
After reloading OpenClaw, send a test prompt with your configured default model. If you get `401 Unauthorized`, verify that OpenClaw is sending the same key value as `GOMODEL_MASTER_KEY`.
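A `401` here usually means a bearer-token mismatch. A minimal sketch of the check a gateway effectively performs on each request (function name and header handling are illustrative, not GOModel internals):

```python
import hmac

def auth_ok(headers: dict, master_key: str) -> bool:
    """Return True if the request's bearer token matches the master key."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the key via timing.
    return hmac.compare_digest(token, master_key)

print(auth_ok({"Authorization": "Bearer sk-master-1"}, "sk-master-1"))  # True
print(auth_ok({"Authorization": "Bearer wrong-key"}, "sk-master-1"))    # False
```

So when debugging a `401`, check both the key value and that the header is actually in `Bearer <key>` form.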
## Notes

- If GOModel is running in Docker and OpenClaw is not, `http://localhost:8080` is usually correct.
- If both run in different containers, use a shared Docker network and the container hostname instead of `localhost`.
- You can expose multiple GOModel-backed models by adding more entries under `models.providers.gomodel.models`.
- As an AI gateway, GOModel also gives you centralized auth, audit logs, usage analytics, and provider abstraction behind one OpenAI-compatible endpoint.
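For the two-container case, a hypothetical Compose sketch shows the idea: services on the same Compose network resolve each other by service name, so OpenClaw would reach the gateway at `http://gomodel:8080` rather than `localhost`. Image names and env vars here are assumptions:

```yaml
# Hypothetical docker-compose sketch; both services share the default
# Compose network, so "gomodel" resolves as a hostname inside it.
services:
  gomodel:
    image: gomodel/gomodel:latest   # image name is an assumption
    environment:
      - GOMODEL_MASTER_KEY=sk-master-change-me
      - OPENAI_API_KEY=${OPENAI_API_KEY}
    ports:
      - "8080:8080"
  openclaw:
    image: openclaw/openclaw:latest # image name is an assumption
    depends_on:
      - gomodel
```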