# Codex -> GoModel -> OpenAI
## Before you start

- Install Codex on your machine.
- Choose a GoModel master key, for example `change-me`.
- Make sure GoModel has the upstream provider key for the models you want to use.
## 1. Run GoModel

Start GoModel with a master key and an OpenAI provider key:
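A sketch of what this can look like. The binary name, subcommand, and environment variable names below are assumptions for illustration only; check the GoModel documentation for the real invocation:

```shell
# Hypothetical invocation -- "gomodel serve", GOMODEL_MASTER_KEY, and
# OPENAI_API_KEY as shown here are assumed names, not documented flags.
export GOMODEL_MASTER_KEY="change-me"  # the master key clients will send
export OPENAI_API_KEY="sk-..."         # upstream OpenAI provider key
gomodel serve --port 8080
```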
## 2. Confirm the Responses API with curl

Before testing Codex itself, you can optionally verify that GoModel answers a normal Responses API request.

This step is optional: if you are sure you have configured a valid `OPENAI_API_KEY` in GoModel, you can skip it and go straight to step 3.
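A minimal check can look like the following; the model name is a placeholder, and the `Authorization` header carries the GoModel master key chosen earlier:

```shell
# Send a minimal Responses API request through GoModel.
curl http://localhost:8080/v1/responses \
  -H "Authorization: Bearer change-me" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "input": "Say ok."}'
```

A successful call returns `200 OK` with a JSON response object.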
## 3. Configure Codex to use GoModel

Point Codex at GoModel's standard OpenAI-compatible API at http://localhost:8080/v1.
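One way to wire this up is to add a provider entry to `~/.codex/config.toml`, sketched below. The provider id `gomodel` and the env var name `GOMODEL_API_KEY` are arbitrary choices for this example, and the `model_providers` keys reflect the Codex CLI config format at the time of writing:

```shell
# Append a GoModel provider entry to the Codex config.
# "gomodel" and GOMODEL_API_KEY are example names, not Codex defaults.
mkdir -p "$HOME/.codex"
cat >> "$HOME/.codex/config.toml" <<'EOF'
model_provider = "gomodel"

[model_providers.gomodel]
name = "GoModel"
base_url = "http://localhost:8080/v1"
env_key = "GOMODEL_API_KEY"
wire_api = "responses"
EOF
```

Export `GOMODEL_API_KEY` with your GoModel master key before starting Codex. If you already have a `config.toml`, merge these keys into it by hand instead of appending, so the file stays valid TOML.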
## 4. Run a Codex test prompt

Run a short test prompt through Codex to confirm the wiring end to end.
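For a non-interactive smoke test, recent Codex CLI versions provide `codex exec`. The environment variable name below assumes the provider's `env_key` was set to `GOMODEL_API_KEY` (an example name, not a Codex default):

```shell
export GOMODEL_API_KEY="change-me"   # the GoModel master key
codex exec "Reply with the word ok."
```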
## 5. Check the traffic in GoModel

Open the GoModel dashboard audit logs: http://localhost:8080/admin/dashboard/audit

This lets you confirm that Codex is reaching GoModel even before full support is finished. From the same dashboard, you can keep following your GoModel traffic and usage as the integration matures.

## Current status

- The target integration path is correct: the standard OpenAI-compatible base URL http://localhost:8080/v1.
- Codex already uses `POST /v1/responses`, which is what GoModel should support.
- The missing piece on the tested GoModel instance is request decompression for `Content-Encoding: zstd`.
## References

- OpenAI Codex discussion: "Deprecating `chat/completions` support in Codex"
## Validated on March 10, 2026

This guide was validated against:

- a local GoModel instance on http://localhost:8080
- Codex CLI 0.113.0

Observed results:

- `POST /v1/responses` returned `200 OK` with `curl`
- Codex sent requests to `POST /v1/responses`
- the tested GoModel instance does not yet accept Codex's `Content-Encoding: zstd` request bodies
## Known issues

In the March 10, 2026 validation, Codex reached the correct GoModel endpoint, but the request failed because Codex compressed the request body with `zstd`.
The observed error from GoModel was: