Your existing ChatGPT plan now powers a smarter OpenClaw agent. No double billing. No extra setup.
What Just Changed
On Wednesday, May 14, 2026, OpenClaw announced a major architectural shift: all OpenAI model turns inside OpenClaw now run through the native Codex app-server runtime by default.

Previously, OpenClaw ran the OpenAI model loop itself, translating between its own agent system and OpenAI's infrastructure. That translation layer is gone.
The split is clean. Codex now owns the low-level AI loop – thread state, tool continuation, compaction, code execution, and dynamic tool search.
OpenClaw keeps everything else: your channels, persona, memory, scheduled jobs, media rules, browser tools, and messaging gateway.
In plain terms? OpenAI’s models inside OpenClaw should now behave much closer to how they behave inside OpenAI’s own products.
Your ChatGPT Subscription Does the Heavy Lifting
Here’s the part that matters for your wallet.
If you already pay for ChatGPT, you don’t need a separate API key to run OpenAI models in OpenClaw. Just sign in with your existing subscription:
openclaw models auth login --provider openai
That links your ChatGPT or Codex subscription to your OpenClaw agent.
OpenAI explicitly supports this OAuth usage in external tools like OpenClaw.
And in early May, Sam Altman confirmed that OpenClaw is “flat available under ChatGPT paid plans.”
If you still want a direct API key as backup, that option remains available too.
Why the Runtime Split Matters
This isn’t just a behind-the-scenes plumbing change. It fixes several real pain points that OpenClaw users have dealt with.
No More Duplicate Tools
Before this update, OpenClaw had to stuff every tool schema into the initial prompt. That made requests bloated and noisy. The model saw too much and sometimes picked the wrong tool.
Codex solves this with dynamic tool loading. OpenClaw’s tools now live in a searchable namespace. The model discovers the right tool on demand instead of seeing everything at once. The initial context stays smaller. Accuracy goes up.
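The shift can be pictured in a few lines of Python. This is an illustrative sketch only; the `ToolRegistry` class and its `search` method are invented names, not OpenClaw's or Codex's actual API. The point is the shape of the change: instead of serializing every schema into the prompt, the model queries a namespace and only matching schemas enter context.

```python
# Conceptual sketch of dynamic tool loading. Class and method names are
# hypothetical stand-ins, not OpenClaw's real interface.

class ToolRegistry:
    """A searchable namespace of tools, queried on demand."""

    def __init__(self):
        self._tools = {}  # name -> (description, schema)

    def register(self, name, description, schema):
        self._tools[name] = (description, schema)

    def all_schemas(self):
        """Old approach: hand the model every schema up front."""
        return {name: schema for name, (_, schema) in self._tools.items()}

    def search(self, query):
        """New approach: return only schemas matching the model's query."""
        q = query.lower()
        return {
            name: schema
            for name, (desc, schema) in self._tools.items()
            if q in name.lower() or q in desc.lower()
        }


registry = ToolRegistry()
registry.register("send_message", "Send a chat message", {"type": "object"})
registry.register("browser_open", "Open a page in the browser tool", {"type": "object"})
registry.register("schedule_job", "Create a scheduled job", {"type": "object"})

all_up_front = registry.all_schemas()      # bloated: every schema, every turn
found = registry.search("browser")         # focused: one relevant schema
print(sorted(found))
```

With three tools the difference is cosmetic; with dozens of tools across channels, browser, and scheduling, the up-front dump is what bloated the initial context.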
Replies Become Intentional
In most agent systems, whatever text the model emits at the end of a turn becomes the visible message by default. That's fine in a simple chat window. It's messy when your agent replies across Telegram groups, Discord threads, scheduled tasks, and DMs.
Now, Codex-backed turns use OpenClaw’s message tool for any visible reply. Internal reasoning stays private. Quiet turns stay quiet. The agent only speaks when it has something worth saying.
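A minimal sketch of that pattern, with invented tool-call shapes (this is not OpenClaw's real wire format): only text the model explicitly routes through a `message` tool becomes visible, and a turn with no such call produces no output at all.

```python
# Sketch of explicit-reply routing. The dict shapes and tool names here
# are hypothetical, for illustration only.

def visible_replies(tool_calls):
    """Collect only text the model deliberately routed through the message tool."""
    return [
        call["args"]["text"]
        for call in tool_calls
        if call["tool"] == "message"
    ]


# A turn that runs a tool and deliberately replies:
chatty_turn = [
    {"tool": "browser_open", "args": {"url": "https://example.com"}},
    {"tool": "message", "args": {"text": "Here is the summary you asked for."}},
]

# A background turn (say, a scheduled check) with nothing worth saying:
quiet_turn = [
    {"tool": "schedule_job", "args": {"cron": "0 9 * * *"}},
]

print(visible_replies(chatty_turn))  # one intentional reply
print(visible_replies(quiet_turn))   # empty: the agent stays silent
```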
Agent State Stays Isolated
Each OpenClaw agent gets its own Codex home, thread state, and account bridge. Your personal Codex CLI setup won’t leak into your OpenClaw agent, and vice versa. For anyone running multiple agents or sharing a machine, that separation matters.
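One way to picture that isolation, with an invented directory layout (not OpenClaw's actual one): each agent ID maps to its own state root, so two agents on the same machine, or your personal CLI setup, never share thread state or credentials.

```python
# Illustrative only: the path scheme below is hypothetical.
from pathlib import Path

def codex_home(agent_id: str, root: str = "~/.openclaw") -> Path:
    """Derive an isolated per-agent home for thread state and the account bridge."""
    return Path(root).expanduser() / "agents" / agent_id / "codex"

# Two agents on one machine get disjoint state directories:
work = codex_home("work-bot")
personal = codex_home("home-bot")
print(work != personal)
```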
A Pattern That Started in April
This announcement is the culmination of work that’s been building for weeks. OpenClaw’s 2026.4.10 release on April 11 first added Codex as a native provider with its own authentication and thread management.
The 2026.5.2 release on May 3 deepened the integration with /goal commands for long autonomous tasks powered by Codex.
Wednesday’s update makes Codex the default — not an option you have to configure.
The Steinberger Connection
There’s a backstory here that adds context. OpenClaw’s creator, Peter Steinberger, joined OpenAI in February 2026, and the project itself moved to an independent open-source foundation with OpenAI as a sponsor.
When the architect of the most popular open-source agent framework now works at the company providing its default runtime, the integration is unlikely to be shallow.
That relationship gives OpenAI users a structural advantage inside OpenClaw — at least for now.
What About Other Models?
OpenClaw isn’t abandoning its multi-model identity.
The platform still supports Anthropic, Google, DeepSeek, MiniMax, Kimi, OpenRouter, Ollama, and local models. You can swap providers at runtime without rebuilding anything.
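Runtime provider swapping can be sketched like this. The provider names match the article, but the `ModelRouter` registry and its API are invented for illustration; the real mechanism lives inside OpenClaw.

```python
# Conceptual sketch of swapping model providers at runtime without a rebuild.
# The ModelRouter class is hypothetical.

class ModelRouter:
    def __init__(self, default: str):
        self._clients = {}
        self._active = default

    def register(self, provider: str, client):
        self._clients[provider] = client

    def use(self, provider: str):
        """Switch the active provider on the fly."""
        if provider not in self._clients:
            raise KeyError(f"unknown provider: {provider}")
        self._active = provider

    def complete(self, prompt: str) -> str:
        return self._clients[self._active](prompt)


router = ModelRouter(default="openai")
router.register("openai", lambda p: f"[codex] {p}")
router.register("anthropic", lambda p: f"[claude] {p}")
router.register("ollama", lambda p: f"[local] {p}")

print(router.complete("hi"))  # served by the default provider
router.use("ollama")
print(router.complete("hi"))  # same agent, now on a local model
```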
But the team is honest about where this is headed.
The lessons from Codex – dynamic tool catalogs, structured quiet outcomes, better prompt scoping – are flowing back into OpenClaw’s default harness. Eventually, every supported model should benefit from the patterns Codex helped prove.
For now though, OpenAI models get the most polished experience.
The Competitive Context
This update lands as OpenClaw faces growing pressure from Hermes Agent, which recently overtook OpenClaw on OpenRouter’s daily rankings.
Meanwhile, Anthropic removed OpenClaw from standard Claude subscriptions in April, shifting to pay-as-you-go API access.
That move pushed some users toward OpenAI’s subscription-backed route: exactly the path this update makes seamless.
Whether this is enough to reclaim momentum from Hermes remains to be seen. But for the millions of ChatGPT subscribers already running OpenClaw, Wednesday’s update makes one thing clear: OpenAI models just became the path of least resistance.