Emil Wu

OpenAI Ships Codex Into Claude Code — When Your Competitor Becomes Your Plugin


Yesterday, while browsing GitHub Trending, I spotted a repo name that made me pause: openai/codex-plugin-cc. The CC stands for Claude Code. My first thought was “this must be community-made” — but I clicked through, and all three contributors are OpenAI engineers. Apache-2.0 license. 4,300 stars in one day.

OpenAI built an official plugin that lets you use Codex directly inside Claude Code.

Yes, you read that right. Not built by Anthropic — built by OpenAI themselves. They proactively shipped their AI coding agent into a competitor’s ecosystem.

What Does It Do?

After installation, your Claude Code gains several new slash commands:

| Command | Function |
| --- | --- |
| `/codex:review` | Standard code review by Codex (read-only) |
| `/codex:adversarial-review` | Adversarial review — challenges design decisions, tests assumptions, finds hidden risks |
| `/codex:rescue` | Delegate debugging, regression fixes, and other tasks to Codex |
| `/codex:status` | Check running/recent Codex tasks |

Technically, the plugin calls your locally installed codex CLI binary — it’s not a separate runtime. It shares your Codex authentication and config (~/.codex/config.toml), so the barrier to entry is low: just a ChatGPT subscription or OpenAI API key.
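To make the mechanism concrete, here is a minimal Python sketch of this delegation pattern: shell out to the locally installed `codex` binary, which brings its own auth and config along. The exact subcommand and flags the plugin uses aren't documented here, so treat `exec` and `--sandbox read-only` as assumptions about the CLI surface, not confirmed details.

```python
import shutil
import subprocess

def build_review_cmd(prompt: str, read_only: bool = True) -> list[str]:
    """Build a `codex` CLI invocation.

    Assumption: a non-interactive `exec` subcommand and a
    `--sandbox read-only` flag exist; adjust for your CLI version.
    """
    cmd = ["codex", "exec", prompt]
    if read_only:
        cmd += ["--sandbox", "read-only"]
    return cmd

def run_review(prompt: str) -> str:
    # The plugin is a thin shim, not a separate runtime: it just invokes
    # the local binary, which reads ~/.codex/config.toml for auth/config.
    if shutil.which("codex") is None:
        raise RuntimeError("codex CLI not found on PATH; install and sign in first")
    result = subprocess.run(
        build_review_cmd(prompt), capture_output=True, text=True, check=True
    )
    return result.stdout
```

The point of the sketch is the architecture, not the flags: because the plugin delegates to your existing binary, there is no second login and no second billing setup.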

Why Would OpenAI Do This?

The answer is in the numbers.

Claude Code hit $1 billion in annualized revenue in November 2025 — just six months after launch. By February 2026, it had more than doubled to $2.5 billion. It leads Codex in VS Code Marketplace installs and ratings, and accounts for roughly 4% of public GitHub commits. OpenAI’s CEO of Applications, Fidji Simo, publicly called Claude Code and Cowork a “wake-up call.”

So OpenAI’s strategy is clear: if you can’t beat the rival Runtime, get inside it.

SmartScope’s analysis puts it bluntly: “Every review triggered through the plugin runs on OpenAI’s infrastructure and generates billing. Zero user acquisition cost, incremental billing.” OpenAI doesn’t need developers to leave Claude Code — just generating usage on the rival’s platform is enough.

Through Our Framework

If you’ve read Article 4: Skill Ecosystem in this series, you’ll remember our argument: future AI competition isn’t just about Models — it’s about Skill Ecosystems. Whichever platform has the most high-quality Skills wins.

This is now happening, but in a more interesting way than we imagined.

The five-layer architecture from Article 2 is: Command → Agent → [Tool + Skill] → Context. In this framework, the Model is just a reasoning engine — Article 1 established that Model ≠ Runtime. What the Codex plugin does is essentially mount a Tool that calls OpenAI’s Model inside Claude Code’s Runtime.

The implication: a single Runtime is no longer bound to a single Model.

You can have Claude write code in Claude Code, then call Codex to review it. Or the reverse — let Codex draft, Claude refine. Two Models collaborating in the same Runtime, sharing the same codebase, no tool-switching, no copy-pasting, no maintaining two separate contexts.
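The framework above can be sketched in a few lines of Python. This is a conceptual illustration, not the plugin's implementation: the `Runtime` class and the stub models are hypothetical names, standing in for Claude Code and for Codex mounted as a Tool.

```python
from dataclasses import dataclass
from typing import Callable

# Per the Model ≠ Runtime framing: a Model is just a callable reasoning
# engine, and the Runtime mounts any number of them as Tools.
Model = Callable[[str], str]

@dataclass
class Runtime:
    draft_model: Model   # e.g. Claude, native to the Runtime
    review_model: Model  # e.g. Codex, mounted via a plugin Tool

    def draft_then_review(self, task: str) -> dict:
        # Both models work in one shared context: no tool-switching,
        # no copy-pasting between two separate sessions.
        code = self.draft_model(task)
        review = self.review_model(f"Adversarially review:\n{code}")
        return {"code": code, "review": review}

# Stub models stand in for real API/CLI calls.
claude = lambda p: f"[claude drafts] {p}"
codex = lambda p: f"[codex reviews] {p}"

rt = Runtime(draft_model=claude, review_model=codex)
out = rt.draft_then_review("implement binary search")
```

Swapping `draft_model` and `review_model` gives the reverse workflow (Codex drafts, Claude refines) without touching the Runtime itself — which is exactly what “a single Runtime is no longer bound to a single Model” means in practice.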

The Community Was Already Ahead

Before OpenAI’s official move, the community was already building multi-model orchestration:

  • claude-octopus — 8 providers (Codex, Gemini, Claude, Perplexity, Qwen, Ollama, etc.), 47 commands
  • claude-codex — Sequential review pipeline with Codex as the final quality gate
  • myclaude — Claude Code + Codex + Gemini multi-agent orchestration

Zed editor also natively supports Claude Code, Gemini CLI, and Codex as external agents. Multi-model isn’t experimental anymore — it’s becoming the default.

My Take

Perhaps the most important signal here isn’t “OpenAI made a plugin” — it’s what it implies about the industry direction: Models and Runtimes are decoupling.

Think about it: if OpenAI is willing to ship Codex into Claude Code, Google shipping Gemini in is just a matter of time (the community is already doing it). When every Runtime can call any Model, choosing a Runtime is no longer choosing a Model — it’s choosing an ecosystem: who has the most Skills, the richest plugins, the best workflow integration.

This is exactly the landscape Article 4 predicted, just arriving faster than expected.

For developers, the practical short-term advice: if you’re already using Claude Code, try /codex:adversarial-review — having different Models review each other’s code is probably the lowest-cost entry point to multi-model workflows. Long-term, investing deeply in the Runtime with the most complete ecosystem matters more than chasing the latest Model.



Support This Series

If these articles have been helpful, consider buying me a coffee.