How to Install Codex CLI: The Complete Official Setup Guide
TL;DR: Codex CLI is one npm install -g @openai/codex away, but the install command is the easy part. The real surface area is ~/.codex/config.toml — model selection, three approval modes, three sandbox levels, MCP servers, and a per-project AGENTS.md that quietly shapes every prompt. This guide walks the full setup: choosing between npm, Homebrew, and the binary release; ChatGPT versus API key auth; the config file most quickstarts skip; and the routing trick to point Codex at OfoxAI or any other OpenAI-compatible provider when you do not want to pay OpenAI directly.
What Codex CLI Is in May 2026
Codex CLI is OpenAI’s terminal-native coding agent. You type natural-language instructions, it reads your repo, plans, and edits files in place — like Claude Code, but built around the GPT-5.3-Codex model family. The recent rewrite moved the core from a Node app to a Rust binary distributed through an npm wrapper, which dropped cold-start time to roughly a second and pushed configuration out into a real TOML file.
Three things matter for new installs:
- Default model is GPT-5.3-Codex, a coding-tuned variant of GPT-5.3 that OfoxAI lists alongside the GPT-5.4 family in its model catalog. For non-coding work — planning, doc rewrites, prompt tuning — switch via `/model` to GPT-5.4 or GPT-5.4 Pro.
- Sandbox is on by default. Codex 0.12x runs in `workspace-write` mode out of the box, which means it can edit files inside your repo but cannot write to your home directory or run network-egress shells without approval. This is good. Do not turn it off because a tutorial told you to.
- AGENTS.md is the leverage point. A short, specific AGENTS.md in your repo root changes the model’s behavior more than any `--model` flag.
For a side-by-side with the other terminal agents — Claude Code, Cursor, Cline — see our coding tools comparison.
System Requirements
| Requirement | Minimum |
|---|---|
| Node.js | 22 LTS (24 recommended) |
| OS | macOS 13+, Linux (glibc 2.31+ or musl), Windows via WSL2 |
| Disk | ~80 MB for the binary plus node_modules wrapper |
| Auth | ChatGPT account (Plus/Pro/Business/Edu/Enterprise) or an OpenAI-compatible API key |
Windows native is not officially supported as of this writing. Use WSL2 with an Ubuntu or Debian distro — most CI examples in the docs assume a POSIX shell.
Check your Node version before installing:
```shell
node --version
# v22.x.x or higher
```
If it is older, install via nvm or your distro’s package manager. The wrapper will refuse to install on Node 18 or below.
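If you need the upgrade, nvm is the usual route. A sketch with the network steps commented out; the install-script URL follows nvm’s documented pattern, but confirm the current version tag in nvm’s README before running it:

```shell
# Upgrade Node with nvm (network steps shown for reference; uncomment
# after checking the current tag in nvm's README):
#   curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
#   . "$HOME/.nvm/nvm.sh"
#   nvm install 22 && nvm use 22
# Confirm the version gate the wrapper enforces:
node --version 2>/dev/null || echo "node not found"
```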
Installation Methods
1. npm (Recommended for Most Users)
```shell
npm install -g @openai/codex
codex --version
```
This is the canonical install path documented at developers.openai.com/codex/cli. It downloads the platform-appropriate Rust binary on postinstall, so the apparent “node module” is mostly a launcher. Updates go through npm:
```shell
npm install -g @openai/codex@latest
```
If codex: command not found after install, your global npm bin is not on PATH. Run npm prefix -g to find your global install root — the codex binary lives in <prefix>/bin — and add that directory to your shell profile.
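A sketch of that fix, assuming a POSIX shell that sources `~/.profile` (swap in `~/.zshrc` or `~/.bashrc` for your shell); the `/usr/local` fallback is just a common default, not something npm guarantees:

```shell
# Resolve npm's global prefix; codex lives in <prefix>/bin.
NPM_PREFIX="$(npm prefix -g 2>/dev/null || echo "/usr/local")"
BIN_DIR="$NPM_PREFIX/bin"
echo "codex should live in: $BIN_DIR"
# Persist it, checking first so re-runs do not append duplicates:
grep -qs "$BIN_DIR" "$HOME/.profile" || \
  printf 'export PATH="%s:$PATH"\n' "$BIN_DIR" >> "$HOME/.profile"
# Make it effective in the current session too:
export PATH="$BIN_DIR:$PATH"
```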
2. Homebrew (macOS Cask)
```shell
brew install --cask codex
codex --version
```
The cask wraps the same Rust binary OpenAI ships on the GitHub releases page. Use this if you already manage everything else through Homebrew and do not want a Node toolchain just for one CLI. Updates: brew upgrade --cask codex.
3. Binary Release (No Toolchain)
For CI runners, restricted environments, or air-gapped boxes, grab the prebuilt binary from github.com/openai/codex/releases/latest:
- macOS Apple Silicon: `codex-aarch64-apple-darwin.tar.gz`
- macOS Intel: `codex-x86_64-apple-darwin.tar.gz`
- Linux x86_64 musl: `codex-x86_64-unknown-linux-musl.tar.gz`
- Linux arm64 musl: `codex-aarch64-unknown-linux-musl.tar.gz`
Unpack, drop codex into /usr/local/bin or ~/.local/bin, chmod +x, done. No Node required.
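A sketch of that flow for a Linux x86_64 box, with the network steps commented so you can adapt the asset name to your platform; it assumes the unpacked binary is named `codex`, which is worth verifying against the actual tarball contents:

```shell
set -eu
DEST="$HOME/.local/bin"
mkdir -p "$DEST"
# Fetch and unpack the release asset (uncomment on a networked machine,
# and swap the asset name for your platform from the list above):
#   curl -fsSL -o /tmp/codex.tar.gz \
#     "https://github.com/openai/codex/releases/latest/download/codex-x86_64-unknown-linux-musl.tar.gz"
#   tar -xzf /tmp/codex.tar.gz -C /tmp
#   install -m 755 /tmp/codex "$DEST/codex"
# Warn if the destination is not on PATH yet:
case ":$PATH:" in
  *":$DEST:"*) echo "$DEST already on PATH" ;;
  *) echo "add $DEST to PATH in your shell profile" ;;
esac
```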
First-Run Authentication
The first codex invocation prompts you to authenticate. Pick one of two paths.
Option A: Sign In With ChatGPT
```shell
codex
```
Codex opens a browser tab, you log into ChatGPT, and the CLI receives an OAuth token cached at ~/.codex/auth.json. If you already pay for ChatGPT Plus, Pro, Business, Edu, or Enterprise, this is the cheapest path — Codex usage rides on top of your subscription up to the monthly limit OpenAI publishes for your tier.
This path does not require an API key and is the official recommendation for individual developers.
Option B: API Key
```shell
export OPENAI_API_KEY=sk-...
codex
```
Use this when:
- You want to route through a different provider (see “Routing Through OfoxAI” below).
- You are in CI and cannot do an interactive browser login.
- You need precise per-call billing instead of subscription usage.
If you prefer not to export OPENAI_API_KEY globally, the provider table in ~/.codex/config.toml lets you name whichever environment variable holds the key via its `env_key` field, so the secret lives only where you put it.
The Config File Most Quickstarts Skip
Codex looks for ~/.codex/config.toml on every run. If absent, it uses defaults. Here is a working starter file with the keys you actually need:
```toml
# ~/.codex/config.toml
model = "gpt-5.3-codex"
approval_policy = "on-request"
sandbox_mode = "workspace-write"

[model_providers.openai]
name = "OpenAI"
base_url = "https://api.openai.com/v1"
env_key = "OPENAI_API_KEY"

[mcp_servers.filesystem]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/Users/you/projects"]
```
Three knobs to understand.
approval_policy — Three Values
- `untrusted` — Codex asks before every shell command and every file edit. Safest, slowest.
- `on-request` — Codex auto-runs commands inside the sandbox; asks only when it wants to escape (network egress, writes outside the workspace). This is the default and the right choice for most repos.
- `never` — No prompts, ever. Pair this with a tight sandbox or accept the consequences. Note that a few 0.12x builds do not fully honor `never` in MCP tool paths — verify on your version before relying on it for unattended runs.
sandbox_mode — Three Levels
- `read-only` — Codex can read files and run no-op commands but cannot edit anything. Useful for “explain this codebase” sessions.
- `workspace-write` — Default. Edits inside the current workspace, no network, no writes outside the repo.
- `danger-full-access` — Bypasses all sandboxing. Codex can `rm -rf /` if it decides to. Reserve this for throwaway containers or VMs.
MCP Servers
Codex speaks the Model Context Protocol, so you can plug in filesystem servers, search tools, database connectors, and anything else from the MCP ecosystem. Each server is a [mcp_servers.NAME] table with command and args. If you have already wired MCP servers for Claude Code, the same servers work — see our Claude Code configuration guide for examples.
AGENTS.md — The Highest-Leverage File in Your Repo
Drop an AGENTS.md at the root of your project. Codex reads it on every session and treats it as persistent system-level instruction.
```markdown
# Project: payments-service
- TypeScript strict mode. No `any`.
- Tests live next to source: `foo.ts` + `foo.test.ts`.
- Run `pnpm test` before suggesting a commit.
- Never touch `migrations/*.sql` — those are reviewed manually.
- DB queries go through `src/db/client.ts`. Do not import `pg` directly elsewhere.
```
Two practical rules:
- Keep it short — under ~150 lines. Codex reads it every turn, and a sprawling file dilutes the signal and burns context budget.
- Write rules, not lore. “Never touch migrations” is a rule. “We adopted a hexagonal architecture in 2024” is lore — put that in your README.
You can also place a ~/.codex/AGENTS.md for global rules that apply across every project (e.g., “always use double quotes in shell”, “prefer ripgrep over grep”). Project-level AGENTS.md overrides global where they conflict.
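A sketch for seeding that global file; it refuses to clobber an existing one, and the two rules inside are placeholders to replace with your own:

```shell
mkdir -p "$HOME/.codex"
# Only seed the file if no global AGENTS.md exists yet.
if [ ! -f "$HOME/.codex/AGENTS.md" ]; then
  cat > "$HOME/.codex/AGENTS.md" <<'EOF'
# Global rules
- Always use double quotes in shell.
- Prefer ripgrep over grep.
EOF
fi
cat "$HOME/.codex/AGENTS.md"
```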
Routing Through OfoxAI (or Any Compatible Provider)
If you want to share one API key across Codex CLI, Claude Code, Cursor, and your own scripts — or if direct OpenAI billing is awkward in your region — point Codex at an OpenAI-compatible gateway.
Two env vars in your shell profile:
```shell
export OPENAI_API_KEY=<your_ofoxai_key>
export OPENAI_BASE_URL=https://api.ofox.ai/v1
```
Or, cleaner, declare it as a named provider in ~/.codex/config.toml:
```toml
model = "openai/gpt-5.3-codex"
model_provider = "ofox"

[model_providers.ofox]
name = "OfoxAI"
base_url = "https://api.ofox.ai/v1"
env_key = "OFOX_API_KEY"
```
Then export OFOX_API_KEY=... and your default OPENAI_API_KEY (if any) stays intact for non-Codex tools.
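Before running Codex against the gateway, it is worth confirming the endpoint answers at all. A sketch, assuming the gateway exposes the OpenAI-compatible `/v1/models` listing route:

```shell
# Hit the model-listing route with the gateway key; prints the start of
# the model list on success. Skips cleanly if the key is not exported.
if [ -z "${OFOX_API_KEY:-}" ]; then
  echo "OFOX_API_KEY is unset - export it first"
else
  curl -fsS -H "Authorization: Bearer $OFOX_API_KEY" \
    "https://api.ofox.ai/v1/models" | head -c 300
fi
```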
Why bother? Three reasons developers cite:
- One key, every model. Same key works for GPT-5.3-Codex, Claude Opus 4.7, Gemini 3.1 Pro, DeepSeek V4. Useful if you swap models often. See the API aggregation overview.
- Regional reach. OfoxAI has working endpoints in places where direct `api.openai.com` is intermittent.
- Cost shape. Pay-per-token across providers instead of per-vendor subscriptions. The cost reduction guide covers the math.
For the deeper routing setup (env vs config, model prefixes, billing dashboard), see our Codex CLI API configuration guide.
Verifying the Install
Quick smoke test sequence:
```shell
# 1. CLI is on PATH
codex --version

# 2. Auth is wired
codex login status

# 3. Sandbox works
cd /tmp && mkdir codex-test && cd codex-test
codex "create a Python script that prints fibonacci numbers up to 100"
ls -la
```
If codex login status reports “Not logged in” after you already authenticated, your ~/.codex/auth.json is missing or unreadable — usually a permissions issue after running codex once as root then again as your user.
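A sketch of the repair, using the paths described above; the chown step is commented because it is only needed (and needs sudo) when the files really are root-owned:

```shell
CODEX_DIR="$HOME/.codex"
# Reclaim root-owned files first if that is the cause:
#   sudo chown -R "$(id -un)" "$CODEX_DIR"
if [ -d "$CODEX_DIR" ]; then
  chmod 700 "$CODEX_DIR"
  if [ -f "$CODEX_DIR/auth.json" ]; then
    chmod 600 "$CODEX_DIR/auth.json"
  fi
  echo "permissions reset on $CODEX_DIR"
else
  echo "no $CODEX_DIR yet; run codex once to create it"
fi
```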
Updating and Uninstalling
Update:
```shell
npm install -g @openai/codex@latest   # npm path
brew upgrade --cask codex             # Homebrew path
```
Pin a version (recommended for CI):
npm install -g @openai/[email protected]
Uninstall:
```shell
npm uninstall -g @openai/codex
rm -rf ~/.codex   # removes auth, config, history
```
The ~/.codex directory holds your auth token, config, session history, and any cached MCP server state. Delete it only if you want a clean slate.
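If you want the clean slate but also a safety net, snapshot the directory before the `rm -rf`; the backup filename here is arbitrary:

```shell
# Tar up ~/.codex before deleting it. Restore later with:
#   tar -xzf <backup>.tar.gz -C "$HOME"
if [ -d "$HOME/.codex" ]; then
  BACKUP="$HOME/codex-backup-$(date +%Y%m%d).tar.gz"
  tar -czf "$BACKUP" -C "$HOME" .codex
  echo "saved $BACKUP"
else
  echo "nothing to back up"
fi
```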
Troubleshooting the First Hour
“Command not found: codex” — npm global bin not on PATH. Run npm prefix -g and add <that path>/bin to your shell profile (npm bin -g was removed in npm 9+).
“Cannot find module …” on first run — Node version is too old. Upgrade to Node 22+.
Browser auth loop — Your default browser is not opening the OpenAI consent page. Run codex login --device-auth to switch to the OAuth device-code flow: Codex prints a URL and a one-time code you paste into any browser (including one on a different machine), no local browser handoff required.
Invalid API key after switching providers — OPENAI_API_KEY still points at OpenAI but OPENAI_BASE_URL points at OfoxAI (or vice versa). Both env vars must match the same provider, or use the named [model_providers.X] config to avoid the conflict entirely.
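A quick diagnostic for that mismatch; it reports whether the key is set and where requests will go, without printing any key material:

```shell
# Show key presence (not its value) and the effective base URL.
if [ -n "${OPENAI_API_KEY:-}" ]; then
  echo "OPENAI_API_KEY: set"
else
  echo "OPENAI_API_KEY: unset"
fi
echo "OPENAI_BASE_URL: ${OPENAI_BASE_URL:-unset (requests default to https://api.openai.com/v1)}"
```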
Model not found errors — When routing through a non-OpenAI provider, model IDs need a vendor prefix. Use openai/gpt-5.3-codex instead of gpt-5.3-codex. The same applies if you point Codex at non-OpenAI models like Claude or Gemini through the gateway.
approval_policy ignored — A handful of 0.12x builds have open bugs around approval policy and sandbox enforcement. Pin to the latest stable release and report regressions upstream rather than disabling sandbox to work around them.
What to Do Next
You have a working Codex CLI. Two follow-ups worth reading:
- Daily workflow — Codex CLI real-world coding workflow covers the AGENTS.md + plan-mode + git-worktree loop senior devs use day to day.
- Model picking — Codex defaults to GPT-5.3-Codex, but your task may want a different model. The model comparison guide frames the GPT vs Claude vs Gemini decision.
- Multi-tool setup — If you also run Cursor, Cline, or Claude Code, the unified custom API setup guide shows how to wire all four against one gateway.
The one thing every Codex quickstart skips: the dangerous default is not sandbox_mode = "danger-full-access" — that one is at least honest. The dangerous default is leaving OPENAI_API_KEY exported in your shell while you run codex in a repo whose AGENTS.md says “feel free to read secrets from .env to debug things.” Treat the install as the first ten percent of the setup; the config file and the AGENTS.md are the other ninety.