Drop-in OAuth for the OpenAI Python SDK — use the ChatGPT Codex API with your Pro/Plus account instead of an API key. This is intended for personal use, not for production.
```shell
pip install codex-auth
```

```python
import codex_auth  # patches the OpenAI SDK on import
from openai import OpenAI

client = OpenAI()  # no API key needed

response = client.responses.create(
    model="gpt-5.1-codex-mini",
    input="Write a one-sentence bedtime story about a unicorn.",
)
print(response.output_text)
```

A browser window opens on first run for OAuth. Tokens are cached in `~/.codex-auth/auth.json` and refreshed automatically.
Both streaming and non-streaming calls work — the library handles the Codex endpoint's streaming requirement transparently.
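Internally, serving a non-streaming caller means consuming the SSE event stream and returning only the final result. A minimal sketch of that buffering step — the event shapes here are assumptions modeled on Responses API events, not this library's actual code:

```python
import json

def buffer_sse(lines):
    """Collect SSE data events and return the final response object.

    `lines` is an iterable of raw SSE lines. In this sketch, only a
    "response.completed" event carries the finished response.
    """
    final = None
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, "event:" lines
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break
        event = json.loads(data)
        if event.get("type") == "response.completed":
            final = event["response"]
    return final
```

A non-streaming caller then receives `final` as if the request had never streamed.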
```python
stream = client.responses.create(
    model="gpt-5.1-codex-mini",
    input="Write a hello-world in Rust.",
    stream=True,
)
for event in stream:
    if event.type == "response.completed":
        print(event.response.output_text)
```

Existing code using `chat.completions` works too — requests are converted to the Responses API format automatically:
```python
response = client.chat.completions.create(
    model="gpt-5.1-codex-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

If you prefer not to monkey-patch:
```python
from codex_auth import CodexClient

client = CodexClient()           # browser / device auth
client = CodexClient(token="…")  # existing token
```

Async variant:

```python
from codex_auth import AsyncCodexClient

client = AsyncCodexClient()
```

To disable automatic patching, set an environment variable:

```shell
export CODEX_AUTH_NO_PATCH=1
```

Or in code:
```python
import codex_auth

codex_auth.init(auto_patch=False)
```

A custom httpx transport intercepts OpenAI SDK requests to:
- Rewrite URLs to the Codex backend (`chatgpt.com/backend-api/codex/responses`)
- Convert `chat.completions` payloads to the Responses API format
- Buffer SSE responses for non-streaming callers
- Inject OAuth bearer tokens and refresh them transparently
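The rewriting steps above can be sketched with plain standard-library code. The function names, matched paths, and field layouts here are illustrative assumptions, not the library's internals:

```python
from urllib.parse import urlsplit, urlunsplit

CODEX_HOST = "chatgpt.com"
CODEX_PATH = "/backend-api/codex/responses"

def rewrite_request(url: str, headers: dict, token: str) -> tuple[str, dict]:
    """Redirect an OpenAI API request to the Codex backend and swap in
    the OAuth bearer token (hypothetical helper)."""
    scheme, _netloc, path, query, fragment = urlsplit(url)
    if path in ("/v1/responses", "/v1/chat/completions"):
        url = urlunsplit((scheme, CODEX_HOST, CODEX_PATH, query, fragment))
    new_headers = {**headers, "Authorization": f"Bearer {token}"}
    return url, new_headers

def convert_chat_payload(payload: dict) -> dict:
    """Map a chat.completions body onto a Responses-style body.

    Field names are assumptions; the real conversion handles more cases.
    """
    return {
        "model": payload["model"],
        "input": [
            {"role": m["role"], "content": m["content"]}
            for m in payload.get("messages", [])
        ],
        "stream": True,  # the Codex endpoint only streams
    }
```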
Browser-based PKCE auth is used on desktop; the device-code flow is used on headless/SSH sessions.
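The PKCE half of that flow is standard RFC 7636 material. A minimal sketch of generating the verifier/challenge pair — generic PKCE, not this library's exact code:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple[str, str]:
    """Return (code_verifier, code_challenge) per RFC 7636, S256 method."""
    # 32 random bytes -> 43-char base64url verifier (padding stripped)
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # challenge = BASE64URL(SHA256(verifier)), also without padding
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return verifier, challenge
```

The challenge goes into the authorization URL; the verifier is sent later with the token request, proving both came from the same client.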
Tokens are stored in `~/.codex-auth/auth.json` (mode 0600). Override with the `CODEX_AUTH_TOKEN` env var.