Problem Statement
Codex users in org-managed OpenAI environments often authenticate through OpenAI OAuth instead of OPENAI_API_KEY. Today the practical path is often to sign into Codex inside each OpenShell sandbox, which writes real OAuth tokens to sandbox-local ~/.codex/auth.json.
That works functionally, but it weakens OpenShell's credential isolation story: the sandboxed process can read the OAuth tokens from the filesystem.
We validated a pattern that works with current OpenShell for running Codex OAuth in a sandbox without copying real OAuth tokens into the sandbox filesystem.
Proposed Design
Document and/or productize a Codex OAuth bootstrap pattern using provider-backed auth.json placeholders:
- Read local Codex OAuth state from the user's ~/.codex/auth.json.
- Store sensitive OAuth fields in an OpenShell provider.
- Launch the sandbox with that provider attached.
- Generate sandbox-local ~/.codex/auth.json where sensitive token fields are openshell:resolve:env:* placeholders.
- Run Codex normally in OAuth mode.
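For the first two steps, a minimal host-side sketch, assuming jq is available; the CODEX_* variable names are hypothetical stand-ins for whatever names the provider exposes, and the provider-attachment step itself is out of scope here:

```bash
# Host side: read the sensitive OAuth fields out of the local Codex auth state.
# The field paths are the ones validated by this pattern; a real auth.json may
# carry additional fields not shown here.
CODEX_ACCESS_TOKEN="$(jq -r '.tokens.access_token' ~/.codex/auth.json)"
CODEX_REFRESH_TOKEN="$(jq -r '.tokens.refresh_token' ~/.codex/auth.json)"
CODEX_ACCOUNT_ID="$(jq -r '.tokens.account_id' ~/.codex/auth.json)"

# These values would then be stored in an OpenShell provider and the sandbox
# launched with that provider attached (provider CLI/API specifics omitted), so
# only openshell:resolve:env:* placeholders ever land on the sandbox filesystem.
```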
Validated placeholder-backed fields:
- tokens.access_token
- tokens.refresh_token
- tokens.account_id
One nuance: a pure placeholder tokens.id_token fails because Codex parses it locally as a JWT before making any network calls. A non-secret JWT-shaped id_token was sufficient for local parsing while the sensitive OAuth fields remained provider-backed.
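A minimal sketch of generating the sandbox-local auth.json, assuming the provider exposes the secrets under the hypothetical names CODEX_ACCESS_TOKEN, CODEX_REFRESH_TOKEN, and CODEX_ACCOUNT_ID. The id_token payload below is purely illustrative; the investigation only established that a non-secret JWT-shaped value was enough for Codex's local parsing, not which claims it inspects:

```bash
# Build a non-secret, JWT-shaped id_token: three base64url segments joined by dots.
b64url() { printf '%s' "$1" | base64 | tr '+/' '-_' | tr -d '=\n'; }
DUMMY_ID_TOKEN="$(b64url '{"alg":"none","typ":"JWT"}').$(b64url '{"iss":"placeholder"}').sig"

# Write the sandbox-local auth.json. The sensitive fields hold OpenShell
# placeholders rather than real tokens; only the non-secret id_token is
# materialized so that Codex's local JWT parsing succeeds.
mkdir -p ~/.codex
cat > ~/.codex/auth.json <<EOF
{
  "tokens": {
    "access_token": "openshell:resolve:env:CODEX_ACCESS_TOKEN",
    "refresh_token": "openshell:resolve:env:CODEX_REFRESH_TOKEN",
    "account_id": "openshell:resolve:env:CODEX_ACCOUNT_ID",
    "id_token": "$DUMMY_ID_TOKEN"
  }
}
EOF
```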
This should align with the upcoming Providers v2 work rather than replace it. The immediate value is giving users and demos a safe current pattern for Codex OAuth.
Alternatives Considered
- Copy real auth.json into each sandbox. Functional, but it places OAuth secrets on the sandbox filesystem and should not be the recommended pattern.
- Require API keys. Works with printenv OPENAI_API_KEY | codex login --with-api-key, and Codex writes an auth.json containing only the OpenShell placeholder (see the sketch after this list). Many org-managed users cannot use this path because they authenticate through OAuth.
- Wait for Providers v2. Providers v2 should likely own the long-term OAuth model, but the Codex OAuth workflow is useful now for getting-started docs and E2E demos.
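A sketch of that API-key bootstrap inside a sandbox launched with an OPENAI_API_KEY-backed provider attached; the jq check at the end is illustrative:

```bash
# Inside the sandbox: OPENAI_API_KEY holds the OpenShell placeholder, not a real key.
printenv OPENAI_API_KEY | codex login --with-api-key

# The resulting auth.json should contain only the placeholder, never the secret.
jq -r '.OPENAI_API_KEY' ~/.codex/auth.json   # expect: openshell:resolve:env:OPENAI_API_KEY
```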
Agent Investigation
Validated locally with OpenShell 0.0.36 and ghcr.io/nvidia/openshell-community/sandboxes/base:latest.
Findings:
- Direct OpenAI API calls using a codex provider placeholder succeeded. /v1/models returned 200, confirming provider placeholder rewriting works (see the sketch after this list).
- codex exec with only a provider-injected OPENAI_API_KEY failed until Codex was bootstrapped with codex login --with-api-key.
- codex login --with-api-key writes ~/.codex/auth.json with OPENAI_API_KEY set to openshell:resolve:env:OPENAI_API_KEY, preserving credential isolation.
- OAuth auth.json with placeholders for sensitive token fields worked when paired with a non-secret JWT-shaped id_token.
- Codex successfully returned a dad joke through OAuth-backed chatgpt.com endpoints.
- ab.chatgpt.com:443 was denied during the run but did not block completion.
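A sketch of the kind of direct-API check behind the first finding, run inside the sandbox where OPENAI_API_KEY holds the provider placeholder:

```bash
# The Authorization header carries the placeholder value; a 200 from /v1/models
# confirms that provider placeholder rewriting works.
curl -sS -o /dev/null -w '%{http_code}\n' \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
# expect: 200
```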
Example successful prompt:
codex exec "Return one short dad joke. No shell commands."
Example response:
Why don’t skeletons fight each other? They don’t have the guts.
Notes / Open Questions
- Should this live as a getting-started example, a Codex-specific helper, or both?
- Should the existing codex provider learn to discover OAuth auth state, or should that wait for Providers v2?
- How should token refresh behavior be described and constrained?
- Should ab.chatgpt.com remain denied as non-essential telemetry/experimentation, or be added to the Codex default policy?