diff --git a/packages/docs/1-0.mdx b/packages/docs/1-0.mdx new file mode 100644 index 00000000000..cf3e0948c23 --- /dev/null +++ b/packages/docs/1-0.mdx @@ -0,0 +1,61 @@ +--- +title: Migrating to 1.0 +description: What's new in OpenCode 1.0. +--- + +OpenCode 1.0 is a complete rewrite of the TUI. + +We moved from the go+bubbletea based TUI which had performance and capability issues to an in-house framework (OpenTUI) written in zig+solidjs. + +The new TUI works like the old one since it connects to the same opencode server. + +## Upgrading + +You should not be autoupgraded to 1.0 if you are currently using a previous +version. However some older versions of OpenCode always grab latest. + +To upgrade manually, run + +```bash +$ opencode upgrade 1.0.0 +``` + +To downgrade back to 0.x, run + +```bash +$ opencode upgrade 0.15.31 +``` + +## UX changes + +The session history is more compressed, only showing full details of the edit and bash tool. + +We added a command bar which almost everything flows through. Press ctrl+p to bring it up in any context and see everything you can do. + +Added a session sidebar (can be toggled) with useful information. + +We removed some functionality that we weren't sure anyone actually used. If something important is missing please open an issue and we'll add it back quickly. + +## Breaking changes + +### Keybinds renamed + +- messages_revert -> messages_undo +- switch_agent -> agent_cycle +- switch_agent_reverse -> agent_cycle_reverse +- switch_mode -> agent_cycle +- switch_mode_reverse -> agent_cycle_reverse + +### Keybinds removed + +- messages_layout_toggle +- messages_next +- messages_previous +- file_diff_toggle +- file_search +- file_close +- file_list +- app_help +- project_init +- tool_details +- thinking_blocks diff --git a/packages/docs/README.md b/packages/docs/README.md index 17956df3296..055c983adbe 100644 --- a/packages/docs/README.md +++ b/packages/docs/README.md @@ -40,5 +40,4 @@ Install our GitHub app from your [dashboard](https://dashboard.mintlify.com/sett - If a page loads as a 404: Make sure you are running in a folder with a valid `docs.json`. ### Resources - - [Mintlify documentation](https://mintlify.com/docs) diff --git a/packages/docs/acp.mdx b/packages/docs/acp.mdx new file mode 100644 index 00000000000..2ceb38b4fb8 --- /dev/null +++ b/packages/docs/acp.mdx @@ -0,0 +1,150 @@ +--- +title: ACP Support +description: Use OpenCode in any ACP-compatible editor. +--- + +OpenCode supports the [Agent Client Protocol](https://agentclientprotocol.com) or (ACP), allowing you to use it directly in compatible editors and IDEs. + + +**TIP** + +For a list of editors and tools that support ACP, check out the [ACP progress report](https://zed.dev/blog/acp-progress-report#available-now). + + +ACP is an open protocol that standardizes communication between code editors and AI coding agents. + +## Configure + +To use OpenCode via ACP, configure your editor to run the `opencode acp` command. + +The command starts OpenCode as an ACP-compatible subprocess that communicates with your editor over JSON-RPC via stdio. + +Below are examples for popular editors that support ACP. + +### Zed + +Add to your [Zed](https://zed.dev) configuration (`~/.config/zed/settings.json`): + +```json ~/.config/zed/settings.json +{ + "agent_servers": { + "OpenCode": { + "command": "opencode", + "args": ["acp"] + } + } +} +``` + +To open it, use the `agent: new thread` action in the **Command Palette**. 
+ +You can also bind a keyboard shortcut by editing your `keymap.json`: + +```json keymap.json +[ + { + "bindings": { + "cmd-alt-o": [ + "agent::NewExternalAgentThread", + { + "agent": { + "custom": { + "name": "OpenCode", + "command": { + "command": "opencode", + "args": ["acp"] + } + } + } + } + ] + } + } +] +``` + +### JetBrains IDEs + +Add to your [JetBrains IDE](https://www.jetbrains.com/) acp.json according to the [documentation](https://www.jetbrains.com/help/ai-assistant/acp.html): + +```json acp.json +{ + "agent_servers": { + "OpenCode": { + "command": "/absolute/path/bin/opencode", + "args": ["acp"] + } + } +} +``` + +To open it, use the new 'OpenCode' agent in the AI Chat agent selector. + +### Avante.nvim + +Add to your [Avante.nvim](https://github.com/yetone/avante.nvim) configuration: + +```lua +{ + acp_providers = { + ["opencode"] = { + command = "opencode", + args = { "acp" } + } + } +} +``` + +If you need to pass environment variables: + +```lua highlight={6-8} +{ + acp_providers = { + ["opencode"] = { + command = "opencode", + args = { "acp" }, + env = { + OPENCODE_API_KEY = os.getenv("OPENCODE_API_KEY") + } + } + } +} +``` + +### CodeCompanion.nvim + +To use OpenCode as an ACP agent in [CodeCompanion.nvim](https://github.com/olimorris/codecompanion.nvim), add the following to your Neovim config: + +```lua +require("codecompanion").setup({ + strategies = { + chat = { + adapter = { + name = "opencode", + model = "claude-sonnet-4", + }, + }, + }, +}) +``` + +This config sets up CodeCompanion to use OpenCode as the ACP agent for chat. + +If you need to pass environment variables (like `OPENCODE_API_KEY`), refer to [Configuring Adapters: Environment Variables](https://codecompanion.olimorris.dev/configuration/adapters#environment-variables-setting-an-api-key) in the CodeCompanion.nvim documentation for full details. + +## Support + +OpenCode works the same via ACP as it does in the terminal. All features are supported: + + +**NOTE** + +Some built-in slash commands like `/undo` and `/redo` are currently unsupported. + + +- Built-in tools (file operations, terminal commands, etc.) +- Custom tools and slash commands +- MCP servers configured in your OpenCode config +- Project-specific rules from `AGENTS.md` +- Custom formatters and linters +- Agents and permissions system diff --git a/packages/docs/agents.mdx b/packages/docs/agents.mdx new file mode 100644 index 00000000000..c6f2bd4872f --- /dev/null +++ b/packages/docs/agents.mdx @@ -0,0 +1,576 @@ +--- +title: Agents +description: Configure and use specialized agents. +--- + +Agents are specialized AI assistants that can be configured for specific tasks and workflows. They allow you to create focused tools with custom prompts, models, and tool access. + + +**TIP** + +Use the plan agent to analyze code and review suggestions without making any code changes. + + +You can switch between agents during a session or invoke them with the `@` mention. + +## Types + +There are two types of agents in OpenCode; primary agents and subagents. + +### Primary agents + +Primary agents are the main assistants you interact with directly. You can cycle through them using the **Tab** key, or your configured `switch_agent` keybind. These agents handle your main conversation and can access all configured tools. + + +**TIP** + +You can use the **Tab** key to switch between primary agents during a session. + + +OpenCode comes with two built-in primary agents, **Build** and **Plan**. We'll +look at these below. 
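If you'd rather use a different key to cycle between primary agents, you can remap the keybind in your config. Here's a minimal sketch — `agent_cycle` is the 1.0 name for this action (previously `switch_agent`), and the key value shown is just an example:

```json opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "keybinds": {
    "agent_cycle": "tab"
  }
}
```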
+ +### Subagents + +Subagents are specialized assistants that primary agents can invoke for specific tasks. You can also manually invoke them by **@ mentioning** them in your messages. + +OpenCode comes with two built-in subagents, **General** and **Explore**. We'll look at this below. + +## Built-in + +OpenCode comes with two built-in primary agents and one built-in subagent. + +### Build + +_Mode_: `primary` + +Build is the **default** primary agent with all tools enabled. This is the standard agent for development work where you need full access to file operations and system commands. + +### Plan + +_Mode_: `primary` + +A restricted agent designed for planning and analysis. We use a permission system to give you more control and prevent unintended changes. +By default, all of the following are set to `ask`: + +- `file edits`: All writes, patches, and edits +- `bash`: All bash commands + +This agent is useful when you want the LLM to analyze code, suggest changes, or create plans without making any actual modifications to your codebase. + +### General + +_Mode_: `subagent` + +A general-purpose agent for researching complex questions, searching for code, and executing multi-step tasks. Use when searching for keywords or files and you're not confident you'll find the right match in the first few tries. + +### Explore + +_Mode_: `subagent` + +A fast agent specialized for exploring codebases. Use this when you need to quickly find files by patterns, search code for keywords, or answer questions about the codebase. + +## Usage + +1. For primary agents, use the **Tab** key to cycle through them during a session. You can also use your configured `switch_agent` keybind. + +2. Subagents can be invoked: + - **Automatically** by primary agents for specialized tasks based on their descriptions. + - Manually by **@ mentioning** a subagent in your message. For example. + + ```txt frame="none" + @general help me search for this function + ``` + +3. **Navigation between sessions**: When subagents create their own child sessions, you can navigate between the parent session and all child sessions using: + - **\+Right** (or your configured `session_child_cycle` keybind) to cycle forward through parent → child1 → child2 → ... → parent + - **\+Left** (or your configured `session_child_cycle_reverse` keybind) to cycle backward through parent ← child1 ← child2 ← ... ← parent + + This allows you to seamlessly switch between the main conversation and specialized subagent work. + +## Configure + +You can customize the built-in agents or create your own through configuration. Agents can be configured in two ways: + +### JSON + +Configure agents in your `opencode.json` config file: + +```json opencode.json expandable +{ + "$schema": "https://opencode.ai/config.json", + "agent": { + "build": { + "mode": "primary", + "model": "anthropic/claude-sonnet-4-20250514", + "prompt": "{file:./prompts/build.txt}", + "tools": { + "write": true, + "edit": true, + "bash": true + } + }, + "plan": { + "mode": "primary", + "model": "anthropic/claude-haiku-4-20250514", + "tools": { + "write": false, + "edit": false, + "bash": false + } + }, + "code-reviewer": { + "description": "Reviews code for best practices and potential issues", + "mode": "subagent", + "model": "anthropic/claude-sonnet-4-20250514", + "prompt": "You are a code reviewer. Focus on security, performance, and maintainability.", + "tools": { + "write": false, + "edit": false + } + } + } +} +``` + +### Markdown + +You can also define agents using markdown files. 
Place them in: + +- Global: `~/.config/opencode/agent/` +- Per-project: `.opencode/agent/` + +```markdown ~/.config/opencode/agent/review.md expandable +--- +description: Reviews code for quality and best practices +mode: subagent +model: anthropic/claude-sonnet-4-20250514 +temperature: 0.1 +tools: + write: false + edit: false + bash: false +--- + +You are in code review mode. Focus on: + +- Code quality and best practices +- Potential bugs and edge cases +- Performance implications +- Security considerations + +Provide constructive feedback without making direct changes. +``` + +The markdown file name becomes the agent name. For example, `review.md` creates a `review` agent. + +## Options + +Let's look at these configuration options in detail. + +### Description + +Use the `description` option to provide a brief description of what the agent does and when to use it. + +```json opencode.json +{ + "agent": { + "review": { + "description": "Reviews code for best practices and potential issues" + } + } +} +``` + +This is a **required** config option. + +### Temperature + +Control the randomness and creativity of the LLM's responses with the `temperature` config. + +Lower values make responses more focused and deterministic, while higher values increase creativity and variability. + +```json opencode.json +{ + "agent": { + "plan": { + "temperature": 0.1 + }, + "creative": { + "temperature": 0.8 + } + } +} +``` + +Temperature values typically range from 0.0 to 1.0: + +- **0.0-0.2**: Very focused and deterministic responses, ideal for code analysis and planning +- **0.3-0.5**: Balanced responses with some creativity, good for general development tasks +- **0.6-1.0**: More creative and varied responses, useful for brainstorming and exploration + +```json opencode.json +{ + "agent": { + "analyze": { + "temperature": 0.1, + "prompt": "{file:./prompts/analysis.txt}" + }, + "build": { + "temperature": 0.3 + }, + "brainstorm": { + "temperature": 0.7, + "prompt": "{file:./prompts/creative.txt}" + } + } +} +``` + +If no temperature is specified, OpenCode uses model-specific defaults; typically 0 for most models, 0.55 for Qwen models. + +### Max steps + +Control the maximum number of agentic iterations an agent can perform before being forced to respond with text only. This allows users who wish to control costs to set a limit on agentic actions. + +If this is not set, the agent will continue to iterate until the model chooses to stop or the user interrupts the session. + +```json opencode.json +{ + "agent": { + "quick-thinker": { + "description": "Fast reasoning with limited iterations", + "prompt": "You are a quick thinker. Solve problems with minimal steps.", + "maxSteps": 5 + } + } +} +``` + +When the limit is reached, the agent receives a special system prompt instructing it to respond with a summarization of its work and recommended remaining tasks. + +### Disable + +Set to `true` to disable the agent. + +```json opencode.json +{ + "agent": { + "review": { + "disable": true + } + } +} +``` + +### Prompt + +Specify a custom system prompt file for this agent with the `prompt` config. The prompt file should contain instructions specific to the agent's purpose. + +```json opencode.json +{ + "agent": { + "review": { + "prompt": "{file:./prompts/code-review.txt}" + } + } +} +``` + +This path is relative to where the config file is located. So this works for both the global OpenCode config and the project specific config. + +### Model + +Use the `model` config to override the model for this agent. 
Useful for using different models optimized for different tasks. For example, a faster model for planning, a more capable model for implementation. + + +**TIP** + +If you don’t specify a model, primary agents use the [model globally configured](/config#models) while subagents will use the model of the primary agent that invoked the subagent. + + +```json opencode.json +{ + "agent": { + "plan": { + "model": "anthropic/claude-haiku-4-20250514" + } + } +} +``` + +### Tools + +Control which tools are available in this agent with the `tools` config. You can enable or disable specific tools by setting them to `true` or `false`. + +```json opencode.json highlight={3-6,9-12} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "write": true, + "bash": true + }, + "agent": { + "plan": { + "tools": { + "write": false, + "bash": false + } + } + } +} +``` + + +**NOTE** + +The agent-specific config overrides the global config. + + +You can also use wildcards to control multiple tools at once. For example, to disable all tools from an MCP server: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "agent": { + "readonly": { + "tools": { + "mymcp_*": false, + "write": false, + "edit": false + } + } + } +} +``` + +[Learn more about tools](/tools). + +### Permissions + +You can configure permissions to manage what actions an agent can take. Currently, the permissions for the `edit`, `bash`, and `webfetch` tools can be configured to: + +- `"ask"` — Prompt for approval before running the tool +- `"allow"` — Allow all operations without approval +- `"deny"` — Disable the tool + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "edit": "deny" + } +} +``` + +You can override these permissions per agent. + +```json opencode.json highlight={3-5,8-10} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "edit": "deny" + }, + "agent": { + "build": { + "permission": { + "edit": "ask" + } + } + } +} +``` + +You can also set permissions in Markdown agents. + +```markdown ~/.config/opencode/agent/review.md expandable +--- +description: Code review without edits +mode: subagent +permission: + edit: deny + bash: + "git diff": allow + "git log*": allow + "*": ask + webfetch: deny +--- + +Only analyze code and suggest changes. +``` + +You can set permissions for specific bash commands. + +```json opencode.json highlight={7} +{ + "$schema": "https://opencode.ai/config.json", + "agent": { + "build": { + "permission": { + "bash": { + "git push": "ask" + } + } + } + } +} +``` + +This can take a glob pattern. + +```json opencode.json highlight={7} +{ + "$schema": "https://opencode.ai/config.json", + "agent": { + "build": { + "permission": { + "bash": { + "git *": "ask" + } + } + } + } +} +``` + +And you can also use the `*` wildcard to manage permissions for all commands. +Where the specific rule can override the `*` wildcard. + +```json opencode.json highlight={8} +{ + "$schema": "https://opencode.ai/config.json", + "agent": { + "build": { + "permission": { + "bash": { + "git status": "allow", + "*": "ask" + } + } + } + } +} +``` + +[Learn more about permissions](/permissions). + +### Mode + +Control the agent's mode with the `mode` config. The `mode` option is used to determine how the agent can be used. + +```json opencode.json +{ + "agent": { + "review": { + "mode": "subagent" + } + } +} +``` + +The `mode` option can be set to `primary`, `subagent`, or `all`. If no `mode` is specified, it defaults to `all`. 
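For agents defined as markdown files, set the same option in the frontmatter. A minimal sketch — the agent name and prompt here are placeholders:

```markdown ~/.config/opencode/agent/helper.md
---
description: A helper that can only be invoked as a subagent
mode: subagent
---

You are a helper agent. Keep your responses short and focused.
```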
+ +### Additional + +Any other options you specify in your agent configuration will be **passed through directly** to the provider as model options. This allows you to use provider-specific features and parameters. + +For example, with OpenAI's reasoning models, you can control the reasoning effort: + +```json opencode.json highlight={6,7} +{ + "agent": { + "deep-thinker": { + "description": "Agent that uses high reasoning effort for complex problems", + "model": "openai/gpt-5", + "reasoningEffort": "high", + "textVerbosity": "low" + } + } +} +``` + +These additional options are model and provider-specific. Check your provider's documentation for available parameters. + + +**TIP** + +Run `opencode models` to see a list of the available models. + + +## Create agents + +You can create new agents using the following command: + +```bash +opencode agent create +``` + +This interactive command will: + +1. Ask where to save the agent; global or project-specific. +2. Description of what the agent should do. +3. Generate an appropriate system prompt and identifier. +4. Let you select which tools the agent can access. +5. Finally, create a markdown file with the agent configuration. + +## Use cases + +Here are some common use cases for different agents. + +- **Build agent**: Full development work with all tools enabled +- **Plan agent**: Analysis and planning without making changes +- **Review agent**: Code review with read-only access plus documentation tools +- **Debug agent**: Focused on investigation with bash and read tools enabled +- **Docs agent**: Documentation writing with file operations but no system commands + +## Examples + +Here are some examples agents you might find useful. + + +**TIP** + +Do you have an agent you'd like to share? [Submit a PR](https://github.com/sst/opencode). + + +### Documentation agent + +```markdown ~/.config/opencode/agent/docs-writer.md expandable +--- +description: Writes and maintains project documentation +mode: subagent +tools: + bash: false +--- + +You are a technical writer. Create clear, comprehensive documentation. + +Focus on: + +- Clear explanations +- Proper structure +- Code examples +- User-friendly language +``` + +### Security auditor + +```markdown ~/.config/opencode/agent/security-auditor.md expandable +--- +description: Performs security audits and identifies vulnerabilities +mode: subagent +tools: + write: false + edit: false +--- + +You are a security expert. Focus on identifying potential security issues. + +Look for: + +- Input validation vulnerabilities +- Authentication and authorization flaws +- Data exposure risks +- Dependency vulnerabilities +- Configuration security issues +``` diff --git a/packages/docs/ai-tools/claude-code.mdx b/packages/docs/ai-tools/claude-code.mdx deleted file mode 100644 index 4039c6e0ea3..00000000000 --- a/packages/docs/ai-tools/claude-code.mdx +++ /dev/null @@ -1,83 +0,0 @@ ---- -title: "Claude Code setup" -description: "Configure Claude Code for your documentation workflow" -icon: "asterisk" ---- - -Claude Code is Anthropic's official CLI tool. This guide will help you set up Claude Code to help you write and maintain your documentation. - -## Prerequisites - -- Active Claude subscription (Pro, Max, or API access) - -## Setup - -1. Install Claude Code globally: - -```bash -npm install -g @anthropic-ai/claude-code -``` - -2. Navigate to your docs directory. -3. (Optional) Add the `CLAUDE.md` file below to your project. -4. Run `claude` to start. 
- -## Create `CLAUDE.md` - -Create a `CLAUDE.md` file at the root of your documentation repository to train Claude Code on your specific documentation standards: - -```markdown -# Mintlify documentation - -## Working relationship - -- You can push back on ideas-this can lead to better documentation. Cite sources and explain your reasoning when you do so -- ALWAYS ask for clarification rather than making assumptions -- NEVER lie, guess, or make up information - -## Project context - -- Format: MDX files with YAML frontmatter -- Config: docs.json for navigation, theme, settings -- Components: Mintlify components - -## Content strategy - -- Document just enough for user success - not too much, not too little -- Prioritize accuracy and usability of information -- Make content evergreen when possible -- Search for existing information before adding new content. Avoid duplication unless it is done for a strategic reason -- Check existing patterns for consistency -- Start by making the smallest reasonable changes - -## Frontmatter requirements for pages - -- title: Clear, descriptive page title -- description: Concise summary for SEO/navigation - -## Writing standards - -- Second-person voice ("you") -- Prerequisites at start of procedural content -- Test all code examples before publishing -- Match style and formatting of existing pages -- Include both basic and advanced use cases -- Language tags on all code blocks -- Alt text on all images -- Relative paths for internal links - -## Git workflow - -- NEVER use --no-verify when committing -- Ask how to handle uncommitted changes before starting -- Create a new branch when no clear branch exists for changes -- Commit frequently throughout development -- NEVER skip or disable pre-commit hooks - -## Do not - -- Skip frontmatter on any MDX file -- Use absolute URLs for internal links -- Include untested code examples -- Make assumptions - always ask for clarification -``` diff --git a/packages/docs/ai-tools/cursor.mdx b/packages/docs/ai-tools/cursor.mdx deleted file mode 100644 index d05882919fe..00000000000 --- a/packages/docs/ai-tools/cursor.mdx +++ /dev/null @@ -1,423 +0,0 @@ ---- -title: "Cursor setup" -description: "Configure Cursor for your documentation workflow" -icon: "arrow-pointer" ---- - -Use Cursor to help write and maintain your documentation. This guide shows how to configure Cursor for better results on technical writing tasks and using Mintlify components. - -## Prerequisites - -- Cursor editor installed -- Access to your documentation repository - -## Project rules - -Create project rules that all team members can use. In your documentation repository root: - -```bash -mkdir -p .cursor -``` - -Create `.cursor/rules.md`: - -````markdown -# Mintlify technical writing rule - -You are an AI writing assistant specialized in creating exceptional technical documentation using Mintlify components and following industry-leading technical writing practices. 
- -## Core writing principles - -### Language and style requirements - -- Use clear, direct language appropriate for technical audiences -- Write in second person ("you") for instructions and procedures -- Use active voice over passive voice -- Employ present tense for current states, future tense for outcomes -- Avoid jargon unless necessary and define terms when first used -- Maintain consistent terminology throughout all documentation -- Keep sentences concise while providing necessary context -- Use parallel structure in lists, headings, and procedures - -### Content organization standards - -- Lead with the most important information (inverted pyramid structure) -- Use progressive disclosure: basic concepts before advanced ones -- Break complex procedures into numbered steps -- Include prerequisites and context before instructions -- Provide expected outcomes for each major step -- Use descriptive, keyword-rich headings for navigation and SEO -- Group related information logically with clear section breaks - -### User-centered approach - -- Focus on user goals and outcomes rather than system features -- Anticipate common questions and address them proactively -- Include troubleshooting for likely failure points -- Write for scannability with clear headings, lists, and white space -- Include verification steps to confirm success - -## Mintlify component reference - -### Callout components - -#### Note - Additional helpful information - - -Supplementary information that supports the main content without interrupting flow - - -#### Tip - Best practices and pro tips - - -Expert advice, shortcuts, or best practices that enhance user success - - -#### Warning - Important cautions - - -Critical information about potential issues, breaking changes, or destructive actions - - -#### Info - Neutral contextual information - - -Background information, context, or neutral announcements - - -#### Check - Success confirmations - - -Positive confirmations, successful completions, or achievement indicators - - -### Code components - -#### Single code block - -Example of a single code block: - -```javascript config.js -const apiConfig = { - baseURL: "https://api.example.com", - timeout: 5000, - headers: { - Authorization: `Bearer ${process.env.API_TOKEN}`, - }, -} -``` - -#### Code group with multiple languages - -Example of a code group: - - -```javascript Node.js -const response = await fetch('/api/endpoint', { - headers: { Authorization: `Bearer ${apiKey}` } -}); -``` - -```python Python -import requests -response = requests.get('/api/endpoint', - headers={'Authorization': f'Bearer {api_key}'}) -``` - -```curl cURL -curl -X GET '/api/endpoint' \ - -H 'Authorization: Bearer YOUR_API_KEY' -``` - - - -#### Request/response examples - -Example of request/response documentation: - - -```bash cURL -curl -X POST 'https://api.example.com/users' \ - -H 'Content-Type: application/json' \ - -d '{"name": "John Doe", "email": "john@example.com"}' -``` - - - -```json Success -{ - "id": "user_123", - "name": "John Doe", - "email": "john@example.com", - "created_at": "2024-01-15T10:30:00Z" -} -``` - - -### Structural components - -#### Steps for procedures - -Example of step-by-step instructions: - - - - Run `npm install` to install required packages. - - - Verify installation by running `npm list`. - - - - - Create a `.env` file with your API credentials. - - ```bash - API_KEY=your_api_key_here - ``` - - - Never commit API keys to version control. 
- - - - -#### Tabs for alternative content - -Example of tabbed content: - - - - ```bash - brew install node - npm install -g package-name - ``` - - - - ```powershell - choco install nodejs - npm install -g package-name - ``` - - - - ```bash - sudo apt install nodejs npm - npm install -g package-name - ``` - - - -#### Accordions for collapsible content - -Example of accordion groups: - - - - - **Firewall blocking**: Ensure ports 80 and 443 are open - - **Proxy configuration**: Set HTTP_PROXY environment variable - - **DNS resolution**: Try using 8.8.8.8 as DNS server - - - - ```javascript - const config = { - performance: { cache: true, timeout: 30000 }, - security: { encryption: 'AES-256' } - }; - ``` - - - -### Cards and columns for emphasizing information - -Example of cards and card groups: - - -Complete walkthrough from installation to your first API call in under 10 minutes. - - - - - Learn how to authenticate requests using API keys or JWT tokens. - - - - Understand rate limits and best practices for high-volume usage. - - - -### API documentation components - -#### Parameter fields - -Example of parameter documentation: - - -Unique identifier for the user. Must be a valid UUID v4 format. - - - -User's email address. Must be valid and unique within the system. - - - -Maximum number of results to return. Range: 1-100. - - - -Bearer token for API authentication. Format: `Bearer YOUR_API_KEY` - - -#### Response fields - -Example of response field documentation: - - -Unique identifier assigned to the newly created user. - - - -ISO 8601 formatted timestamp of when the user was created. - - - -List of permission strings assigned to this user. - - -#### Expandable nested fields - -Example of nested field documentation: - - -Complete user object with all associated data. - - - - User profile information including personal details. - - - - User's first name as entered during registration. - - - - URL to user's profile picture. Returns null if no avatar is set. 
- - - - - - -### Media and advanced components - -#### Frames for images - -Wrap all images in frames: - - -Main dashboard showing analytics overview - - - -Analytics dashboard with charts - - -#### Videos - -Use the HTML video element for self-hosted video content: - - - -Embed YouTube videos using iframe elements: - - - -#### Tooltips - -Example of tooltip usage: - - -API - - -#### Updates - -Use updates for changelogs: - - -## New features -- Added bulk user import functionality -- Improved error messages with actionable suggestions - -## Bug fixes - -- Fixed pagination issue with large datasets -- Resolved authentication timeout problems - - -## Required page structure - -Every documentation page must begin with YAML frontmatter: - -```yaml ---- -title: "Clear, specific, keyword-rich title" -description: "Concise description explaining page purpose and value" ---- -``` - -## Content quality standards - -### Code examples requirements - -- Always include complete, runnable examples that users can copy and execute -- Show proper error handling and edge case management -- Use realistic data instead of placeholder values -- Include expected outputs and results for verification -- Test all code examples thoroughly before publishing -- Specify language and include filename when relevant -- Add explanatory comments for complex logic -- Never include real API keys or secrets in code examples - -### API documentation requirements - -- Document all parameters including optional ones with clear descriptions -- Show both success and error response examples with realistic data -- Include rate limiting information with specific limits -- Provide authentication examples showing proper format -- Explain all HTTP status codes and error handling -- Cover complete request/response cycles - -### Accessibility requirements - -- Include descriptive alt text for all images and diagrams -- Use specific, actionable link text instead of "click here" -- Ensure proper heading hierarchy starting with H2 -- Provide keyboard navigation considerations -- Use sufficient color contrast in examples and visuals -- Structure content for easy scanning with headers and lists - -## Component selection logic - -- Use **Steps** for procedures and sequential instructions -- Use **Tabs** for platform-specific content or alternative approaches -- Use **CodeGroup** when showing the same concept in multiple programming languages -- Use **Accordions** for progressive disclosure of information -- Use **RequestExample/ResponseExample** specifically for API endpoint documentation -- Use **ParamField** for API parameters, **ResponseField** for API responses -- Use **Expandable** for nested object properties or hierarchical information -```` diff --git a/packages/docs/ai-tools/windsurf.mdx b/packages/docs/ai-tools/windsurf.mdx deleted file mode 100644 index 310c81d5f70..00000000000 --- a/packages/docs/ai-tools/windsurf.mdx +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: "Windsurf setup" -description: "Configure Windsurf for your documentation workflow" -icon: "water" ---- - -Configure Windsurf's Cascade AI assistant to help you write and maintain documentation. This guide shows how to set up Windsurf specifically for your Mintlify documentation workflow. - -## Prerequisites - -- Windsurf editor installed -- Access to your documentation repository - -## Workspace rules - -Create workspace rules that provide Windsurf with context about your documentation project and standards. 
- -Create `.windsurf/rules.md` in your project root: - -````markdown -# Mintlify technical writing rule - -## Project context - -- This is a documentation project on the Mintlify platform -- We use MDX files with YAML frontmatter -- Navigation is configured in `docs.json` -- We follow technical writing best practices - -## Writing standards - -- Use second person ("you") for instructions -- Write in active voice and present tense -- Start procedures with prerequisites -- Include expected outcomes for major steps -- Use descriptive, keyword-rich headings -- Keep sentences concise but informative - -## Required page structure - -Every page must start with frontmatter: - -```yaml ---- -title: "Clear, specific title" -description: "Concise description for SEO and navigation" ---- -``` - -## Mintlify components - -### Callouts - -- `` for helpful supplementary information -- `` for important cautions and breaking changes -- `` for best practices and expert advice -- `` for neutral contextual information -- `` for success confirmations - -### Code examples - -- When appropriate, include complete, runnable examples -- Use `` for multiple language examples -- Specify language tags on all code blocks -- Include realistic data, not placeholders -- Use `` and `` for API docs - -### Procedures - -- Use `` component for sequential instructions -- Include verification steps with `` components when relevant -- Break complex procedures into smaller steps - -### Content organization - -- Use `` for platform-specific content -- Use `` for progressive disclosure -- Use `` and `` for highlighting content -- Wrap images in `` components with descriptive alt text - -## API documentation requirements - -- Document all parameters with `` -- Show response structure with `` -- Include both success and error examples -- Use `` for nested object properties -- Always include authentication examples - -## Quality standards - -- Test all code examples before publishing -- Use relative paths for internal links -- Include alt text for all images -- Ensure proper heading hierarchy (start with h2) -- Check existing patterns for consistency -```` diff --git a/packages/docs/cli.mdx b/packages/docs/cli.mdx new file mode 100644 index 00000000000..dcb7d45484f --- /dev/null +++ b/packages/docs/cli.mdx @@ -0,0 +1,294 @@ +--- +title: CLI +description: OpenCode CLI options and commands. +--- + +The OpenCode CLI by default starts the [TUI](/tui) when run without any arguments. + +```bash +opencode +``` + +But it also accepts commands as documented on this page. This allows you to interact with OpenCode programmatically. + +```bash +opencode run "Explain how closures work in JavaScript" +``` + +### tui + +Start the OpenCode terminal user interface. + +```bash +opencode [project] +``` + +#### Flags + +| Flag | Short | Description | +| :------------ | :----- | :----------------------------------------- | +| `--continue` | `-c` | Continue the last session | +| `--session` | `-s` | Session ID to continue | +| `--prompt` | `-p` | Prompt to use | +| `--model` | `-m` | Model to use in the form of provider/model | +| `--agent` | | Agent to use | +| `--port` | | Port to listen on | +| `--hostname` | | Hostname to listen on | + +## Commands + +The OpenCode CLI also has the following commands. + +### agent + +Manage agents for OpenCode. + +```bash +opencode agent [command] +``` + +#### create + +Create a new agent with custom configuration. 
+ +```bash +opencode agent create +``` + +This command will guide you through creating a new agent with a custom system prompt and tool configuration. + +### auth + +Command to manage credentials and login for providers. + +```bash +opencode auth [command] +``` + +#### login + +OpenCode is powered by the provider list at [Models.dev](https://models.dev), so you can use `opencode auth login` to configure API keys for any provider you'd like to use. This is stored in `~/.local/share/opencode/auth.json`. + +```bash +opencode auth login +``` + +When OpenCode starts up it loads the providers from the credentials file. And if there are any keys defined in your environments or a `.env` file in your project. + +#### list + +Lists all the authenticated providers as stored in the credentials file. + +```bash +opencode auth list +``` + +Or the short version. + +```bash +opencode auth ls +``` + +#### logout + +Logs you out of a provider by clearing it from the credentials file. + +```bash +opencode auth logout +``` + +### github + +Manage the GitHub agent for repository automation. + +```bash +opencode github [command] +``` + +#### install + +Install the GitHub agent in your repository. + +```bash +opencode github install +``` + +This sets up the necessary GitHub Actions workflow and guides you through the configuration process. [Learn more](/github). + +#### run + +Run the GitHub agent. This is typically used in GitHub Actions. + +```bash +opencode github run +``` + +##### Flags + +| Flag | Description | +| :--------- | :-------------------------------------- | +| `--event` | GitHub mock event to run the agent for | +| `--token` | GitHub personal access token | + + +### models + +List all available models from configured providers. + +```bash +opencode models [provider] +``` + +This command displays all models available across your configured providers in the format `provider/model`. + +This is useful for figuring out the exact model name to use in [your config](/config/). + +You can optionally pass a provider ID to filter models by that provider. + +```bash +opencode models anthropic +``` + +#### Flags + +| Flag | Description | +| :------------ | :----------------------------------------- | +| `--refresh` | Refresh the models cache from models.dev | +| `--verbose` | Use more verbose model output (includes metadata like costs) | + +Use the `--refresh` flag to update the cached model list. This is useful when new models have been added to a provider and you want to see them in OpenCode. + +```bash +opencode models --refresh +``` + +### run + +Run opencode in non-interactive mode by passing a prompt directly. + +```bash +opencode run [message..] +``` + +This is useful for scripting, automation, or when you want a quick answer without launching the full TUI. For example. 
+ +```bash "opencode run" +opencode run Explain the use of context in Go +``` + +You can also attach to a running `opencode serve` instance to avoid MCP server cold boot times on every run: + +```bash +# Start a headless server in one terminal +opencode serve + +# In another terminal, run commands that attach to it +opencode run --attach http://localhost:4096 "Explain async/await in JavaScript" +``` + +#### Flags + +| Flag | Short | Description | +| :------------ | :----- | :------------------------------------------------------------------ | +| `--command` | | The command to run, use message for args | +| `--continue` | `-c` | Continue the last session | +| `--session` | `-s` | Session ID to continue | +| `--share` | | Share the session | +| `--model` | `-m` | Model to use in the form of provider/model | +| `--agent` | | Agent to use | +| `--file` | `-f` | File(s) to attach to message | +| `--format` | | Format: default (formatted) or json (raw JSON events) | +| `--title` | | Title for the session (uses truncated prompt if no value provided) | +| `--attach` | | Attach to a running opencode server (e.g., http://localhost:4096) | +| `--port` | | Port for the local server (defaults to random port) | + +### serve + +Start a headless opencode server for API access. Check out the [server docs](/server) for the full HTTP interface. + +```bash +opencode serve +``` + +This starts an HTTP server that provides API access to opencode functionality without the TUI interface. + +#### Flags + +| Flag | Short | Description | +| :------------ | :----- | :--------------------- | +| `--port` | `-p` | Port to listen on | +| `--hostname` | | Hostname to listen on | + +### upgrade + +Updates opencode to the latest version or a specific version. + +```bash +opencode upgrade [target] +``` + +To upgrade to the latest version. + +```bash +opencode upgrade +``` + +To upgrade to a specific version. + +```bash +opencode upgrade v0.1.48 +``` + +#### Flags + +| Flag | Short | Description | +| :--------- | :----- | :----------------------------------------------------------------- | +| `--method` | `-m` | The installation method that was used; curl, npm, pnpm, bun, brew | + +## Global Flags + +The opencode CLI takes the following global flags. + +| Flag | Short | Description | +| :-------------- | :----- | :------------------------------------ | +| `--help` | `-h` | Display help | +| `--version` | `-v` | Print version number | +| `--print-logs` | | Print logs to stderr | +| `--log-level` | | Log level (DEBUG, INFO, WARN, ERROR) | + +## Environment variables + +OpenCode can be configured using environment variables. 
+ +| Variable | Type | Description | +| :------------------------------------- | :------- | :---------------------------------------- | +| `OPENCODE_AUTO_SHARE` | boolean | Automatically share sessions | +| `OPENCODE_GIT_BASH_PATH` | string | Path to Git Bash executable on Windows | +| `OPENCODE_CONFIG` | string | Path to config file | +| `OPENCODE_CONFIG_DIR` | string | Path to config directory | +| `OPENCODE_CONFIG_CONTENT` | string | Inline json config content | +| `OPENCODE_DISABLE_AUTOUPDATE` | boolean | Disable automatic update checks | +| `OPENCODE_DISABLE_PRUNE` | boolean | Disable pruning of old data | +| `OPENCODE_DISABLE_TERMINAL_TITLE` | boolean | Disable automatic terminal title updates | +| `OPENCODE_PERMISSION` | string | Inlined json permissions config | +| `OPENCODE_DISABLE_DEFAULT_PLUGINS` | boolean | Disable default plugins | +| `OPENCODE_DISABLE_LSP_DOWNLOAD` | boolean | Disable automatic LSP server downloads | +| `OPENCODE_ENABLE_EXPERIMENTAL_MODELS` | boolean | Enable experimental models | +| `OPENCODE_DISABLE_AUTOCOMPACT` | boolean | Disable automatic context compaction | +| `OPENCODE_CLIENT` | string | Client identifier (defaults to `cli`) | +| `OPENCODE_ENABLE_EXA` | boolean | Enable Exa web search tools | + +### Experimental + +These environment variables enable experimental features that may change or be removed. + +| Variable | Type | Description | +| :----------------------------------------------- | :------- | :--------------------------------------- | +| `OPENCODE_EXPERIMENTAL` | boolean | Enable all experimental features | +| `OPENCODE_EXPERIMENTAL_ICON_DISCOVERY` | boolean | Enable icon discovery | +| `OPENCODE_EXPERIMENTAL_DISABLE_COPY_ON_SELECT` | boolean | Disable copy on select in TUI | +| `OPENCODE_EXPERIMENTAL_BASH_MAX_OUTPUT_LENGTH` | number | Max output length for bash commands | +| `OPENCODE_EXPERIMENTAL_BASH_DEFAULT_TIMEOUT_MS` | number | Default timeout for bash commands in ms | +| `OPENCODE_EXPERIMENTAL_OUTPUT_TOKEN_MAX` | number | Max output tokens for LLM responses | +| `OPENCODE_EXPERIMENTAL_FILEWATCHER` | boolean | Enable file watcher for entire dir | +| `OPENCODE_EXPERIMENTAL_OXFMT` | boolean | Enable oxfmt formatter | diff --git a/packages/docs/commands.mdx b/packages/docs/commands.mdx new file mode 100644 index 00000000000..fd45c3d0f2a --- /dev/null +++ b/packages/docs/commands.mdx @@ -0,0 +1,296 @@ +--- +title: Commands +description: Create custom commands for repetitive tasks. +--- + +Custom commands let you specify a prompt you want to run when that command is executed in the TUI. + +```bash +/my-command +``` + +Custom commands are in addition to the built-in commands like `/init`, `/undo`, `/redo`, `/share`, `/help`. [Learn more](/tui#commands). + +## Create command files + +Create markdown files in the `command/` directory to define custom commands. + +Create `.opencode/command/test.md`: + +```md .opencode/command/test.md +--- +description: Run tests with coverage +agent: build +model: anthropic/claude-3-5-sonnet-20241022 +--- + +Run the full test suite with coverage report and show any failures. +Focus on the failing tests and suggest fixes. +``` + +The frontmatter defines command properties. The content becomes the template. + +Use the command by typing `/` followed by the command name. + +```bash +"/test" +``` + +## Configure + +You can add custom commands through the OpenCode config or by creating markdown files in the `command/` directory. 
+ +### JSON + +Use the `command` option in your OpenCode [config](/config): + +```json opencode.json highlight={4-12} +{ + "$schema": "https://opencode.ai/config.json", + "command": { + // This becomes the name of the command + "test": { + // This is the prompt that will be sent to the LLM + "template": "Run the full test suite with coverage report and show any failures.\nFocus on the failing tests and suggest fixes.", + // This is show as the description in the TUI + "description": "Run tests with coverage", + "agent": "build", + "model": "anthropic/claude-3-5-sonnet-20241022" + } + } +} +``` + +Now you can run this command in the TUI: + +```bash +/test +``` + +### Markdown + +You can also define commands using markdown files. Place them in: + +- Global: `~/.config/opencode/command/` +- Per-project: `.opencode/command/` + +```markdown ~/.config/opencode/command/test.md +--- +description: Run tests with coverage +agent: build +model: anthropic/claude-3-5-sonnet-20241022 +--- + +Run the full test suite with coverage report and show any failures. +Focus on the failing tests and suggest fixes. +``` + +The markdown file name becomes the command name. For example, `test.md` lets +you run: + +```bash +/test +``` + +## Prompt config + +The prompts for the custom commands support several special placeholders and syntax. + +### Arguments + +Pass arguments to commands using the `$ARGUMENTS` placeholder. + +```md .opencode/command/component.md +--- +description: Create a new component +--- + +Create a new React component named $ARGUMENTS with TypeScript support. +Include proper typing and basic structure. +``` + +Run the command with arguments: + +```bash +/component Button +``` + +And `$ARGUMENTS` will be replaced with `Button`. + +You can also access individual arguments using positional parameters: + +- `$1` - First argument +- `$2` - Second argument +- `$3` - Third argument +- And so on... + +For example: + +```md .opencode/command/create-file.md +--- +description: Create a new file with content +--- + +Create a file named $1 in the directory $2 +with the following content: $3 +``` + +Run the command: + +```bash +/create-file config.json src "{ \"key\": \"value\" }" +``` + +This replaces: + +- `$1` with `config.json` +- `$2` with `src` +- `$3` with `{ "key": "value" }` + +### Shell output + +Use _!`command`_ to inject [bash command](/tui#bash-commands) output into your prompt. + +For example, to create a custom command that analyzes test coverage: + +```md .opencode/command/analyze-coverage.md +--- +description: Analyze test coverage +--- + +Here are the current test results: +!`npm test` + +Based on these results, suggest improvements to increase coverage. +``` + +Or to review recent changes: + +```md .opencode/command/review-changes.md +--- +description: Review recent changes +--- + +Recent git commits: +!`git log --oneline -10` + +Review these changes and suggest any improvements. +``` + +Commands run in your project's root directory and their output becomes part of the prompt. + +### File references + +Include files in your command using `@` followed by the filename. + +```md .opencode/command/review-component.md +--- +description: Review component +--- + +Review the component in @src/components/Button.tsx. +Check for performance issues and suggest improvements. +``` + +The file content gets included in the prompt automatically. + +## Options + +Let's look at the configuration options in detail. 
+ +### Template + +The `template` option defines the prompt that will be sent to the LLM when the command is executed. + +```json opencode.json +{ + "command": { + "test": { + "template": "Run the full test suite with coverage report and show any failures.\nFocus on the failing tests and suggest fixes." + } + } +} +``` + +This is a **required** config option. + +### Description + +Use the `description` option to provide a brief description of what the command does. + +```json opencode.json +{ + "command": { + "test": { + "description": "Run tests with coverage" + } + } +} +``` + +This is shown as the description in the TUI when you type in the command. + + +### Agent + +Use the `agent` config to optionally specify which [agent](/agents) should execute this command. +If this is a [subagent](/agents#subagents) the command will trigger a subagent invocation by default. +To disable this behavior, set `subtask` to `false`. + +```json opencode.json +{ + "command": { + "review": { + "agent": "plan" + } + } +} +``` + +This is an **optional** config option. If not specified, defaults to your current agent. + +### Subtask + +Use the `subtask` boolean to force the command to trigger a [subagent](/agents#subagents) invocation. +This useful if you want the command to not pollute your primary context and will **force** the agent to act as a subagent, +even if `mode` is set to `primary` on the [agent](/agents) configuration. + +```json opencode.json +{ + "command": { + "analyze": { + "subtask": true + } + } +} +``` + +This is an **optional** config option. + +### Model + +Use the `model` config to override the default model for this command. + +```json opencode.json +{ + "command": { + "analyze": { + "model": "anthropic/claude-3-5-sonnet-20241022" + } + } +} +``` + +This is an **optional** config option. + +## Built-in + +opencode includes several built-in commands like `/init`, `/undo`, `/redo`, `/share`, `/help`; [learn more](/tui#commands). + + +**NOTE** + +Custom commands can override built-in commands. + + +If you define a custom command with the same name, it will override the built-in command. diff --git a/packages/docs/config.mdx b/packages/docs/config.mdx new file mode 100644 index 00000000000..fbd5f958f37 --- /dev/null +++ b/packages/docs/config.mdx @@ -0,0 +1,437 @@ +--- +title: Config +description: Using the OpenCode JSON config. +--- + +You can configure OpenCode using a JSON config file. + +## Format + +OpenCode supports both **JSON** and **JSONC** (JSON with Comments) formats. + +```jsonc opencode.jsonc +{ + "$schema": "https://opencode.ai/config.json", + // Theme configuration + "theme": "opencode", + "model": "anthropic/claude-sonnet-4-5", + "autoupdate": true, +} +``` + +## Locations + +You can place your config in a couple of different locations and they have a +different order of precedence. + + + +**Config Merging** + +Configuration files are **merged together**, not replaced. Settings from all config locations are combined using a deep merge strategy, where later configs override earlier ones only for conflicting keys. Non-conflicting settings from all configs are preserved. + +For example, if your global config sets `theme: "opencode"` and `autoupdate: true`, and your project config sets `model: "anthropic/claude-sonnet-4-5"`, the final configuration will include all three settings. + + +### Global + +Place your global OpenCode config in `~/.config/opencode/opencode.json`. You'll want to use the global config for things like themes, providers, or keybinds. 
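For example, a minimal global config might only set a theme, a default model, and autoupdate behavior — the values here are illustrative, not defaults:

```json ~/.config/opencode/opencode.json
{
  "$schema": "https://opencode.ai/config.json",
  "theme": "opencode",
  "model": "anthropic/claude-sonnet-4-5",
  "autoupdate": true
}
```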
+ +### Per project + +You can also add a `opencode.json` in your project. Settings from this config are merged with and can override the global config. This is useful for configuring providers or modes specific to your project. + + +**TIP** + +Place project specific config in the root of your project. + + +When OpenCode starts up, it looks for a config file in the current directory or traverse up to the nearest Git directory. + +This is also safe to be checked into Git and uses the same schema as the global one. + + +### Custom path + +You can also specify a custom config file path using the `OPENCODE_CONFIG` environment variable. Settings from this config are merged with and can override the global and project configs. + +```bash +export OPENCODE_CONFIG=/path/to/my/custom-config.json +opencode run "Hello world" +``` + +### Custom directory + +You can specify a custom config directory using the `OPENCODE_CONFIG_DIR` +environment variable. This directory will be searched for agents, commands, +modes, and plugins just like the standard `.opencode` directory, and should +follow the same structure. + +```bash +export OPENCODE_CONFIG_DIR=/path/to/my/config-directory +opencode run "Hello world" +``` + + +**NOTE** + +The custom directory is loaded after the global config and `.opencode` directories, so it can override their settings. + + +## Schema + +The config file has a schema that's defined in [**`opencode.ai/config.json`**](https://opencode.ai/config.json). + +Your editor should be able to validate and autocomplete based on the schema. + +### TUI + +You can configure TUI-specific settings through the `tui` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "tui": { + "scroll_speed": 3, + "scroll_acceleration": { + "enabled": true + } + } +} +``` + +Available options: + +- `scroll_acceleration.enabled` - Enable macOS-style scroll acceleration. **Takes precedence over `scroll_speed`.** +- `scroll_speed` - Custom scroll speed multiplier (default: `1`, minimum: `1`). Ignored if `scroll_acceleration.enabled` is `true`. + + + + + +### Tools + +You can manage the tools an LLM can use through the `tools` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "write": false, + "bash": false + } +} +``` + + + + +### Models + +You can configure the providers and models you want to use in your OpenCode config through the `provider`, `model` and `small_model` options. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "provider": {}, + "model": "anthropic/claude-sonnet-4-5", + "small_model": "anthropic/claude-haiku-4-5" +} +``` + +The `small_model` option configures a separate model for lightweight tasks like title generation. By default, OpenCode tries to use a cheaper model if one is available from your provider, otherwise it falls back to your main model. + +You can also configure [local models](/models#local). + + + + + +### Themes + +You can configure the theme you want to use in your OpenCode config through the `theme` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "theme": "" +} +``` + + + + + +### Agents + +You can configure specialized agents for specific tasks through the `agent` option. + +```jsonc opencode.jsonc +{ + "$schema": "https://opencode.ai/config.json", + "agent": { + "code-reviewer": { + "description": "Reviews code for best practices and potential issues", + "model": "anthropic/claude-sonnet-4-5", + "prompt": "You are a code reviewer. 
Focus on security, performance, and maintainability.", + "tools": { + // Disable file modification tools for review-only agent + "write": false, + "edit": false, + }, + }, + }, +} +``` + +You can also define agents using markdown files in `~/.config/opencode/agent/` or `.opencode/agent/`. + + + + + +### Sharing + +You can configure the [share](/share) feature through the `share` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "share": "manual" +} +``` + +This takes: + +- `"manual"` - Allow manual sharing via commands (default) +- `"auto"` - Automatically share new conversations +- `"disabled"` - Disable sharing entirely + +By default, sharing is set to manual mode where you need to explicitly share conversations using the `/share` command. + +### Commands + +You can configure custom commands for repetitive tasks through the `command` option. + +```jsonc opencode.jsonc +{ + "$schema": "https://opencode.ai/config.json", + "command": { + "test": { + "template": "Run the full test suite with coverage report and show any failures.\nFocus on the failing tests and suggest fixes.", + "description": "Run tests with coverage", + "agent": "build", + "model": "anthropic/claude-haiku-4-5", + }, + "component": { + "template": "Create a new React component named $ARGUMENTS with TypeScript support.\nInclude proper typing and basic structure.", + "description": "Create a new component", + }, + }, +} +``` + +You can also define commands using markdown files in `~/.config/opencode/command/` or `.opencode/command/`. + + + + + +### Keybinds + +You can customize your keybinds through the `keybinds` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "keybinds": {} +} +``` + + + + + +### Autoupdate + +OpenCode will automatically download any new updates when it starts up. You can disable this with the `autoupdate` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "autoupdate": false +} +``` + +If you don't want updates but want to be notified when a new version is available, set `autoupdate` to `"notify"`. + +### Formatters + +You can configure code formatters through the `formatter` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "formatter": { + "prettier": { + "disabled": true + }, + "custom-prettier": { + "command": ["npx", "prettier", "--write", "$FILE"], + "environment": { + "NODE_ENV": "development" + }, + "extensions": [".js", ".ts", ".jsx", ".tsx"] + } + } +} +``` + + + + + +### Permissions + +By default, opencode **allows all operations** without requiring explicit approval. You can change this using the `permission` option. + +For example, to ensure that the `edit` and `bash` tools require user approval: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "edit": "ask", + "bash": "ask" + } +} +``` + + + + + +### MCP servers + +You can configure MCP servers you want to use through the `mcp` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mcp": {} +} +``` + + + + + + +### Instructions + +You can configure the instructions for the model you're using through the `instructions` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "instructions": ["CONTRIBUTING.md", "docs/guidelines.md", ".cursor/rules/*.md"] +} +``` + +This takes an array of paths and glob patterns to instruction files. 
+ + + + + +### Disabled providers + +You can disable providers that are loaded automatically through the `disabled_providers` option. This is useful when you want to prevent certain providers from being loaded even if their credentials are available. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "disabled_providers": ["openai", "gemini"] +} +``` + +The `disabled_providers` option accepts an array of provider IDs. When a provider is disabled: + +- It won't be loaded even if environment variables are set. +- It won't be loaded even if API keys are configured through the `/connect` command. +- The provider's models won't appear in the model selection list. + + +### Enabled providers + +You can specify an allowlist of providers through the `enabled_providers` option. When set, only the specified providers will be enabled and all others will be ignored. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "enabled_providers": ["anthropic", "openai"] +} +``` + +This is useful when you want to restrict OpenCode to only use specific providers rather than disabling them one by one. + + +**NOTE** + +If a provider appears in both `enabled_providers` and `disabled_providers`, the `disabled_providers` takes priority for backwards compatibility. + + +## Variables + +You can use variable substitution in your config files to reference environment variables and file contents. + +### Env vars + +Use `{env:VARIABLE_NAME}` to substitute environment variables: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "model": "{env:OPENCODE_MODEL}", + "provider": { + "anthropic": { + "models": {}, + "options": { + "apiKey": "{env:ANTHROPIC_API_KEY}" + } + } + } +} +``` + +If the environment variable is not set, it will be replaced with an empty string. + +### Files + +Use `{file:path/to/file}` to substitute the contents of a file: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "instructions": ["./custom-instructions.md"], + "provider": { + "openai": { + "options": { + "apiKey": "{file:~/.secrets/openai-key}" + } + } + } +} +``` + +File paths can be: + +- Relative to the config file directory +- Or absolute paths starting with `/` or `~` + +These are useful for: + +- Keeping sensitive data like API keys in separate files. +- Including large instruction files without cluttering your config. +- Sharing common configuration snippets across multiple config files. diff --git a/packages/docs/custom-tools.mdx b/packages/docs/custom-tools.mdx new file mode 100644 index 00000000000..8cffbe56f08 --- /dev/null +++ b/packages/docs/custom-tools.mdx @@ -0,0 +1,152 @@ +--- +title: Custom Tools +description: Create tools the LLM can call in opencode. +--- + +Custom tools are functions you create that the LLM can call during conversations. They work alongside opencode's [built-in tools](/tools) like `read`, `write`, and `bash`. + + +## Creating a tool + +Tools are defined as **TypeScript** or **JavaScript** files. However, the tool definition can invoke scripts written in **any language** — TypeScript or JavaScript is only used for the tool definition itself. + +### Location + +They can be defined: + +- Locally by placing them in the `.opencode/tool/` directory of your project. +- Or globally, by placing them in `~/.config/opencode/tool/`. + +### Structure + +The easiest way to create tools is using the `tool()` helper which provides type-safety and validation. 
+ +```ts .opencode/tool/database.ts highlight={1} +import { tool } from "@opencode-ai/plugin" + +export default tool({ + description: "Query the project database", + args: { + query: tool.schema.string().describe("SQL query to execute"), + }, + async execute(args) { + // Your database logic here + return `Executed query: ${args.query}` + }, +}) +``` + +The **filename** becomes the **tool name**. The above creates a `database` tool. + +#### Multiple tools per file + +You can also export multiple tools from a single file. Each export becomes **a separate tool** with the name **`_`**: + +```ts .opencode/tool/math.ts expandable +import { tool } from "@opencode-ai/plugin" + +export const add = tool({ + description: "Add two numbers", + args: { + a: tool.schema.number().describe("First number"), + b: tool.schema.number().describe("Second number"), + }, + async execute(args) { + return args.a + args.b + }, +}) + +export const multiply = tool({ + description: "Multiply two numbers", + args: { + a: tool.schema.number().describe("First number"), + b: tool.schema.number().describe("Second number"), + }, + async execute(args) { + return args.a * args.b + }, +}) +``` + +This creates two tools: `math_add` and `math_multiply`. + +### Arguments + +You can use `tool.schema`, which is just [Zod](https://zod.dev), to define argument types. + +```ts tool.schema +args: { + query: tool.schema.string().describe("SQL query to execute") +} +``` + +You can also import [Zod](https://zod.dev) directly and return a plain object: + +```ts highlight={6} +import { z } from "zod" + +export default { + description: "Tool description", + args: { + param: z.string().describe("Parameter description"), + }, + async execute(args, context) { + // Tool implementation + return "result" + }, +} +``` + +### Context + +Tools receive context about the current session: + +```ts .opencode/tool/project.ts highlight={8} +import { tool } from "@opencode-ai/plugin" + +export default tool({ + description: "Get project information", + args: {}, + async execute(args, context) { + // Access context information + const { agent, sessionID, messageID } = context + return `Agent: ${agent}, Session: ${sessionID}, Message: ${messageID}` + }, +}) +``` + +## Examples + +### Write a tool in Python + +You can write your tools in any language you want. Here's an example that adds two numbers using Python. + +First, create the tool as a Python script: + +```python .opencode/tool/add.py +import sys + +a = int(sys.argv[1]) +b = int(sys.argv[2]) +print(a + b) +``` + +Then create the tool definition that invokes it: + +```ts .opencode/tool/python-add.ts highlight={10} +import { tool } from "@opencode-ai/plugin" + +export default tool({ + description: "Add two numbers using Python", + args: { + a: tool.schema.number().describe("First number"), + b: tool.schema.number().describe("Second number"), + }, + async execute(args) { + const result = await Bun.$`python3 .opencode/tool/add.py ${args.a} ${args.b}`.text() + return result.trim() + }, +}) +``` + +Here we are using the [`Bun.$`](https://bun.com/docs/runtime/shell) utility to run the Python script. 
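
If the Python script can fail, you may want to surface that failure to the LLM instead of letting the tool throw. Here's a minimal sketch of the same tool that uses Bun Shell's `nothrow()` to check the exit code. The `python-add-safe.ts` filename and the error message format are just placeholders, adapt them to your tool:

```ts .opencode/tool/python-add-safe.ts
import { tool } from "@opencode-ai/plugin"

export default tool({
  description: "Add two numbers using Python, reporting script errors",
  args: {
    a: tool.schema.number().describe("First number"),
    b: tool.schema.number().describe("Second number"),
  },
  async execute(args) {
    // nothrow() keeps Bun.$ from throwing on a non-zero exit code
    const result = await Bun.$`python3 .opencode/tool/add.py ${args.a} ${args.b}`.nothrow().quiet()

    if (result.exitCode !== 0) {
      // Return the error output so the LLM can see what went wrong
      return `add.py failed (exit ${result.exitCode}): ${result.stderr.toString().trim()}`
    }

    return result.stdout.toString().trim()
  },
})
```

Returning the error as text lets the model read the failure and decide how to recover, rather than having the tool call simply error out.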
diff --git a/packages/docs/development.mdx b/packages/docs/development.mdx deleted file mode 100644 index 432ef80e4f6..00000000000 --- a/packages/docs/development.mdx +++ /dev/null @@ -1,96 +0,0 @@ ---- -title: "Development" -description: "Preview changes locally to update your docs" ---- - -**Prerequisites**: - Node.js version 19 or higher - A docs repository with a `docs.json` file - -Follow these steps to install and run Mintlify on your operating system. - - - - -```bash -npm i -g mint -``` - - - - - -Navigate to your docs directory where your `docs.json` file is located, and run the following command: - -```bash -mint dev -``` - -A local preview of your documentation will be available at `http://localhost:3000`. - - - - -## Custom ports - -By default, Mintlify uses port 3000. You can customize the port Mintlify runs on by using the `--port` flag. For example, to run Mintlify on port 3333, use this command: - -```bash -mint dev --port 3333 -``` - -If you attempt to run Mintlify on a port that's already in use, it will use the next available port: - -```md -Port 3000 is already in use. Trying 3001 instead. -``` - -## Mintlify versions - -Please note that each CLI release is associated with a specific version of Mintlify. If your local preview does not align with the production version, please update the CLI: - -```bash -npm mint update -``` - -## Validating links - -The CLI can assist with validating links in your documentation. To identify any broken links, use the following command: - -```bash -mint broken-links -``` - -## Deployment - -If the deployment is successful, you should see the following: - - - Screenshot of a deployment confirmation message that says All checks have passed. - - -## Code formatting - -We suggest using extensions on your IDE to recognize and format MDX. If you're a VSCode user, consider the [MDX VSCode extension](https://marketplace.visualstudio.com/items?itemName=unifiedjs.vscode-mdx) for syntax highlighting, and [Prettier](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode) for code formatting. - -## Troubleshooting - - - - - This may be due to an outdated version of node. Try the following: - 1. Remove the currently-installed version of the CLI: `npm remove -g mint` - 2. Upgrade to Node v19 or higher. - 3. Reinstall the CLI: `npm i -g mint` - - - - - - Solution: Go to the root of your device and delete the `~/.mintlify` folder. Then run `mint dev` again. - - - -Curious about what changed in the latest CLI version? Check out the [CLI changelog](https://www.npmjs.com/package/mintlify?activeTab=versions). 
diff --git a/packages/docs/docs.json b/packages/docs/docs.json index 4461f8253b7..01271e188e8 100644 --- a/packages/docs/docs.json +++ b/packages/docs/docs.json @@ -1,53 +1,124 @@ -{ - "$schema": "https://mintlify.com/docs.json", - "theme": "mint", - "name": "@opencode-ai/docs", - "colors": { - "primary": "#16A34A", - "light": "#07C983", - "dark": "#15803D" - }, - "favicon": "/favicon.svg", - "navigation": { - "tabs": [ - { - "tab": "SDK", - "groups": [ + { + "$schema": "https://mintlify.com/docs.json", + "theme": "linden", + "name": "Opencode docs", + "colors": { + "primary": "#4B4646", + "light": "#F1ECEC", + "dark": "#F1ECEC" + }, + "favicon": "/favicon.svg", + "appearance": { + "default": "dark" + }, + "styling": { + "codeblocks": { + "theme": "catppuccin-mocha" + } + }, + "background": { + "color": { + "dark": "#12110f" + } + }, + "fonts": { + "heading": { + "family": "IBM Plex Mono" + }, + "body": { + "family": "IBM Plex Mono" + } + }, + "navigation": { + "pages": [ + "index", + "config", + "providers", + "network", + "enterprise", + "troubleshooting", + "1-0", + { + "group": "Usage", + "pages": [ + "tui", + "cli", + "ide", + "zen", + "share", + "github", + "gitlab" + ] + }, + { + "group": "Configure", + "pages": [ + "tools", + "rules", + "agents", + "models", + "themes", + "keybinds", + "commands", + "formatters", + "permissions", + "lsp", + "mcp-servers", + "acp", + "custom-tools" + ] + }, + { + "group": "Develop", + "pages": [ + "sdk", + "server", + "plugins", + "ecosystem" + ] + } + ], + "global": { + "anchors": [ { - "group": "Getting started", - "pages": ["index", "quickstart", "development"], - "openapi": "https://opencode.ai/openapi.json" + "anchor": "Community", + "href": "https://opencode.ai/discord", + "icon": "discord" } ] } - ], - "global": {} - }, - "logo": { - "light": "/logo/light.svg", - "dark": "/logo/dark.svg" - }, - "navbar": { - "links": [ - { - "label": "Support", - "href": "mailto:hi@mintlify.com" + }, + "logo": { + "light": "/logo/light.svg", + "dark": "/logo/dark.svg", + "href": "https://opencode.ai/" + }, + "navbar": { + "primary": { + "type": "github", + "href": "https://github.com/sst/opencode" + } + }, + "metadata": { + "timestamp": true + }, + "contextual": { + "options": [ + "copy", + "view", + "chatgpt", + "claude", + "perplexity", + "mcp", + "cursor", + "vscode" + ] + }, + "footer": { + "socials": { + "x": "https://x.com/opencode", + "github": "https://github.com/sst/opencode", + "discord": "https://opencode.ai/discord" } - ], - "primary": { - "type": "button", - "label": "Dashboard", - "href": "https://dashboard.mintlify.com" - } - }, - "contextual": { - "options": ["copy", "view", "chatgpt", "claude", "perplexity", "mcp", "cursor", "vscode"] - }, - "footer": { - "socials": { - "x": "https://x.com/mintlify", - "github": "https://github.com/mintlify", - "linkedin": "https://linkedin.com/company/mintlify" } } -} diff --git a/packages/docs/ecosystem.mdx b/packages/docs/ecosystem.mdx new file mode 100644 index 00000000000..1166a1ff658 --- /dev/null +++ b/packages/docs/ecosystem.mdx @@ -0,0 +1,51 @@ +--- +title: Ecosystem +description: Projects and integrations built with OpenCode. +--- + +A collection of community projects built on OpenCode. + + +**NOTE** + +Want to add your OpenCode related project to this list? Submit a PR. + + +You can also check out [awesome-opencode](https://github.com/awesome-opencode/awesome-opencode) and [opencode.cafe](https://opencode.cafe), a community that aggregates the ecosystem and community. 
+ + +## Plugins + +| Name | Description | +| :------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------- | +| [opencode-helicone-session](https://github.com/H2Shami/opencode-helicone-session) | Automatically inject Helicone session headers for request grouping | +| [opencode-skills](https://github.com/malhashemi/opencode-skills) | Manage and organize OpenCode skills and capabilities | +| [opencode-type-inject](https://github.com/nick-vi/opencode-type-inject) | Auto-inject TypeScript/Svelte types into file reads with lookup tools | +| [opencode-openai-codex-auth](https://github.com/numman-ali/opencode-openai-codex-auth) | Use your ChatGPT Plus/Pro subscription instead of API credits | +| [opencode-gemini-auth](https://github.com/jenslys/opencode-gemini-auth) | Use your existing Gemini plan instead of API billing | +| [opencode-antigravity-auth](https://github.com/NoeFabris/opencode-antigravity-auth) | Use Antigravity's free models instead of API billing | +| [opencode-google-antigravity-auth](https://github.com/shekohex/opencode-google-antigravity-auth) | Google Antigravity OAuth Plugin, with support for Google Search, and more robust API handling | +| [opencode-dynamic-context-pruning](https://github.com/Tarquinen/opencode-dynamic-context-pruning) | Optimize token usage by pruning obsolete tool outputs | +| [opencode-websearch-cited](https://github.com/ghoulr/opencode-websearch-cited.git) | Add native websearch support for supported providers with Google grounded style | +| [opencode-pty](https://github.com/shekohex/opencode-pty.git) | Enables AI agents to run background processes in a PTY, send interactive input to them. 
| +| [opencode-wakatime](https://github.com/angristan/opencode-wakatime) | Track OpenCode usage with Wakatime | +| [opencode-md-table-formatter](https://github.com/franlol/opencode-md-table-formatter/tree/main) | Clean up markdown tables produced by LLMs | +| [oh-my-opencode](https://github.com/code-yeongyu/oh-my-opencode) | Background agents, pre-built LSP/AST/MCP tools, curated agents, Claude Code compatible | +| [opencode-zellij-namer](https://github.com/24601/opencode-zellij-namer) | AI-powered automatic Zellij session naming based on OpenCode context | + +## Projects + +| Name | Description | +| :--------------------------------------------------------------------------------- | :-------------------------------------------------------------- | +| [kimaki](https://github.com/remorses/kimaki) | Discord bot to control OpenCode sessions, built on the SDK | +| [opencode.nvim](https://github.com/NickvanDyke/opencode.nvim) | Neovim plugin for editor-aware prompts, built on the API | +| [portal](https://github.com/hosenur/portal) | Mobile-first web UI for OpenCode over Tailscale/VPN | +| [opencode plugin template](https://github.com/zenobi-us/opencode-plugin-template/) | Template for building OpenCode plugins | +| [opencode.nvim](https://github.com/sudo-tee/opencode.nvim) | Neovim frontend for opencode - a terminal-based AI coding agent | + +## Agents + +| Name | Description | +| :---------------------------------------------------------------- | :----------------------------------------------------------- | +| [Agentic](https://github.com/Cluster444/agentic) | Modular AI agents and commands for structured development | +| [opencode-agents](https://github.com/darrenhinde/opencode-agents) | Configs, prompts, agents, and plugins for enhanced workflows | diff --git a/packages/docs/enterprise.mdx b/packages/docs/enterprise.mdx new file mode 100644 index 00000000000..4fa4e4cc046 --- /dev/null +++ b/packages/docs/enterprise.mdx @@ -0,0 +1,144 @@ +--- +title: Enterprise +description: Using OpenCode securely in your organization. +--- + +OpenCode Enterprise is for organizations that want to ensure that their code and data never leaves their infrastructure. It can do this by using a centralized config that integrates with your SSO and internal AI gateway. + + +OpenCode does not store any of your code or context data. + + +To get started with OpenCode Enterprise: + +1. Do a trial internally with your team. +2. [Contact us](mailto:contact@anoma.ly) to discuss pricing and implementation options. + + +## Trial + +OpenCode is open source and does not store any of your code or context data, so your developers can simply [get started](/index) and carry out a trial. + +### Data handling + +**OpenCode does not store your code or context data.** All processing happens locally or through direct API calls to your AI provider. + +This means that as long as you are using a provider you trust, or an internal +AI gateway, you can use OpenCode securely. + +The only caveat here is the optional `/share` feature. + +#### Sharing conversations + +If a user enables the `/share` feature, the conversation and the data associated with it are sent to the service we use to host these share pages at opencode.ai. + +The data is currently served through our CDN's edge network, and is cached on the edge near your users. + +We recommend you disable this for your trial. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "share": "disabled" +} +``` + +[Learn more about sharing](/share). 
+ +### Code ownership + +**You own all code produced by OpenCode.** There are no licensing restrictions or ownership claims. + +## Pricing + +We use a per-seat model for OpenCode Enterprise. If you have your own LLM gateway, we do not charge for tokens used. For further details about pricing and implementation options, [contact us](mailto:contact@anoma.ly). + +## Deployment + +Once you have completed your trial and you are ready to use OpenCode at +your organization, you can [contact us](mailto:contact@anoma.ly) to discuss +pricing and implementation options. + +### Central Config + +We can set up OpenCode to use a single central config for your entire organization. + +This centralized config can integrate with your SSO provider and ensures all users access only your internal AI gateway. + +### SSO integration + +Through the central config, OpenCode can integrate with your organization's SSO provider for authentication. + +This allows OpenCode to obtain credentials for your internal AI gateway through your existing identity management system. + +### Internal AI gateway + +With the central config, OpenCode can also be configured to use only your internal AI gateway. + +You can also disable all other AI providers, ensuring all requests go through your organization's approved infrastructure. + +### Self-hosting + +While we recommend disabling the share pages to ensure your data never leaves +your organization, we can also help you self-host them on your infrastructure. + +This is currently on our roadmap. If you're interested, [let us know](mailto:contact@anoma.ly). + +## FAQ + + + + +OpenCode Enterprise is for organizations that want to ensure that their code and data never leaves their infrastructure. It can do this by using a centralized config that integrates with your SSO and internal AI gateway. + + + + +Simply start with an internal trial with your team. OpenCode by default does not store your code or context data, making it easy to get started. + +Then [contact us](mailto:contact@anoma.ly) to discuss pricing and implementation options. + + + + + +We offer per-seat enterprise pricing. If you have your own LLM gateway, we do not charge for tokens used. For further details, [contact us](mailto:contact@anoma.ly) for a custom quote based on your organization's needs. + + + + + +Yes. OpenCode does not store your code or context data. All processing happens locally or through direct API calls to your AI provider. With central config and SSO integration, your data remains secure within your organization's infrastructure. + + + + + +OpenCode supports private npm registries through Bun's native `.npmrc` file support. If your organization uses a private registry, such as JFrog Artifactory, Nexus, or similar, ensure developers are authenticated before running OpenCode. + +To set up authentication with your private registry: + +```bash +npm login --registry=https://your-company.jfrog.io/api/npm/npm-virtual/ +``` + +This creates `~/.npmrc` with authentication details. OpenCode will automatically +pick this up. + + +**CAUTION** + +You must be logged into the private registry before running OpenCode. + + +Alternatively, you can manually configure a `.npmrc` file: + +```bash ~/.npmrc +registry=https://your-company.jfrog.io/api/npm/npm-virtual/ +//your-company.jfrog.io/api/npm/npm-virtual/:_authToken=${NPM_AUTH_TOKEN} +``` + +Developers must be logged into the private registry before running OpenCode to ensure packages can be installed from your enterprise registry. 
+ + + diff --git a/packages/docs/essentials/code.mdx b/packages/docs/essentials/code.mdx deleted file mode 100644 index 7a046544704..00000000000 --- a/packages/docs/essentials/code.mdx +++ /dev/null @@ -1,35 +0,0 @@ ---- -title: "Code blocks" -description: "Display inline code and code blocks" -icon: "code" ---- - -## Inline code - -To denote a `word` or `phrase` as code, enclose it in backticks (`). - -``` -To denote a `word` or `phrase` as code, enclose it in backticks (`). -``` - -## Code blocks - -Use [fenced code blocks](https://www.markdownguide.org/extended-syntax/#fenced-code-blocks) by enclosing code in three backticks and follow the leading ticks with the programming language of your snippet to get syntax highlighting. Optionally, you can also write the name of your code after the programming language. - -```java HelloWorld.java -class HelloWorld { - public static void main(String[] args) { - System.out.println("Hello, World!"); - } -} -``` - -````md -```java HelloWorld.java -class HelloWorld { - public static void main(String[] args) { - System.out.println("Hello, World!"); - } -} -``` -```` diff --git a/packages/docs/essentials/images.mdx b/packages/docs/essentials/images.mdx deleted file mode 100644 index f2a10d2538e..00000000000 --- a/packages/docs/essentials/images.mdx +++ /dev/null @@ -1,56 +0,0 @@ ---- -title: "Images and embeds" -description: "Add image, video, and other HTML elements" -icon: "image" ---- - - - -## Image - -### Using Markdown - -The [markdown syntax](https://www.markdownguide.org/basic-syntax/#images) lets you add images using the following code - -```md -![title](/path/image.jpg) -``` - -Note that the image file size must be less than 5MB. Otherwise, we recommend hosting on a service like [Cloudinary](https://cloudinary.com/) or [S3](https://aws.amazon.com/s3/). You can then use that URL and embed. - -### Using embeds - -To get more customizability with images, you can also use [embeds](/writing-content/embed) to add images - -```html - -``` - -## Embeds and HTML elements - - - -
- - - -Mintlify supports [HTML tags in Markdown](https://www.markdownguide.org/basic-syntax/#html). This is helpful if you prefer HTML tags to Markdown syntax, and lets you create documentation with infinite flexibility. - - - -### iFrames - -Loads another HTML page within the document. Most commonly used for embedding videos. - -```html - -``` diff --git a/packages/docs/essentials/markdown.mdx b/packages/docs/essentials/markdown.mdx deleted file mode 100644 index 0ca5b825097..00000000000 --- a/packages/docs/essentials/markdown.mdx +++ /dev/null @@ -1,88 +0,0 @@ ---- -title: "Markdown syntax" -description: "Text, title, and styling in standard markdown" -icon: "text-size" ---- - -## Titles - -Best used for section headers. - -```md -## Titles -``` - -### Subtitles - -Best used for subsection headers. - -```md -### Subtitles -``` - - - -Each **title** and **subtitle** creates an anchor and also shows up on the table of contents on the right. - - - -## Text formatting - -We support most markdown formatting. Simply add `**`, `_`, or `~` around text to format it. - -| Style | How to write it | Result | -| ------------- | ----------------- | --------------- | -| Bold | `**bold**` | **bold** | -| Italic | `_italic_` | _italic_ | -| Strikethrough | `~strikethrough~` | ~strikethrough~ | - -You can combine these. For example, write `**_bold and italic_**` to get **_bold and italic_** text. - -You need to use HTML to write superscript and subscript text. That is, add `` or `` around your text. - -| Text Size | How to write it | Result | -| ----------- | ------------------------ | ---------------------- | -| Superscript | `superscript` | superscript | -| Subscript | `subscript` | subscript | - -## Linking to pages - -You can add a link by wrapping text in `[]()`. You would write `[link to google](https://google.com)` to [link to google](https://google.com). - -Links to pages in your docs need to be root-relative. Basically, you should include the entire folder path. For example, `[link to text](/writing-content/text)` links to the page "Text" in our components section. - -Relative links like `[link to text](../text)` will open slower because we cannot optimize them as easily. - -## Blockquotes - -### Singleline - -To create a blockquote, add a `>` in front of a paragraph. - -> Dorothy followed her through many of the beautiful rooms in her castle. - -```md -> Dorothy followed her through many of the beautiful rooms in her castle. -``` - -### Multiline - -> Dorothy followed her through many of the beautiful rooms in her castle. -> -> The Witch bade her clean the pots and kettles and sweep the floor and keep the fire fed with wood. - -```md -> Dorothy followed her through many of the beautiful rooms in her castle. -> -> The Witch bade her clean the pots and kettles and sweep the floor and keep the fire fed with wood. -``` - -### LaTeX - -Mintlify supports [LaTeX](https://www.latex-project.org) through the Latex component. - -8 x (vk x H1 - H2) = (0,1) - -```md -8 x (vk x H1 - H2) = (0,1) -``` diff --git a/packages/docs/essentials/navigation.mdx b/packages/docs/essentials/navigation.mdx deleted file mode 100644 index a6a3090046d..00000000000 --- a/packages/docs/essentials/navigation.mdx +++ /dev/null @@ -1,87 +0,0 @@ ---- -title: "Navigation" -description: "The navigation field in docs.json defines the pages that go in the navigation menu" -icon: "map" ---- - -The navigation menu is the list of links on every website. - -You will likely update `docs.json` every time you add a new page. 
Pages do not show up automatically. - -## Navigation syntax - -Our navigation syntax is recursive which means you can make nested navigation groups. You don't need to include `.mdx` in page names. - - - -```json Regular Navigation -"navigation": { - "tabs": [ - { - "tab": "Docs", - "groups": [ - { - "group": "Getting Started", - "pages": ["quickstart"] - } - ] - } - ] -} -``` - -```json Nested Navigation -"navigation": { - "tabs": [ - { - "tab": "Docs", - "groups": [ - { - "group": "Getting Started", - "pages": [ - "quickstart", - { - "group": "Nested Reference Pages", - "pages": ["nested-reference-page"] - } - ] - } - ] - } - ] -} -``` - - - -## Folders - -Simply put your MDX files in folders and update the paths in `docs.json`. - -For example, to have a page at `https://yoursite.com/your-folder/your-page` you would make a folder called `your-folder` containing an MDX file called `your-page.mdx`. - - - -You cannot use `api` for the name of a folder unless you nest it inside another folder. Mintlify uses Next.js which reserves the top-level `api` folder for internal server calls. A folder name such as `api-reference` would be accepted. - - - -```json Navigation With Folder -"navigation": { - "tabs": [ - { - "tab": "Docs", - "groups": [ - { - "group": "Group Name", - "pages": ["your-folder/your-page"] - } - ] - } - ] -} -``` - -## Hidden pages - -MDX files not included in `docs.json` will not show up in the sidebar but are accessible through the search bar and by linking directly to them. diff --git a/packages/docs/essentials/reusable-snippets.mdx b/packages/docs/essentials/reusable-snippets.mdx deleted file mode 100644 index a26ab89a351..00000000000 --- a/packages/docs/essentials/reusable-snippets.mdx +++ /dev/null @@ -1,112 +0,0 @@ ---- -title: "Reusable snippets" -description: "Reusable, custom snippets to keep content in sync" -icon: "recycle" ---- - -import SnippetIntro from "/snippets/snippet-intro.mdx" - - - -## Creating a custom snippet - -**Pre-condition**: You must create your snippet file in the `snippets` directory. - - - Any page in the `snippets` directory will be treated as a snippet and will not be rendered into a standalone page. If - you want to create a standalone page from the snippet, import the snippet into another file and call it as a - component. - - -### Default export - -1. Add content to your snippet file that you want to re-use across multiple - locations. Optionally, you can add variables that can be filled in via props - when you import the snippet. - -```mdx snippets/my-snippet.mdx -Hello world! This is my content I want to reuse across pages. My keyword of the -day is {word}. -``` - - - The content that you want to reuse must be inside the `snippets` directory in order for the import to work. - - -2. Import the snippet into your destination file. - -```mdx destination-file.mdx ---- -title: My title -description: My Description ---- - -import MySnippet from "/snippets/path/to/my-snippet.mdx" - -## Header - -Lorem impsum dolor sit amet. - - -``` - -### Reusable variables - -1. Export a variable from your snippet file: - -```mdx snippets/path/to/custom-variables.mdx -export const myName = "my name" - -export const myObject = { fruit: "strawberries" } - -; -``` - -2. Import the snippet from your destination file and use the variable: - -```mdx destination-file.mdx ---- -title: My title -description: My Description ---- - -import { myName, myObject } from "/snippets/path/to/custom-variables.mdx" - -Hello, my name is {myName} and I like {myObject.fruit}. 
-``` - -### Reusable components - -1. Inside your snippet file, create a component that takes in props by exporting - your component in the form of an arrow function. - -```mdx snippets/custom-component.mdx -export const MyComponent = ({ title }) => ( -
-

{title}

-

... snippet content ...

-
-) - -; -``` - - - MDX does not compile inside the body of an arrow function. Stick to HTML syntax when you can or use a default export - if you need to use MDX. - - -2. Import the snippet into your destination file and pass in the props - -```mdx destination-file.mdx ---- -title: My title -description: My Description ---- - -import { MyComponent } from "/snippets/custom-component.mdx" - -Lorem ipsum dolor sit amet. - - -``` diff --git a/packages/docs/essentials/settings.mdx b/packages/docs/essentials/settings.mdx deleted file mode 100644 index 7aa44ce1ec6..00000000000 --- a/packages/docs/essentials/settings.mdx +++ /dev/null @@ -1,317 +0,0 @@ ---- -title: "Global Settings" -description: "Mintlify gives you complete control over the look and feel of your documentation using the docs.json file" -icon: "gear" ---- - -Every Mintlify site needs a `docs.json` file with the core configuration settings. Learn more about the [properties](#properties) below. - -## Properties - - -Name of your project. Used for the global title. - -Example: `mintlify` - - - - - An array of groups with all the pages within that group - - - The name of the group. - - Example: `Settings` - - - - The relative paths to the markdown files that will serve as pages. - - Example: `["customization", "page"]` - - - - - - - - Path to logo image or object with path to "light" and "dark" mode logo images - - - Path to the logo in light mode - - - Path to the logo in dark mode - - - Where clicking on the logo links you to - - - - - - Path to the favicon image - - - - Hex color codes for your global theme - - - The primary color. Used for most often for highlighted content, section headers, accents, in light mode - - - The primary color for dark mode. Used for most often for highlighted content, section headers, accents, in dark - mode - - - The primary color for important buttons - - - The color of the background in both light and dark mode - - - The hex color code of the background in light mode - - - The hex color code of the background in dark mode - - - - - - - - Array of `name`s and `url`s of links you want to include in the topbar - - - The name of the button. - - Example: `Contact us` - - - The url once you click on the button. Example: `https://mintlify.com/docs` - - - - - - - - - Link shows a button. GitHub shows the repo information at the url provided including the number of GitHub stars. - - - If `link`: What the button links to. - - If `github`: Link to the repository to load GitHub information from. - - - Text inside the button. Only required if `type` is a `link`. - - - - - - - Array of version names. Only use this if you want to show different versions of docs with a dropdown in the navigation - bar. - - - - An array of the anchors, includes the `icon`, `color`, and `url`. - - - The [Font Awesome](https://fontawesome.com/search?q=heart) icon used to feature the anchor. - - Example: `comments` - - - The name of the anchor label. - - Example: `Community` - - - The start of the URL that marks what pages go in the anchor. Generally, this is the name of the folder you put your pages in. - - - The hex color of the anchor icon background. Can also be a gradient if you pass an object with the properties `from` and `to` that are each a hex color. - - - Used if you want to hide an anchor until the correct docs version is selected. - - - Pass `true` if you want to hide the anchor until you directly link someone to docs inside it. 
- - - One of: "brands", "duotone", "light", "sharp-solid", "solid", or "thin" - - - - - - - Override the default configurations for the top-most anchor. - - - The name of the top-most anchor - - - Font Awesome icon. - - - One of: "brands", "duotone", "light", "sharp-solid", "solid", or "thin" - - - - - - An array of navigational tabs. - - - The name of the tab label. - - - The start of the URL that marks what pages go in the tab. Generally, this is the name of the folder you put your - pages in. - - - - - - Configuration for API settings. Learn more about API pages at [API Components](/api-playground/demo). - - - The base url for all API endpoints. If `baseUrl` is an array, it will enable for multiple base url - options that the user can toggle. - - - - - - The authentication strategy used for all API endpoints. - - - The name of the authentication parameter used in the API playground. - - If method is `basic`, the format should be `[usernameName]:[passwordName]` - - - The default value that's designed to be a prefix for the authentication input field. - - E.g. If an `inputPrefix` of `AuthKey` would inherit the default input result of the authentication field as `AuthKey`. - - - - - - Configurations for the API playground - - - - Whether the playground is showing, hidden, or only displaying the endpoint with no added user interactivity `simple` - - Learn more at the [playground guides](/api-playground/demo) - - - - - - Enabling this flag ensures that key ordering in OpenAPI pages matches the key ordering defined in the OpenAPI file. - - This behavior will soon be enabled by default, at which point this field will be deprecated. - - - - - - - A string or an array of strings of URL(s) or relative path(s) pointing to your - OpenAPI file. - - Examples: - - ```json Absolute - "openapi": "https://example.com/openapi.json" - ``` - ```json Relative - "openapi": "/openapi.json" - ``` - ```json Multiple - "openapi": ["https://example.com/openapi1.json", "/openapi2.json", "/openapi3.json"] - ``` - - - - - - An object of social media accounts where the key:property pair represents the social media platform and the account url. - - Example: - ```json - { - "x": "https://x.com/mintlify", - "website": "https://mintlify.com" - } - ``` - - - One of the following values `website`, `facebook`, `x`, `discord`, `slack`, `github`, `linkedin`, `instagram`, `hacker-news` - - Example: `x` - - - The URL to the social platform. - - Example: `https://x.com/mintlify` - - - - - - Configurations to enable feedback buttons - - - - Enables a button to allow users to suggest edits via pull requests - - - Enables a button to allow users to raise an issue about the documentation - - - - - - Customize the dark mode toggle. - - - Set if you always want to show light or dark mode for new users. When not - set, we default to the same mode as the user's operating system. - - - Set to true to hide the dark/light mode toggle. You can combine `isHidden` with `default` to force your docs to only use light or dark mode. For example: - - - ```json Only Dark Mode - "modeToggle": { - "default": "dark", - "isHidden": true - } - ``` - - ```json Only Light Mode - "modeToggle": { - "default": "light", - "isHidden": true - } - ``` - - - - - - - - - A background image to be displayed behind every page. See example with [Infisical](https://infisical.com/docs) and - [FRPC](https://frpc.io). 
- diff --git a/packages/docs/favicon.svg b/packages/docs/favicon.svg index b785c738bf1..e8f83a18d10 100644 --- a/packages/docs/favicon.svg +++ b/packages/docs/favicon.svg @@ -1,19 +1 @@ - - - - - - - - - - - - - - - - - - - + \ No newline at end of file diff --git a/packages/docs/formatters.mdx b/packages/docs/formatters.mdx new file mode 100644 index 00000000000..ee2df079be6 --- /dev/null +++ b/packages/docs/formatters.mdx @@ -0,0 +1,115 @@ +--- +title: Formatters +description: OpenCode uses language specific formatters. +--- + +OpenCode automatically formats files after they are written or edited using language-specific formatters. This ensures that the code that is generated follows the code styles of your project. + + +## Built-in + +OpenCode comes with several built-in formatters for popular languages and frameworks. Below is a list of the formatters, supported file extensions, and commands or config options it needs. + +| Formatter | Extensions | Requirements | +| :--------------------- | :------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------- | +| gofmt | .go | `gofmt` command available | +| mix | .ex, .exs, .eex, .heex, .leex, .neex, .sface | `mix` command available | +| prettier | .js, .jsx, .ts, .tsx, .html, .css, .md, .json, .yaml, and [more](https://prettier.io/docs/en/index.html) | `prettier` dependency in `package.json` | +| biome | .js, .jsx, .ts, .tsx, .html, .css, .md, .json, .yaml, and [more](https://biomejs.dev/) | `biome.json(c)` config file | +| zig | .zig, .zon | `zig` command available | +| clang-format | .c, .cpp, .h, .hpp, .ino, and [more](https://clang.llvm.org/docs/ClangFormat.html) | `.clang-format` config file | +| ktlint | .kt, .kts | `ktlint` command available | +| ruff | .py, .pyi | `ruff` command available with config | +| uv | .py, .pyi | `uv` command available | +| rubocop | .rb, .rake, .gemspec, .ru | `rubocop` command available | +| standardrb | .rb, .rake, .gemspec, .ru | `standardrb` command available | +| htmlbeautifier | .erb, .html.erb | `htmlbeautifier` command available | +| air | .R | `air` command available | +| dart | .dart | `dart` command available | +| ocamlformat | .ml, .mli | `ocamlformat` command available and `.ocamlformat` config file | +| terraform | .tf, .tfvars | `terraform` command available | +| gleam | .gleam | `gleam` command available | +| oxfmt (Experimental) | .js, .jsx, .ts, .tsx | `oxfmt` dependency in `package.json` and an [experiental env variable flag](/cli#experimental) | + +So if your project has `prettier` in your `package.json`, OpenCode will automatically use it. + +## How it works + +When OpenCode writes or edits a file, it: + +1. Checks the file extension against all enabled formatters. +2. Runs the appropriate formatter command on the file. +3. Applies the formatting changes automatically. + +This process happens in the background, ensuring your code styles are maintained without any manual steps. + +## Configure + +You can customize formatters through the `formatter` section in your OpenCode config. 
+ +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "formatter": {} +} +``` + +Each formatter configuration supports the following: + +| Property | Type | Description | +| :------------- | :-------- | :------------------------------------------------------ | +| `disabled` | boolean | Set this to `true` to disable the formatter | +| `command` | string[] | The command to run for formatting | +| `environment` | object | Environment variables to set when running the formatter | +| `extensions` | string[] | File extensions this formatter should handle | + +Let's look at some examples. + +### Disabling formatters + +To disable **all** formatters globally, set `formatter` to `false`: + +```json opencode.json highlight={3} +{ + "$schema": "https://opencode.ai/config.json", + "formatter": false +} +``` + +To disable a **specific** formatter, set `disabled` to `true`: + +```json opencode.json highlight={5} +{ + "$schema": "https://opencode.ai/config.json", + "formatter": { + "prettier": { + "disabled": true + } + } +} +``` + +### Custom formatters + +You can override the built-in formatters or add new ones by specifying the command, environment variables, and file extensions: + +```json opencode.json highlight={4-14} +{ + "$schema": "https://opencode.ai/config.json", + "formatter": { + "prettier": { + "command": ["npx", "prettier", "--write", "$FILE"], + "environment": { + "NODE_ENV": "development" + }, + "extensions": [".js", ".ts", ".jsx", ".tsx"] + }, + "custom-markdown-formatter": { + "command": ["deno", "fmt", "$FILE"], + "extensions": [".md"] + } + } +} +``` + +The **`$FILE` placeholder** in the command will be replaced with the path to the file being formatted. diff --git a/packages/docs/github.mdx b/packages/docs/github.mdx new file mode 100644 index 00000000000..d7a78f31ed5 --- /dev/null +++ b/packages/docs/github.mdx @@ -0,0 +1,162 @@ +--- +title: GitHub +description: Use OpenCode in GitHub issues and pull-requests. +--- + +OpenCode integrates with your GitHub workflow. Mention `/opencode` or `/oc` in your comment, and OpenCode will execute tasks within your GitHub Actions runner. + + +## Features + +- **Triage issues**: Ask OpenCode to look into an issue and explain it to you. +- **Fix and implement**: Ask OpenCode to fix an issue or implement a feature. And it will work in a new branch and submits a PR with all the changes. +- **Secure**: OpenCode runs inside your GitHub's runners. + + +## Installation + +Run the following command in a project that is in a GitHub repo: + +```bash +opencode github install +``` + +This will walk you through installing the GitHub app, creating the workflow, and setting up secrets. + +### Manual Setup + +Or you can set it up manually. + + + + +Head over to [**github.com/apps/opencode-agent**](https://github.com/apps/opencode-agent). Make sure it's installed on the target repository. + + + +Add the following workflow file to `.github/workflows/opencode.yml` in your repo. Make sure to set the appropriate `model` and required API keys in `env`. 
+ +```yml .github/workflows/opencode.yml highlight={24,26} +name: opencode + +on: + issue_comment: + types: [created] + pull_request_review_comment: + types: [created] + +jobs: + opencode: + if: | + contains(github.event.comment.body, '/oc') || + contains(github.event.comment.body, '/opencode') + runs-on: ubuntu-latest + permissions: + id-token: write + steps: + - name: Checkout repository + uses: actions/checkout@v4 + with: + fetch-depth: 1 + + - name: Run OpenCode + uses: sst/opencode/github@latest + env: + ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} + with: + model: anthropic/claude-sonnet-4-20250514 + # share: true + # github_token: xxxx +``` + + +In your organization or project **settings**, expand **Secrets and variables** on the left and select **Actions**. And add the required API keys. + + + +## Configuration + +- `model`: The model to use with OpenCode. Takes the format of `provider/model`. This is **required**. +- `share`: Whether to share the OpenCode session. Defaults to **true** for public repositories. +- `prompt`: Optional custom prompt to override the default behavior. Use this to customize how OpenCode processes requests. +- `token`: Optional GitHub access token for performing operations such as creating comments, committing changes, and opening pull requests. By default, OpenCode uses the installation access token from the OpenCode GitHub App, so commits, comments, and pull requests appear as coming from the app. + + Alternatively, you can use the GitHub Action runner's [built-in `GITHUB_TOKEN`](https://docs.github.com/en/actions/tutorials/authenticate-with-github_token) without installing the OpenCode GitHub App. Just make sure to grant the required permissions in your workflow: + +```yaml +permissions: + id-token: write + contents: write + pull-requests: write + issues: write +``` +You can also use a [personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens)(PAT) if preferred. + +## Custom prompts + +Override the default prompt to customize OpenCode's behavior for your workflow. + +```yaml .github/workflows/opencode.yml +- uses: sst/opencode/github@latest + with: + model: anthropic/claude-sonnet-4-5 + prompt: | + Review this pull request: + - Check for code quality issues + - Look for potential bugs + - Suggest improvements +``` + +This is useful for enforcing specific review criteria, coding standards, or focus areas relevant to your project. + + +## Examples + +Here are some examples of how you can use OpenCode in GitHub. + +- **Explain an issue** + + Add this comment in a GitHub issue. + + ```bash + /opencode explain this issue + ``` + + OpenCode will read the entire thread, including all comments, and reply with a clear explanation. + +- **Fix an issue** + + In a GitHub issue, say: + + ```bash + /opencode fix this + ``` + + And OpenCode will create a new branch, implement the changes, and open a PR with the changes. + +- **Review PRs and make changes** + + Leave the following comment on a GitHub PR. + + ```bash + Delete the attachment from S3 when the note is removed /oc + ``` + + OpenCode will implement the requested change and commit it to the same PR. + +- **Review specific code lines** + + Leave a comment directly on code lines in the PR's "Files" tab. OpenCode automatically detects the file, line numbers, and diff context to provide precise responses. 
  ```bash
  [Comment on specific lines in Files tab]
  /oc add error handling here
  ```

  When commenting on specific lines, OpenCode receives:

  - The exact file being reviewed
  - The specific lines of code
  - The surrounding diff context
  - Line number information

  This allows for more targeted requests without needing to specify file paths or line numbers manually.

diff --git a/packages/docs/gitlab.mdx b/packages/docs/gitlab.mdx new file mode 100644 index 00000000000..ceb677df5ea --- /dev/null +++ b/packages/docs/gitlab.mdx @@ -0,0 +1,162 @@
---
title: GitLab
description: Use OpenCode in GitLab issues and merge requests.
---

OpenCode integrates with your GitLab workflow.
Mention `@opencode` in a comment, and OpenCode will execute tasks within your GitLab CI pipeline.

## Features

- **Triage issues**: Ask OpenCode to look into an issue and explain it to you.
- **Fix and implement**: Ask OpenCode to fix an issue or implement a feature.
  It will work in a new branch and raise a merge request with the changes.
- **Secure**: OpenCode runs on your GitLab runners.

## Setup

OpenCode runs in your GitLab CI/CD pipeline. Here's what you'll need to set it up:


**TIP**

Check out the [**GitLab docs**](https://docs.gitlab.com/user/duo_agent_platform/agent_assistant/) for up-to-date instructions.



Configure your GitLab environment


Set up CI/CD


Get an AI model provider API key


Create a service account


Configure CI/CD variables


Create a flow config file. Here's an example:



```yaml
image: node:22-slim
commands:
  - echo "Installing opencode"
  - npm install --global opencode-ai
  - echo "Installing glab"
  - export GITLAB_TOKEN=$GITLAB_TOKEN_OPENCODE
  - apt-get update --quiet && apt-get install --yes curl wget gpg git && rm --recursive --force /var/lib/apt/lists/*
  - curl --silent --show-error --location "https://raw.githubusercontent.com/upciti/wakemeops/main/assets/install_repository" | bash
  - apt-get install --yes glab
  - echo "Configuring glab"
  - echo $GITLAB_HOST
  - echo "Creating OpenCode auth configuration"
  - mkdir --parents ~/.local/share/opencode
  - |
    cat > ~/.local/share/opencode/auth.json << EOF
    {
      "anthropic": {
        "type": "api",
        "key": "$ANTHROPIC_API_KEY"
      }
    }
    EOF
  - echo "Configuring git"
  - git config --global user.email "opencode@gitlab.com"
  - git config --global user.name "OpenCode"
  - echo "Testing glab"
  - glab issue list
  - echo "Running OpenCode"
  - |
    opencode run "
    You are an AI assistant helping with GitLab operations.

    Context: $AI_FLOW_CONTEXT
    Task: $AI_FLOW_INPUT
    Event: $AI_FLOW_EVENT

    Please execute the requested task using the available GitLab tools.
    Be thorough in your analysis and provide clear explanations.

    Please use the glab CLI to access data from GitLab. The glab CLI has already been authenticated. You can run the corresponding commands.

    If you are asked to summarize an MR or issue or asked to provide more information then please post back a note to the MR/Issue so that the user can see it.
    You don't need to commit or push up changes, those will be done automatically based on the file changes you make.

    "
  - git checkout -B $CI_WORKLOAD_REF origin/$CI_WORKLOAD_REF
  - echo "Checking for git changes and pushing if any exist"
  - |
    if ! git diff --quiet || ! git diff --cached --quiet || [ -n "$(git ls-files --others --exclude-standard)" ]; then
      echo "Git changes detected, adding and pushing..."
      git add .
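      # Commit and push only if something was actually staged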
+ if git diff --cached --quiet; then + echo "No staged changes to commit" + else + echo "Committing changes to branch: $CI_WORKLOAD_REF" + git commit --message "Codex changes" + echo "Pushing changes up to $CI_WORKLOAD_REF" + git push https://gitlab-ci-token:$GITLAB_TOKEN@$GITLAB_HOST/gl-demo-ultimate-dev-ai-epic-17570/test-java-project.git $CI_WORKLOAD_REF + echo "Changes successfully pushed" + fi + else + echo "No git changes detected, skipping push" + fi +variables: + - ANTHROPIC_API_KEY + - GITLAB_TOKEN_OPENCODE + - GITLAB_HOST +``` + + + + + +You can refer to the [GitLab CLI agents docs](https://docs.gitlab.com/user/duo_agent_platform/agent_assistant/) for detailed instructions. + +## Examples + +Here are some examples of how you can use OpenCode in GitLab. + + +**TIP** + +You can configure to use a different trigger phrase than `@opencode`. + + +- **Explain an issue** + + Add this comment in a GitLab issue. + + ``` + @opencode explain this issue + ``` + + OpenCode will read the issue and reply with a clear explanation. + +- **Fix an issue** + + In a GitLab issue, say: + + ``` + @opencode fix this + ``` + + OpenCode will create a new branch, implement the changes, and open a merge request with the changes. + +- **Review merge requests** + + Leave the following comment on a GitLab merge request. + + ``` + @opencode review this merge request + ``` + + OpenCode will review the merge request and provide feedback. diff --git a/packages/docs/ide.mdx b/packages/docs/ide.mdx new file mode 100644 index 00000000000..38a1848a86f --- /dev/null +++ b/packages/docs/ide.mdx @@ -0,0 +1,49 @@ +--- +title: IDE +description: The OpenCode extension for VS Code, Cursor, and other IDEs +--- + +OpenCode integrates with VS Code, Cursor, or any IDE that supports a terminal. Just run `opencode` in the terminal to get started. + +## Usage + +- **Quick Launch**: Use `Cmd+Esc` (Mac) or `Ctrl+Esc` (Windows/Linux) to open OpenCode in a split terminal view, or focus an existing terminal session if one is already running. +- **New Session**: Use `Cmd+Shift+Esc` (Mac) or `Ctrl+Shift+Esc` (Windows/Linux) to start a new OpenCode terminal session, even if one is already open. You can also click the OpenCode button in the UI. +- **Context Awareness**: Automatically share your current selection or tab with OpenCode. +- **File Reference Shortcuts**: Use `Cmd+Option+K` (Mac) or `Alt+Ctrl+K` (Linux/Windows) to insert file references. For example, `@File#L37-42`. + +## Installation + +To install OpenCode on VS Code and popular forks like Cursor, Windsurf, VSCodium: + + + + Open VS Code + + + Open the integrated terminal + + + Run `opencode` - the extension installs automatically + + + +If on the other hand you want to use your own IDE when you run `/editor` or `/export` from the TUI, you'll need to set `export EDITOR="code --wait"`. [Learn more](/tui#editor-setup). + + +### Manual Install + +Search for **OpenCode** in the Extension Marketplace and click **Install**. + +### Troubleshooting + +If the extension fails to install automatically: + +- Ensure you’re running `opencode` in the integrated terminal. 
+- Confirm the CLI for your IDE is installed: + - For VS Code: `code` command + - For Cursor: `cursor` command + - For Windsurf: `windsurf` command + - For VSCodium: `codium` command + - If not, run `Cmd+Shift+P` (Mac) or `Ctrl+Shift+P` (Windows/Linux) and search for "Shell Command: Install 'code' command in PATH" (or the equivalent for your IDE) +- Ensure VS Code has permission to install extensions diff --git a/packages/docs/images/checks-passed.png b/packages/docs/images/checks-passed.png deleted file mode 100644 index 3303c773646..00000000000 Binary files a/packages/docs/images/checks-passed.png and /dev/null differ diff --git a/packages/docs/images/hero-dark.png b/packages/docs/images/hero-dark.png deleted file mode 100644 index a61cbb12528..00000000000 Binary files a/packages/docs/images/hero-dark.png and /dev/null differ diff --git a/packages/docs/images/hero-light.png b/packages/docs/images/hero-light.png deleted file mode 100644 index 68c712d6db8..00000000000 Binary files a/packages/docs/images/hero-light.png and /dev/null differ diff --git a/packages/docs/images/screenshot-github.png b/packages/docs/images/screenshot-github.png new file mode 100644 index 00000000000..fda74e641b9 Binary files /dev/null and b/packages/docs/images/screenshot-github.png differ diff --git a/packages/docs/images/screenshot-splash.png b/packages/docs/images/screenshot-splash.png new file mode 100644 index 00000000000..e900673ef5c Binary files /dev/null and b/packages/docs/images/screenshot-splash.png differ diff --git a/packages/docs/images/screenshot-vscode.png b/packages/docs/images/screenshot-vscode.png new file mode 100644 index 00000000000..b8966a6b866 Binary files /dev/null and b/packages/docs/images/screenshot-vscode.png differ diff --git a/packages/docs/images/screenshot.png b/packages/docs/images/screenshot.png new file mode 100644 index 00000000000..feb61758515 Binary files /dev/null and b/packages/docs/images/screenshot.png differ diff --git a/packages/docs/index.mdx b/packages/docs/index.mdx index 19a09f890cd..4bdbb6f7e93 100644 --- a/packages/docs/index.mdx +++ b/packages/docs/index.mdx @@ -1,56 +1,340 @@ --- -title: "Introduction" -description: "Welcome to the new home for your documentation" +title: Intro +description: Get started with OpenCode. --- -## Setting up - -Get your documentation site up and running in minutes. - - - Follow our three step quickstart guide. - - -## Make it yours - -Design a docs site that looks great and empowers your users. - - - - Edit your docs locally and preview them in real time. - - - Customize the design and colors of your site to match your brand. - - - Organize your docs to help users find what they need and succeed with your product. - - - Auto-generate API documentation from OpenAPI specifications. - - - -## Create beautiful pages - -Everything you need to create world-class documentation. - - - - Use MDX to style your docs pages. - - - Add sample code to demonstrate how to use your product. - - - Display images and other media. - - - Write once and reuse across your docs. - - - -## Need inspiration? - - - Browse our showcase of exceptional documentation sites. - + + +[**OpenCode**](https://opencode.ai/) is an open source AI coding agent. It's available as a terminal-based interface, desktop app, or IDE extension. + + +![OpenCode TUI with the opencode theme](/images/screenshot.png) + + +Let's get started. + +#### Prerequisites + +To use OpenCode in your terminal, you'll need: + +1. 
A modern terminal emulator like: + - [WezTerm](https://wezterm.org), cross-platform + - [Alacritty](https://alacritty.org), cross-platform + - [Ghostty](https://ghostty.org), Linux and macOS + - [Kitty](https://sw.kovidgoyal.net/kitty/), Linux and macOS + +2. API keys for the LLM providers you want to use. + +## Install + +The easiest way to install OpenCode is through the install script. + +```bash +curl -fsSL https://opencode.ai/install | bash +``` + +You can also install it with the following commands: + +- **Using Node.js** + + + +```bash npm +npm install -g opencode-ai +``` + +```bash Bun +bun install -g opencode-ai +``` + +```bash pnpm +pnpm install -g opencode-ai +``` + +```bash Yarn +yarn global add opencode-ai +``` + + +- **Using Homebrew on macOS and Linux** + +```bash +brew install opencode +``` + +- **Using Paru on Arch Linux** + +```bash +paru -S opencode-bin +``` + +#### Windows + +- **Using Chocolatey** + +```bash +choco install opencode +``` + +- **Using Scoop** + +```bash +scoop bucket add extras +scoop install extras/opencode +``` + +- **Using NPM** + +```bash +npm install -g opencode-ai +``` + +- **Using Mise** + +```bash +mise use -g ubi:sst/opencode +``` + +- **Using Docker** + +```bash +docker run -it --rm ghcr.io/sst/opencode +``` + +Support for installing OpenCode on Windows using Bun is currently in progress. + +You can also grab the binary from the [Releases](https://github.com/sst/opencode/releases). + +## Configure + +With OpenCode you can use any LLM provider by configuring their API keys. + +If you are new to using LLM providers, we recommend using [OpenCode Zen](/zen). +It's a curated list of models that have been tested and verified by the OpenCode +team. + + + +Run the `/connect` command in the TUI, select opencode, and head to [opencode.ai/auth](https://opencode.ai/auth). + +```txt +/connect +``` + + +Sign in, add your billing details, and copy your API key. + + +Paste your API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + + +Alternatively, you can select one of the other providers. [Learn more](/providers#directory). + +## Initialize + +Now that you've configured a provider, you can navigate to a project that +you want to work on. + +```bash +cd /path/to/project +``` + +And run OpenCode. + +```bash +opencode +``` + +Next, initialize OpenCode for the project by running the following command. + +```bash frame="none" +/init +``` + +This will get OpenCode to analyze your project and create an `AGENTS.md` file in +the project root. + + +**TIP** + +You should commit your project's `AGENTS.md` file to Git. + + +This helps OpenCode understand the project structure and the coding patterns +used. + +## Usage + +You are now ready to use OpenCode to work on your project. Feel free to ask it +anything! + +If you are new to using an AI coding agent, here are some examples that might +help. + +### Ask questions + +You can ask OpenCode to explain the codebase to you. + + +**TIP** + +Use the `@` key to fuzzy search for files in the project. + + +```txt +How is authentication handled in @packages/functions/src/api/index.ts +``` + +This is helpful if there's a part of the codebase that you didn't work on. + +### Add features + +You can ask OpenCode to add new features to your project. Though we first recommend asking it to create a plan. + + + + + OpenCode has a _Plan mode_ that disables its ability to make changes and + instead suggest _how_ it'll implement the feature. + + Switch to it using the **Tab** key. You'll see an indicator for this in the lower right corner. 
+ + ```bash + + ``` + + Now let's describe what we want it to do. + + ```txt + When a user deletes a note, we'd like to flag it as deleted in the database. + Then create a screen that shows all the recently deleted notes. + From this screen, the user can undelete a note or permanently delete it. + ``` + + You want to give OpenCode enough details to understand what you want. It helps + to talk to it like you are talking to a junior developer on your team. + + + **TIP** + + Give OpenCode plenty of context and examples to help it understand what you + want. + + + + + Once it gives you a plan, you can give it feedback or add more details. + + ```txt + We'd like to design this new screen using a design I've used before. + [Image #1] Take a look at this image and use it as a reference. + ``` + + + **TIP** + + Drag and drop images into the terminal to add them to the prompt. + + + OpenCode can scan any images you give it and add them to the prompt. You can + do this by dragging and dropping an image into the terminal. + + + + Once you feel comfortable with the plan, switch back to _Build mode_ by + hitting the **Tab** key again. + + ```bash frame="none" + + ``` + + And asking it to make the changes. + + ```bash frame="none" + Sounds good! Go ahead and make the changes. + ``` + + + +### Make changes + +For more straightforward changes, you can ask OpenCode to directly build it +without having to review the plan first. + +```txt +We need to add authentication to the /settings route. Take a look at how this is +handled in the /notes route in @packages/functions/src/notes.ts and implement +the same logic in @packages/functions/src/settings.ts +``` + +You want to make sure you provide a good amount of detail so OpenCode makes the right +changes. + +### Undo changes + +Let's say you ask OpenCode to make some changes. + +```txt +Can you refactor the function in @packages/functions/src/api/index.ts? +``` + +But you realize that it is not what you wanted. You **can undo** the changes +using the `/undo` command. + +```bash +/undo +``` + +OpenCode will now revert the changes you made and show your original message +again. + +```txt +Can you refactor the function in @packages/functions/src/api/index.ts? +``` + +From here you can tweak the prompt and ask OpenCode to try again. + + +**TIP** + +You can run `/undo` multiple times to undo multiple changes. + + +Or you **can redo** the changes using the `/redo` command. + +```bash +/redo +``` + +## Share + +The conversations that you have with OpenCode can be [shared with your +team](/share). + +```bash +/share +``` + +This will create a link to the current conversation and copy it to your clipboard. + + +**NOTE** + +Conversations are not shared by default. + + +Here's an [example conversation](https://opencode.ai/s/4XP1fce5) with OpenCode. + +## Customize + +And that's it! You are now a pro at using OpenCode. + +To make it your own, we recommend [picking a theme](/themes), [customizing the keybinds](/keybinds), [configuring code formatters](/formatters), [creating custom commands](/commands), or playing around with the [OpenCode config](/config). diff --git a/packages/docs/keybinds.mdx b/packages/docs/keybinds.mdx new file mode 100644 index 00000000000..61c46b8e3f1 --- /dev/null +++ b/packages/docs/keybinds.mdx @@ -0,0 +1,110 @@ +--- +title: Keybinds +description: Customize your keybinds. +--- + +OpenCode has a list of keybinds that you can customize through the OpenCode config. 
+ +```json opencode.json expandable +{ + "$schema": "https://opencode.ai/config.json", + "keybinds": { + "leader": "ctrl+x", + "app_exit": "ctrl+c,ctrl+d,q", + "editor_open": "e", + "theme_list": "t", + "sidebar_toggle": "b", + "username_toggle": "none", + "status_view": "s", + "session_export": "x", + "session_new": "n", + "session_list": "l", + "session_timeline": "g", + "session_share": "none", + "session_unshare": "none", + "session_interrupt": "escape", + "session_compact": "c", + "session_child_cycle": "+right", + "session_child_cycle_reverse": "+left", + "messages_page_up": "pageup", + "messages_page_down": "pagedown", + "messages_half_page_up": "ctrl+alt+u", + "messages_half_page_down": "ctrl+alt+d", + "messages_first": "ctrl+g,home", + "messages_last": "ctrl+alt+g,end", + "messages_copy": "y", + "messages_undo": "u", + "messages_redo": "r", + "messages_last_user": "none", + "messages_toggle_conceal": "h", + "model_list": "m", + "model_cycle_recent": "f2", + "model_cycle_recent_reverse": "shift+f2", + "command_list": "ctrl+p", + "agent_list": "a", + "agent_cycle": "tab", + "agent_cycle_reverse": "shift+tab", + "input_clear": "ctrl+c", + "input_paste": "ctrl+v", + "input_submit": "return", + "input_newline": "shift+return,ctrl+return,alt+return,ctrl+j", + "input_move_left": "left,ctrl+b", + "input_move_right": "right,ctrl+f", + "input_move_up": "up", + "input_move_down": "down", + "input_select_left": "shift+left", + "input_select_right": "shift+right", + "input_select_up": "shift+up", + "input_select_down": "shift+down", + "input_line_home": "ctrl+a", + "input_line_end": "ctrl+e", + "input_select_line_home": "ctrl+shift+a", + "input_select_line_end": "ctrl+shift+e", + "input_visual_line_home": "alt+a", + "input_visual_line_end": "alt+e", + "input_select_visual_line_home": "alt+shift+a", + "input_select_visual_line_end": "alt+shift+e", + "input_buffer_home": "home", + "input_buffer_end": "end", + "input_select_buffer_home": "shift+home", + "input_select_buffer_end": "shift+end", + "input_delete_line": "ctrl+shift+d", + "input_delete_to_line_end": "ctrl+k", + "input_delete_to_line_start": "ctrl+u", + "input_backspace": "backspace,shift+backspace", + "input_delete": "ctrl+d,delete,shift+delete", + "input_undo": "ctrl+-,super+z", + "input_redo": "ctrl+.,super+shift+z", + "input_word_forward": "alt+f,alt+right,ctrl+right", + "input_word_backward": "alt+b,alt+left,ctrl+left", + "input_select_word_forward": "alt+shift+f,alt+shift+right", + "input_select_word_backward": "alt+shift+b,alt+shift+left", + "input_delete_word_forward": "alt+d,alt+delete,ctrl+delete", + "input_delete_word_backward": "ctrl+w,ctrl+backspace,alt+backspace", + "history_previous": "up", + "history_next": "down", + "terminal_suspend": "ctrl+z" + } +} +``` + +## Leader key + +OpenCode uses a `leader` key for most keybinds. This avoids conflicts in your terminal. + +By default, `ctrl+x` is the leader key and most actions require you to first press the leader key and then the shortcut. For example, to start a new session you first press `ctrl+x` and then press `n`. + +You don't need to use a leader key for your keybinds but we recommend doing so. + +## Disable keybind + +You can disable a keybind by adding the key to your config with a value of "none". 
+ +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "keybinds": { + "session_compact": "none" + } +} +``` diff --git a/packages/docs/logo/dark.svg b/packages/docs/logo/dark.svg index 8b343cd6fc9..9ec86344f8b 100644 --- a/packages/docs/logo/dark.svg +++ b/packages/docs/logo/dark.svg @@ -1,21 +1 @@ - - - - - - - - - - - - - - - - - - - - - + \ No newline at end of file diff --git a/packages/docs/logo/light.svg b/packages/docs/logo/light.svg index 03e62bf1d9f..7f6aeb4ab89 100644 --- a/packages/docs/logo/light.svg +++ b/packages/docs/logo/light.svg @@ -1,21 +1 @@ - - - - - - - - - - - - - - - - - - - - - + \ No newline at end of file diff --git a/packages/docs/lsp.mdx b/packages/docs/lsp.mdx new file mode 100644 index 00000000000..fe52c139d41 --- /dev/null +++ b/packages/docs/lsp.mdx @@ -0,0 +1,129 @@ +--- +title: LSP Servers +description: OpenCode integrates with your LSP servers. +--- + +OpenCode integrates with your Language Server Protocol (LSP) to help the LLM interact with your codebase. It uses diagnostics to provide feedback to the LLM. + + +## Built-in + +OpenCode comes with several built-in LSP servers for popular languages: + +| LSP Server | Extensions | Requirements | +| :----------------- | :--------------------------------------------------- | :----------------------------------------------------------- | +| astro | .astro | Auto-installs for Astro projects | +| bash | .sh, .bash, .zsh, .ksh | Auto-installs bash-language-server | +| clangd | .c, .cpp, .cc, .cxx, .c++, .h, .hpp, .hh, .hxx, .h++ | Auto-installs for C/C++ projects | +| csharp | .cs | `.NET SDK` installed | +| dart | .dart | `dart` command available | +| deno | .ts, .tsx, .js, .jsx, .mjs | `deno` command available (auto-detects deno.json/deno.jsonc) | +| elixir-ls | .ex, .exs | `elixir` command available | +| eslint | .ts, .tsx, .js, .jsx, .mjs, .cjs, .mts, .cts, .vue | `eslint` dependency in project | +| fsharp | .fs, .fsi, .fsx, .fsscript | `.NET SDK` installed | +| gleam | .gleam | `gleam` command available | +| gopls | .go | `go` command available | +| jdtls | .java | `Java SDK (version 21+)` installed | +| lua-ls | .lua | Auto-installs for Lua projects | +| ocaml-lsp | .ml, .mli | `ocamllsp` command available | +| php intelephense | .php | Auto-installs for PHP projects | +| pyright | .py, .pyi | `pyright` dependency installed | +| ruby-lsp (rubocop) | .rb, .rake, .gemspec, .ru | `ruby` and `gem` commands available | +| rust | .rs | `rust-analyzer` command available | +| sourcekit-lsp | .swift, .objc, .objcpp | `swift` installed (`xcode` on macOS) | +| svelte | .svelte | Auto-installs for Svelte projects | +| terraform | .tf, .tfvars | Auto-installs from GitHub releases | +| typescript | .ts, .tsx, .js, .jsx, .mjs, .cjs, .mts, .cts | `typescript` dependency in project | +| vue | .vue | Auto-installs for Vue projects | +| yaml-ls | .yaml, .yml | Auto-installs Red Hat yaml-language-server | +| zls | .zig, .zon | `zig` command available | + +LSP servers are automatically enabled when one of the above file extensions are detected and the requirements are met. + + +**NOTE** + +You can disable automatic LSP server downloads by setting the `OPENCODE_DISABLE_LSP_DOWNLOAD` environment variable to `true`. + + +## How It Works + +When opencode opens a file, it: + +1. Checks the file extension against all enabled LSP servers. +2. Starts the appropriate LSP server if not already running. + + +## Configure + +You can customize LSP servers through the `lsp` section in your opencode config. 
+ +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "lsp": {} +} +``` + +Each LSP server supports the following: + +| Property | Type | Description | +| :---------------- | :-------- | :------------------------------------------------- | +| `disabled` | boolean | Set this to `true` to disable the LSP server | +| `command` | string[] | The command to start the LSP server | +| `extensions` | string[] | File extensions this LSP server should handle | +| `env` | object | Environment variables to set when starting server | +| `initialization` | object | Initialization options to send to the LSP server | + +Let's look at some examples. + +### Disabling LSP servers + +To disable **all** LSP servers globally, set `lsp` to `false`: + +```json opencode.json highlight={3} +{ + "$schema": "https://opencode.ai/config.json", + "lsp": false +} +``` + +To disable a **specific** LSP server, set `disabled` to `true`: + +```json opencode.json highlight={5} +{ + "$schema": "https://opencode.ai/config.json", + "lsp": { + "typescript": { + "disabled": true + } + } +} +``` + +### Custom LSP servers + +You can add custom LSP servers by specifying the command and file extensions: + +```json opencode.json highlight={4-7} +{ + "$schema": "https://opencode.ai/config.json", + "lsp": { + "custom-lsp": { + "command": ["custom-lsp-server", "--stdio"], + "extensions": [".custom"] + } + } +} +``` + +## Additional Information + +### PHP Intelephense + +PHP Intelephense offers premium features through a license key. You can provide a license key by placing (only) the key in a text file at: + +- On macOS/Linux: `$HOME/intelephense/licence.txt` +- On Windows: `%USERPROFILE%/intelephense/licence.txt` + +The file should contain only the license key with no additional content. diff --git a/packages/docs/mcp-servers.mdx b/packages/docs/mcp-servers.mdx new file mode 100644 index 00000000000..d32cc1ee21d --- /dev/null +++ b/packages/docs/mcp-servers.mdx @@ -0,0 +1,404 @@ +--- +title: MCP servers +description: Add local and remote MCP tools. +--- + +You can add external tools to OpenCode using the _Model Context Protocol_, or MCP. + +OpenCode supports both: + +- Local servers +- Remote servers + +Once added, MCP tools are automatically available to the LLM alongside built-in tools. + +## Caveats + +When you use an MCP server, it adds to the context. This can quickly add up if +you have a lot of tools. So we recommend being careful with which MCP servers +you use. + + +**TIP** + +MCP servers add to your context, so you want to be careful with which ones you enable. + + +Certain MCP servers, like the GitHub MCP server tend to add a lot of tokens and +can easily exceed the context limit. + +## Configure + +You can define MCP servers in your OpenCode config under `mcp`. Add each MCP +with a unique name. You can refer to that MCP by name when prompting the LLM. + +```json opencode.json highlight={6} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "name-of-mcp-server": { + // ... + "enabled": true, + }, + "name-of-other-mcp-server": { + // ... + }, + }, +} +``` + +You can also disable a server by setting `enabled` to `false`. This is useful if you want to temporarily disable a server without removing it from your config. + +### Local + +Add local MCP servers using `type` to `"local"` within the MCP object. 
+ +```json opencode.json highlight={15} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-local-mcp-server": { + "type": "local", + // Or ["bun", "x", "my-mcp-command"] + "command": ["npx", "-y", "my-mcp-command"], + "enabled": true, + "environment": { + "MY_ENV_VAR": "my_env_var_value", + }, + }, + }, +} +``` + +The command is how the local MCP server is started. You can also pass in a list of environment variables as well. + +For example, here's how I can add the test +[`@modelcontextprotocol/server-everything`](https://www.npmjs.com/package/@modelcontextprotocol/server-everything) MCP server. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "mcp_everything": { + "type": "local", + "command": ["npx", "-y", "@modelcontextprotocol/server-everything"], + }, + }, +} +``` + +And to use it I can add `use the mcp_everything tool` to my prompts. + +```txt "mcp_everything" +use the mcp_everything tool to add the number 3 and 4 +``` + +#### Options + +Here are all the options for configuring a local MCP server. + +| Option | Type | Required | Description | +| :------------- | :------- | :-------- | :---------------------------------------------------------------------------------- | +| `type` | String | Y | Type of MCP server connection, must be `"local"`. | +| `command` | Array | Y | Command and arguments to run the MCP server. | +| `environment` | Object | | Environment variables to set when running the server. | +| `enabled` | Boolean | | Enable or disable the MCP server on startup. | +| `timeout` | Number | | Timeout in ms for fetching tools from the MCP server. Defaults to 5000 (5 seconds). | + +### Remote + +Add remote MCP servers under by setting `type` to `"remote"`. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-remote-mcp": { + "type": "remote", + "url": "https://my-mcp-server.com", + "enabled": true, + "headers": { + "Authorization": "Bearer MY_API_KEY" + } + } + } +} +``` + +Here the `url` is the URL of the remote MCP server and with the `headers` option you can pass in a list of headers. + +#### Options + +| Option | Type | Required | Description | +| :--------- | :------- | :-------- | :---------------------------------------------------------------------------------- | +| `type` | String | Y | Type of MCP server connection, must be `"remote"`. | +| `url` | String | Y | URL of the remote MCP server. | +| `enabled` | Boolean | | Enable or disable the MCP server on startup. | +| `headers` | Object | | Headers to send with the request. | +| `oauth` | Object | | OAuth authentication configuration. See [OAuth](#oauth) section below. | +| `timeout` | Number | | Timeout in ms for fetching tools from the MCP server. Defaults to 5000 (5 seconds). | + +### OAuth + +OpenCode automatically handles OAuth authentication for remote MCP servers. When a server requires authentication, OpenCode will: + +1. Detect the 401 response and initiate the OAuth flow +2. Use **Dynamic Client Registration (RFC 7591)** if supported by the server +3. Store tokens securely for future requests + +#### Automatic OAuth + +For most OAuth-enabled MCP servers, no special configuration is needed. 
Just configure the remote server: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-oauth-server": { + "type": "remote", + "url": "https://mcp.example.com/mcp" + } + } +} +``` + +If the server requires authentication, OpenCode will prompt you to authenticate when you first try to use it. + +#### Pre-registered Client + +If you have client credentials from the MCP server provider, you can configure them: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-oauth-server": { + "type": "remote", + "url": "https://mcp.example.com/mcp", + "oauth": { + "clientId": "{env:MY_MCP_CLIENT_ID}", + "clientSecret": "{env:MY_MCP_CLIENT_SECRET}", + "scope": "tools:read tools:execute" + } + } + } +} +``` + +#### Disabling OAuth + +If you want to disable automatic OAuth for a server (e.g., for servers that use API keys instead), set `oauth` to `false`: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-api-key-server": { + "type": "remote", + "url": "https://mcp.example.com/mcp", + "oauth": false, + "headers": { + "Authorization": "Bearer {env:MY_API_KEY}" + } + } + } +} +``` + +#### OAuth Options + +| Option | Type | Required | Description | +| :-------------- | :--------------- | :-------- | :------------------------------------------------------------------------------- | +| `oauth` | Object \| false | | OAuth config object, or `false` to disable OAuth auto-detection. | +| `clientId` | String | | OAuth client ID. If not provided, dynamic client registration will be attempted. | +| `clientSecret` | String | | OAuth client secret, if required by the authorization server. | +| `scope` | String | | OAuth scopes to request during authorization. | + +#### Authenticating + +You can manually trigger authentication or manage credentials: + +```bash +# Authenticate with a specific MCP server +opencode mcp auth my-oauth-server + +# List all MCP servers and their auth status +opencode mcp list + +# Remove stored credentials +opencode mcp logout my-oauth-server +``` + +The `mcp auth` command will open your browser for authorization. After you authorize, OpenCode will store the tokens securely in `~/.local/share/opencode/mcp-auth.json`. + +## Manage + +Your MCPs are available as tools in OpenCode, alongside built-in tools. So you +can manage them through the OpenCode config like any other tool. + + +### Global + +This means that you can enable or disable them globally. + +```json opencode.json highlight={14} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-mcp-foo": { + "type": "local", + "command": ["bun", "x", "my-mcp-command-foo"] + }, + "my-mcp-bar": { + "type": "local", + "command": ["bun", "x", "my-mcp-command-bar"] + } + }, + "tools": { + "my-mcp-foo": false + } +} +``` + +We can also use a glob pattern to disable all matching MCPs. + +```json opencode.json highlight={14} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-mcp-foo": { + "type": "local", + "command": ["bun", "x", "my-mcp-command-foo"] + }, + "my-mcp-bar": { + "type": "local", + "command": ["bun", "x", "my-mcp-command-bar"] + } + }, + "tools": { + "my-mcp*": false + } +} +``` + +Here we are using the glob pattern `my-mcp*` to disable all MCPs. + +### Per agent + +If you have a large number of MCP servers you may want to only enable them per +agent and disable them globally. To do this: + +1. Disable it as a tool globally. +2. 
In your [agent config](/agents#tools) enable the MCP server as a tool. + +```json opencode.json highlight={11, 14-18} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "my-mcp": { + "type": "local", + "command": ["bun", "x", "my-mcp-command"], + "enabled": true + } + }, + "tools": { + "my-mcp*": false + }, + "agent": { + "my-agent": { + "tools": { + "my-mcp*": true + } + } + } +} +``` + +#### Glob patterns + +The glob pattern uses simple regex globbing patterns. + +- `*` matches zero or more of any character +- `?` matches exactly one character +- All other characters match literally + +## Examples + +Below are examples of some common MCP servers. You can submit a PR if you want to document other servers. + +### Context7 + +Add the [Context7 MCP server](https://github.com/upstash/context7) to search through docs. + +```json opencode.json highlight={4-7} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "context7": { + "type": "remote", + "url": "https://mcp.context7.com/mcp" + } + } +} +``` + +If you have signed up for a free account, you can use your API key and get higher rate-limits. + +```json opencode.json highlight={7-9} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "context7": { + "type": "remote", + "url": "https://mcp.context7.com/mcp", + "headers": { + "CONTEXT7_API_KEY": "{env:CONTEXT7_API_KEY}" + } + } + } +} +``` + +Here we are assuming that you have the `CONTEXT7_API_KEY` environment variable set. + +Add `use context7` to your prompts to use Context7 MCP server. + +```txt +Configure a Cloudflare Worker script to cache JSON API responses for five minutes. use context7 +``` + +Alternatively, you can add something like this to your +[AGENTS.md](/rules/). + +```md AGENTS.md +When you need to search docs, use `context7` tools. +``` + +### Grep by Vercel + +Add the [Grep by Vercel](https://grep.app) MCP server to search through code snippets on GitHub. + +```json opencode.json highlight={4-7} +{ + "$schema": "https://opencode.ai/config.json", + "mcp": { + "gh_grep": { + "type": "remote", + "url": "https://mcp.grep.app" + } + } +} +``` + +Since we named our MCP server `gh_grep`, you can add `use the gh_grep tool` to your prompts to get the agent to use it. + +```txt +What's the right way to set a custom domain in an SST Astro component? use the gh_grep tool +``` + +Alternatively, you can add something like this to your +[AGENTS.md](/rules/). + +```md AGENTS.md +If you are unsure how to do something, use `gh_grep` to search code examples from github. +``` diff --git a/packages/docs/models.mdx b/packages/docs/models.mdx new file mode 100644 index 00000000000..2ad511b60cc --- /dev/null +++ b/packages/docs/models.mdx @@ -0,0 +1,152 @@ +--- +title: Models +description: Configuring an LLM provider and model. +--- + +OpenCode uses the [AI SDK](https://ai-sdk.dev/) and [Models.dev](https://models.dev) to support for **75+ LLM providers** and it supports running local models. + +## Providers + +Most popular providers are preloaded by default. If you've added the credentials for a provider through the `/connect` command, they'll be available when you start OpenCode. + +Learn more about [providers](/providers). + +## Select a model + +Once you've configured your provider you can select the model you want by typing in: + +```bash +/models +``` + +## Recommended models + +There are a lot of models out there, with new models coming out every week. + + +**TIP** + +Consider using one of the models we recommend. 
+ + +However, there are only a few of them that are good at both generating code and tool calling. + +Here are several models that work well with OpenCode, in no particular order. (This is not an exhaustive list): + +- GPT 5.1 +- GPT 5.1 Codex +- Claude Sonnet 4.5 +- Claude Haiku 4.5 +- Kimi K2 +- GLM 4.6 +- Qwen3 Coder +- Gemini 3 Pro + +## Set a default + +To set one of these as the default model, you can set the `model` key in your +OpenCode config. + +```json opencode.json highlight={3} +{ + "$schema": "https://opencode.ai/config.json", + "model": "lmstudio/google/gemma-3n-e4b" +} +``` + +Here the full ID is `provider_id/model_id`. + +If you've configured a [custom provider](/providers#custom), the `provider_id` is key from the `provider` part of your config, and the `model_id` is the key from `provider.models`. + +## Configure models + +You can globally configure a model's options through the config. + +```json opencode.json highlight={7-12,19-24} expandable +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "openai": { + "models": { + "gpt-5": { + "options": { + "reasoningEffort": "high", + "textVerbosity": "low", + "reasoningSummary": "auto", + "include": ["reasoning.encrypted_content"], + }, + }, + }, + }, + "anthropic": { + "models": { + "claude-sonnet-4-5-20250929": { + "options": { + "thinking": { + "type": "enabled", + "budgetTokens": 16000, + }, + }, + }, + }, + }, + }, +} +``` + +Here we're configuring global settings for two built-in models: `gpt-5` when accessed via the `openai` provider, and `claude-sonnet-4-20250514` when accessed via the `anthropic` provider. +The built-in provider and model names can be found on [Models.dev](https://models.dev). + +You can also configure these options for any agents that you are using. The agent config overrides any global options here. [Learn more](/agents#additional). + +You can also define custom models that extend built-in ones and can optionally use specific options by referring to their id: + +```json opencode.json highlight={6-20} expandable +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "opencode": { + "models": { + "gpt-5-high": { + "id": "gpt-5", + "options": { + "reasoningEffort": "high", + "textVerbosity": "low", + "reasoningSummary": "auto", + }, + }, + "gpt-5-low": { + "id": "gpt-5", + "options": { + "reasoningEffort": "low", + "textVerbosity": "low", + "reasoningSummary": "auto", + }, + }, + }, + }, + }, +} +``` + + +## Loading models + +When OpenCode starts up, it checks for models in the following priority order: + +1. The `--model` or `-m` command line flag. The format is the same as in the config file: `provider_id/model_id`. + +2. The model list in the OpenCode config. + + ```json opencode.json + { + "$schema": "https://opencode.ai/config.json", + "model": "anthropic/claude-sonnet-4-20250514" + } + ``` + + The format here is `provider/model`. + +3. The last used model. + +4. The first model using an internal priority. diff --git a/packages/docs/modes.mdx b/packages/docs/modes.mdx new file mode 100644 index 00000000000..da7913dd3a4 --- /dev/null +++ b/packages/docs/modes.mdx @@ -0,0 +1,312 @@ +--- +title: Modes +description: Different modes for different use cases. +--- + + +**CAUTION** + +Modes are now configured through the `agent` option in the opencode config. The +`mode` option is now deprecated. [Learn more](/agents). + + +Modes in opencode allow you to customize the behavior, tools, and prompts for different use cases. 
+ +It comes with two built-in modes: **build** and **plan**. You can customize +these or configure your own through the opencode config. + +You can switch between modes during a session or configure them in your config file. + + +## Built-in + +opencode comes with two built-in modes. + +### Build + +Build is the **default** mode with all tools enabled. This is the standard mode for development work where you need full access to file operations and system commands. + + +### Plan + +A restricted mode designed for planning and analysis. In plan mode, the following tools are disabled by default: + +- `write` - Cannot create new files +- `edit` - Cannot modify existing files +- `patch` - Cannot apply patches +- `bash` - Cannot execute shell commands + +This mode is useful when you want the AI to analyze code, suggest changes, or create plans without making any actual modifications to your codebase. + +## Switching + +You can switch between modes during a session using the _Tab_ key. Or your configured `switch_mode` keybind. + +See also: [Formatters](/formatters) for information about code formatting configuration. + +## Configure + +You can customize the built-in modes or create your own through configuration. Modes can be configured in two ways: + +### JSON Configuration + +Configure modes in your `opencode.json` config file: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "mode": { + "build": { + "model": "anthropic/claude-sonnet-4-20250514", + "prompt": "{file:./prompts/build.txt}", + "tools": { + "write": true, + "edit": true, + "bash": true + } + }, + "plan": { + "model": "anthropic/claude-haiku-4-20250514", + "tools": { + "write": false, + "edit": false, + "bash": false + } + } + } +} +``` + +### Markdown Configuration + +You can also define modes using markdown files. Place them in: + +- Global: `~/.config/opencode/mode/` +- Project: `.opencode/mode/` + +```markdown ~/.config/opencode/mode/review.md expandable +--- +model: anthropic/claude-sonnet-4-20250514 +temperature: 0.1 +tools: + write: false + edit: false + bash: false +--- + +You are in code review mode. Focus on: + +- Code quality and best practices +- Potential bugs and edge cases +- Performance implications +- Security considerations + +Provide constructive feedback without making direct changes. +``` + +The markdown file name becomes the mode name (e.g., `review.md` creates a `review` mode). + +Let's look at these configuration options in detail. + +### Model + +Use the `model` config to override the default model for this mode. Useful for using different models optimized for different tasks. For example, a faster model for planning, a more capable model for implementation. + +```json opencode.json +{ + "mode": { + "plan": { + "model": "anthropic/claude-haiku-4-20250514" + } + } +} +``` + + +### Temperature + +Control the randomness and creativity of the AI's responses with the `temperature` config. Lower values make responses more focused and deterministic, while higher values increase creativity and variability. 
+ +```json opencode.json +{ + "mode": { + "plan": { + "temperature": 0.1 + }, + "creative": { + "temperature": 0.8 + } + } +} +``` + +Temperature values typically range from 0.0 to 1.0: + +- **0.0-0.2**: Very focused and deterministic responses, ideal for code analysis and planning +- **0.3-0.5**: Balanced responses with some creativity, good for general development tasks +- **0.6-1.0**: More creative and varied responses, useful for brainstorming and exploration + +```json opencode.json +{ + "mode": { + "analyze": { + "temperature": 0.1, + "prompt": "{file:./prompts/analysis.txt}" + }, + "build": { + "temperature": 0.3 + }, + "brainstorm": { + "temperature": 0.7, + "prompt": "{file:./prompts/creative.txt}" + } + } +} +``` + +If no temperature is specified, opencode uses model-specific defaults (typically 0 for most models, 0.55 for Qwen models). + +### Prompt + +Specify a custom system prompt file for this mode with the `prompt` config. The prompt file should contain instructions specific to the mode's purpose. + +```json opencode.json +{ + "mode": { + "review": { + "prompt": "{file:./prompts/code-review.txt}" + } + } +} +``` + +This path is relative to where the config file is located. So this works for +both the global opencode config and the project specific config. + +### Tools + +Control which tools are available in this mode with the `tools` config. You can enable or disable specific tools by setting them to `true` or `false`. + +```json +{ + "mode": { + "readonly": { + "tools": { + "write": false, + "edit": false, + "bash": false, + "read": true, + "grep": true, + "glob": true + } + } + } +} +``` + +If no tools are specified, all tools are enabled by default. + +#### Available tools + +Here are all the tools can be controlled through the mode config. + +| Tool | Description | +| :---------- | :---------------------- | +| `bash` | Execute shell commands | +| `edit` | Modify existing files | +| `write` | Create new files | +| `read` | Read file contents | +| `grep` | Search file contents | +| `glob` | Find files by pattern | +| `list` | List directory contents | +| `patch` | Apply patches to files | +| `todowrite` | Manage todo lists | +| `todoread` | Read todo lists | +| `webfetch` | Fetch web content | + +## Custom modes + +You can create your own custom modes by adding them to the configuration. Here are examples using both approaches: + +### Using JSON configuration + +```json opencode.json highlight={4-14} +{ + "$schema": "https://opencode.ai/config.json", + "mode": { + "docs": { + "prompt": "{file:./prompts/documentation.txt}", + "tools": { + "write": true, + "edit": true, + "bash": false, + "read": true, + "grep": true, + "glob": true + } + } + } +} +``` + +### Using markdown files + +Create mode files in `.opencode/mode/` for project-specific modes or `~/.config/opencode/mode/` for global modes: + +```markdown .opencode/mode/debug.md expandable +--- +temperature: 0.1 +tools: + bash: true + read: true + grep: true + write: false + edit: false +--- + +You are in debug mode. Your primary goal is to help investigate and diagnose issues. + +Focus on: + +- Understanding the problem through careful analysis +- Using bash commands to inspect system state +- Reading relevant files and logs +- Searching for patterns and anomalies +- Providing clear explanations of findings + +Do not make any changes to files. Only investigate and report. 
+``` + +```markdown ~/.config/opencode/mode/refactor.md expandable +--- +model: anthropic/claude-sonnet-4-20250514 +temperature: 0.2 +tools: + edit: true + read: true + grep: true + glob: true +--- + +You are in refactoring mode. Focus on improving code quality without changing functionality. + +Priorities: + +- Improve code readability and maintainability +- Apply consistent naming conventions +- Reduce code duplication +- Optimize performance where appropriate +- Ensure all tests continue to pass +``` + +### Use cases + +Here are some common use cases for different modes. + +- **Build mode**: Full development work with all tools enabled +- **Plan mode**: Analysis and planning without making changes +- **Review mode**: Code review with read-only access plus documentation tools +- **Debug mode**: Focused on investigation with bash and read tools enabled +- **Docs mode**: Documentation writing with file operations but no system commands + +You might also find different models are good for different use cases. diff --git a/packages/docs/network.mdx b/packages/docs/network.mdx new file mode 100644 index 00000000000..48a2f161a82 --- /dev/null +++ b/packages/docs/network.mdx @@ -0,0 +1,56 @@ +--- +title: Network +description: Configure proxies and custom certificates. +--- + +OpenCode supports standard proxy environment variables and custom certificates for enterprise network environments. + + +## Proxy + +OpenCode respects standard proxy environment variables. + +```bash +# HTTPS proxy (recommended) +export HTTPS_PROXY=https://proxy.example.com:8080 + +# HTTP proxy (if HTTPS not available) +export HTTP_PROXY=http://proxy.example.com:8080 + +# Bypass proxy for local server (required) +export NO_PROXY=localhost,127.0.0.1 +``` + + +**CAUTION** + +The TUI communicates with a local HTTP server. You must bypass the proxy for this connection to prevent routing loops. + + +You can configure the server's port and hostname using [CLI flags](/cli#run). + +### Authenticate + +If your proxy requires basic authentication, include credentials in the URL. + +```bash +export HTTPS_PROXY=http://username:password@proxy.example.com:8080 +``` + + +**CAUTION** + +Avoid hardcoding passwords. Use environment variables or secure credential storage. + + +For proxies requiring advanced authentication like NTLM or Kerberos, consider using an LLM Gateway that supports your authentication method. + +## Custom certificates + +If your enterprise uses custom CAs for HTTPS connections, configure OpenCode to trust them. + +```bash +export NODE_EXTRA_CA_CERTS=/path/to/ca-cert.pem +``` + +This works for both proxy connections and direct API access. diff --git a/packages/docs/openapi.json b/packages/docs/openapi.json deleted file mode 120000 index 854dd8b2b80..00000000000 --- a/packages/docs/openapi.json +++ /dev/null @@ -1 +0,0 @@ -../sdk/openapi.json \ No newline at end of file diff --git a/packages/docs/permissions.mdx b/packages/docs/permissions.mdx new file mode 100644 index 00000000000..ec680604a0a --- /dev/null +++ b/packages/docs/permissions.mdx @@ -0,0 +1,219 @@ +--- +title: Permissions +description: Control which actions require approval to run. +--- + +By default, OpenCode allows most operations without approval, except `doom_loop` and `external_directory` which default to `ask`. You can configure this using the `permission` option. 
+ +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "edit": "allow", + "bash": "ask", + "webfetch": "deny", + "doom_loop": "ask", + "external_directory": "ask" + } +} +``` + +This lets you configure granular controls for the `edit`, `bash`, `webfetch`, `doom_loop`, and `external_directory` tools. + +- `"ask"` — Prompt for approval before running the tool +- `"allow"` — Allow all operations without approval +- `"deny"` — Disable the tool + +## Tools + +Currently, the permissions for the `edit`, `bash`, `webfetch`, `doom_loop`, and `external_directory` tools can be configured through the `permission` option. + +### edit + +Use the `permission.edit` key to control whether file editing operations require user approval. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "edit": "ask" + } +} +``` + +### bash + +You can use the `permission.bash` key to control whether bash commands as a +whole need user approval. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "bash": "ask" + } +} +``` + +Or, you can target specific commands and set it to `allow`, `ask`, or `deny`. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "bash": { + "git push": "ask", + "git status": "allow", + "git diff": "allow", + "npm run build": "allow", + "ls": "allow", + "pwd": "allow" + } + } +} +``` + +#### Wildcards + +You can also use wildcards to manage permissions for specific bash commands. + + +**TIP** + +You can use wildcards to manage permissions for specific bash commands. + + +For example, **disable all** Terraform commands. + +```json opencode.json highlight={5} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "bash": { + "terraform *": "deny" + } + } +} +``` + +You can also use the `*` wildcard to manage permissions for all commands. For +example, **deny all commands** except a couple of specific ones. + +```json opencode.json highlight={5} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "bash": { + "*": "deny", + "pwd": "allow", + "git status": "ask" + } + } +} +``` + +Here a specific rule can override the `*` wildcard. + +##### Glob patterns + +The wildcard uses simple regex globbing patterns. + +- `*` matches zero or more of any character +- `?` matches exactly one character +- All other characters match literally + +#### Scope of the `"ask"` option + +When the agent asks for permission to run a particular bash command, it will +request feedback with the three options "accept", "accept always" and "deny". +The "accept always" answer applies for the rest of the current session. + +In addition, command permissions are applied to the first two elements of a command. So, an "accept always" response for a command like `git log` would whitelist `git log *` but not `git commit ...`. + +When an agent asks for permission to run a command in a pipeline, we use tree sitter to parse each command in the pipeline. The "accept always" permission thus applies separately to each command in the pipeline. + +### webfetch + +Use the `permission.webfetch` key to control whether the LLM can fetch web pages. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "webfetch": "ask" + } +} +``` + +### doom_loop + +Use the `permission.doom_loop` key to control whether approval is required when a doom loop is detected. 
A doom loop occurs when the same tool is called 3 times in a row with identical arguments. + +This helps prevent infinite loops where the LLM repeatedly attempts the same action without making progress. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "doom_loop": "ask" + } +} +``` + +### external_directory + +Use the `permission.external_directory` key to control whether file operations require approval when accessing files outside the working directory. + +This provides an additional safety layer to prevent unintended modifications to files outside your project. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "external_directory": "ask" + } +} +``` + +## Agents + +You can also configure permissions per agent. Where the agent specific config +overrides the global config. [Learn more](/agents#permissions) about agent permissions. + +```json opencode.json highlight={3-7,10-14} +{ + "$schema": "https://opencode.ai/config.json", + "permission": { + "bash": { + "git push": "ask" + } + }, + "agent": { + "build": { + "permission": { + "bash": { + "git push": "allow" + } + } + } + } +} +``` + +For example, here the `build` agent overrides the global `bash` permission to +allow `git push` commands. + +You can also configure permissions for agents in Markdown. + +```markdown ~/.config/opencode/agent/review.md +--- +description: Code review without edits +mode: subagent +permission: + edit: deny + bash: ask + webfetch: deny +--- + +Only analyze code and suggest changes. +``` diff --git a/packages/docs/plugins.mdx b/packages/docs/plugins.mdx new file mode 100644 index 00000000000..acb58d52db7 --- /dev/null +++ b/packages/docs/plugins.mdx @@ -0,0 +1,216 @@ +--- +title: Plugins +description: Write your own plugins to extend OpenCode. +--- + +Plugins allow you to extend OpenCode by hooking into various events and customizing behavior. You can create plugins to add new features, integrate with external services, or modify OpenCode's default behavior. + +For examples, check out the [plugins](/ecosystem#plugins) created by the community. + + +## Create a plugin + +A plugin is a **JavaScript/TypeScript module** that exports one or more plugin +functions. Each function receives a context object and returns a hooks object. + +### Location + +Plugins are loaded from: + +1. `.opencode/plugin` directory either in your project +2. Or, globally in `~/.config/opencode/plugin` + +### Basic structure + +```js .opencode/plugin/example.js +export const MyPlugin = async ({ project, client, $, directory, worktree }) => { + console.log("Plugin initialized!") + + return { + // Hook implementations go here + } +} +``` + +The plugin function receives: + +- `project`: The current project information. +- `directory`: The current working directory. +- `worktree`: The git worktree path. +- `client`: An opencode SDK client for interacting with the AI. +- `$`: Bun's [shell API](https://bun.com/docs/runtime/shell) for executing commands. + +### TypeScript support + +For TypeScript plugins, you can import types from the plugin package: + +```ts my-plugin.ts highlight={1} +import type { Plugin } from "@opencode-ai/plugin" + +export const MyPlugin: Plugin = async ({ project, client, $, directory, worktree }) => { + return { + // Type-safe hook implementations + } +} +``` + +### Events + +Plugins can subscribe to events as seen below in the Examples section. Here is a list of the different events available. 
+ +#### Command Events + +- `command.executed` + +#### File Events + +- `file.edited` +- `file.watcher.updated` + +#### Installation Events + +- `installation.updated` + +#### LSP Events + +- `lsp.client.diagnostics` +- `lsp.updated` + +#### Message Events + +- `message.part.removed` +- `message.part.updated` +- `message.removed` +- `message.updated` + +#### Permission Events + +- `permission.replied` +- `permission.updated` + +#### Server Events + +- `server.connected` + +#### Session Events + +- `session.created` +- `session.compacted` +- `session.deleted` +- `session.diff` +- `session.error` +- `session.idle` +- `session.status` +- `session.updated` + +#### Todo Events + +- `todo.updated` + +#### Tool Events + +- `tool.execute.after` +- `tool.execute.before` + +#### TUI Events + +- `tui.prompt.append` +- `tui.command.execute` +- `tui.toast.show` + +## Examples + +Here are some examples of plugins you can use to extend opencode. + +### Send notifications + +Send notifications when certain events occur: + +```js .opencode/plugin/notification.js +export const NotificationPlugin = async ({ project, client, $, directory, worktree }) => { + return { + event: async ({ event }) => { + // Send notification on session completion + if (event.type === "session.idle") { + await $`osascript -e 'display notification "Session completed!" with title "opencode"'` + } + }, + } +} +``` + +We are using `osascript` to run AppleScript on macOS. Here we are using it to send notifications. + +### .env protection + +Prevent opencode from reading `.env` files: + +```javascript .opencode/plugin/env-protection.js +export const EnvProtection = async ({ project, client, $, directory, worktree }) => { + return { + "tool.execute.before": async (input, output) => { + if (input.tool === "read" && output.args.filePath.includes(".env")) { + throw new Error("Do not read .env files") + } + }, + } +} +``` + +### Custom tools + +Plugins can also add custom tools to opencode: + +```ts .opencode/plugin/custom-tools.ts +import { type Plugin, tool } from "@opencode-ai/plugin" + +export const CustomToolsPlugin: Plugin = async (ctx) => { + return { + tool: { + mytool: tool({ + description: "This is a custom tool", + args: { + foo: tool.schema.string(), + }, + async execute(args, ctx) { + return `Hello ${args.foo}!` + }, + }), + }, + } +} +``` + +The `tool` helper creates a custom tool that opencode can call. It takes a Zod schema function and returns a tool definition with: + +- `description`: What the tool does +- `args`: Zod schema for the tool's arguments +- `execute`: Function that runs when the tool is called + +Your custom tools will be available to opencode alongside built-in tools. + +### Compaction hooks + +Customize the context included when a session is compacted: + +```ts .opencode/plugin/compaction.ts +import type { Plugin } from "@opencode-ai/plugin" + +export const CompactionPlugin: Plugin = async (ctx) => { + return { + "experimental.session.compacting": async (input, output) => { + // Inject additional context into the compaction prompt + output.context.push(` +## Custom Context + +Include any state that should persist across compaction: +- Current task status +- Important decisions made +- Files being actively worked on +`) + }, + } +} +``` + +The `experimental.session.compacting` hook fires before the LLM generates a continuation summary. Use it to inject domain-specific context that the default compaction prompt would miss. 
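+
+### Log events
+
+If you're curious which of the events listed above fire as you work, a small logging plugin is an easy way to find out. This is a minimal sketch that reuses the generic `event` hook from the notification example above; the log file name is just an example, pick anything you like:
+
+```js .opencode/plugin/event-log.js
+export const EventLogPlugin = async ({ $, directory }) => {
+  // ".opencode-events.log" is an arbitrary file name used for this sketch.
+  const log = `${directory}/.opencode-events.log`
+
+  return {
+    event: async ({ event }) => {
+      // Append each event type to the log file as it fires.
+      await $`echo "${new Date().toISOString()} ${event.type}" >> ${log}`
+    },
+  }
+}
+```
+
+Tail the log file in another terminal while you work to see which events your workflow actually triggers.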
diff --git a/packages/docs/providers.mdx b/packages/docs/providers.mdx new file mode 100644 index 00000000000..8e98df77936 --- /dev/null +++ b/packages/docs/providers.mdx @@ -0,0 +1,1561 @@ +--- +title: Providers +description: Using any LLM provider in OpenCode. +--- + + +OpenCode uses the [AI SDK](https://ai-sdk.dev/) and [Models.dev](https://models.dev) to support for **75+ LLM providers** and it supports running local models. + +To add a provider you need to: + +1. Add the API keys for the provider using the `/connect` command. +2. Configure the provider in your OpenCode config. + + +### Credentials + +When you add a provider's API keys with the `/connect` command, they are stored +in `~/.local/share/opencode/auth.json`. + +### Config + +You can customize the providers through the `provider` section in your OpenCode +config. + +#### Base URL + +You can customize the base URL for any provider by setting the `baseURL` option. This is useful when using proxy services or custom endpoints. + +```json opencode.json highlight={6} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "anthropic": { + "options": { + "baseURL": "https://api.anthropic.com/v1" + } + } + } +} +``` + +## OpenCode Zen + +OpenCode Zen is a list of models provided by the OpenCode team that have been +tested and verified to work well with OpenCode. [Learn more](/zen). + + +**TIP** + +If you are new, we recommend starting with OpenCode Zen. + + + + +Run the `/connect` command in the TUI, select opencode, and head to [opencode.ai/auth](https://opencode.ai/auth). + +```txt +/connect +``` + + +Sign in, add your billing details, and copy your API key. + + +Paste your API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run `/models` in the TUI to see the list of models we recommend. + +```txt +/models +``` + + + +It works like any other provider in OpenCode. And is completely optional to use +it. + +## Directory + +Let's look at some of the providers in detail. If you'd like to add a provider to the +list, feel free to open a PR. + + +**NOTE** + +Don't see a provider here? Submit a PR. + + +### Amazon Bedrock + +To use Amazon Bedrock with OpenCode: + +1. Head over to the **Model catalog** in the Amazon Bedrock console and request + access to the models you want. + + + **TIP** + + You need to have access to the model you want in Amazon Bedrock. + + +1. You'll need either to set one of the following environment variables: + - `AWS_ACCESS_KEY_ID`: You can get this by creating an IAM user and generating + an access key for it. + - `AWS_PROFILE`: First login through AWS IAM Identity Center (or AWS SSO) using + `aws sso login`. Then get the name of the profile you want to use. + - `AWS_BEARER_TOKEN_BEDROCK`: You can generate a long-term API key from the + Amazon Bedrock console. + + Once you have one of the above, set it while running opencode. + +```bash +AWS_ACCESS_KEY_ID=XXX opencode +``` + +Or add it to your bash profile. + +```bash title="~/.bash_profile" +export AWS_ACCESS_KEY_ID=XXX +``` + +1. Run the `/models` command to select the model you want. + +```txt +/models +``` + +### Anthropic + +We recommend signing up for [Claude Pro](https://www.anthropic.com/news/claude-pro) or [Max](https://www.anthropic.com/max). + + + +Once you've signed up, run the `/connect` command and select Anthropic. + +```txt +/connect +``` + + +Here you can select the **Claude Pro/Max** option and it'll open your browser + and ask you to authenticate. 
+ +```txt +┌ Select auth method +│ +│ Claude Pro/Max +│ Create an API Key +│ Manually enter API Key +└ +``` + + +Now all the the Anthropic models should be available when you use the `/models` command. + +```txt +/models +``` + + + +##### Using API keys + +You can also select **Create an API Key** if you don't have a Pro/Max subscription. It'll also open your browser and ask you to login to Anthropic and give you a code you can paste in your terminal. + +Or if you already have an API key, you can select **Manually enter API Key** and paste it in your terminal. + + +### Azure OpenAI + + +**NOTE** + +If you encounter "I'm sorry, but I cannot assist with that request" errors, try changing the content filter from **DefaultV2** to **Default** in your Azure resource. + + + + +Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need: + - **Resource name**: This becomes part of your API endpoint (`https://RESOURCE_NAME.openai.azure.com/`) + - **API key**: Either `KEY 1` or `KEY 2` from your resource + + +Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model. + + +The deployment name must match the model name for opencode to work properly. + + +Run the `/connect` command and search for **Azure**. + +```txt +/connect +``` + + +Enter your API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Set your resource name as an environment variable: + +```bash +AZURE_RESOURCE_NAME=XXX opencode +``` + +Or add it to your bash profile: + +```bash title="~/.bash_profile" +export AZURE_RESOURCE_NAME=XXX +``` + + +Run the `/models` command to select your deployed model. + +```txt +/models +``` + + + +### Azure Cognitive Services + + + +Head over to the [Azure portal](https://portal.azure.com/) and create an **Azure OpenAI** resource. You'll need: + - **Resource name**: This becomes part of your API endpoint (`https://AZURE_COGNITIVE_SERVICES_RESOURCE_NAME.cognitiveservices.azure.com/`) + - **API key**: Either `KEY 1` or `KEY 2` from your resource + + +Go to [Azure AI Foundry](https://ai.azure.com/) and deploy a model. + + +**NOTE** + +The deployment name must match the model name for opencode to work properly. + + + + +Run the `/connect` command and search for **Azure Cognitive Services**. + +```txt +/connect +``` + + +Enter your API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Set your resource name as an environment variable: + + +```bash +AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX opencode +``` + +Or add it to your bash profile: + +```bash title="~/.bash_profile" +export AZURE_COGNITIVE_SERVICES_RESOURCE_NAME=XXX +``` + + + +Run the `/models` command to select your deployed model. + +```txt +/models +``` + + + +### Baseten + + + +Head over to the [Baseten](https://app.baseten.co/), create an account, and generate an API key. + + +Run the `/connect` command and search for **Baseten**. + +```txt +/connect +``` + + +Enter your Baseten API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model. + +```txt +/models +``` + + + + +### Cerebras + + + +Head over to the [Cerebras console](https://inference.cerebras.ai/), create an account, and generate an API key. + + +Run the `/connect` command and search for **Cerebras**. + +```txt +/connect +``` + + +Enter your Cerebras API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Qwen 3 Coder 480B_. 
+ +```txt +/models +``` + + + +### Cortecs + + +Head over to the [Cortecs console](https://cortecs.ai/), create an account, and generate an API key. + + +Run the `/connect` command and search for **Cortecs**. + +```txt +/connect +``` + + +Enter your Cortecs API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Kimi K2 Instruct_. + +```txt +/models +``` + + + +### DeepSeek + + + +Head over to the [DeepSeek console](https://platform.deepseek.com/), create an account, and click **Create new API key**. + + +Run the `/connect` command and search for **DeepSeek**. + +```txt +/connect +``` + + +Enter your DeepSeek API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a DeepSeek model like _DeepSeek Reasoner_. + +```txt +/models +``` + + + +### Deep Infra + + + +Head over to the [Deep Infra dashboard](https://deepinfra.com/dash), create an account, and generate an API key. + + +Run the `/connect` command and search for **Deep Infra**. + +```txt +/connect +``` + + +Enter your Deep Infra API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model. + +```txt +/models +``` + + + +### Fireworks AI + + + +Head over to the [Fireworks AI console](https://app.fireworks.ai/), create an account, and click **Create API Key**. + + + +Run the `/connect` command and search for **Fireworks AI**. + +```txt +/connect +``` + + +Enter your Fireworks AI API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Kimi K2 Instruct_. + +```txt +/models +``` + + + +### GitHub Copilot + +To use your GitHub Copilot subscription with opencode: + + +**NOTE** + +Some models might need a [Pro+ +subscription](https://github.com/features/copilot/plans) to use. + +Some models need to be manually enabled in your [GitHub Copilot settings](https://docs.github.com/en/copilot/how-tos/use-ai-models/configure-access-to-ai-models#setup-for-individual-use). + + + + +Run the `/connect` command and search for GitHub Copilot. + +```txt +/connect +``` + + +Navigate to [github.com/login/device](https://github.com/login/device) and enter the code. + +```txt +┌ Login with GitHub Copilot +│ +│ https://github.com/login/device +│ +│ Enter code: 8F43-6FCF +│ +└ Waiting for authorization... +``` + + +Now run the `/models` command to select the model you want. + +```txt +/models +``` + + + +### Google Vertex AI + +To use Google Vertex AI with OpenCode: + + + +Head over to the **Model Garden** in the Google Cloud Console and check the + models available in your region. + + +**NOTE** + + You need to have a Google Cloud project with Vertex AI API enabled. + + + +Set the required environment variables: + - `GOOGLE_CLOUD_PROJECT`: Your Google Cloud project ID + - `VERTEX_LOCATION` (optional): The region for Vertex AI (defaults to `global`) + - Authentication (choose one): + - `GOOGLE_APPLICATION_CREDENTIALS`: Path to your service account JSON key file + - Authenticate using gcloud CLI: `gcloud auth application-default login` + + Set them while running opencode. + +```bash +GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json GOOGLE_CLOUD_PROJECT=your-project-id opencode +``` +Or add them to your bash profile. 
+ +```bash title="~/.bash_profile" +export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json +export GOOGLE_CLOUD_PROJECT=your-project-id +export VERTEX_LOCATION=global +``` + + +**NOTE** + +The `global` region improves availability and reduces errors at no extra cost. Use regional endpoints (e.g., `us-central1`) for data residency requirements. [Learn more](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-partner-models#regional_and_global_endpoints) + + + +Run the `/models` command to select the model you want. + +```txt +/models +``` + + + +### Groq + + + +Head over to the [Groq console](https://console.groq.com/), click **Create API Key**, and copy the key. + + + +Run the `/connect` command and search for Groq. + +```txt +/connect +``` + + +Enter the API key for the provider. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select the one you want. + +```txt +/models +``` + + + +### Hugging Face + +[Hugging Face Inference Providers](https://huggingface.co/docs/inference-providers) provides access to open models supported by 17+ providers. + + + +Head over to [Hugging Face settings](https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained) to create a token with permission to make calls to Inference Providers. + + + +Run the `/connect` command and search for **Hugging Face**. + +```txt +/connect +``` + + +Enter your Hugging Face token. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Kimi-K2-Instruct_ or _GLM-4.6_. + +```txt +/models +``` + + + +### Helicone + +[Helicone](https://helicone.ai) is an LLM observability platform that provides logging, monitoring, and analytics for your AI applications. The Helicone AI Gateway routes your requests to the appropriate provider automatically based on the model. + + + +Head over to [Helicone](https://helicone.ai), create an account, and generate an API key from your dashboard. + + + +Run the `/connect` command and search for **Helicone**. + +```txt +/connect +``` + + +Enter your Helicone API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model. + +```txt +/models +``` + + + +For more providers and advanced features like caching and rate limiting, check the [Helicone documentation](https://docs.helicone.ai). + +#### Optional Configs + +In the event you see a feature or model from Helicone that isn't configured automatically through opencode, you can always configure it yourself. + +Here's [Helicone's Model Directory](https://helicone.ai/models), you'll need this to grab the IDs of the models you want to add. + +```jsonc ~/.config/opencode/opencode.jsonc +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "helicone": { + "npm": "@ai-sdk/openai-compatible", + "name": "Helicone", + "options": { + "baseURL": "https://ai-gateway.helicone.ai", + }, + "models": { + "gpt-4o": { + // Model ID (from Helicone's model directory page) + "name": "GPT-4o", // Your own custom name for the model + }, + "claude-sonnet-4-20250514": { + "name": "Claude Sonnet 4", + }, + }, + }, + }, +} +``` + +#### Custom Headers + +Helicone supports custom headers for features like caching, user tracking, and session management. 
Add them to your provider config using `options.headers`: + +```jsonc ~/.config/opencode/opencode.jsonc +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "helicone": { + "npm": "@ai-sdk/openai-compatible", + "name": "Helicone", + "options": { + "baseURL": "https://ai-gateway.helicone.ai", + "headers": { + "Helicone-Cache-Enabled": "true", + "Helicone-User-Id": "opencode", + }, + }, + }, + }, +} +``` + +##### Session tracking + +Helicone's [Sessions](https://docs.helicone.ai/features/sessions) feature lets you group related LLM requests together. Use the [opencode-helicone-session](https://github.com/H2Shami/opencode-helicone-session) plugin to automatically log each OpenCode conversation as a session in Helicone. + +```bash +npm install -g opencode-helicone-session +``` + +Add it to your config. + +```json opencode.json +{ + "plugin": ["opencode-helicone-session"] +} +``` + +The plugin injects `Helicone-Session-Id` and `Helicone-Session-Name` headers into your requests. In Helicone's Sessions page, you'll see each OpenCode conversation listed as a separate session. + +##### Common Helicone headers + +| Header | Description | +| :------------------------- | :----------------------------------------------------------- | +| `Helicone-Cache-Enabled` | Enable response caching (`true`/`false`) | +| `Helicone-User-Id` | Track metrics by user | +| `Helicone-Property-[Name]` | Add custom properties (e.g., `Helicone-Property-Environment`) | +| `Helicone-Prompt-Id` | Associate requests with prompt versions | + +See the [Helicone Header Directory](https://docs.helicone.ai/helicone-headers/header-directory) for all available headers. + +### llama.cpp + +You can configure opencode to use local models through [llama.cpp's](https://github.com/ggml-org/llama.cpp) llama-server utility + +```json opencode.json highlight={5, 6, 8, 10-15} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "llama.cpp": { + "npm": "@ai-sdk/openai-compatible", + "name": "llama-server (local)", + "options": { + "baseURL": "http://127.0.0.1:8080/v1" + }, + "models": { + "qwen3-coder:a3b": { + "name": "Qwen3-Coder: a3b-30b (local)", + "limit": { + "context": 128000, + "output": 65536 + } + } + } + } + } +} +``` + +In this example: + +- `llama.cpp` is the custom provider ID. This can be any string you want. +- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API. +- `name` is the display name for the provider in the UI. +- `options.baseURL` is the endpoint for the local server. +- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list. + +### IO.NET + +IO.NET offers 17 models optimized for various use cases: + + + +Head over to the [IO.NET console](https://ai.io.net/), create an account, and generate an API key. + + + +Run the `/connect` command and search for **IO.NET**. + +```txt +/connect +``` + + +Enter your IO.NET API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model. + +```txt +/models +``` + + + +### LM Studio + +You can configure opencode to use local models through LM Studio. 
+ +```json opencode.json highlight={5, 6, 8, 10-14} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "lmstudio": { + "npm": "@ai-sdk/openai-compatible", + "name": "LM Studio (local)", + "options": { + "baseURL": "http://127.0.0.1:1234/v1" + }, + "models": { + "google/gemma-3n-e4b": { + "name": "Gemma 3n-e4b (local)" + } + } + } + } +} +``` + +In this example: + +- `lmstudio` is the custom provider ID. This can be any string you want. +- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API. +- `name` is the display name for the provider in the UI. +- `options.baseURL` is the endpoint for the local server. +- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list. + + +### Moonshot AI + +To use Kimi K2 from Moonshot AI: + + + +Head over to the [Moonshot AI console](https://platform.moonshot.ai/console), create an account, and click **Create API key**. + + + +Run the `/connect` command and search for **Moonshot AI**. + +```txt +/connect +``` + + +Enter your Moonshot API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select _Kimi K2_. + +```txt +/models +``` + + + +### Nebius Token Factory + + + +Head over to the [Nebius Token Factory console](https://tokenfactory.nebius.com/), create an account, and click **Add Key**. + + + +Run the `/connect` command and search for **Nebius Token Factory**. +```txt +/connect +``` + + +Enter your Nebius Token Factory API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Kimi K2 Instruct_. + +```txt +/models +``` + + + +### Ollama + +You can configure opencode to use local models through Ollama. + +```json opencode.json highlight={5, 6, 8, 10-14} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "ollama": { + "npm": "@ai-sdk/openai-compatible", + "name": "Ollama (local)", + "options": { + "baseURL": "http://localhost:11434/v1" + }, + "models": { + "llama2": { + "name": "Llama 2" + } + } + } + } +} +``` + +In this example: + +- `ollama` is the custom provider ID. This can be any string you want. +- `npm` specifies the package to use for this provider. Here, `@ai-sdk/openai-compatible` is used for any OpenAI-compatible API. +- `name` is the display name for the provider in the UI. +- `options.baseURL` is the endpoint for the local server. +- `models` is a map of model IDs to their configurations. The model name will be displayed in the model selection list. + + +**TIP** + +If tool calls aren't working, try increasing `num_ctx` in Ollama. Start around 16k - 32k. + + +### Ollama Cloud + +To use Ollama Cloud with OpenCode: + + + +Head over to [https://ollama.com/](https://ollama.com/) and sign in or create an account. + + +Navigate to **Settings** > **Keys** and click **Add API Key** to generate a new API key. + + +Copy the API key for use in OpenCode. + + +Run the `/connect` command and search for **Ollama Cloud**. +```txt +/connect +``` + + +Enter your Ollama Cloud API key. + +```txt +┌ API key +│ +│ +└ enter +``` + + +**Important**: Before using cloud models in OpenCode, you must pull the model information locally: + +```bash +ollama pull gpt-oss:20b-cloud +``` + + +Run the `/models` command to select your Ollama Cloud model. +```txt +/models +``` + + + +### OpenAI + + + +Head over to the [OpenAI Platform console](https://platform.openai.com/api-keys), click **Create new secret key**, and copy the key. 
+ + +Run the `/connect` command and search for OpenAI. +```txt +/connect +``` + + +Enter the API key for the provider. +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select the one you want. +```txt +/models +``` + + + +### OpenCode Zen + +OpenCode Zen is a list of tested and verified models provided by the OpenCode team. [Learn more](/zen). + + + +Sign in to **[OpenCode Zen](https://opencode.ai/auth)** and click **Create API Key**. + + +Run the `/connect` command and search for **OpenCode Zen**. +```txt +/connect +``` + + +Enter your OpenCode API key. +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Qwen 3 Coder 480B_. +```txt +/models +``` + + + +### OpenRouter + + + +Head over to the [OpenRouter dashboard](https://openrouter.ai/settings/keys), click **Create API Key**, and copy the key. + + +Run the `/connect` command and search for OpenRouter. +```txt +/connect +``` + + +Enter the API key for the provider. +```txt +┌ API key +│ +│ +└ enter +``` + + +Many OpenRouter models are preloaded by default, run the `/models` command to select the one you want. +```txt + /models + ``` + + You can also add additional models through your opencode config. + +```json opencode.json highlight={6} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "openrouter": { + "models": { + "somecoolnewmodel": {} + } + } + } +} +``` + + +You can also customize them through your opencode config. Here's an example of specifying a provider + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "openrouter": { + "models": { + "moonshotai/kimi-k2": { + "options": { + "provider": { + "order": ["baseten"], + "allow_fallbacks": false + } + } + } + } + } + } +} +``` + + + +### SAP AI Core + +SAP AI Core provides access to 40+ models from OpenAI, Anthropic, Google, Amazon, Meta, Mistral, and AI21 through a unified platform. + + + +Go to your [SAP BTP Cockpit](https://account.hana.ondemand.com/), navigate to your SAP AI Core service instance, and create a service key. + + +**TIP** + +The service key is a JSON object containing `clientid`, `clientsecret`, `url`, and `serviceurls.AI_API_URL`. You can find your AI Core instance under **Services** > **Instances and Subscriptions** in the BTP Cockpit. + + + +Run the `/connect` command and search for **SAP AI Core**. + +```txt +/connect +``` + + +Enter your service key JSON. + +```txt +┌ Service key +│ +│ +└ enter +``` + +Or set the `AICORE_SERVICE_KEY` environment variable: + +```bash +AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' opencode +``` + +Or add it to your bash profile: + +```bash title="~/.bash_profile" +export AICORE_SERVICE_KEY='{"clientid":"...","clientsecret":"...","url":"...","serviceurls":{"AI_API_URL":"..."}}' +``` + + +Optionally set deployment ID and resource group: + +```bash +AICORE_DEPLOYMENT_ID=your-deployment-id AICORE_RESOURCE_GROUP=your-resource-group opencode +``` + + +**NOTE** + +These settings are optional and should be configured according to your SAP AI Core setup. + + + +Run the `/models` command to select from 40+ available models. +```txt +/models +``` + + + +### OVHcloud AI Endpoints + + + +Head over to the [OVHcloud panel](https://ovh.com/manager). Navigate to the `Public Cloud` section, `AI & Machine Learning` > `AI Endpoints` and in `API Keys` tab, click **Create a new API key**. + + +Run the `/connect` command and search for **OVHcloud AI Endpoints**. 
+ +```txt +/connect +``` + + +Enter your OVHcloud AI Endpoints API key. +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _gpt-oss-120b_. +```txt +/models +``` + + + +### Together AI + + + +Head over to the [Together AI console](https://api.together.ai), create an account, and click **Add Key**. + + +Run the `/connect` command and search for **Together AI**. +```txt +/connect +``` + + +Enter your Together AI API key. +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _Kimi K2 Instruct_. +```txt +/models +``` + + + +### Venice AI + + + +Head over to the [Venice AI console](https://venice.ai), create an account, and generate an API key. + + + +Run the `/connect` command and search for **Venice AI**. +```txt +/connect +``` + + +Enter your Venice AI API key. +```txt + ┌ API key + │ + │ + └ enter +``` + + + +Run the `/models` command to select a model like _Llama 3.3 70B_. +```txt +/models +``` + + + +### xAI + + + +Head over to the [xAI console](https://console.x.ai/), create an account, and generate an API key. + + + +Run the `/connect` command and search for **xAI**. +```txt +/connect +``` + + +Enter your xAI API key. +```txt + ┌ API key + │ + │ + └ enter + ``` + + +Run the `/models` command to select a model like _Grok Beta_. +```txt + /models + ``` + + + +### Z.AI + + + +Head over to the [Z.AI API console](https://z.ai/manage-apikey/apikey-list), create an account, and click **Create a new API key**. + + +Run the `/connect` command and search for **Z.AI**. + +```txt +/connect +``` + +If you are subscribed to the **GLM Coding Plan**, select **Z.AI Coding Plan**. + + + +Enter your Z.AI API key. +```txt +┌ API key +│ +│ +└ enter +``` + + +Run the `/models` command to select a model like _GLM-4.5_. +```txt +/models +``` + + + +### ZenMux + + + +Head over to the [ZenMux dashboard](https://zenmux.ai/settings/keys), click **Create API Key**, and copy the key. + + +Run the `/connect` command and search for ZenMux. + +```txt +/connect +``` + + +Enter the API key for the provider. +```txt +┌ API key +│ +│ +└ enter +``` + + +Many ZenMux models are preloaded by default, run the `/models` command to select the one you want. +```txt +/models +``` + + +You can also add additional models through your opencode config. +```json title="opencode.json" {6} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "zenmux": { + "models": { + "somecoolnewmodel": {} + } + } + } +} +``` + + + +## Custom provider + +To add any **OpenAI-compatible** provider that's not listed in the `/connect` command: + + +**TIP** + +You can use any OpenAI-compatible provider with opencode. Most modern AI providers offer OpenAI-compatible APIs. + + + + +Run the `/connect` command and scroll down to **Other**. + +```bash +$ /connect + +┌ Add credential +│ +◆ Select provider +│ ... +│ ● Other +└ +``` + + +Enter a unique ID for the provider. + +```bash +$ /connect + +┌ Add credential +│ +◇ Enter provider id +│ myprovider +└ +``` + + +**NOTE** + +Choose a memorable ID, you'll use this in your config file. + + + + +Enter your API key for the provider. +```bash +$ /connect + +┌ Add credential +│ +▲ This only stores a credential for myprovider - you will need to configure it in opencode.json, check the docs for examples. +│ +◇ Enter your API key +│ sk-... 
+└ +``` + + +Create or update your `opencode.json` file in your project directory: + +```json opencode.json highlight={5-15} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "myprovider": { + "npm": "@ai-sdk/openai-compatible", + "name": "My AI ProviderDisplay Name", + "options": { + "baseURL": "https://api.myprovider.com/v1" + }, + "models": { + "my-model-name": { + "name": "My Model Display Name" + } + } + } + } +} +``` + + Here are the configuration options: + - **npm**: AI SDK package to use, `@ai-sdk/openai-compatible` for OpenAI-compatible providers + - **name**: Display name in UI. + - **models**: Available models. + - **options.baseURL**: API endpoint URL. + - **options.apiKey**: Optionally set the API key, if not using auth. + - **options.headers**: Optionally set custom headers. + + More on the advanced options in the example below. + + +Run the `/models` command and your custom provider and models will appear in the selection list. + + + +### Example + +Here's an example setting the `apiKey`, `headers`, and model `limit` options. +```json opencode.json highlight={9,11,17-20} +{ + "$schema": "https://opencode.ai/config.json", + "provider": { + "myprovider": { + "npm": "@ai-sdk/openai-compatible", + "name": "My AI ProviderDisplay Name", + "options": { + "baseURL": "https://api.myprovider.com/v1", + "apiKey": "{env:ANTHROPIC_API_KEY}", + "headers": { + "Authorization": "Bearer custom-token" + } + }, + "models": { + "my-model-name": { + "name": "My Model Display Name", + "limit": { + "context": 200000, + "output": 65536 + } + } + } + } + } +} +``` + +Configuration details: + +- **apiKey**: Set using `env` variable syntax, [learn more](/config#env-vars). +- **headers**: Custom headers sent with each request. +- **limit.context**: Maximum input tokens the model accepts. +- **limit.output**: Maximum tokens the model can generate. + +The `limit` fields allow OpenCode to understand how much context you have left. Standard providers pull these from models.dev automatically. + +## Troubleshooting + +If you are having trouble with configuring a provider, check the following: + +1. **Check the auth setup**: Run `opencode auth list` to see if the credentials + for the provider are added to your config. + + This doesn't apply to providers like Amazon Bedrock, that rely on environment variables for their auth. + +2. For custom providers, check the opencode config and: + - Make sure the provider ID used in the `/connect` command matches the ID in your opencode config. + - The right npm package is used for the provider. For example, use `@ai-sdk/cerebras` for Cerebras. And for all other OpenAI-compatible providers, use `@ai-sdk/openai-compatible`. + - Check correct API endpoint is used in the `options.baseURL` field. diff --git a/packages/docs/quickstart.mdx b/packages/docs/quickstart.mdx deleted file mode 100644 index 52243fba6c2..00000000000 --- a/packages/docs/quickstart.mdx +++ /dev/null @@ -1,81 +0,0 @@ ---- -title: "Quickstart" -description: "Start building awesome documentation in minutes" ---- - -## Get started in three steps - -Get your documentation site running locally and make your first customization. - -### Step 1: Set up your local environment - - - - During the onboarding process, you created a GitHub repository with your docs content if you didn't already have - one. You can find a link to this repository in your [dashboard](https://dashboard.mintlify.com). 
To clone the - repository locally so that you can make and preview changes to your docs, follow the [Cloning a - repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) guide - in the GitHub docs. - - - 1. Install the Mintlify CLI: `npm i -g mint` 2. Navigate to your docs directory and run: `mint dev` 3. Open - `http://localhost:3000` to see your docs live! - Your preview updates automatically as you edit files. - - - -### Step 2: Deploy your changes - - - - Install the Mintlify GitHub app from your [dashboard](https://dashboard.mintlify.com/settings/organization/github-app). - - Our GitHub app automatically deploys your changes to your docs site, so you don't need to manage deployments yourself. - - - For a first change, let's update the name and colors of your docs site. - - 1. Open `docs.json` in your editor. - 2. Change the `"name"` field to your project name. - 3. Update the `"colors"` to match your brand. - 4. Save and see your changes instantly at `http://localhost:3000`. - - Try changing the primary color to see an immediate difference! - - - - -### Step 3: Go live - - - 1. Commit and push your changes. 2. Your docs will update and be live in moments! - - -## Next steps - -Now that you have your docs running, explore these key features: - - - - - Learn MDX syntax and start writing your documentation. - - - - Make your docs match your brand perfectly. - - - - Include syntax-highlighted code blocks. - - - - Auto-generate API docs from OpenAPI specs. - - - - - - **Need help?** See our [full documentation](https://mintlify.com/docs) or join our - [community](https://mintlify.com/community). - diff --git a/packages/docs/rules.mdx b/packages/docs/rules.mdx new file mode 100644 index 00000000000..90cbef260ed --- /dev/null +++ b/packages/docs/rules.mdx @@ -0,0 +1,145 @@ +--- +title: Rules +description: Set custom instructions for opencode. +--- + +You can provide custom instructions to opencode by creating an `AGENTS.md` file. This is similar to `CLAUDE.md` or Cursor's rules. It contains instructions that will be included in the LLM's context to customize its behavior for your specific project. + +## Initialize + +To create a new `AGENTS.md` file, you can run the `/init` command in opencode. + + +**TIP** + +You should commit your project's `AGENTS.md` file to Git. + + +This will scan your project and all its contents to understand what the project is about and generate an `AGENTS.md` file with it. This helps opencode to navigate the project better. + +If you have an existing `AGENTS.md` file, this will try to add to it. + + +## Example + +You can also just create this file manually. Here's an example of some things you can put into an `AGENTS.md` file. + +```markdown AGENTS.md expandable +# SST v3 Monorepo Project + +This is an SST v3 monorepo with TypeScript. The project uses bun workspaces for package management. + +## Project Structure + +- `packages/` - Contains all workspace packages (functions, core, web, etc.) 
+- `infra/` - Infrastructure definitions split by service (storage.ts, api.ts, web.ts) +- `sst.config.ts` - Main SST configuration with dynamic imports + +## Code Standards + +- Use TypeScript with strict mode enabled +- Shared code goes in `packages/core/` with proper exports configuration +- Functions go in `packages/functions/` +- Infrastructure should be split into logical files in `infra/` + +## Monorepo Conventions + +- Import shared modules using workspace names: `@my-app/core/example` +``` + +We are adding project-specific instructions here and this will be shared across your team. + +## Types + +opencode also supports reading the `AGENTS.md` file from multiple locations. And this serves different purposes. + +### Project + +The ones we have seen above, where the `AGENTS.md` is placed in the project root, are project-specific rules. These only apply when you are working in this directory or its sub-directories. + +### Global + +You can also have global rules in a `~/.config/opencode/AGENTS.md` file. This gets applied across all opencode sessions. + +Since this isn't committed to Git or shared with your team, we recommend using this to specify any personal rules that the LLM should follow. + +## Precedence + +So when opencode starts, it looks for: + +1. **Local files** by traversing up from the current directory +2. **Global file** by checking `~/.config/opencode/AGENTS.md` + +If you have both global and project-specific rules, opencode will combine them together. + +## Custom Instructions + +You can specify custom instruction files in your `opencode.json` or the global `~/.config/opencode/opencode.json`. This allows you and your team to reuse existing rules rather than having to duplicate them to AGENTS.md. + +Example: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "instructions": ["CONTRIBUTING.md", "docs/guidelines.md", ".cursor/rules/*.md"] +} +``` + +All instruction files are combined with your `AGENTS.md` files. + +## Referencing External Files + +While opencode doesn't automatically parse file references in `AGENTS.md`, you can achieve similar functionality in two ways: + +### Using opencode.json + +The recommended approach is to use the `instructions` field in `opencode.json`: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "instructions": ["docs/development-standards.md", "test/testing-guidelines.md", "packages/*/AGENTS.md"] +} +``` + +### Manual Instructions in AGENTS.md + +You can teach opencode to read external files by providing explicit instructions in your `AGENTS.md`. Here's a practical example: + +```markdown AGENTS.md expandable +# TypeScript Project Rules + +## External File Loading + +CRITICAL: When you encounter a file reference (e.g., @rules/general.md), use your Read tool to load it on a need-to-know basis. They're relevant to the SPECIFIC task at hand. 
+ +Instructions: + +- Do NOT preemptively load all references - use lazy loading based on actual need +- When loaded, treat content as mandatory instructions that override defaults +- Follow references recursively when needed + +## Development Guidelines + +For TypeScript code style and best practices: @docs/typescript-guidelines.md +For React component architecture and hooks patterns: @docs/react-patterns.md +For REST API design and error handling: @docs/api-standards.md +For testing strategies and coverage requirements: @test/testing-guidelines.md + +## General Guidelines + +Read the following file immediately as it's relevant to all workflows: @rules/general-guidelines.md. +``` + +This approach allows you to: + +- Create modular, reusable rule files +- Share rules across projects via symlinks or git submodules +- Keep AGENTS.md concise while referencing detailed guidelines +- Ensure opencode loads files only when needed for the specific task + + +**TIP** + +For monorepos or projects with shared standards, using `opencode.json` with glob patterns (like `packages/*/AGENTS.md`) is more maintainable than manual instructions. + diff --git a/packages/docs/sdk.mdx b/packages/docs/sdk.mdx new file mode 100644 index 00000000000..211e9d7e3e7 --- /dev/null +++ b/packages/docs/sdk.mdx @@ -0,0 +1,314 @@ +--- +title: SDK +description: Type-safe JS client for opencode server. +--- + + +The opencode JS/TS SDK provides a type-safe client for interacting with the server. +Use it to build integrations and control opencode programmatically. + +[Learn more](/server) about how the server works. For examples, check out the [projects](/ecosystem#projects) built by the community. + +## Install + +Install the SDK from npm: + +```bash +npm install @opencode-ai/sdk +``` + +## Create client + +Create an instance of opencode: + +```javascript +import { createOpencode } from "@opencode-ai/sdk" + +const { client } = await createOpencode() +``` + +This starts both a server and a client + +#### Options + +| Option | Type | Description | Default | +| :--------- | :------------ | :----------------------------- | :---------- | +| `hostname` | `string` | Server hostname | `127.0.0.1` | +| `port` | `number` | Server port | `4096` | +| `signal` | `AbortSignal` | Abort signal for cancellation | `undefined` | +| `timeout` | `number` | Timeout in ms for server start | `5000` | +| `config` | `Config` | Configuration object | `{}` | + +## Config + +You can pass a configuration object to customize behavior. 
The instance still picks up your `opencode.json`, but you can override or add configuration inline: + +```javascript +import { createOpencode } from "@opencode-ai/sdk" + +const opencode = await createOpencode({ + hostname: "127.0.0.1", + port: 4096, + config: { + model: "anthropic/claude-3-5-sonnet-20241022", + }, +}) + +console.log(`Server running at ${opencode.server.url}`) + +opencode.server.close() +``` + +## Client only + +If you already have a running instance of opencode, you can create a client instance to connect to it: + +```javascript +import { createOpencodeClient } from "@opencode-ai/sdk" + +const client = createOpencodeClient({ + baseUrl: "http://localhost:4096", +}) +``` + +#### Options + +| Option | Type | Description | Default | +| :--------------- | :--------- | :-------------------------------- | :---------------------- | +| `baseUrl` | `string` | URL of the server | `http://localhost:4096` | +| `fetch` | `function` | Custom fetch implementation | `globalThis.fetch` | +| `parseAs` | `string` | Response parsing method | `auto` | +| `responseStyle` | `string` | Return style: `data` or `fields` | `fields` | +| `throwOnError` | `boolean` | Throw errors instead of return | `false` | + +## Types + +The SDK includes TypeScript definitions for all API types. Import them directly: + +```typescript +import type { Session, Message, Part } from "@opencode-ai/sdk" +``` + +All types are generated from the server's OpenAPI specification and available in the [types file](https://github.com/sst/opencode/blob/dev/packages/sdk/js/src/gen/types.gen.ts). + +## Errors + +The SDK can throw errors that you can catch and handle: + +```typescript +try { + await client.session.get({ path: { id: "invalid-id" } }) +} catch (error) { + console.error("Failed to get session:", (error as Error).message) +} +``` + +## APIs + +The SDK exposes all server APIs through a type-safe client. 
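+For example, here's a minimal sketch that assumes a server is already running on the default port via `opencode serve`. It connects a client and checks what's available before sending any prompts:
+
+```javascript
+import { createOpencodeClient } from "@opencode-ai/sdk"
+
+// Assumes `opencode serve` is running on the default port
+const client = createOpencodeClient({
+  baseUrl: "http://localhost:4096",
+})
+
+// List the available agents and providers
+const agents = await client.app.agents()
+const { providers } = await client.config.providers()
+
+console.log("Agents:", agents)
+console.log("Providers:", providers)
+```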
+ +### App + +| Method | Description | Response | +| :------------- | :------------------------ | :------------------------------------------ | +| `app.log()` | Write a log entry | `boolean` | +| `app.agents()` | List all available agents | Agent[] | + +#### Examples + +```javascript +// Write a log entry +await client.app.log({ + body: { + service: "my-app", + level: "info", + message: "Operation completed", + }, +}) + +// List available agents +const agents = await client.app.agents() +``` + +### Project + +| Method | Description | Response | +| :------------------ | :------------------ | :-------------------------------------------- | +| `project.list()` | List all projects | Project[] | +| `project.current()` | Get current project | Project | + +#### Examples + +```javascript +// List all projects +const projects = await client.project.list() + +// Get current project +const currentProject = await client.project.current() +``` + +### Path + +| Method | Description | Response | +| :----------- | :--------------- | :--------------------------------------- | +| `path.get()` | Get current path | Path | + +#### Examples + +```javascript +// Get current path information +const pathInfo = await client.path.get() +``` + +### Config + +| Method | Description | Response | +| :------------------- | :-------------------------------- | :---------------------------------------------------------------------------------------------------- | +| `config.get()` | Get config info | Config | +| `config.providers()` | List providers and default models | `{ providers: `Provider[]`, default: { [key: string]: string } }` | + +#### Examples + +```javascript +const config = await client.config.get() + +const { providers, default: defaults } = await client.config.providers() +``` + +### Sessions + +| Method | Description | Notes | +| :--------------------------------------------------------- | :--------------------------------- | :--------------------------------------------------------------------------------------------------------------------------------------------- | +| `session.list()` | List sessions | Returns Session[] | +| `session.get({ path })` | Get session | Returns Session | +| `session.children({ path })` | List child sessions | Returns Session[] | +| `session.create({ body })` | Create session | Returns Session | +| `session.delete({ path })` | Delete session | Returns `boolean` | +| `session.update({ path, body })` | Update session properties | Returns Session | +| `session.init({ path, body })` | Analyze app and create `AGENTS.md` | Returns `boolean` | +| `session.abort({ path })` | Abort a running session | Returns `boolean` | +| `session.share({ path })` | Share session | Returns Session | +| `session.unshare({ path })` | Unshare session | Returns Session | +| `session.summarize({ path, body })` | Summarize session | Returns `boolean` | +| `session.messages({ path })` | List messages in a session | Returns `{ info: `Message`, parts: `Part[]`}[]` | +| `session.message({ path })` | Get message details | Returns `{ info: `Message`, parts: `Part[]`}` | +| `session.prompt({ path, body })` | Send prompt message | `body.noReply: true` returns UserMessage (context only). 
Default returns AssistantMessage with AI response | +| `session.command({ path, body })` | Send command to session | Returns `{ info: `AssistantMessage`, parts: `Part[]`}` | +| `session.shell({ path, body })` | Run a shell command | Returns AssistantMessage | +| `session.revert({ path, body })` | Revert a message | Returns Session | +| `session.unrevert({ path })` | Restore reverted messages | Returns Session | +| `postSessionByIdPermissionsByPermissionId({ path, body })` | Respond to a permission request | Returns `boolean` | + +#### Examples + +```javascript expandable +// Create and manage sessions +const session = await client.session.create({ + body: { title: "My session" }, +}) + +const sessions = await client.session.list() + +// Send a prompt message +const result = await client.session.prompt({ + path: { id: session.id }, + body: { + model: { providerID: "anthropic", modelID: "claude-3-5-sonnet-20241022" }, + parts: [{ type: "text", text: "Hello!" }], + }, +}) + +// Inject context without triggering AI response (useful for plugins) +await client.session.prompt({ + path: { id: session.id }, + body: { + noReply: true, + parts: [{ type: "text", text: "You are a helpful assistant." }], + }, +}) +``` + +### Files + +| Method | Description | Response | +| :------------------------ | :--------------------------- | :------------------------------------------------------------------------------------------ | +| `find.text({ query })` | Search for text in files | Array of match objects with `path`, `lines`, `line_number`, `absolute_offset`, `submatches` | +| `find.files({ query })` | Find files by name | `string[]` (file paths) | +| `find.symbols({ query })` | Find workspace symbols | Symbol[] | +| `file.read({ query })` | Read a file | `{ type: "raw" \| "patch", content: string }` | +| `file.status({ query? })` | Get status for tracked files | File[] | + +#### Examples + +```javascript +// Search and read files +const textResults = await client.find.text({ + query: { pattern: "function.*opencode" }, +}) + +const files = await client.find.files({ + query: { query: "*.ts" }, +}) + +const content = await client.file.read({ + query: { path: "src/index.ts" }, +}) +``` + +### TUI + +| Method | Description | Response | +| :----------------------------- | :------------------------ | :-------- | +| `tui.appendPrompt({ body })` | Append text to the prompt | `boolean` | +| `tui.openHelp()` | Open the help dialog | `boolean` | +| `tui.openSessions()` | Open the session selector | `boolean` | +| `tui.openThemes()` | Open the theme selector | `boolean` | +| `tui.openModels()` | Open the model selector | `boolean` | +| `tui.submitPrompt()` | Submit the current prompt | `boolean` | +| `tui.clearPrompt()` | Clear the prompt | `boolean` | +| `tui.executeCommand({ body })` | Execute a command | `boolean` | +| `tui.showToast({ body })` | Show toast notification | `boolean` | + +#### Examples + +```javascript +// Control TUI interface +await client.tui.appendPrompt({ + body: { text: "Add this to prompt" }, +}) + +await client.tui.showToast({ + body: { message: "Task completed", variant: "success" }, +}) +``` + +### Auth + +| Method | Description | Response | +| :------------------ | :----------------------------- | :-------- | +| `auth.set({ ... 
})` | Set authentication credentials | `boolean` | + +#### Examples + +```javascript +await client.auth.set({ + path: { id: "anthropic" }, + body: { type: "api", key: "your-api-key" }, +}) +``` + +### Events + +| Method | Description | Response | +| :------------------ | :------------------------ | :------------------------ | +| `event.subscribe()` | Server-sent events stream | Server-sent events stream | + +#### Examples + +```javascript +// Listen to real-time events +const events = await client.event.subscribe() +for await (const event of events.stream) { + console.log("Event:", event.type, event.properties) +} +``` diff --git a/packages/docs/server.mdx b/packages/docs/server.mdx new file mode 100644 index 00000000000..2c440bd8032 --- /dev/null +++ b/packages/docs/server.mdx @@ -0,0 +1,213 @@ +--- +title: Server +description: Interact with opencode server over HTTP. +--- + +The `opencode serve` command runs a headless HTTP server that exposes an OpenAPI endpoint that an opencode client can use. + +### Usage + +```bash +opencode serve [--port ] [--hostname ] +``` + +#### Options + +| Flag | Short | Description | Default | +| :----------- | :---- | :-------------------- | :---------- | +| `--port` | `-p` | Port to listen on | `4096` | +| `--hostname` | `-h` | Hostname to listen on | `127.0.0.1` | + +### How it works + +When you run `opencode` it starts a TUI and a server. Where the TUI is the +client that talks to the server. The server exposes an OpenAPI 3.1 spec +endpoint. This endpoint is also used to generate an [SDK](/sdk). + + +**TIP** + +Use the opencode server to interact with opencode programmatically. + + +This architecture lets opencode support multiple clients and allows you to interact with opencode programmatically. + +You can run `opencode serve` to start a standalone server. If you have the +opencode TUI running, `opencode serve` will start a new server. + +#### Connect to an existing server + +When you start the TUI it randomly assigns a port and hostname. You can instead pass in the `--hostname` and `--port` [flags](/cli). Then use this to connect to its server. + +The [`/tui`](#tui) endpoint can be used to drive the TUI through the server. For example, you can prefill or run a prompt. This setup is used by the OpenCode [IDE](/ide) plugins. + +## Spec + +The server publishes an OpenAPI 3.1 spec that can be viewed at: + +```text +http://:/doc +``` + +For example, `http://localhost:4096/doc`. Use the spec to generate clients or inspect request and response types. Or view it in a Swagger explorer. + +## APIs + +The opencode server exposes the following APIs. 
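+All of these endpoints can also be called directly over HTTP. For example, with `curl` against the default `localhost:4096`:
+
+```bash
+# List sessions
+curl http://localhost:4096/session
+
+# Create a new session
+curl -X POST http://localhost:4096/session \
+  -H "Content-Type: application/json" \
+  -d '{"title": "My session"}'
+
+# Stream server events (SSE)
+curl -N http://localhost:4096/event
+```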
+ +### Global Events + +| Method | Path | Description | Response | +| :----- | :-------------- | :----------------------------- | :----------- | +| `GET` | `/global/event` | Get global events (SSE stream) | Event stream | + +### Project + +| Method | Path | Description | Response | +| :----- | :----------------- | :---------------------- | :-------------------------------------------- | +| `GET` | `/project` | List all projects | Project[] | +| `GET` | `/project/current` | Get the current project | Project | + + +### Path & VCS + +| Method | Path | Description | Response | +| :----- | :------ | :----------------------------------- | :------------------------------------------ | +| `GET` | `/path` | Get the current path | Path | +| `GET` | `/vcs` | Get VCS info for the current project | VcsInfo | + +### Instance + +| Method | Path | Description | Response | +| :----- | :------------------ | :--------------------------- | :-------- | +| `POST` | `/instance/dispose` | Dispose the current instance | `boolean` | + +### Config + +| Method | Path | Description | Response | +| :------ | :------------------ | :-------------------------------- | :--------------------------------------------------------------------------------------- | +| `GET` | `/config` | Get config info | Config | +| `PATCH` | `/config` | Update config | Config | +| `GET` | `/config/providers` | List providers and default models | `{ providers: `Provider[]`, default: { [key: string]: string } }` | + +### Provider + +| Method | Path | Description | Response | +| :----- | :------------------------------- | :----------------------------------- | :---------------------------------------------------------------------------------- | +| `GET` | `/provider` | List all providers | `{ all: `Provider[]`, default: {...}, connected: string[] }` | +| `GET` | `/provider/auth` | Get provider authentication methods | `{ [providerID: string]: `ProviderAuthMethod[]` }` | +| `POST` | `/provider/{id}/oauth/authorize` | Authorize a provider using OAuth | ProviderAuthAuthorization | +| `POST` | `/provider/{id}/oauth/callback` | Handle OAuth callback for a provider | `boolean` | + +### Sessions + +| Method | Path | Description | Notes | +| :------- | :--------------------------------------- | :------------------------------------ | :--------------------------------------------------------------------------------- | +| `GET` | `/session` | List all sessions | Returns Session[] | +| `POST` | `/session` | Create a new session | body: `{ parentID?, title? }`, returns Session | +| `GET` | `/session/status` | Get session status for all sessions | Returns `{ [sessionID: string]: `SessionStatus` }` | +| `GET` | `/session/:id` | Get session details | Returns Session | +| `DELETE` | `/session/:id` | Delete a session and all its data | Returns `boolean` | +| `PATCH` | `/session/:id` | Update session properties | body: `{ title? }`, returns Session | +| `GET` | `/session/:id/children` | Get a session's child sessions | Returns Session[] | +| `GET` | `/session/:id/todo` | Get the todo list for a session | Returns Todo[] | +| `POST` | `/session/:id/init` | Analyze app and create `AGENTS.md` | body: `{ messageID, providerID, modelID }`, returns `boolean` | +| `POST` | `/session/:id/fork` | Fork an existing session at a message | body: `{ messageID? 
}`, returns Session | +| `POST` | `/session/:id/abort` | Abort a running session | Returns `boolean` | +| `POST` | `/session/:id/share` | Share a session | Returns Session | +| `DELETE` | `/session/:id/share` | Unshare a session | Returns Session | +| `GET` | `/session/:id/diff` | Get the diff for this session | query: `messageID?`, returns FileDiff[] | +| `POST` | `/session/:id/summarize` | Summarize the session | body: `{ providerID, modelID }`, returns `boolean` | +| `POST` | `/session/:id/revert` | Revert a message | body: `{ messageID, partID? }`, returns `boolean` | +| `POST` | `/session/:id/unrevert` | Restore all reverted messages | Returns `boolean` | +| `POST` | `/session/:id/permissions/:permissionID` | Respond to a permission request | body: `{ response, remember? }`, returns `boolean` | + +### Messages + +| Method | Path | Description | Notes | +| :----- | :--------------------------------- | :-------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `GET` | `/session/:id/message` | List messages in a session | query: `limit?`, returns `{ info: `Message`, parts: `Part[]`}[]` | +| `POST` | `/session/:id/message` | Send a message and wait for response | body: `{ messageID?, model?, agent?, noReply?, system?, tools?, parts }`, returns `{ info: `Message`, parts: `Part[]`}` | +| `GET` | `/session/:id/message/:messageID` | Get message details | Returns `{ info: `Message`, parts: `Part[]`}` | +| `POST` | `/session/:id/prompt_async` | Send a message asynchronously (no wait) | body: same as `/session/:id/message`, returns `204 No Content` | +| `POST` | `/session/:id/command` | Execute a slash command | body: `{ messageID?, agent?, model?, command, arguments }`, returns `{ info: `Message`, parts: `Part[]`}` | +| `POST` | `/session/:id/shell` | Run a shell command | body: `{ agent, model?, command }`, returns `{ info: `Message`, parts: `Part[]`}` | + +### Commands + +| Method | Path | Description | Response | +| :----- | :--------- | :---------------- | :-------------------------------------------- | +| `GET` | `/command` | List all commands | Command[] | + +### Files + +| Method | Path | Description | Response | +| :----- | :----------------------- | :--------------------------- | :------------------------------------------------------------------------------------------ | +| `GET` | `/find?pattern=` | Search for text in files | Array of match objects with `path`, `lines`, `line_number`, `absolute_offset`, `submatches` | +| `GET` | `/find/file?query=` | Find files by name | `string[]` (file paths) | +| `GET` | `/find/symbol?query=` | Find workspace symbols | Symbol[] | +| `GET` | `/file?path=` | List files and directories | FileNode[] | +| `GET` | `/file/content?path=

` | Read a file | FileContent | +| `GET` | `/file/status` | Get status for tracked files | File[] | + +### Tools (Experimental) + +| Method | Path | Description | Response | +| :----- | :------------------------------------------ | :--------------------------------------- | :------------------------------------------- | +| `GET` | `/experimental/tool/ids` | List all tool IDs | ToolIDs | +| `GET` | `/experimental/tool?provider=

&model=` | List tools with JSON schemas for a model | ToolList | + +### LSP, Formatters & MCP + +| Method | Path | Description | Response | +| :----- | :----------- | :------------------------- | :------------------------------------------------------- | +| `GET` | `/lsp` | Get LSP server status | LSPStatus[] | +| `GET` | `/formatter` | Get formatter status | FormatterStatus[] | +| `GET` | `/mcp` | Get MCP server status | `{ [name: string]: `MCPStatus` }` | +| `POST` | `/mcp` | Add MCP server dynamically | body: `{ name, config }`, returns MCP status object | + +### Agents + +| Method | Path | Description | Response | +| :----- | :------- | :------------------------ | :------------------------------------------ | +| `GET` | `/agent` | List all available agents | Agent[] | + +### Logging + +| Method | Path | Description | Response | +| :----- | :----- | :----------------------------------------------------------- | :-------- | +| `POST` | `/log` | Write log entry. Body: `{ service, level, message, extra? }` | `boolean` | + +### TUI + +| Method | Path | Description | Response | +| :----- | :---------------------- | :------------------------------------------ | :--------------------- | +| `POST` | `/tui/append-prompt` | Append text to the prompt | `boolean` | +| `POST` | `/tui/open-help` | Open the help dialog | `boolean` | +| `POST` | `/tui/open-sessions` | Open the session selector | `boolean` | +| `POST` | `/tui/open-themes` | Open the theme selector | `boolean` | +| `POST` | `/tui/open-models` | Open the model selector | `boolean` | +| `POST` | `/tui/submit-prompt` | Submit the current prompt | `boolean` | +| `POST` | `/tui/clear-prompt` | Clear the prompt | `boolean` | +| `POST` | `/tui/execute-command` | Execute a command (`{ command }`) | `boolean` | +| `POST` | `/tui/show-toast` | Show toast (`{ title?, message, variant }`) | `boolean` | +| `GET` | `/tui/control/next` | Wait for the next control request | Control request object | +| `POST` | `/tui/control/response` | Respond to a control request (`{ body }`) | `boolean` | + +### Auth + +| Method | Path | Description | Response | +| :----- | :---------- | :-------------------------------------------------------------- | :-------- | +| `PUT` | `/auth/:id` | Set authentication credentials. Body must match provider schema | `boolean` | + +### Events + +| Method | Path | Description | Response | +| :----- | :------- | :---------------------------------------------------------------------------- | :------------------------ | +| `GET` | `/event` | Server-sent events stream. First event is `server.connected`, then bus events | Server-sent events stream | + +### Docs + +| Method | Path | Description | Response | +| :----- | :----- | :------------------------ | :-------------------------- | +| `GET` | `/doc` | OpenAPI 3.1 specification | HTML page with OpenAPI spec | diff --git a/packages/docs/share.mdx b/packages/docs/share.mdx new file mode 100644 index 00000000000..4cf6a39f487 --- /dev/null +++ b/packages/docs/share.mdx @@ -0,0 +1,110 @@ +--- +title: Share +description: Share your OpenCode conversations. +--- + +OpenCode's share feature allows you to create public links to your OpenCode conversations, so you can collaborate with teammates or get help from others. + + +**NOTE** + +Shared conversations are publicly accessible to anyone with the link. + + +## How it works + +When you share a conversation, OpenCode: + +1. Creates a unique public URL for your session +2. Syncs your conversation history to our servers +3. 
Makes the conversation accessible via the shareable link — `opncd.ai/s/` + +## Sharing + +OpenCode supports three sharing modes that control how conversations are shared: + +### Manual (default) + +By default, OpenCode uses manual sharing mode. Sessions are not shared automatically, but you can manually share them using the `/share` command: + +``` +/share +``` + +This will generate a unique URL that'll be copied to your clipboard. + +To explicitly set manual mode in your [config file](/config): + +```json opencode.json +{ + "$schema": "https://opncd.ai/config.json", + "share": "manual" +} +``` + +### Auto-share + +You can enable automatic sharing for all new conversations by setting the `share` option to `"auto"` in your [config file](/config): + +```json opencode.json +{ + "$schema": "https://opncd.ai/config.json", + "share": "auto" +} +``` + +With auto-share enabled, every new conversation will automatically be shared and a link will be generated. + +### Disabled + +You can disable sharing entirely by setting the `share` option to `"disabled"` in your [config file](/config): + +```json opencode.json +{ + "$schema": "https://opncd.ai/config.json", + "share": "disabled" +} +``` + +To enforce this across your team for a given project, add it to the `opencode.json` in your project and check into Git. + +## Un-sharing + +To stop sharing a conversation and remove it from public access: + +```bash +/unshare +``` + +This will remove the share link and delete the data related to the conversation. + +## Privacy + +There are a few things to keep in mind when sharing a conversation. + +### Data retention + +Shared conversations remain accessible until you explicitly unshare them. This +includes: + +- Full conversation history +- All messages and responses +- Session metadata + +### Recommendations + +- Only share conversations that don't contain sensitive information. +- Review conversation content before sharing. +- Unshare conversations when collaboration is complete. +- Avoid sharing conversations with proprietary code or confidential data. +- For sensitive projects, disable sharing entirely. + +## For enterprises + +For enterprise deployments, the share feature can be: + +- **Disabled** entirely for security compliance +- **Restricted** to users authenticated through SSO only +- **Self-hosted** on your own infrastructure + +[Learn more](/enterprise) about using opencode in your organization. diff --git a/packages/docs/snippets/snippet-intro.mdx b/packages/docs/snippets/snippet-intro.mdx deleted file mode 100644 index e20fbb6fc93..00000000000 --- a/packages/docs/snippets/snippet-intro.mdx +++ /dev/null @@ -1,4 +0,0 @@ -One of the core principles of software development is DRY (Don't Repeat -Yourself). This is a principle that applies to documentation as -well. If you find yourself repeating the same content in multiple places, you -should consider creating a custom snippet to keep your content in sync. diff --git a/packages/docs/themes.mdx b/packages/docs/themes.mdx new file mode 100644 index 00000000000..0ab731e3e52 --- /dev/null +++ b/packages/docs/themes.mdx @@ -0,0 +1,349 @@ +--- +title: Themes +description: Select a built-in theme or define your own. +--- + +With OpenCode you can select from one of several built-in themes, use a theme that adapts to your terminal theme, or define your own custom theme. + +By default, OpenCode uses our own `opencode` theme. 
+ +## Terminal requirements + +For themes to display correctly with their full color palette, your terminal must support **truecolor** (24-bit color). Most modern terminals support this by default, but you may need to enable it: + +- **Check support**: Run `echo $COLORTERM` - it should output `truecolor` or `24bit` +- **Enable truecolor**: Set the environment variable `COLORTERM=truecolor` in your shell profile +- **Terminal compatibility**: Ensure your terminal emulator supports 24-bit color (most modern terminals like iTerm2, Alacritty, Kitty, Windows Terminal, and recent versions of GNOME Terminal do) + +Without truecolor support, themes may appear with reduced color accuracy or fall back to the nearest 256-color approximation. + +## Built-in themes + +OpenCode comes with several built-in themes. + +| Name | Description | +| :--------------------- | :--------------------------------------------------------------------------- | +| `system` | Adapts to your terminal’s background color | +| `tokyonight` | Based on the [Tokyonight](https://github.com/folke/tokyonight.nvim) theme | +| `everforest` | Based on the [Everforest](https://github.com/sainnhe/everforest) theme | +| `ayu` | Based on the [Ayu](https://github.com/ayu-theme) dark theme | +| `catppuccin` | Based on the [Catppuccin](https://github.com/catppuccin) theme | +| `catppuccin-macchiato` | Based on the [Catppuccin](https://github.com/catppuccin) theme | +| `gruvbox` | Based on the [Gruvbox](https://github.com/morhetz/gruvbox) theme | +| `kanagawa` | Based on the [Kanagawa](https://github.com/rebelot/kanagawa.nvim) theme | +| `nord` | Based on the [Nord](https://github.com/nordtheme/nord) theme | +| `matrix` | Hacker-style green on black theme | +| `one-dark` | Based on the [Atom One](https://github.com/Th3Whit3Wolf/one-nvim) Dark theme | + +And more, we are constantly adding new themes. + +## System theme + +The `system` theme is designed to automatically adapt to your terminal's color scheme. Unlike traditional themes that use fixed colors, the _system_ theme: + +- **Generates gray scale**: Creates a custom gray scale based on your terminal's background color, ensuring optimal contrast. +- **Uses ANSI colors**: Leverages standard ANSI colors (0-15) for syntax highlighting and UI elements, which respect your terminal's color palette. +- **Preserves terminal defaults**: Uses `none` for text and background colors to maintain your terminal's native appearance. + +The system theme is for users who: + +- Want OpenCode to match their terminal's appearance +- Use custom terminal color schemes +- Prefer a consistent look across all terminal applications + +## Using a theme + +You can select a theme by bringing up the theme select with the `/theme` command. Or you can specify it in your [config](/config). + +```json opencode.json highlight={3} +{ + "$schema": "https://opencode.ai/config.json", + "theme": "tokyonight" +} +``` + + +## Custom themes + +OpenCode supports a flexible JSON-based theme system that allows users to create and customize themes easily. + + +### Hierarchy + +Themes are loaded from multiple directories in the following order where later directories override earlier ones: + +1. **Built-in themes** - These are embedded in the binary +2. **User config directory** - Defined in `~/.config/opencode/themes/*.json` or `$XDG_CONFIG_HOME/opencode/themes/*.json` +3. **Project root directory** - Defined in the `/.opencode/themes/*.json` +4. 
**Current working directory** - Defined in `./.opencode/themes/*.json` + +If multiple directories contain a theme with the same name, the theme from the directory with higher priority will be used. + +### Creating a theme + +To create a custom theme, create a JSON file in one of the theme directories. + +For user-wide themes: + +```bash +mkdir -p ~/.config/opencode/themes +vim ~/.config/opencode/themes/my-theme.json +``` + +And for project-specific themes. + +```bash +mkdir -p .opencode/themes +vim .opencode/themes/my-theme.json +``` + +### JSON format + +Themes use a flexible JSON format with support for: + +- **Hex colors**: `"#ffffff"` +- **ANSI colors**: `3` (0-255) +- **Color references**: `"primary"` or custom definitions +- **Dark/light variants**: `{"dark": "#000", "light": "#fff"}` +- **No color**: `"none"` - Uses the terminal's default color or transparent + +### Color definitions + +The `defs` section is optional and it allows you to define reusable colors that can be referenced in the theme. + +### Terminal defaults + +The special value `"none"` can be used for any color to inherit the terminal's default color. This is particularly useful for creating themes that blend seamlessly with your terminal's color scheme: + +- `"text": "none"` - Uses terminal's default foreground color +- `"background": "none"` - Uses terminal's default background color + +### Example + +Here's an example of a custom theme: + +```json my-theme.json expandable +{ + "$schema": "https://opencode.ai/theme.json", + "defs": { + "nord0": "#2E3440", + "nord1": "#3B4252", + "nord2": "#434C5E", + "nord3": "#4C566A", + "nord4": "#D8DEE9", + "nord5": "#E5E9F0", + "nord6": "#ECEFF4", + "nord7": "#8FBCBB", + "nord8": "#88C0D0", + "nord9": "#81A1C1", + "nord10": "#5E81AC", + "nord11": "#BF616A", + "nord12": "#D08770", + "nord13": "#EBCB8B", + "nord14": "#A3BE8C", + "nord15": "#B48EAD" + }, + "theme": { + "primary": { + "dark": "nord8", + "light": "nord10" + }, + "secondary": { + "dark": "nord9", + "light": "nord9" + }, + "accent": { + "dark": "nord7", + "light": "nord7" + }, + "error": { + "dark": "nord11", + "light": "nord11" + }, + "warning": { + "dark": "nord12", + "light": "nord12" + }, + "success": { + "dark": "nord14", + "light": "nord14" + }, + "info": { + "dark": "nord8", + "light": "nord10" + }, + "text": { + "dark": "nord4", + "light": "nord0" + }, + "textMuted": { + "dark": "nord3", + "light": "nord1" + }, + "background": { + "dark": "nord0", + "light": "nord6" + }, + "backgroundPanel": { + "dark": "nord1", + "light": "nord5" + }, + "backgroundElement": { + "dark": "nord1", + "light": "nord4" + }, + "border": { + "dark": "nord2", + "light": "nord3" + }, + "borderActive": { + "dark": "nord3", + "light": "nord2" + }, + "borderSubtle": { + "dark": "nord2", + "light": "nord3" + }, + "diffAdded": { + "dark": "nord14", + "light": "nord14" + }, + "diffRemoved": { + "dark": "nord11", + "light": "nord11" + }, + "diffContext": { + "dark": "nord3", + "light": "nord3" + }, + "diffHunkHeader": { + "dark": "nord3", + "light": "nord3" + }, + "diffHighlightAdded": { + "dark": "nord14", + "light": "nord14" + }, + "diffHighlightRemoved": { + "dark": "nord11", + "light": "nord11" + }, + "diffAddedBg": { + "dark": "#3B4252", + "light": "#E5E9F0" + }, + "diffRemovedBg": { + "dark": "#3B4252", + "light": "#E5E9F0" + }, + "diffContextBg": { + "dark": "nord1", + "light": "nord5" + }, + "diffLineNumber": { + "dark": "nord2", + "light": "nord4" + }, + "diffAddedLineNumberBg": { + "dark": "#3B4252", + "light": "#E5E9F0" + }, + 
"diffRemovedLineNumberBg": { + "dark": "#3B4252", + "light": "#E5E9F0" + }, + "markdownText": { + "dark": "nord4", + "light": "nord0" + }, + "markdownHeading": { + "dark": "nord8", + "light": "nord10" + }, + "markdownLink": { + "dark": "nord9", + "light": "nord9" + }, + "markdownLinkText": { + "dark": "nord7", + "light": "nord7" + }, + "markdownCode": { + "dark": "nord14", + "light": "nord14" + }, + "markdownBlockQuote": { + "dark": "nord3", + "light": "nord3" + }, + "markdownEmph": { + "dark": "nord12", + "light": "nord12" + }, + "markdownStrong": { + "dark": "nord13", + "light": "nord13" + }, + "markdownHorizontalRule": { + "dark": "nord3", + "light": "nord3" + }, + "markdownListItem": { + "dark": "nord8", + "light": "nord10" + }, + "markdownListEnumeration": { + "dark": "nord7", + "light": "nord7" + }, + "markdownImage": { + "dark": "nord9", + "light": "nord9" + }, + "markdownImageText": { + "dark": "nord7", + "light": "nord7" + }, + "markdownCodeBlock": { + "dark": "nord4", + "light": "nord0" + }, + "syntaxComment": { + "dark": "nord3", + "light": "nord3" + }, + "syntaxKeyword": { + "dark": "nord9", + "light": "nord9" + }, + "syntaxFunction": { + "dark": "nord8", + "light": "nord8" + }, + "syntaxVariable": { + "dark": "nord7", + "light": "nord7" + }, + "syntaxString": { + "dark": "nord14", + "light": "nord14" + }, + "syntaxNumber": { + "dark": "nord15", + "light": "nord15" + }, + "syntaxType": { + "dark": "nord7", + "light": "nord7" + }, + "syntaxOperator": { + "dark": "nord9", + "light": "nord9" + }, + "syntaxPunctuation": { + "dark": "nord4", + "light": "nord0" + } + } +} +``` diff --git a/packages/docs/tools.mdx b/packages/docs/tools.mdx new file mode 100644 index 00000000000..1fe104b34bd --- /dev/null +++ b/packages/docs/tools.mdx @@ -0,0 +1,292 @@ +--- +title: Tools +description: Manage the tools an LLM can use. +--- + +Tools allow the LLM to perform actions in your codebase. OpenCode comes with a set of built-in tools, but you can extend it with [custom tools](/custom-tools) or [MCP servers](/mcp-servers). + +By default, all tools are **enabled** and don't need permission to run. But you can configure this and control the [permissions](/permissions) through your config. + +## Configure + +You can configure tools globally or per agent. Agent-specific configs override global settings. + +By default, all tools are set to `true`. To disable a tool, set it to `false`. + + +### Global + +Disable or enable tools globally using the `tools` option. + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "write": false, + "bash": false, + "webfetch": true + } +} +``` + +You can also use wildcards to control multiple tools at once. For example, to disable all tools from an MCP server: + +```json opencode.json +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "mymcp_*": false + } +} +``` + +### Per agent + +Override global tool settings for specific agents using the `tools` config in the agent definition. + +```json opencode.json highlight={3-6,9-12} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "write": true, + "bash": true + }, + "agent": { + "plan": { + "tools": { + "write": false, + "bash": false + } + } + } +} +``` + +For example, here the `plan` agent overrides the global config to disable `write` and `bash` tools. + +You can also configure tools for agents in Markdown. 
+ +```markdown ~/.config/opencode/agent/readonly.md +--- +description: Read-only analysis agent +mode: subagent +tools: + write: false + edit: false + bash: false +--- + +Analyze code without making any modifications. +``` + +[Learn more](/agents#tools) about configuring tools per agent. + +## Built-in + +Here are all the built-in tools available in OpenCode. + +### bash + +Execute shell commands in your project environment. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "bash": true + } +} +``` + +This tool allows the LLM to run terminal commands like `npm install`, `git status`, or any other shell command. + +### edit + +Modify existing files using exact string replacements. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "edit": true + } +} +``` + +This tool performs precise edits to files by replacing exact text matches. It's the primary way the LLM modifies code. + +### write + +Create new files or overwrite existing ones. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "write": true + } +} +``` + +Use this to allow the LLM to create new files. It will overwrite existing files if they already exist. + +### read + +Read file contents from your codebase. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "read": true + } +} +``` + +This tool reads files and returns their contents. It supports reading specific line ranges for large files. + +### grep + +Search file contents using regular expressions. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "grep": true + } +} +``` + +Fast content search across your codebase. Supports full regex syntax and file pattern filtering. + +### glob + +Find files by pattern matching. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "glob": true + } +} +``` + +Search for files using glob patterns like `**/*.js` or `src/**/*.ts`. Returns matching file paths sorted by modification time. + +--- + +### list + +List files and directories in a given path. + +```json title="opencode.json" {4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "list": true + } +} +``` + +This tool lists directory contents. It accepts glob patterns to filter results. + +### patch + +Apply patches to files. + +```json title="opencode.json" {4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "patch": true + } +} +``` + +This tool applies patch files to your codebase. Useful for applying diffs and patches from various sources. + +### todowrite + +Manage todo lists during coding sessions. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "todowrite": true + } +} +``` + +Creates and updates task lists to track progress during complex operations. The LLM uses this to organize multi-step tasks. + + +**NOTE** + +This tool is disabled for subagents by default, but you can enable it manually. [Learn more](/agents#tools) + + +### todoread + +Read existing todo lists. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "todoread": true + } +} +``` + +Reads the current todo list state. Used by the LLM to track what tasks are pending or completed. 
+ + +**NOTE** + +This tool is disabled for subagents by default, but you can enable it manually. [Learn more](/agents#tools) + +### webfetch + +Fetch web content. + +```json opencode.json highlight={4} +{ + "$schema": "https://opencode.ai/config.json", + "tools": { + "webfetch": true + } +} +``` + +Allows the LLM to fetch and read web pages. Useful for looking up documentation or researching online resources. + +## Custom tools + +Custom tools let you define your own functions that the LLM can call. These are defined in your config file and can execute arbitrary code. + +[Learn more](/custom-tools) about creating custom tools. + +## MCP servers + +MCP (Model Context Protocol) servers allow you to integrate external tools and services. This includes database access, API integrations, and third-party services. + +[Learn more](/mcp-servers) about configuring MCP servers. + +## Internals + +Internally, tools like `grep`, `glob`, and `list` use [ripgrep](https://github.com/BurntSushi/ripgrep) under the hood. By default, ripgrep respects `.gitignore` patterns, which means files and directories listed in your `.gitignore` will be excluded from searches and listings. + +### Ignore patterns + +To include files that would normally be ignored, create a `.ignore` file in your project root. This file can explicitly allow certain paths. + +```text .ignore +!node_modules/ +!dist/ +!build/ +``` + +For example, this `.ignore` file allows ripgrep to search within `node_modules/`, `dist/`, and `build/` directories even if they're listed in `.gitignore`. diff --git a/packages/docs/troubleshooting.mdx b/packages/docs/troubleshooting.mdx new file mode 100644 index 00000000000..937f8601efd --- /dev/null +++ b/packages/docs/troubleshooting.mdx @@ -0,0 +1,147 @@ +--- +title: Troubleshooting +description: Common issues and how to resolve them. +--- + +To debug any issues with OpenCode, you can check the logs or the session data +that it stores locally. + +### Logs + +Log files are written to: + +- **macOS/Linux**: `~/.local/share/opencode/log/` +- **Windows**: `%USERPROFILE%\.local\share\opencode\log\` + +Log files are named with timestamps (e.g., `2025-01-09T123456.log`) and the most recent 10 log files are kept. + +You can set the log level with the `--log-level` command-line option to get more detailed debug information. For example, `opencode --log-level DEBUG`. + +### Storage + +opencode stores session data and other application data on disk at: + +- **macOS/Linux**: `~/.local/share/opencode/` +- **Windows**: `%USERPROFILE%\.local\share\opencode` + +This directory contains: + +- `auth.json` - Authentication data like API keys, OAuth tokens +- `log/` - Application logs +- `project/` - Project-specific data like session and message data + - If the project is within a Git repo, it is stored in `.//storage/` + - If it is not a Git repo, it is stored in `./global/storage/` + +## Getting help + +If you're experiencing issues with OpenCode: + +1. **Report issues on GitHub** + + The best way to report bugs or request features is through our GitHub repository: + + [**github.com/sst/opencode/issues**](https://github.com/sst/opencode/issues) + + Before creating a new issue, search existing issues to see if your problem has already been reported. + +2. **Join our Discord** + + For real-time help and community discussion, join our Discord server: + + [**opencode.ai/discord**](https://opencode.ai/discord) + + +## Common issues + +Here are some common issues and how to resolve them. + +### OpenCode won't start + +1. 
Check the logs for error messages +2. Try running with `--print-logs` to see output in the terminal +3. Ensure you have the latest version with `opencode upgrade` + + +### Authentication issues + +1. Try re-authenticating with the `/connect` command in the TUI +2. Check that your API keys are valid +3. Ensure your network allows connections to the provider's API + +### Model not available + +1. Check that you've authenticated with the provider +2. Verify the model name in your config is correct +3. Some models may require specific access or subscriptions + +If you encounter `ProviderModelNotFoundError` you are most likely incorrectly +referencing a model somewhere. +Models should be referenced like so: `/` + +Examples: + +- `openai/gpt-4.1` +- `openrouter/google/gemini-2.5-flash` +- `opencode/kimi-k2` + +To figure out what models you have access to, run `opencode models` + +### ProviderInitError + +If you encounter a ProviderInitError, you likely have an invalid or corrupted configuration. + +To resolve this: + +1. First, verify your provider is set up correctly by following the [providers guide](/providers) +2. If the issue persists, try clearing your stored configuration: + + ```bash + rm -rf ~/.local/share/opencode + ``` + +3. Re-authenticate with your provider using the `/connect` command in the TUI. + +### AI_APICallError and provider package issues + +If you encounter API call errors, this may be due to outdated provider packages. opencode dynamically installs provider packages (OpenAI, Anthropic, Google, etc.) as needed and caches them locally. + +To resolve provider package issues: + +1. Clear the provider package cache: + + ```bash + rm -rf ~/.cache/opencode + ``` + +2. Restart opencode to reinstall the latest provider packages + +This will force opencode to download the most recent versions of provider packages, which often resolves compatibility issues with model parameters and API changes. + +### Copy/paste not working on Linux + +Linux users need to have one of the following clipboard utilities installed for copy/paste functionality to work: + +**For X11 systems:** + +```bash +apt install -y xclip +# or +apt install -y xsel +``` + +**For Wayland systems:** + +```bash +apt install -y wl-clipboard +``` + +**For headless environments:** + +```bash +apt install -y xvfb +# and run: +Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 & +export DISPLAY=:99.0 +``` + +opencode will detect if you're using Wayland and prefer `wl-clipboard`, otherwise it will try to find clipboard tools in order of: `xclip` and `xsel`. diff --git a/packages/docs/tui.mdx b/packages/docs/tui.mdx new file mode 100644 index 00000000000..835c8eeffe1 --- /dev/null +++ b/packages/docs/tui.mdx @@ -0,0 +1,359 @@ +--- +title: TUI +description: Using the OpenCode terminal user interface. +--- + + +OpenCode provides an interactive terminal interface or TUI for working on your projects with an LLM. + +Running OpenCode starts the TUI for the current directory. + +```bash +opencode +``` + +Or you can start it for a specific working directory. + +```bash +opencode /path/to/project +``` + +Once you're in the TUI, you can prompt it with a message. + +```text +Give me a quick summary of the codebase. +``` + +## File references + +You can reference files in your messages using `@`. This does a fuzzy file search in the current working directory. + + +**TIP** + +You can also use `@` to reference files in your messages. + + +```text +How is auth handled in @packages/functions/src/api/index.ts? 
+``` + +The content of the file is added to the conversation automatically. + +## Bash commands + +Start a message with `!` to run a shell command. + +```bash frame="none" +!ls -la +``` + +The output of the command is added to the conversation as a tool result. +## Commands + +When using the OpenCode TUI, you can type `/` followed by a command name to quickly execute actions. For example: + +```bash +/help +``` + +Most commands also have keybind using `ctrl+x` as the leader key, where `ctrl+x` is the default leader key. [Learn more](/keybinds). + +Here are all available slash commands: + + + +### connect + +Add a provider to OpenCode. Allows you to select from available providers and add their API keys. + +```bash +/connect +``` + +### compact + +Compact the current session. _Alias_: `/summarize` + +```bash +/compact +``` + +**Keybind:** `ctrl+x c` + +### details + +Toggle tool execution details. + +```bash +/details +``` + +**Keybind:** `ctrl+x d` + +### editor + +Open external editor for composing messages. Uses the editor set in your `EDITOR` environment variable. [Learn more](#editor-setup). + +```bash +/editor +``` + +**Keybind:** `ctrl+x e` + +### exit + +Exit OpenCode. _Aliases_: `/quit`, `/q` + +```bash frame="none" +/exit +``` + +**Keybind:** `ctrl+x q` + +### export + +Export current conversation to Markdown and open in your default editor. Uses the editor set in your `EDITOR` environment variable. [Learn more](#editor-setup). + +```bash frame="none" +/export +``` + +**Keybind:** `ctrl+x x` + + +### help + +Show the help dialog. + +```bash frame="none" +/help +``` + +**Keybind:** `ctrl+x h` + + + +### init + +Create or update `AGENTS.md` file. [Learn more](/rules). + +```bash frame="none" +/init +``` + +**Keybind:** `ctrl+x i` + + + +### models + +List available models. + +```bash frame="none" +/models +``` + +**Keybind:** `ctrl+x m` + + + +### new + +Start a new session. _Alias_: `/clear` + +```bash frame="none" +/new +``` + +**Keybind:** `ctrl+x n` + + + +### redo + +Redo a previously undone message. Only available after using `/undo`. + + +**TIP** + +Any file changes will also be restored. + + +Internally, this uses Git to manage the file changes. So your project **needs to +be a Git repository**. + +```bash frame="none" +/redo +``` + +**Keybind:** `ctrl+x r` + + + +### sessions + +List and switch between sessions. _Aliases_: `/resume`, `/continue` + +```bash frame="none" +/sessions +``` + +**Keybind:** `ctrl+x l` + + + +### share + +Share current session. [Learn more](/share). + +```bash frame="none" +/share +``` + +**Keybind:** `ctrl+x s` + + + +### themes + +List available themes. + +```bash frame="none" +/themes +``` + +**Keybind:** `ctrl+x t` + +### undo + +Undo last message in the conversation. Removes the most recent user message, all subsequent responses, and any file changes. + + +**TIP** + +Any file changes made will also be reverted. + + +Internally, this uses Git to manage the file changes. So your project **needs to +be a Git repository**. + +```bash frame="none" +/undo +``` + +**Keybind:** `ctrl+x u` + + +### unshare + +Unshare current session. [Learn more](/share#un-sharing). + +```bash +/unshare +``` + + + +## Editor setup + +Both the `/editor` and `/export` commands use the editor specified in your `EDITOR` environment variable. + + + + ```bash + # Example for nano or vim + export EDITOR=nano + export EDITOR=vim + + # For GUI editors, VS Code, Cursor, VSCodium, Windsurf, Zed, etc. 
+ # include --wait + export EDITOR="code --wait" + ``` + + To make it permanent, add this to your shell profile; + `~/.bashrc`, `~/.zshrc`, etc. + + + + + ```bash + set EDITOR=notepad + + # For GUI editors, VS Code, Cursor, VSCodium, Windsurf, Zed, etc. + # include --wait + set EDITOR=code --wait + ``` + + To make it permanent, use **System Properties** > **Environment + Variables**. + + + + + ```powershell + $env:EDITOR = "notepad" + + # For GUI editors, VS Code, Cursor, VSCodium, Windsurf, Zed, etc. + # include --wait + $env:EDITOR = "code --wait" + ``` + + To make it permanent, add this to your PowerShell profile. + + + + +Popular editor options include: + +- `code` - Visual Studio Code +- `cursor` - Cursor +- `windsurf` - Windsurf +- `vim` - Vim editor +- `nano` - Nano editor +- `notepad` - Windows Notepad +- `subl` - Sublime Text + + +**NOTE** + +Some editors like VS Code need to be started with the `--wait` flag. + + +Some editors need command-line arguments to run in blocking mode. The `--wait` flag makes the editor process block until closed. + + + +## Configure + +You can customize TUI behavior through your OpenCode config file. + +```json title="opencode.json" +{ + "$schema": "https://opencode.ai/config.json", + "tui": { + "scroll_speed": 3, + "scroll_acceleration": { + "enabled": true + } + } +} +``` + +### Options + +- `scroll_acceleration` - Enable macOS-style scroll acceleration for smooth, natural scrolling. When enabled, scroll speed increases with rapid scrolling gestures and stays precise for slower movements. **This setting takes precedence over `scroll_speed` and overrides it when enabled.** +- `scroll_speed` - Controls how fast the TUI scrolls when using scroll commands (minimum: `1`). Defaults to `1` on Unix and `3` on Windows. **Note: This is ignored if `scroll_acceleration.enabled` is set to `true`.** + + + +## View customization + +You can customize various aspects of the TUI view using the command palette (`ctrl+x h` or `/help`). These settings persist across restarts. + +### Username display + +Toggle whether your username appears in chat messages. Access this through: + +- Command palette: Search for "username" or "hide username" +- The setting persists automatically and will be remembered across TUI sessions diff --git a/packages/docs/zen.mdx b/packages/docs/zen.mdx new file mode 100644 index 00000000000..1da7be88e9a --- /dev/null +++ b/packages/docs/zen.mdx @@ -0,0 +1,205 @@ +--- +title: Zen +description: Curated list of models provided by OpenCode. +--- + +OpenCode Zen is a list of tested and verified models provided by the OpenCode team. + + +**NOTE** + +OpenCode Zen is currently in beta. + + +Zen works like any other provider in OpenCode. You login to OpenCode Zen and get +your API key. It's **completely optional** and you don't need to use it to use +OpenCode. + + +## Background + +There are a large number of models out there but only a few of +these models work well as coding agents. Additionally, most providers are +configured very differently; so you get very different performance and quality. + + +**TIP** + +We tested a select group of models and providers that work well with OpenCode. + + +So if you are using a model through something like OpenRouter, you can never be +sure if you are getting the best version of the model you want. + +To fix this, we did a couple of things: + +1. We tested a select group of models and talked to their teams about how to + best run them. +2. 
We then worked with a few providers to make sure these were being served + correctly. +3. Finally, we benchmarked the combination of the model/provider and came up + with a list that we feel good recommending. + +OpenCode Zen is an AI gateway that gives you access to these models. + +## How it works + +OpenCode Zen works like any other provider in OpenCode. + + + +You sign in to **[OpenCode Zen](https://opencode.ai/auth)**, add your billing details, and copy your API key. + + +You run the `/connect` command in the TUI, select OpenCode Zen, and paste your API key. + + +Run `/models` in the TUI to see the list of models we recommend. + + + +You are charged per request and you can add credits to your account. + +## Endpoints + +You can also access our models through the following API endpoints. + +| Model | Model ID | Endpoint | AI SDK Package | +| :---------------- | :---------------- | :------------------------------------------------- | :--------------------------- | +| GPT 5.2 | gpt-5.2 | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| GPT 5.1 | gpt-5.1 | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| GPT 5.1 Codex | gpt-5.1-codex | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| GPT 5.1 Codex Max | gpt-5.1-codex-max | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| GPT 5 | gpt-5 | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| GPT 5 Codex | gpt-5-codex | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| GPT 5 Nano | gpt-5-nano | `https://opencode.ai/zen/v1/responses` | `@ai-sdk/openai` | +| Claude Sonnet 4.5 | claude-sonnet-4-5 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` | +| Claude Sonnet 4 | claude-sonnet-4 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` | +| Claude Haiku 4.5 | claude-haiku-4-5 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` | +| Claude Haiku 3.5 | claude-3-5-haiku | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` | +| Claude Opus 4.5 | claude-opus-4-5 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` | +| Claude Opus 4.1 | claude-opus-4-1 | `https://opencode.ai/zen/v1/messages` | `@ai-sdk/anthropic` | +| Gemini 3 Pro | gemini-3-pro | `https://opencode.ai/zen/v1/models/gemini-3-pro` | `@ai-sdk/google` | +| Gemini 3 Flash | gemini-3-flash | `https://opencode.ai/zen/v1/models/gemini-3-flash` | `@ai-sdk/google` | +| GLM 4.6 | glm-4.6 | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` | +| Kimi K2 | kimi-k2 | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` | +| Kimi K2 Thinking | kimi-k2-thinking | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` | +| Qwen3 Coder 480B | qwen3-coder | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` | +| Grok Code Fast 1 | grok-code | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` | +| Big Pickle | big-pickle | `https://opencode.ai/zen/v1/chat/completions` | `@ai-sdk/openai-compatible` | + +The [model id](/config#models) in your OpenCode config +uses the format `opencode/`. For example, for GPT 5.1 Codex, you would +use `opencode/gpt-5.1-codex` in your config. + +### Models + +You can fetch the full list of available models and their metadata from: + +```bash +https://opencode.ai/zen/v1/models +``` + +## Pricing + +We support a pay-as-you-go model. Below are the prices **per 1M tokens**. 
+ +| Model | Input | Output | Cached Read | Cached Write | +| :-------------------------------- | :----- | :----- | :---------- | :----------- | +| Big Pickle | Free | Free | Free | - | +| Grok Code Fast 1 | Free | Free | Free | - | +| GLM 4.6 | $0.60 | $2.20 | $0.10 | - | +| Kimi K2 | $0.40 | $2.50 | - | - | +| Kimi K2 Thinking | $0.40 | $2.50 | - | - | +| Qwen3 Coder 480B | $0.45 | $1.50 | - | - | +| Claude Sonnet 4.5 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 | +| Claude Sonnet 4.5 (> 200K tokens) | $6.00 | $22.50 | $0.60 | $7.50 | +| Claude Sonnet 4 (≤ 200K tokens) | $3.00 | $15.00 | $0.30 | $3.75 | +| Claude Sonnet 4 (> 200K tokens) | $6.00 | $22.50 | $0.60 | $7.50 | +| Claude Haiku 4.5 | $1.00 | $5.00 | $0.10 | $1.25 | +| Claude Haiku 3.5 | $0.80 | $4.00 | $0.08 | $1.00 | +| Claude Opus 4.5 | $5.00 | $25.00 | $0.50 | $6.25 | +| Claude Opus 4.1 | $15.00 | $75.00 | $1.50 | $18.75 | +| Gemini 3 Pro (≤ 200K tokens) | $2.00 | $12.00 | $0.20 | - | +| Gemini 3 Pro (> 200K tokens) | $4.00 | $18.00 | $0.40 | - | +| Gemini 3 Flash | $0.50 | $3.00 | $0.05 | - | +| GPT 5.2 | $1.75 | $14.00 | $0.175 | - | +| GPT 5.1 | $1.07 | $8.50 | $0.107 | - | +| GPT 5.1 Codex | $1.07 | $8.50 | $0.107 | - | +| GPT 5.1 Codex Max | $1.25 | $10.00 | $0.125 | - | +| GPT 5 | $1.07 | $8.50 | $0.107 | - | +| GPT 5 Codex | $1.07 | $8.50 | $0.107 | - | +| GPT 5 Nano | Free | Free | Free | - | + +You might notice _Claude Haiku 3.5_ in your usage history. This is a [low cost model](/config#models) that's used to generate the titles of your sessions. + + +**NOTE** + +Credit card fees are passed along at cost; we don't charge anything beyond that. + + +The free models: + +- Grok Code Fast 1 is currently free on OpenCode for a limited time. The xAI team is using this time to collect feedback and improve Grok Code. +- Big Pickle is a stealth model that's free on OpenCode for a limited time. The team is using this time to collect feedback and improve the model. + +Contact us if you have any questions. + +## Privacy + +All our models are hosted in the US. Our providers follow a zero-retention policy and do not use your data for model training, with the following exceptions: + +- Grok Code Fast 1: During its free period, collected data may be used to improve Grok Code. +- Big Pickle: During its free period, collected data may be used to improve the model. +- OpenAI APIs: Requests are retained for 30 days in accordance with [OpenAI's Data Policies](https://platform.openai.com/docs/guides/your-data). +- Anthropic APIs: Requests are retained for 30 days in accordance with [Anthropic's Data Policies](https://docs.anthropic.com/en/docs/claude-code/data-usage). + + +## For Teams + +Zen also works great for teams. You can invite teammates, assign roles, curate +the models your team uses, and more. + + +**NOTE** + +Workspaces are currently free for teams as a part of the beta. + + +Managing your workspace is currently free for teams as a part of the beta. We'll be +sharing more details on the pricing soon. + +### Roles + +You can invite teammates to your workspace and assign roles: + +- **Admin**: Manage models, members, API keys, and billing +- **Member**: Manage only their own API keys + +Admins can also set monthly spending limits for each member to keep costs under control. + +### Model access + +Admins can enable or disable specific models for the workspace. Requests made to a disabled model will return an error. + +This is useful for cases where you want to disable the use of a model that +collects data. 
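+
+Once a model is enabled for your workspace, members can reference it like any other Zen model, either by running `/models` in the TUI or by pinning it in their config. A minimal sketch, assuming the top-level `model` option from the [config docs](/config#models) and the `opencode/` ID format covered under Endpoints:
+
+```json opencode.json
+{
+  "$schema": "https://opencode.ai/config.json",
+  "model": "opencode/claude-sonnet-4-5"
+}
+```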
+
+### Bring your own key
+
+You can use your own OpenAI or Anthropic API keys while still accessing other models in Zen.
+
+For example, your organization might already have a key for OpenAI or Anthropic
+and want to use that instead of the one that Zen provides.
+
+When you use your own keys, tokens are billed directly by the provider, not by Zen.
+
+## Goals
+
+We created OpenCode Zen to:
+
+1. **Benchmark** the best models/providers for coding agents.
+2. Give you access to the **highest quality** options, without downgrading performance or routing to cheaper providers.
+3. Pass along any **price drops** by selling at cost, so the only markup covers our processing fees.
+4. Have **no lock-in**: you can use Zen with any other coding agent, and you can always use any other provider with OpenCode as well. See the sketch below for calling Zen directly.
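+
+As a concrete illustration of that last point, you can call Zen's OpenAI-compatible endpoint from anything that speaks the chat completions format, not just OpenCode. This is a minimal sketch, not an official snippet: it assumes the endpoint accepts a standard `Authorization: Bearer` header, and `OPENCODE_ZEN_API_KEY` is just a hypothetical place to keep your key. The endpoint URL and the `kimi-k2` model ID come from the tables above.
+
+```bash
+# Minimal sketch: send a chat completions request to OpenCode Zen.
+# Assumes Bearer auth; OPENCODE_ZEN_API_KEY is a hypothetical variable name.
+curl -s https://opencode.ai/zen/v1/chat/completions \
+  -H "Authorization: Bearer $OPENCODE_ZEN_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{
+    "model": "kimi-k2",
+    "messages": [{ "role": "user", "content": "Say hello" }]
+  }'
+```
+
+If it succeeds, the response should follow the usual chat completions shape, so existing OpenAI-compatible clients can point at this endpoint without code changes.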