diff --git a/docs/providers/claude-max-api-proxy.md b/docs/providers/claude-max-api-proxy.md
index dbd65acc3c9..8a3d6ed75ba 100644
--- a/docs/providers/claude-max-api-proxy.md
+++ b/docs/providers/claude-max-api-proxy.md
@@ -17,7 +17,7 @@ usage outside Claude Code in the past. You must decide for yourself whether to u
it and verify Anthropic's current terms before relying on it.
-## Why Use This?
+## Why use this?
| Approach | Cost | Best For |
| ----------------------- | --------------------------------------------------- | ------------------------------------------ |
@@ -26,7 +26,7 @@ it and verify Anthropic's current terms before relying on it.
If you have a Claude Max subscription and want to use it with OpenAI-compatible tools, this proxy may reduce cost for some workflows. API keys remain the clearer policy path for production use.
-## How It Works
+## How it works
```
Your App → claude-max-api-proxy → Claude Code CLI → Anthropic (via subscription)
@@ -39,71 +39,65 @@ The proxy:
2. Converts them to Claude Code CLI commands
3. Returns responses in OpenAI format (streaming supported)
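+
+A minimal sketch of the response shape, assuming the standard OpenAI chat-completion format (field values here are illustrative):
+
+```json
+{
+  "id": "chatcmpl-abc123",
+  "object": "chat.completion",
+  "model": "claude-opus-4",
+  "choices": [
+    {
+      "index": 0,
+      "message": { "role": "assistant", "content": "Hello!" },
+      "finish_reason": "stop"
+    }
+  ]
+}
+```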
-## Installation
+## Getting started
-```bash
-# Requires Node.js 20+ and Claude Code CLI
-npm install -g claude-max-api-proxy
+
+### Install
+
+Requires Node.js 20+ and Claude Code CLI.
-# Verify Claude CLI is authenticated
-claude --version
-```
+ ```bash
+ npm install -g claude-max-api-proxy
-## Usage
+ # Verify Claude CLI is authenticated
+ claude --version
+ ```
-### Start the server
+
+### Start the server
+
+ ```bash
+ claude-max-api
+ # Server runs at http://localhost:3456
+ ```
+
+### Test it
+
+ ```bash
+ # Health check
+ curl http://localhost:3456/health
-```bash
-claude-max-api
-# Server runs at http://localhost:3456
-```
+ # List models
+ curl http://localhost:3456/v1/models
-### Test it
+ # Chat completion
+ curl http://localhost:3456/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -d '{
+ "model": "claude-opus-4",
+ "messages": [{"role": "user", "content": "Hello!"}]
+ }'
+ ```
-```bash
-# Health check
-curl http://localhost:3456/health
+
+### With OpenClaw
+
+Point OpenClaw at the proxy as a custom OpenAI-compatible endpoint:
-# List models
-curl http://localhost:3456/v1/models
+ ```json5
+ {
+ env: {
+ OPENAI_API_KEY: "not-needed",
+ OPENAI_BASE_URL: "http://localhost:3456/v1",
+ },
+ agents: {
+ defaults: {
+ model: { primary: "openai/claude-opus-4" },
+ },
+ },
+ }
+ ```
-# Chat completion
-curl http://localhost:3456/v1/chat/completions \
- -H "Content-Type: application/json" \
- -d '{
- "model": "claude-opus-4",
- "messages": [{"role": "user", "content": "Hello!"}]
- }'
-```
+
+
-### With OpenClaw
-
-You can point OpenClaw at the proxy as a custom OpenAI-compatible endpoint:
-
-```json5
-{
- env: {
- OPENAI_API_KEY: "not-needed",
- OPENAI_BASE_URL: "http://localhost:3456/v1",
- },
- agents: {
- defaults: {
- model: { primary: "openai/claude-opus-4" },
- },
- },
-}
-```
-
-This path uses the same proxy-style OpenAI-compatible route as other custom
-`/v1` backends:
-
-- native OpenAI-only request shaping does not apply
-- no `service_tier`, no Responses `store`, no prompt-cache hints, and no
- OpenAI reasoning-compat payload shaping
-- hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
- are not injected on the proxy URL
-
-## Available Models
+## Available models
| Model ID | Maps To |
| ----------------- | --------------- |
@@ -111,38 +105,55 @@ This path uses the same proxy-style OpenAI-compatible route as other custom
| `claude-sonnet-4` | Claude Sonnet 4 |
| `claude-haiku-4` | Claude Haiku 4 |
-## Auto-Start on macOS
+## Advanced
-Create a LaunchAgent to run the proxy automatically:
+
+### Proxy route details
+
+This path uses the same proxy-style OpenAI-compatible route as other custom
+`/v1` backends:
-```bash
-cat > ~/Library/LaunchAgents/com.claude-max-api.plist << 'EOF'
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
-<plist version="1.0">
-<dict>
-  <key>Label</key>
-  <string>com.claude-max-api</string>
-  <key>RunAtLoad</key>
-  <true/>
-  <key>KeepAlive</key>
-  <true/>
-  <key>ProgramArguments</key>
-  <array>
-    <string>/usr/local/bin/node</string>
-    <string>/usr/local/lib/node_modules/claude-max-api-proxy/dist/server/standalone.js</string>
-  </array>
-  <key>EnvironmentVariables</key>
-  <dict>
-    <key>PATH</key>
-    <string>/usr/local/bin:/opt/homebrew/bin:~/.local/bin:/usr/bin:/bin</string>
-  </dict>
-</dict>
-</plist>
-EOF
+ - Native OpenAI-only request shaping does not apply
+ - No `service_tier`, no Responses `store`, no prompt-cache hints, and no
+ OpenAI reasoning-compat payload shaping
+ - Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
+ are not injected on the proxy URL
-launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
-```
+
+### Auto-start on macOS
+
+Create a LaunchAgent to run the proxy automatically:
+
+```bash
+cat > ~/Library/LaunchAgents/com.claude-max-api.plist << 'EOF'
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+  <key>Label</key>
+  <string>com.claude-max-api</string>
+  <key>RunAtLoad</key>
+  <true/>
+  <key>KeepAlive</key>
+  <true/>
+  <key>ProgramArguments</key>
+  <array>
+    <string>/usr/local/bin/node</string>
+    <string>/usr/local/lib/node_modules/claude-max-api-proxy/dist/server/standalone.js</string>
+  </array>
+  <key>EnvironmentVariables</key>
+  <dict>
+    <key>PATH</key>
+    <string>/usr/local/bin:/opt/homebrew/bin:~/.local/bin:/usr/bin:/bin</string>
+  </dict>
+</dict>
+</plist>
+EOF
+
+launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
+```
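+
+To confirm the agent loaded after bootstrapping, query launchd for the label used in the plist:
+
+```bash
+launchctl print gui/$(id -u)/com.claude-max-api
+```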
+
## Links
@@ -157,7 +168,23 @@ launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
- The proxy runs locally and does not send data to any third-party servers
- Streaming responses are fully supported
-## See Also
+
+For native Anthropic integration with Claude CLI or API keys, see [Anthropic provider](/providers/anthropic). For OpenAI/Codex subscriptions, see [OpenAI provider](/providers/openai).
+
-- [Anthropic provider](/providers/anthropic) - Native OpenClaw integration with Claude CLI or API keys
-- [OpenAI provider](/providers/openai) - For OpenAI/Codex subscriptions
+## Related
+
+- [Anthropic provider](/providers/anthropic) - Native OpenClaw integration with Claude CLI or API keys.
+- [OpenAI provider](/providers/openai) - For OpenAI/Codex subscriptions.
+- [Model providers](/concepts/model-providers) - Overview of all providers, model refs, and failover behavior.
+- Configuration - Full config reference.
diff --git a/docs/providers/litellm.md b/docs/providers/litellm.md
index 4f6a4bd2905..6e43118f40f 100644
--- a/docs/providers/litellm.md
+++ b/docs/providers/litellm.md
@@ -10,40 +10,55 @@ read_when:
[LiteLLM](https://litellm.ai) is an open-source LLM gateway that provides a unified API to 100+ model providers. Route OpenClaw through LiteLLM to get centralized cost tracking, logging, and the flexibility to switch backends without changing your OpenClaw config.
-## Why use LiteLLM with OpenClaw?
+
+**Why use LiteLLM with OpenClaw?**
- **Cost tracking** — See exactly what OpenClaw spends across all models
- **Model routing** — Switch between Claude, GPT-4, Gemini, Bedrock without config changes
- **Virtual keys** — Create keys with spend limits for OpenClaw
- **Logging** — Full request/response logs for debugging
- **Fallbacks** — Automatic failover if your primary provider is down
+
## Quick start
-### Via onboarding
+
+### Via onboarding
+
+**Best for:** fastest path to a working LiteLLM setup.
-```bash
-openclaw onboard --auth-choice litellm-api-key
-```
+
+
+ ```bash
+ openclaw onboard --auth-choice litellm-api-key
+ ```
+
+
-### Manual setup
+
-1. Start LiteLLM Proxy:
+
+### Manual setup
+
+**Best for:** full control over installation and config.
-```bash
-pip install 'litellm[proxy]'
-litellm --model claude-opus-4-6
-```
+
+1. Start LiteLLM Proxy:
+
+ ```bash
+ pip install 'litellm[proxy]'
+ litellm --model claude-opus-4-6
+ ```
+
+2. Point OpenClaw to LiteLLM:
+
+ ```bash
+ export LITELLM_API_KEY="your-litellm-key"
-2. Point OpenClaw to LiteLLM:
+ openclaw
+ ```
-```bash
-export LITELLM_API_KEY="your-litellm-key"
+ That's it. OpenClaw now routes through LiteLLM.
+
+
-openclaw
-```
-
-That's it. OpenClaw now routes through LiteLLM.
+
+
## Configuration
@@ -92,68 +107,91 @@ export LITELLM_API_KEY="sk-litellm-key"
}
```
-## Virtual keys
+## Advanced topics
-Create a dedicated key for OpenClaw with spend limits:
+
+### Virtual keys
+
+Create a dedicated key for OpenClaw with spend limits:
-```bash
-curl -X POST "http://localhost:4000/key/generate" \
- -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
- -H "Content-Type: application/json" \
- -d '{
- "key_alias": "openclaw",
- "max_budget": 50.00,
- "budget_duration": "monthly"
- }'
-```
+ ```bash
+ curl -X POST "http://localhost:4000/key/generate" \
+ -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "key_alias": "openclaw",
+ "max_budget": 50.00,
+ "budget_duration": "monthly"
+ }'
+ ```
-Use the generated key as `LITELLM_API_KEY`.
+ Use the generated key as `LITELLM_API_KEY`.
-## Model routing
+
-LiteLLM can route model requests to different backends. Configure in your LiteLLM `config.yaml`:
+
+### Model routing
+
+LiteLLM can route model requests to different backends. Configure in your LiteLLM `config.yaml`:
-```yaml
-model_list:
- - model_name: claude-opus-4-6
- litellm_params:
- model: claude-opus-4-6
- api_key: os.environ/ANTHROPIC_API_KEY
+ ```yaml
+ model_list:
+ - model_name: claude-opus-4-6
+ litellm_params:
+ model: claude-opus-4-6
+ api_key: os.environ/ANTHROPIC_API_KEY
- - model_name: gpt-4o
- litellm_params:
- model: gpt-4o
- api_key: os.environ/OPENAI_API_KEY
-```
+ - model_name: gpt-4o
+ litellm_params:
+ model: gpt-4o
+ api_key: os.environ/OPENAI_API_KEY
+ ```
-OpenClaw keeps requesting `claude-opus-4-6` — LiteLLM handles the routing.
+ OpenClaw keeps requesting `claude-opus-4-6` — LiteLLM handles the routing.
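+
+The fallback behavior mentioned earlier is configured in the same `config.yaml`. A minimal sketch, assuming LiteLLM's `router_settings.fallbacks` option and the two models from the routing example:
+
+```yaml
+router_settings:
+  # If a claude-opus-4-6 request fails, retry it against gpt-4o
+  fallbacks:
+    - claude-opus-4-6: ["gpt-4o"]
+```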
-## Viewing usage
+
-Check LiteLLM's dashboard or API:
+
+### Viewing usage
+
+Check LiteLLM's dashboard or API:
-```bash
-# Key info
-curl "http://localhost:4000/key/info" \
- -H "Authorization: Bearer sk-litellm-key"
+ ```bash
+ # Key info
+ curl "http://localhost:4000/key/info" \
+ -H "Authorization: Bearer sk-litellm-key"
-# Spend logs
-curl "http://localhost:4000/spend/logs" \
- -H "Authorization: Bearer $LITELLM_MASTER_KEY"
-```
+ # Spend logs
+ curl "http://localhost:4000/spend/logs" \
+ -H "Authorization: Bearer $LITELLM_MASTER_KEY"
+ ```
-## Notes
+
-- LiteLLM runs on `http://localhost:4000` by default
-- OpenClaw connects through LiteLLM's proxy-style OpenAI-compatible `/v1`
- endpoint
-- Native OpenAI-only request shaping does not apply through LiteLLM:
- no `service_tier`, no Responses `store`, no prompt-cache hints, and no
- OpenAI reasoning-compat payload shaping
-- Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
- are not injected on custom LiteLLM base URLs
+
+### Notes
+
+ - LiteLLM runs on `http://localhost:4000` by default
+ - OpenClaw connects through LiteLLM's proxy-style OpenAI-compatible `/v1`
+ endpoint
+ - Native OpenAI-only request shaping does not apply through LiteLLM:
+ no `service_tier`, no Responses `store`, no prompt-cache hints, and no
+ OpenAI reasoning-compat payload shaping
+ - Hidden OpenClaw attribution headers (`originator`, `version`, `User-Agent`)
+ are not injected on custom LiteLLM base URLs
+
+
-## See also
+
+For general provider configuration and failover behavior, see [Model Providers](/concepts/model-providers).
+
-- [LiteLLM Docs](https://docs.litellm.ai)
-- [Model Providers](/concepts/model-providers)
+## Related
+
+- [LiteLLM Docs](https://docs.litellm.ai) - Official LiteLLM documentation and API reference.
+- [Model Providers](/concepts/model-providers) - Overview of all providers, model refs, and failover behavior.
+- Configuration - Full config reference.
+- Models - How to choose and configure models.
diff --git a/docs/providers/stepfun.md b/docs/providers/stepfun.md
index daffbe8111a..89b092ca875 100644
--- a/docs/providers/stepfun.md
+++ b/docs/providers/stepfun.md
@@ -13,49 +13,18 @@ OpenClaw includes a bundled StepFun provider plugin with two provider ids:
- `stepfun` for the standard endpoint
- `stepfun-plan` for the Step Plan endpoint
-The built-in catalogs currently differ by surface:
-
-- Standard: `step-3.5-flash`
-- Step Plan: `step-3.5-flash`, `step-3.5-flash-2603`
+
+Standard and Step Plan are **separate providers** with different endpoints and model ref prefixes (`stepfun/...` vs `stepfun-plan/...`). Use a China key with the `.com` endpoints and a global key with the `.ai` endpoints.
+
## Region and endpoint overview
-- China standard endpoint: `https://api.stepfun.com/v1`
-- Global standard endpoint: `https://api.stepfun.ai/v1`
-- China Step Plan endpoint: `https://api.stepfun.com/step_plan/v1`
-- Global Step Plan endpoint: `https://api.stepfun.ai/step_plan/v1`
-- Auth env var: `STEPFUN_API_KEY`
+| Endpoint | China (`.com`) | Global (`.ai`) |
+| --------- | -------------------------------------- | ------------------------------------- |
+| Standard | `https://api.stepfun.com/v1` | `https://api.stepfun.ai/v1` |
+| Step Plan | `https://api.stepfun.com/step_plan/v1` | `https://api.stepfun.ai/step_plan/v1` |
-Use a China key with the `.com` endpoints and a global key with the `.ai`
-endpoints.
-
-## CLI setup
-
-Interactive setup:
-
-```bash
-openclaw onboard
-```
-
-Choose one of these auth choices:
-
-- `stepfun-standard-api-key-cn`
-- `stepfun-standard-api-key-intl`
-- `stepfun-plan-api-key-cn`
-- `stepfun-plan-api-key-intl`
-
-Non-interactive examples:
-
-```bash
-openclaw onboard --auth-choice stepfun-standard-api-key-intl --stepfun-api-key "$STEPFUN_API_KEY"
-openclaw onboard --auth-choice stepfun-plan-api-key-intl --stepfun-api-key "$STEPFUN_API_KEY"
-```
-
-## Model refs
-
-- Standard default model: `stepfun/step-3.5-flash`
-- Step Plan default model: `stepfun-plan/step-3.5-flash`
-- Step Plan alternate model: `stepfun-plan/step-3.5-flash-2603`
+Auth env var: `STEPFUN_API_KEY`
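+
+Both surfaces speak the OpenAI-compatible chat-completions API, so you can sanity-check a key directly. A sketch against the global standard endpoint (swap in `.com` for a China key):
+
+```bash
+curl https://api.stepfun.ai/v1/chat/completions \
+  -H "Authorization: Bearer $STEPFUN_API_KEY" \
+  -H "Content-Type: application/json" \
+  -d '{"model": "step-3.5-flash", "messages": [{"role": "user", "content": "ping"}]}'
+```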
## Built-in catalogs
@@ -72,81 +41,190 @@ Step Plan (`stepfun-plan`):
| `stepfun-plan/step-3.5-flash` | 262,144 | 65,536 | Default Step Plan model |
| `stepfun-plan/step-3.5-flash-2603` | 262,144 | 65,536 | Additional Step Plan model |
-## Config snippets
+## Getting started
-Standard provider:
+Choose your provider surface and follow the setup steps.
-```json5
-{
- env: { STEPFUN_API_KEY: "your-key" },
- agents: { defaults: { model: { primary: "stepfun/step-3.5-flash" } } },
- models: {
- mode: "merge",
- providers: {
- stepfun: {
- baseUrl: "https://api.stepfun.ai/v1",
- api: "openai-completions",
- apiKey: "${STEPFUN_API_KEY}",
- models: [
- {
- id: "step-3.5-flash",
- name: "Step 3.5 Flash",
- reasoning: true,
- input: ["text"],
- cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
- contextWindow: 262144,
- maxTokens: 65536,
+
+### Standard
+
+**Best for:** general-purpose use via the standard StepFun endpoint.
+
+Auth choices and endpoints:
+
+ | Auth choice | Endpoint | Region |
+ | -------------------------------- | -------------------------------- | ------------- |
+ | `stepfun-standard-api-key-intl` | `https://api.stepfun.ai/v1` | International |
+ | `stepfun-standard-api-key-cn` | `https://api.stepfun.com/v1` | China |
+
+Interactive setup:
+
+ ```bash
+ openclaw onboard --auth-choice stepfun-standard-api-key-intl
+ ```
+
+ Or for the China endpoint:
+
+ ```bash
+ openclaw onboard --auth-choice stepfun-standard-api-key-cn
+ ```
+
+Non-interactive setup:
+
+ ```bash
+ openclaw onboard --auth-choice stepfun-standard-api-key-intl \
+ --stepfun-api-key "$STEPFUN_API_KEY"
+ ```
+
+Verify:
+
+ ```bash
+ openclaw models list --provider stepfun
+ ```
+
+#### Model refs
+
+- Default model: `stepfun/step-3.5-flash`
+
+### Step Plan
+
+**Best for:** the Step Plan reasoning endpoint.
+
+Auth choices and endpoints:
+
+ | Auth choice | Endpoint | Region |
+ | ---------------------------- | --------------------------------------- | ------------- |
+ | `stepfun-plan-api-key-intl` | `https://api.stepfun.ai/step_plan/v1` | International |
+ | `stepfun-plan-api-key-cn` | `https://api.stepfun.com/step_plan/v1` | China |
+
+Interactive setup:
+
+ ```bash
+ openclaw onboard --auth-choice stepfun-plan-api-key-intl
+ ```
+
+ Or for the China endpoint:
+
+ ```bash
+ openclaw onboard --auth-choice stepfun-plan-api-key-cn
+ ```
+
+Non-interactive setup:
+
+ ```bash
+ openclaw onboard --auth-choice stepfun-plan-api-key-intl \
+ --stepfun-api-key "$STEPFUN_API_KEY"
+ ```
+
+Verify:
+
+ ```bash
+ openclaw models list --provider stepfun-plan
+ ```
+
+#### Model refs
+
+- Default model: `stepfun-plan/step-3.5-flash`
+- Alternate model: `stepfun-plan/step-3.5-flash-2603`
+
+## Advanced
+
+
+### Standard provider config
+
+ ```json5
+ {
+ env: { STEPFUN_API_KEY: "your-key" },
+ agents: { defaults: { model: { primary: "stepfun/step-3.5-flash" } } },
+ models: {
+ mode: "merge",
+ providers: {
+ stepfun: {
+ baseUrl: "https://api.stepfun.ai/v1",
+ api: "openai-completions",
+ apiKey: "${STEPFUN_API_KEY}",
+ models: [
+ {
+ id: "step-3.5-flash",
+ name: "Step 3.5 Flash",
+ reasoning: true,
+ input: ["text"],
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
+ contextWindow: 262144,
+ maxTokens: 65536,
+ },
+ ],
},
- ],
+ },
},
- },
- },
-}
-```
+ }
+ ```
+
-Step Plan provider:
-
-```json5
-{
- env: { STEPFUN_API_KEY: "your-key" },
- agents: { defaults: { model: { primary: "stepfun-plan/step-3.5-flash" } } },
- models: {
- mode: "merge",
- providers: {
- "stepfun-plan": {
- baseUrl: "https://api.stepfun.ai/step_plan/v1",
- api: "openai-completions",
- apiKey: "${STEPFUN_API_KEY}",
- models: [
- {
- id: "step-3.5-flash",
- name: "Step 3.5 Flash",
- reasoning: true,
- input: ["text"],
- cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
- contextWindow: 262144,
- maxTokens: 65536,
+
+### Step Plan provider config
+
+ ```json5
+ {
+ env: { STEPFUN_API_KEY: "your-key" },
+ agents: { defaults: { model: { primary: "stepfun-plan/step-3.5-flash" } } },
+ models: {
+ mode: "merge",
+ providers: {
+ "stepfun-plan": {
+ baseUrl: "https://api.stepfun.ai/step_plan/v1",
+ api: "openai-completions",
+ apiKey: "${STEPFUN_API_KEY}",
+ models: [
+ {
+ id: "step-3.5-flash",
+ name: "Step 3.5 Flash",
+ reasoning: true,
+ input: ["text"],
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
+ contextWindow: 262144,
+ maxTokens: 65536,
+ },
+ {
+ id: "step-3.5-flash-2603",
+ name: "Step 3.5 Flash 2603",
+ reasoning: true,
+ input: ["text"],
+ cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
+ contextWindow: 262144,
+ maxTokens: 65536,
+ },
+ ],
},
- {
- id: "step-3.5-flash-2603",
- name: "Step 3.5 Flash 2603",
- reasoning: true,
- input: ["text"],
- cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
- contextWindow: 262144,
- maxTokens: 65536,
- },
- ],
+ },
},
- },
- },
-}
-```
+ }
+ ```
+
-## Notes
+
+### Notes
+
+- The provider is bundled with OpenClaw, so there is no separate plugin install step.
+- `step-3.5-flash-2603` is currently exposed only on `stepfun-plan`.
+- A single auth flow writes region-matched profiles for both `stepfun` and `stepfun-plan`, so both surfaces can be discovered together.
+- Use `openclaw models list` and `openclaw models set <model>` to inspect or switch models.
+
+
-- The provider is bundled with OpenClaw, so there is no separate plugin install step.
-- `step-3.5-flash-2603` is currently exposed only on `stepfun-plan`.
-- A single auth flow writes region-matched profiles for both `stepfun` and `stepfun-plan`, so both surfaces can be discovered together.
-- Use `openclaw models list` and `openclaw models set ` to inspect or switch models.
-- For the broader provider overview, see [Model providers](/concepts/model-providers).
+
+For the broader provider overview, see [Model providers](/concepts/model-providers).
+
+
+## Related
+
+- [Model providers](/concepts/model-providers) - Overview of all providers, model refs, and failover behavior.
+- Configuration - Full config schema for providers, models, and plugins.
+- Models - How to choose and configure models.
+- StepFun console - StepFun API key management and documentation.
diff --git a/docs/providers/vydra.md b/docs/providers/vydra.md
index 2ae407e45ab..06dcfe01142 100644
--- a/docs/providers/vydra.md
+++ b/docs/providers/vydra.md
@@ -10,134 +10,165 @@ title: "Vydra"
The bundled Vydra plugin adds:
-- image generation via `vydra/grok-imagine`
-- video generation via `vydra/veo3` and `vydra/kling`
-- speech synthesis via Vydra's ElevenLabs-backed TTS route
+- Image generation via `vydra/grok-imagine`
+- Video generation via `vydra/veo3` and `vydra/kling`
+- Speech synthesis via Vydra's ElevenLabs-backed TTS route
OpenClaw uses the same `VYDRA_API_KEY` for all three capabilities.
-## Important base URL
-
-Use `https://www.vydra.ai/api/v1`.
+
+Use `https://www.vydra.ai/api/v1` as the base URL.
Vydra's apex host (`https://vydra.ai/api/v1`) currently redirects to `www`. Some HTTP clients drop `Authorization` on that cross-host redirect, which turns a valid API key into a misleading auth failure. The bundled plugin uses the `www` base URL directly to avoid that.
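+
+You can observe the redirect directly; this check is optional and assumes `curl` is available:
+
+```bash
+# The apex host responds with a redirect to the www host; note the Location header
+curl -sI https://vydra.ai/api/v1 | head -n 5
+```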
+
## Setup
-Interactive onboarding:
+
+Interactive onboarding:
+
+ ```bash
+ openclaw onboard --auth-choice vydra-api-key
+ ```
-```bash
-openclaw onboard --auth-choice vydra-api-key
-```
+ Or set the env var directly:
-Or set the env var directly:
+ ```bash
+ export VYDRA_API_KEY="vydra_live_..."
+ ```
-```bash
-export VYDRA_API_KEY="vydra_live_..."
-```
+
+Pick one or more of the capabilities below (image, video, or speech) and apply the matching configuration.
+
+
-## Image generation
+## Capabilities
-Default image model:
+
+### Image generation
+
+Default image model:
-- `vydra/grok-imagine`
+ - `vydra/grok-imagine`
-Set it as the default image provider:
+ Set it as the default image provider:
-```json5
-{
- agents: {
- defaults: {
- imageGenerationModel: {
- primary: "vydra/grok-imagine",
- },
- },
- },
-}
-```
-
-Current bundled support is text-to-image only. Vydra's hosted edit routes expect remote image URLs, and OpenClaw does not add a Vydra-specific upload bridge in the bundled plugin yet.
-
-See [Image Generation](/tools/image-generation) for shared tool behavior.
-
-## Video generation
-
-Registered video models:
-
-- `vydra/veo3` for text-to-video
-- `vydra/kling` for image-to-video
-
-Set Vydra as the default video provider:
-
-```json5
-{
- agents: {
- defaults: {
- videoGenerationModel: {
- primary: "vydra/veo3",
- },
- },
- },
-}
-```
-
-Notes:
-
-- `vydra/veo3` is bundled as text-to-video only.
-- `vydra/kling` currently requires a remote image URL reference. Local file uploads are rejected up front.
-- Vydra's current `kling` HTTP route has been inconsistent about whether it requires `image_url` or `video_url`; the bundled provider maps the same remote image URL into both fields.
-- The bundled plugin stays conservative and does not forward undocumented style knobs such as aspect ratio, resolution, watermark, or generated audio.
-
-Provider-specific live coverage:
-
-```bash
-OPENCLAW_LIVE_TEST=1 \
-OPENCLAW_LIVE_VYDRA_VIDEO=1 \
-pnpm test:live -- extensions/vydra/vydra.live.test.ts
-```
-
-The bundled Vydra live file now covers:
-
-- `vydra/veo3` text-to-video
-- `vydra/kling` image-to-video using a remote image URL
-
-Override the remote image fixture when needed:
-
-```bash
-export OPENCLAW_LIVE_VYDRA_KLING_IMAGE_URL="https://example.com/reference.png"
-```
-
-See [Video Generation](/tools/video-generation) for shared tool behavior.
-
-## Speech synthesis
-
-Set Vydra as the speech provider:
-
-```json5
-{
- messages: {
- tts: {
- provider: "vydra",
- providers: {
- vydra: {
- apiKey: "${VYDRA_API_KEY}",
- voiceId: "21m00Tcm4TlvDq8ikWAM",
+ ```json5
+ {
+ agents: {
+ defaults: {
+ imageGenerationModel: {
+ primary: "vydra/grok-imagine",
+ },
},
},
- },
- },
-}
-```
+ }
+ ```
-Defaults:
+ Current bundled support is text-to-image only. Vydra's hosted edit routes expect remote image URLs, and OpenClaw does not add a Vydra-specific upload bridge in the bundled plugin yet.
-- model: `elevenlabs/tts`
-- voice id: `21m00Tcm4TlvDq8ikWAM`
+
+ See [Image Generation](/tools/image-generation) for shared tool parameters, provider selection, and failover behavior.
+
-The bundled plugin currently exposes one known-good default voice and returns MP3 audio files.
+
+### Video generation
+
+Registered video models:
+
+ - `vydra/veo3` for text-to-video
+ - `vydra/kling` for image-to-video
+
+ Set Vydra as the default video provider:
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ videoGenerationModel: {
+ primary: "vydra/veo3",
+ },
+ },
+ },
+ }
+ ```
+
+ Notes:
+
+ - `vydra/veo3` is bundled as text-to-video only.
+ - `vydra/kling` currently requires a remote image URL reference. Local file uploads are rejected up front.
+ - Vydra's current `kling` HTTP route has been inconsistent about whether it requires `image_url` or `video_url`; the bundled provider maps the same remote image URL into both fields.
+ - The bundled plugin stays conservative and does not forward undocumented style knobs such as aspect ratio, resolution, watermark, or generated audio.
+
+
+ See [Video Generation](/tools/video-generation) for shared tool parameters, provider selection, and failover behavior.
+
+
+### Live tests
+
+Provider-specific live coverage:
+
+ ```bash
+ OPENCLAW_LIVE_TEST=1 \
+ OPENCLAW_LIVE_VYDRA_VIDEO=1 \
+ pnpm test:live -- extensions/vydra/vydra.live.test.ts
+ ```
+
+ The bundled Vydra live file now covers:
+
+ - `vydra/veo3` text-to-video
+ - `vydra/kling` image-to-video using a remote image URL
+
+ Override the remote image fixture when needed:
+
+ ```bash
+ export OPENCLAW_LIVE_VYDRA_KLING_IMAGE_URL="https://example.com/reference.png"
+ ```
+
+
+### Speech synthesis
+
+Set Vydra as the speech provider:
+
+ ```json5
+ {
+ messages: {
+ tts: {
+ provider: "vydra",
+ providers: {
+ vydra: {
+ apiKey: "${VYDRA_API_KEY}",
+ voiceId: "21m00Tcm4TlvDq8ikWAM",
+ },
+ },
+ },
+ },
+ }
+ ```
+
+ Defaults:
+
+ - Model: `elevenlabs/tts`
+ - Voice id: `21m00Tcm4TlvDq8ikWAM`
+
+ The bundled plugin currently exposes one known-good default voice and returns MP3 audio files.
+
## Related
-- [Provider Directory](/providers/index)
-- [Image Generation](/tools/image-generation)
-- [Video Generation](/tools/video-generation)
+
+- [Provider Directory](/providers/index) - Browse all available providers.
+- [Image Generation](/tools/image-generation) - Shared image tool parameters and provider selection.
+- [Video Generation](/tools/video-generation) - Shared video tool parameters and provider selection.
+- Configuration - Agent defaults and model configuration.
diff --git a/docs/providers/xai.md b/docs/providers/xai.md
index e495ad74446..dfa1adbba0b 100644
--- a/docs/providers/xai.md
+++ b/docs/providers/xai.md
@@ -10,113 +10,167 @@ title: "xAI"
OpenClaw ships a bundled `xai` provider plugin for Grok models.
-## Setup
+## Getting started
-1. Create an API key in the xAI console.
-2. Set `XAI_API_KEY`, or run:
+
+1. Create an API key in the [xAI console](https://console.x.ai/).
+
+2. Set `XAI_API_KEY`, or run:
-```bash
-openclaw onboard --auth-choice xai-api-key
-```
+ ```bash
+ openclaw onboard --auth-choice xai-api-key
+ ```
-3. Pick a model such as:
+
+3. Pick a model such as:
+
+ ```json5
+ {
+ agents: { defaults: { model: { primary: "xai/grok-4" } } },
+ }
+ ```
+
+
-```json5
-{
- agents: { defaults: { model: { primary: "xai/grok-4" } } },
-}
-```
-
-OpenClaw now uses the xAI Responses API as the bundled xAI transport. The same
+
+OpenClaw uses the xAI Responses API as the bundled xAI transport. The same
`XAI_API_KEY` can also power Grok-backed `web_search`, first-class `x_search`,
and remote `code_execution`.
If you store an xAI key under `plugins.entries.xai.config.webSearch.apiKey`,
-the bundled xAI model provider now reuses that key as a fallback too.
+the bundled xAI model provider reuses that key as a fallback too.
`code_execution` tuning lives under `plugins.entries.xai.config.codeExecution`.
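+
+A minimal sketch of that fallback key location, mirroring the config paths quoted above (the key value is a placeholder):
+
+```json5
+{
+  plugins: {
+    entries: {
+      xai: {
+        config: {
+          webSearch: { apiKey: "${XAI_API_KEY}" },
+        },
+      },
+    },
+  },
+}
+```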
+
-## Current bundled model catalog
+## Bundled model catalog
-OpenClaw now includes these xAI model families out of the box:
+OpenClaw includes these xAI model families out of the box:
-- `grok-3`, `grok-3-fast`, `grok-3-mini`, `grok-3-mini-fast`
-- `grok-4`, `grok-4-0709`
-- `grok-4-fast`, `grok-4-fast-non-reasoning`
-- `grok-4-1-fast`, `grok-4-1-fast-non-reasoning`
-- `grok-4.20-beta-latest-reasoning`, `grok-4.20-beta-latest-non-reasoning`
-- `grok-code-fast-1`
+| Family | Model ids |
+| -------------- | ------------------------------------------------------------------------ |
+| Grok 3 | `grok-3`, `grok-3-fast`, `grok-3-mini`, `grok-3-mini-fast` |
+| Grok 4 | `grok-4`, `grok-4-0709` |
+| Grok 4 Fast | `grok-4-fast`, `grok-4-fast-non-reasoning` |
+| Grok 4.1 Fast | `grok-4-1-fast`, `grok-4-1-fast-non-reasoning` |
+| Grok 4.20 Beta | `grok-4.20-beta-latest-reasoning`, `grok-4.20-beta-latest-non-reasoning` |
+| Grok Code | `grok-code-fast-1` |
The plugin also forward-resolves newer `grok-4*` and `grok-code-fast*` ids when
they follow the same API shape.
-Fast-model notes:
+
+`grok-4-fast`, `grok-4-1-fast`, and the `grok-4.20-beta-*` variants are the
+current image-capable Grok refs in the bundled catalog.
+
-- `grok-4-fast`, `grok-4-1-fast`, and the `grok-4.20-beta-*` variants are the
- current image-capable Grok refs in the bundled catalog.
-- `/fast on` or `agents.defaults.models["xai/"].params.fastMode: true`
- rewrites native xAI requests as follows:
- - `grok-3` -> `grok-3-fast`
- - `grok-3-mini` -> `grok-3-mini-fast`
- - `grok-4` -> `grok-4-fast`
- - `grok-4-0709` -> `grok-4-fast`
+### Fast-mode mappings
-Legacy compatibility aliases still normalize to the canonical bundled ids. For
-example:
+`/fast on` or `agents.defaults.models["xai/"].params.fastMode: true`
+rewrites native xAI requests as follows:
-- `grok-4-fast-reasoning` -> `grok-4-fast`
-- `grok-4-1-fast-reasoning` -> `grok-4-1-fast`
-- `grok-4.20-reasoning` -> `grok-4.20-beta-latest-reasoning`
-- `grok-4.20-non-reasoning` -> `grok-4.20-beta-latest-non-reasoning`
+| Source model | Fast-mode target |
+| ------------- | ------------------ |
+| `grok-3` | `grok-3-fast` |
+| `grok-3-mini` | `grok-3-mini-fast` |
+| `grok-4` | `grok-4-fast` |
+| `grok-4-0709` | `grok-4-fast` |
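+
+The same rewrite can be pinned in config instead of toggling `/fast on`. A sketch, assuming per-model `params` accept `fastMode` (the concrete model key here is chosen for illustration):
+
+```json5
+{
+  agents: {
+    defaults: {
+      models: {
+        // requests for grok-4 are rewritten to grok-4-fast
+        "xai/grok-4": { params: { fastMode: true } },
+      },
+    },
+  },
+}
+```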
-## Web search
+### Legacy compatibility aliases
-The bundled `grok` web-search provider uses `XAI_API_KEY` too:
+Legacy aliases still normalize to the canonical bundled ids:
-```bash
-openclaw config set tools.web.search.provider grok
-```
+| Legacy alias | Canonical id |
+| ------------------------- | ------------------------------------- |
+| `grok-4-fast-reasoning` | `grok-4-fast` |
+| `grok-4-1-fast-reasoning` | `grok-4-1-fast` |
+| `grok-4.20-reasoning` | `grok-4.20-beta-latest-reasoning` |
+| `grok-4.20-non-reasoning` | `grok-4.20-beta-latest-non-reasoning` |
-## Video generation
+## Features
-The bundled `xai` plugin also registers video generation through the shared
-`video_generate` tool.
+
+### Web search
+
+The bundled `grok` web-search provider uses `XAI_API_KEY` too:
-- Default video model: `xai/grok-imagine-video`
-- Modes: text-to-video, image-to-video, and remote video edit/extend flows
-- Supports `aspectRatio` and `resolution`
-- Current limit: local video buffers are not accepted; use remote `http(s)`
- URLs for video-reference/edit inputs
+ ```bash
+ openclaw config set tools.web.search.provider grok
+ ```
-To use xAI as the default video provider:
+
-```json5
-{
- agents: {
- defaults: {
- videoGenerationModel: {
- primary: "xai/grok-imagine-video",
+
+### Video generation
+
+The bundled `xai` plugin registers video generation through the shared
+`video_generate` tool.
+
+ - Default video model: `xai/grok-imagine-video`
+ - Modes: text-to-video, image-to-video, and remote video edit/extend flows
+ - Supports `aspectRatio` and `resolution`
+
+Note: local video buffers are not accepted. Use remote `http(s)` URLs for
+video-reference and edit inputs.
+
+
+ To use xAI as the default video provider:
+
+ ```json5
+ {
+ agents: {
+ defaults: {
+ videoGenerationModel: {
+ primary: "xai/grok-imagine-video",
+ },
+ },
},
- },
- },
-}
-```
+ }
+ ```
-See [Video Generation](/tools/video-generation) for the shared tool
-parameters, provider selection, and failover behavior.
+
+ See [Video Generation](/tools/video-generation) for shared tool parameters,
+ provider selection, and failover behavior.
+
-## Known limits
+
-- Auth is API-key only today. There is no xAI OAuth/device-code flow in OpenClaw yet.
-- `grok-4.20-multi-agent-experimental-beta-0304` is not supported on the normal xAI provider path because it requires a different upstream API surface than the standard OpenClaw xAI transport.
+
+### Known limits
+
+ - Auth is API-key only today. There is no xAI OAuth or device-code flow in
+ OpenClaw yet.
+ - `grok-4.20-multi-agent-experimental-beta-0304` is not supported on the
+ normal xAI provider path because it requires a different upstream API
+ surface than the standard OpenClaw xAI transport.
+
-## Notes
+
+### Notes
+
+ - OpenClaw applies xAI-specific tool-schema and tool-call compatibility fixes
+ automatically on the shared runner path.
+ - Native xAI requests default `tool_stream: true`. Set
+ `agents.defaults.models["xai/"].params.tool_stream` to `false` to
+ disable it.
+ - The bundled xAI wrapper strips unsupported strict tool-schema flags and
+ reasoning payload keys before sending native xAI requests.
+ - `web_search`, `x_search`, and `code_execution` are exposed as OpenClaw
+ tools. OpenClaw enables the specific xAI built-in it needs inside each tool
+ request instead of attaching all native tools to every chat turn.
+ - `x_search` and `code_execution` are owned by the bundled xAI plugin rather
+ than hardcoded into the core model runtime.
+ - `code_execution` is remote xAI sandbox execution, not local
+ [`exec`](/tools/exec).
+
+
-- OpenClaw applies xAI-specific tool-schema and tool-call compatibility fixes automatically on the shared runner path.
-- Native xAI requests default `tool_stream: true`. Set
- `agents.defaults.models["xai/"].params.tool_stream` to `false` to
- disable it.
-- The bundled xAI wrapper strips unsupported strict tool-schema flags and
- reasoning payload keys before sending native xAI requests.
-- `web_search`, `x_search`, and `code_execution` are exposed as OpenClaw tools. OpenClaw enables the specific xAI built-in it needs inside each tool request instead of attaching all native tools to every chat turn.
-- `x_search` and `code_execution` are owned by the bundled xAI plugin rather than hardcoded into the core model runtime.
-- `code_execution` is remote xAI sandbox execution, not local [`exec`](/tools/exec).
-- For the broader provider overview, see [Model providers](/providers/index).
+## Related
+
+- [Model providers](/concepts/model-providers) - Choosing providers, model refs, and failover behavior.
+- [Video Generation](/tools/video-generation) - Shared video tool parameters and provider selection.
+- [Provider directory](/providers/index) - The broader provider overview.
+- Troubleshooting - Common issues and fixes.