openclaw/docs/providers/lmstudio.md
Rugved Somwanshi 0cfb83edfa feat: LM Studio Integration (#53248)
Co-authored-by: Frank Yang <frank.ekn@gmail.com>
2026-04-13 15:22:44 +08:00

---
summary: Run OpenClaw with LM Studio
read_when:
  - You want to run OpenClaw with open source models via LM Studio
  - You want to set up and configure LM Studio
title: LM Studio
---

# LM Studio

LM Studio is a friendly yet powerful app for running open-weight models on your own hardware. It runs llama.cpp (GGUF) models, as well as MLX models on Apple Silicon, and ships as either a GUI app or a headless daemon (llmster). For product and setup docs, see lmstudio.ai.

## Quick start

  1. Install LM Studio (desktop) or llmster (headless):

     ```
     curl -fsSL https://lmstudio.ai/install.sh | bash
     ```

  1. Start the server. Either launch the desktop app, or run the daemon:

     ```
     lms daemon up
     lms server start --port 1234
     ```

     If you are using the desktop app, make sure JIT model loading is enabled for a smooth experience. Learn more in the LM Studio JIT and TTL guide.
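When scripting setup, it helps to wait until the server actually answers before continuing. A minimal sketch, assuming the default port and the models endpoint from this guide (the helper name is ours, not part of OpenClaw or LM Studio):

```shell
# Poll the models endpoint until the server responds, or give up.
# Usage: wait_for_lmstudio [url] [attempts]
wait_for_lmstudio() {
  url="${1:-http://localhost:1234/api/v1/models}"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    # -f makes curl treat HTTP errors as failures; output is discarded
    if curl -fsS -o /dev/null "$url" 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

With that in place, `wait_for_lmstudio && openclaw onboard` only proceeds once the server is reachable.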

  1. OpenClaw requires an LM Studio token value. Set `LM_API_TOKEN`:

     ```
     export LM_API_TOKEN="your-lm-studio-api-token"
     ```

     If LM Studio authentication is disabled, use any non-empty token value:

     ```
     export LM_API_TOKEN="placeholder-key"
     ```

     For LM Studio auth setup details, see LM Studio Authentication.

  1. Run onboarding and choose LM Studio:

     ```
     openclaw onboard
     ```

  1. In onboarding, use the Default model prompt to pick your LM Studio model. You can also set or change it later:

     ```
     openclaw models set lmstudio/qwen/qwen3.5-9b
     ```

LM Studio model keys follow an author/model-name format (e.g. `qwen/qwen3.5-9b`). OpenClaw model refs prepend the provider name: `lmstudio/qwen/qwen3.5-9b`. You can find the exact key for a model by running `curl http://localhost:1234/api/v1/models` and looking at the `key` field.
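The key-to-ref mapping can be scripted. A sketch, assuming the `/api/v1/models` response is a JSON object with a `data` array whose entries carry a `key` field (swap the hardcoded sample for the live `curl` output):

```shell
# Sample response shape (assumed); for a live server use:
#   response="$(curl -s http://localhost:1234/api/v1/models)"
response='{"data":[{"key":"qwen/qwen3.5-9b"},{"key":"qwen/qwen3-coder-next"}]}'

# Extract each model key and prepend the lmstudio/ provider prefix.
printf '%s' "$response" | python3 -c '
import json, sys
for model in json.load(sys.stdin)["data"]:
    print("lmstudio/" + model["key"])
'
```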

## Non-interactive onboarding

Use non-interactive onboarding when you want to script setup (CI, provisioning, remote bootstrap):

```
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio
```

Or specify the base URL, API key, and model explicitly:

```
openclaw onboard \
  --non-interactive \
  --accept-risk \
  --auth-choice lmstudio \
  --custom-base-url http://localhost:1234/v1 \
  --lmstudio-api-key "$LM_API_TOKEN" \
  --custom-model-id qwen/qwen3.5-9b
```

`--custom-model-id` takes the model key as returned by LM Studio (e.g. `qwen/qwen3.5-9b`), without the `lmstudio/` provider prefix.

Non-interactive onboarding requires `--lmstudio-api-key` (or `LM_API_TOKEN` in the environment). For unauthenticated LM Studio servers, any non-empty token value works.

`--custom-api-key` remains supported for compatibility, but `--lmstudio-api-key` is preferred for LM Studio.

This writes `models.providers.lmstudio`, sets the default model to `lmstudio/<custom-model-id>`, and writes the `lmstudio:default` auth profile.
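In provisioning scripts it is worth failing fast when the token is missing, rather than letting onboarding error out partway through. A sketch using the flags documented above (the guard function is ours):

```shell
# Returns success only when LM_API_TOKEN is set to a non-empty value.
require_token() {
  [ -n "${LM_API_TOKEN:-}" ]
}

if require_token; then
  openclaw onboard \
    --non-interactive \
    --accept-risk \
    --auth-choice lmstudio \
    --lmstudio-api-key "$LM_API_TOKEN"
else
  echo "LM_API_TOKEN is not set; export it first (any non-empty value works for unauthenticated servers)" >&2
fi
```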

Interactive setup can also prompt for an optional preferred load context length, which is applied across the discovered LM Studio models it saves into config.

## Configuration

### Explicit configuration

```
{
  models: {
    providers: {
      lmstudio: {
        baseUrl: "http://localhost:1234/v1",
        apiKey: "${LM_API_TOKEN}",
        api: "openai-completions",
        models: [
          {
            id: "qwen/qwen3-coder-next",
            name: "Qwen 3 Coder Next",
            reasoning: false,
            input: ["text"],
            cost: { input: 0, output: 0, cacheRead: 0, cacheWrite: 0 },
            contextWindow: 128000,
            maxTokens: 8192,
          },
        ],
      },
    },
  },
}
```

## Troubleshooting

### LM Studio not detected

Make sure LM Studio is running and that you set `LM_API_TOKEN` (for unauthenticated servers, any non-empty token value works):

```
# Start via desktop app, or headless:
lms server start --port 1234
```

Verify the API is accessible:

```
curl http://localhost:1234/api/v1/models
```

### Authentication errors (HTTP 401)

If setup reports HTTP 401, verify your API key:

- Check that `LM_API_TOKEN` matches the key configured in LM Studio.
- For LM Studio auth setup details, see LM Studio Authentication.
- If your server does not require authentication, use any non-empty token value for `LM_API_TOKEN`.
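You can also probe the API directly and branch on the HTTP status, outside of OpenClaw. A sketch, using the default port from the quick start (the status handling is ours):

```shell
# Ask curl for only the HTTP status code; "000" means no connection was made.
status="$(curl -s -o /dev/null -w '%{http_code}' \
  -H "Authorization: Bearer ${LM_API_TOKEN:-placeholder-key}" \
  http://localhost:1234/api/v1/models || true)"

case "$status" in
  200) echo "token accepted" ;;
  401) echo "token rejected: LM_API_TOKEN does not match the key configured in LM Studio" ;;
  *)   echo "unexpected status '$status': is the server running?" ;;
esac
```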

### Just-in-time model loading

LM Studio supports just-in-time (JIT) model loading, where a model is loaded on the first request that references it. Make sure JIT is enabled to avoid "Model not loaded" errors.
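With JIT enabled, the first request that names a model triggers the load. A sketch of such a request against LM Studio's OpenAI-compatible endpoint, reusing the model key and port from earlier in this guide:

```shell
# The first chat completion naming this model makes LM Studio load it (JIT).
curl -s http://localhost:1234/v1/chat/completions \
  -H "Authorization: Bearer ${LM_API_TOKEN:-placeholder-key}" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen/qwen3.5-9b",
    "messages": [{"role": "user", "content": "Say hello."}]
  }' || echo "request failed: is the server running?" >&2
```

The initial response may be slow while the model loads; subsequent requests reuse the loaded instance until its TTL expires.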