---
title: Memory configuration reference
sidebarTitle: Memory config
summary: All configuration knobs for memory search, embedding providers, QMD, hybrid search, and multimodal indexing
---
This page lists every configuration knob for OpenClaw memory search. For conceptual overviews, see:

- How memory works
- Default SQLite backend
- Local-first sidecar
- Search pipeline and tuning
- Memory sub-agent for interactive sessions

All memory search settings live under `agents.defaults.memorySearch` in `openclaw.json` unless noted otherwise.
Active memory uses a two-gate model:
- the plugin must be enabled and target the current agent id
- the request must be an eligible interactive persistent chat session
See Active Memory for the activation model, plugin-owned config, transcript persistence, and safe rollout pattern.
## Provider selection

| Key | Type | Default | Description |
|---|---|---|---|
| `provider` | `string` | auto-detected | Embedding adapter ID: `bedrock`, `gemini`, `github-copilot`, `local`, `mistral`, `ollama`, `openai`, `voyage` |
| `model` | `string` | provider default | Embedding model name |
| `fallback` | `string` | `"none"` | Fallback adapter ID when the primary fails |
| `enabled` | `boolean` | `true` | Enable or disable memory search |
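For example, a minimal selection with a local fallback might look like this (a sketch; the model name matches the OpenAI example used elsewhere on this page, so adjust it to your provider):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        enabled: true,
        provider: "openai",
        model: "text-embedding-3-small",
        // Used when the primary adapter fails
        fallback: "local",
      },
    },
  },
}
```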
### Auto-detection order

When `provider` is not set, OpenClaw selects the first available:

`ollama` is supported but not auto-detected (set it explicitly).
## API key resolution

Remote embeddings require an API key. Bedrock uses the AWS SDK default credential chain instead (instance roles, SSO, access keys).

| Provider | Env var | Config key |
|---|---|---|
| Bedrock | AWS credential chain | No API key needed |
| Gemini | `GEMINI_API_KEY` | `models.providers.google.apiKey` |
| GitHub Copilot | `COPILOT_GITHUB_TOKEN`, `GH_TOKEN`, `GITHUB_TOKEN` | Auth profile via device login |
| Mistral | `MISTRAL_API_KEY` | `models.providers.mistral.apiKey` |
| Ollama | `OLLAMA_API_KEY` (placeholder) | -- |
| OpenAI | `OPENAI_API_KEY` | `models.providers.openai.apiKey` |
| Voyage | `VOYAGE_API_KEY` | `models.providers.voyage.apiKey` |
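If you prefer config over environment variables, the per-provider config keys from the table can be set in `openclaw.json`; a sketch for OpenAI with a placeholder key:

```json5
{
  models: {
    providers: {
      // Resolved when OPENAI_API_KEY is not set in the environment
      openai: { apiKey: "YOUR_KEY" },
    },
  },
}
```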
## Remote endpoint config

For custom OpenAI-compatible endpoints or overriding provider defaults:

- Custom API base URL
- Override API key
- Extra HTTP headers (merged with provider defaults)

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        model: "text-embedding-3-small",
        remote: {
          baseUrl: "https://api.example.com/v1/",
          apiKey: "YOUR_KEY",
        },
      },
    },
  },
}
```
## Provider-specific config

### Gemini

| Key | Type | Default | Description |
|---|---|---|---|
| `model` | `string` | `gemini-embedding-001` | Also supports `gemini-embedding-2-preview` |
| `outputDimensionality` | `number` | `3072` | For Embedding 2: 768, 1536, or 3072 |

<Warning>
Changing `model` or `outputDimensionality` triggers an automatic full reindex.
</Warning>
### Bedrock

Bedrock uses the AWS SDK default credential chain — no API keys needed. If OpenClaw runs on EC2 with a Bedrock-enabled instance role, just set the provider and model:
```json5
{
agents: {
defaults: {
memorySearch: {
provider: "bedrock",
model: "amazon.titan-embed-text-v2:0",
},
},
},
}
```
| Key | Type | Default | Description |
| ---------------------- | -------- | ------------------------------ | ------------------------------- |
| `model` | `string` | `amazon.titan-embed-text-v2:0` | Any Bedrock embedding model ID |
| `outputDimensionality` | `number` | model default | For Titan V2: 256, 512, or 1024 |
**Supported models** (with family detection and dimension defaults):
| Model ID | Provider | Default Dims | Configurable Dims |
| ------------------------------------------ | ---------- | ------------ | -------------------- |
| `amazon.titan-embed-text-v2:0` | Amazon | 1024 | 256, 512, 1024 |
| `amazon.titan-embed-text-v1` | Amazon | 1536 | -- |
| `amazon.titan-embed-g1-text-02` | Amazon | 1536 | -- |
| `amazon.titan-embed-image-v1` | Amazon | 1024 | -- |
| `amazon.nova-2-multimodal-embeddings-v1:0` | Amazon | 1024 | 256, 384, 1024, 3072 |
| `cohere.embed-english-v3` | Cohere | 1024 | -- |
| `cohere.embed-multilingual-v3` | Cohere | 1024 | -- |
| `cohere.embed-v4:0` | Cohere | 1536 | 256-1536 |
| `twelvelabs.marengo-embed-3-0-v1:0` | TwelveLabs | 512 | -- |
| `twelvelabs.marengo-embed-2-7-v1:0` | TwelveLabs | 1024 | -- |
Throughput-suffixed variants (e.g., `amazon.titan-embed-text-v1:2:8k`) inherit the base model's configuration.
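As a sketch (assuming `outputDimensionality` sits alongside `model` under `memorySearch`, as in the Gemini section), reducing Titan V2 to 512 dimensions would look like:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "bedrock",
        model: "amazon.titan-embed-text-v2:0",
        // Titan V2 supports 256, 512, or 1024 (see table above)
        outputDimensionality: 512,
      },
    },
  },
}
```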
**Authentication:** Bedrock auth uses the standard AWS SDK credential resolution order:
1. Environment variables (`AWS_ACCESS_KEY_ID` + `AWS_SECRET_ACCESS_KEY`)
2. SSO token cache
3. Web identity token credentials
4. Shared credentials and config files
5. ECS or EC2 metadata credentials
Region is resolved from `AWS_REGION`, `AWS_DEFAULT_REGION`, the `amazon-bedrock` provider `baseUrl`, or defaults to `us-east-1`.
**IAM permissions:** the IAM role or user needs:
```json
{
"Effect": "Allow",
"Action": "bedrock:InvokeModel",
"Resource": "*"
}
```
For least-privilege, scope `InvokeModel` to the specific model:
```
arn:aws:bedrock:*::foundation-model/amazon.titan-embed-text-v2:0
```
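Combining the two snippets above, a least-privilege statement might look like:

```json
{
  "Effect": "Allow",
  "Action": "bedrock:InvokeModel",
  "Resource": "arn:aws:bedrock:*::foundation-model/amazon.titan-embed-text-v2:0"
}
```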
### Local

| Key | Type | Default | Description |
| --------------------- | ------------------ | ---------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `local.modelPath` | `string` | auto-downloaded | Path to GGUF model file |
| `local.modelCacheDir` | `string` | node-llama-cpp default | Cache dir for downloaded models |
| `local.contextSize` | `number \| "auto"` | `4096` | Context window size for the embedding context. 4096 covers typical chunks (128–512 tokens) while bounding non-weight VRAM. Lower to 1024–2048 on constrained hosts. `"auto"` uses the model's trained maximum — not recommended for 8B+ models (Qwen3-Embedding-8B: 40 960 tokens → ~32 GB VRAM vs ~8.8 GB at 4096). |
Default model: `embeddinggemma-300m-qat-Q8_0.gguf` (~0.6 GB, auto-downloaded). Requires native build: `pnpm approve-builds` then `pnpm rebuild node-llama-cpp`.
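A sketch for a memory-constrained host (the model path is hypothetical; `contextSize` lowered per the guidance in the table above):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "local",
        local: {
          // Hypothetical path to a locally downloaded GGUF file
          modelPath: "/models/embeddinggemma-300m-qat-Q8_0.gguf",
          // Lower than the 4096 default to bound VRAM/RAM use
          contextSize: 2048,
        },
      },
    },
  },
}
```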
Use the standalone CLI to verify the same provider path the Gateway uses:
```bash
openclaw memory status --deep --agent main
openclaw memory index --force --agent main
```
If `provider` is `auto`, `local` is selected only when `local.modelPath` points to an existing local file. `hf:` and HTTP(S) model references can still be used explicitly with `provider: "local"`, but they do not make `auto` select local before the model is available on disk.
## Inline embedding timeout

`sync.embeddingBatchTimeoutSeconds` overrides the timeout for inline embedding batches during memory indexing. When unset, the provider default applies: 600 seconds for local/self-hosted providers such as `local`, `ollama`, and `lmstudio`, and 120 seconds for hosted providers. Increase this when local CPU-bound embedding batches are healthy but slow.
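A sketch raising the inline timeout for a slow local embedder (the 1200-second value is illustrative):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        // Overrides the 600 s default for local/self-hosted providers
        sync: { embeddingBatchTimeoutSeconds: 1200 },
      },
    },
  },
}
```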
## Hybrid search config

All under `memorySearch.query.hybrid`:

| Key | Type | Default | Description |
|---|---|---|---|
| `enabled` | `boolean` | `true` | Enable hybrid BM25 + vector search |
| `vectorWeight` | `number` | `0.7` | Weight for vector scores (0-1) |
| `textWeight` | `number` | `0.3` | Weight for BM25 scores (0-1) |
| `candidateMultiplier` | `number` | `4` | Candidate pool size multiplier |
Evergreen files (`MEMORY.md`, non-dated files in `memory/`) are never decayed.
### Full example

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        query: {
          hybrid: {
            vectorWeight: 0.7,
            textWeight: 0.3,
            mmr: { enabled: true, lambda: 0.7 },
            temporalDecay: { enabled: true, halfLifeDays: 30 },
          },
        },
      },
    },
  },
}
```
## Additional memory paths

| Key | Type | Description |
|---|---|---|
| `extraPaths` | `string[]` | Additional directories or files to index |

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        extraPaths: ["../team-docs", "/srv/shared-notes"],
      },
    },
  },
}
```
Paths can be absolute or workspace-relative. Directories are scanned recursively for `.md` files. Symlink handling depends on the active backend: the builtin engine ignores symlinks, while QMD follows the underlying QMD scanner behavior.

For agent-scoped cross-agent transcript search, use `agents.list[].memorySearch.qmd.extraCollections` instead of `memory.qmd.paths`. Those extra collections follow the same `{ path, name, pattern? }` shape, but they are merged per agent and can preserve explicit shared names when the path points outside the current workspace. If the same resolved path appears in both `memory.qmd.paths` and `memorySearch.qmd.extraCollections`, QMD keeps the first entry and skips the duplicate.
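A sketch of an agent-scoped extra collection (the agent `id` field and both paths are hypothetical; the collection follows the `{ path, name, pattern? }` shape described above):

```json5
{
  agents: {
    list: [
      {
        id: "main", // hypothetical agent id
        memorySearch: {
          qmd: {
            extraCollections: [
              // Explicit name preserved because the path is outside the workspace
              { path: "/srv/other-agent/memory", name: "shared-notes" },
            ],
          },
        },
      },
    ],
  },
}
```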
## Multimodal memory (Gemini)

Index images and audio alongside Markdown using Gemini Embedding 2:

| Key | Type | Default | Description |
|---|---|---|---|
| `multimodal.enabled` | `boolean` | `false` | Enable multimodal indexing |
| `multimodal.modalities` | `string[]` | -- | `["image"]`, `["audio"]`, or `["all"]` |
| `multimodal.maxFileBytes` | `number` | `10000000` | Max file size for indexing |

Supported formats: `.jpg`, `.jpeg`, `.png`, `.webp`, `.gif`, `.heic`, `.heif` (images); `.mp3`, `.wav`, `.ogg`, `.opus`, `.m4a`, `.aac`, `.flac` (audio).
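A sketch enabling image and audio indexing with the Gemini provider, per the table above:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "gemini",
        multimodal: {
          enabled: true,
          // ["image"], ["audio"], or ["all"]
          modalities: ["all"],
          // Skip files larger than ~10 MB
          maxFileBytes: 10000000,
        },
      },
    },
  },
}
```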
## Embedding cache

| Key | Type | Default | Description |
|---|---|---|---|
| `cache.enabled` | `boolean` | `false` | Cache chunk embeddings in SQLite |
| `cache.maxEntries` | `number` | `50000` | Max cached embeddings |

Prevents re-embedding unchanged text during reindex or transcript updates.
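A sketch enabling the cache at its default capacity:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        // Avoid re-embedding unchanged chunks across reindexes
        cache: { enabled: true, maxEntries: 50000 },
      },
    },
  },
}
```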
## Batch indexing

| Key | Type | Default | Description |
|---|---|---|---|
| `remote.batch.enabled` | `boolean` | `false` | Enable batch embedding API |
| `remote.batch.concurrency` | `number` | `2` | Parallel batch jobs |
| `remote.batch.wait` | `boolean` | `true` | Wait for batch completion |
| `remote.batch.pollIntervalMs` | `number` | -- | Poll interval |
| `remote.batch.timeoutMinutes` | `number` | -- | Batch timeout |

Available for `openai`, `gemini`, and `voyage`. OpenAI batch is typically fastest and cheapest for large backfills.

This is separate from `sync.embeddingBatchTimeoutSeconds`, which controls inline embedding calls used by local/self-hosted providers and hosted providers when provider batch APIs are not active.
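A sketch enabling provider batch embedding for OpenAI (defaults from the table shown explicitly):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        provider: "openai",
        remote: {
          // Use the provider's batch embedding API for large backfills
          batch: { enabled: true, concurrency: 2, wait: true },
        },
      },
    },
  },
}
```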
## Session memory search (experimental)

Index session transcripts and surface them via `memory_search`:

| Key | Type | Default | Description |
|---|---|---|---|
| `experimental.sessionMemory` | `boolean` | `false` | Enable session indexing |
| `sources` | `string[]` | `["memory"]` | Add `"sessions"` to include transcripts |
| `sync.sessions.deltaBytes` | `number` | `100000` | Byte threshold for reindex |
| `sync.sessions.deltaMessages` | `number` | `50` | Message threshold for reindex |
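A sketch combining the experimental flag with the `sources` entry from the table above:

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        experimental: { sessionMemory: true },
        // Default is ["memory"]; add "sessions" to include transcripts
        sources: ["memory", "sessions"],
      },
    },
  },
}
```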
## SQLite vector acceleration (sqlite-vec)

| Key | Type | Default | Description |
|---|---|---|---|
| `store.vector.enabled` | `boolean` | `true` | Use sqlite-vec for vector queries |
| `store.vector.extensionPath` | `string` | bundled | Override sqlite-vec path |

When sqlite-vec is unavailable, OpenClaw falls back to in-process cosine similarity automatically.
## Index storage

| Key | Type | Default | Description |
|---|---|---|---|
| `store.path` | `string` | `~/.openclaw/memory/{agentId}.sqlite` | Index location (supports `{agentId}` token) |
| `store.fts.tokenizer` | `string` | `unicode61` | FTS5 tokenizer (`unicode61` or `trigram`) |
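A sketch overriding the index location and switching to the `trigram` tokenizer (the path is illustrative; `{agentId}` expands per agent):

```json5
{
  agents: {
    defaults: {
      memorySearch: {
        store: {
          path: "~/indexes/{agentId}.sqlite",
          // trigram improves substring matching; unicode61 is the default
          fts: { tokenizer: "trigram" },
        },
      },
    },
  },
}
```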
## QMD backend config

Set `memory.backend = "qmd"` to enable. All QMD settings live under `memory.qmd`:

| Key | Type | Default | Description |
|---|---|---|---|
| `command` | `string` | `qmd` | QMD executable path |
| `searchMode` | `string` | `search` | Search command: `search`, `vsearch`, `query` |
| `includeDefaultMemory` | `boolean` | `true` | Auto-index `MEMORY.md` + `memory/**/*.md` |
| `paths[]` | `array` | -- | Extra paths: `{ name, path, pattern? }` |
| `sessions.enabled` | `boolean` | `false` | Index session transcripts |
| `sessions.retentionDays` | `number` | -- | Transcript retention |
| `sessions.exportDir` | `string` | -- | Export directory |

OpenClaw prefers the current QMD collection and MCP query shapes, but keeps older QMD releases working by falling back to legacy `--mask` collection flags and older MCP tool names when needed.
Scope rules under `memory.qmd.scope` control which sessions QMD memory applies to:

```json5
{
memory: {
qmd: {
scope: {
default: "deny",
rules: [{ action: "allow", match: { chatType: "direct" } }],
},
},
},
}
```
The shipped default allows direct and channel sessions while still denying groups; the example above is stricter, allowing DMs only. `match.keyPrefix` matches the normalized session key; `match.rawKeyPrefix` matches the raw key including `agent:<id>:`.
`memory.citations` applies to all backends:
| Value | Behavior |
| ---------------- | --------------------------------------------------- |
| `auto` (default) | Include `Source: <path#line>` footer in snippets |
| `on` | Always include footer |
| `off` | Omit footer (path still passed to agent internally) |
### Full QMD example

```json5
{
  memory: {
    backend: "qmd",
    citations: "auto",
    qmd: {
      includeDefaultMemory: true,
      update: { interval: "5m", debounceMs: 15000 },
      limits: { maxResults: 6, timeoutMs: 4000 },
      scope: {
        default: "deny",
        rules: [{ action: "allow", match: { chatType: "direct" } }],
      },
      paths: [{ name: "docs", path: "~/notes", pattern: "**/*.md" }],
    },
  },
}
```
## Dreaming

Dreaming is configured under `plugins.entries.memory-core.config.dreaming`, not under `agents.defaults.memorySearch`.

Dreaming runs as one scheduled sweep and uses internal light/deep/REM phases as an implementation detail.

For conceptual behavior and slash commands, see Dreaming.
### User settings

| Key | Type | Default | Description |
|---|---|---|---|
| `enabled` | `boolean` | `false` | Enable or disable dreaming entirely |
| `frequency` | `string` | `0 3 * * *` | Optional cron cadence for the full dreaming sweep |
### Example

```json5
{
  plugins: {
    entries: {
      "memory-core": {
        config: {
          dreaming: {
            enabled: true,
            frequency: "0 3 * * *",
          },
        },
      },
    },
  },
}
```