#!/usr/bin/env bash
set -euo pipefail

ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
source "$ROOT_DIR/scripts/lib/docker-build.sh"
COMPOSE_FILE="$ROOT_DIR/docker-compose.yml"
EXTRA_COMPOSE_FILE="$ROOT_DIR/docker-compose.extra.yml"
IMAGE_NAME="${OPENCLAW_IMAGE:-openclaw:local}"
EXTRA_MOUNTS="${OPENCLAW_EXTRA_MOUNTS:-}"
HOME_VOLUME_NAME="${OPENCLAW_HOME_VOLUME:-}"
RAW_SANDBOX_SETTING="${OPENCLAW_SANDBOX:-}"
SANDBOX_ENABLED=""
DOCKER_SOCKET_PATH="${OPENCLAW_DOCKER_SOCKET:-}"
TIMEZONE="${OPENCLAW_TZ:-}"
RAW_SKIP_ONBOARDING="${OPENCLAW_SKIP_ONBOARDING:-}"
SKIP_ONBOARDING=""

fail() {
  echo "ERROR: $*" >&2
  exit 1
}

require_cmd() {
  if ! command -v "$1" >/dev/null 2>&1; then
    echo "Missing dependency: $1" >&2
    exit 1
  fi
}

run_docker_build() {
  # Dockerfile uses BuildKit-only syntax (RUN --mount=type=cache). Force
  # BuildKit so hosts defaulting to the legacy builder do not fail.
  docker_build_exec "$@"
}

is_truthy_value() {
  local raw="${1:-}"
  raw="$(printf '%s' "$raw" | tr '[:upper:]' '[:lower:]')"
  case "$raw" in
    1 | true | yes | on) return 0 ;;
    *) return 1 ;;
  esac
}
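
# Usage sketch (illustrative only, not executed during setup):
#   is_truthy_value "YES" && echo enabled   # matches 1/true/yes/on in any case
#   is_truthy_value "0"                     # returns 1, so `&&` chains skip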

read_config_gateway_token() {
  local config_path="$OPENCLAW_CONFIG_DIR/openclaw.json"
  if [[ ! -f "$config_path" ]]; then
    return 0
  fi
  if command -v python3 >/dev/null 2>&1; then
    python3 - "$config_path" <<'PY'
import json
import sys

path = sys.argv[1]
try:
    with open(path, "r", encoding="utf-8") as f:
        cfg = json.load(f)
except Exception:
    raise SystemExit(0)

gateway = cfg.get("gateway")
if not isinstance(gateway, dict):
    raise SystemExit(0)
auth = gateway.get("auth")
if not isinstance(auth, dict):
    raise SystemExit(0)
token = auth.get("token")
if isinstance(token, str):
    token = token.strip()
    if token:
        print(token)
PY
    return 0
  fi
  if command -v node >/dev/null 2>&1; then
    node - "$config_path" <<'NODE'
const fs = require("node:fs");
const configPath = process.argv[2];
try {
  const cfg = JSON.parse(fs.readFileSync(configPath, "utf8"));
  const token = cfg?.gateway?.auth?.token;
  if (typeof token === "string" && token.trim().length > 0) {
    process.stdout.write(token.trim());
  }
} catch {
  // Keep docker-setup resilient when config parsing fails.
}
NODE
  fi
}

read_env_gateway_token() {
  local env_path="$1"
  local line=""
  local token=""
  if [[ ! -f "$env_path" ]]; then
    return 0
  fi
  while IFS= read -r line || [[ -n "$line" ]]; do
    line="${line%$'\r'}"
    if [[ "$line" == OPENCLAW_GATEWAY_TOKEN=* ]]; then
      token="${line#OPENCLAW_GATEWAY_TOKEN=}"
    fi
  done <"$env_path"
  if [[ -n "$token" ]]; then
    printf '%s' "$token"
  fi
}
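
# Example (illustrative): given a .env file containing
#   OPENCLAW_GATEWAY_TOKEN=abc123
# read_env_gateway_token "$ROOT_DIR/.env" emits "abc123". The last assignment
# wins if the key appears more than once, and CRLF line endings are tolerated.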

sync_gateway_config() {
  local allowed_origin_json=""
  local current_allowed_origins=""
  local batch_json=""

  if [[ "${OPENCLAW_GATEWAY_BIND}" != "loopback" ]]; then
    allowed_origin_json="$(printf '["http://localhost:%s","http://127.0.0.1:%s"]' "$OPENCLAW_GATEWAY_PORT" "$OPENCLAW_GATEWAY_PORT")"
    current_allowed_origins="$(
      run_prestart_cli config get gateway.controlUi.allowedOrigins 2>/dev/null || true
    )"
    current_allowed_origins="${current_allowed_origins//$'\r'/}"
  fi

  batch_json="$(printf '[{"path":"gateway.mode","value":"local"},{"path":"gateway.bind","value":"%s"}' "$OPENCLAW_GATEWAY_BIND")"
  if [[ -n "$allowed_origin_json" ]]; then
    if [[ -n "$current_allowed_origins" && "$current_allowed_origins" != "null" && "$current_allowed_origins" != "[]" ]]; then
      echo "Control UI allowlist already configured; leaving gateway.controlUi.allowedOrigins unchanged."
    else
      batch_json+=",{\"path\":\"gateway.controlUi.allowedOrigins\",\"value\":$allowed_origin_json}"
    fi
  fi
  batch_json+="]"

  run_prestart_cli config set --batch-json "$batch_json" >/dev/null
  echo "Pinned gateway.mode=local and gateway.bind=$OPENCLAW_GATEWAY_BIND for Docker setup."
  if [[ -n "$allowed_origin_json" ]]; then
    if [[ -z "$current_allowed_origins" || "$current_allowed_origins" == "null" || "$current_allowed_origins" == "[]" ]]; then
      echo "Set gateway.controlUi.allowedOrigins to $allowed_origin_json for non-loopback bind."
    fi
  fi
}
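
# Example (illustrative): with OPENCLAW_GATEWAY_BIND=lan, port 18789, and no
# existing allowlist, the payload passed to `config set --batch-json` is:
#   [{"path":"gateway.mode","value":"local"},
#    {"path":"gateway.bind","value":"lan"},
#    {"path":"gateway.controlUi.allowedOrigins",
#     "value":["http://localhost:18789","http://127.0.0.1:18789"]}]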

run_prestart_gateway() {
  docker compose "${COMPOSE_ARGS[@]}" run --rm --no-deps "$@"
}

run_prestart_cli() {
  # During setup, avoid the shared-network openclaw-cli service because it
  # requires the gateway container's network namespace to already exist. That
  # creates a circular dependency for config writes that are needed before the
  # gateway can start cleanly.
  run_prestart_gateway --entrypoint node openclaw-gateway \
    dist/index.js "$@"
}

run_runtime_cli() {
  local compose_scope="${1:-current}"
  local deps_mode="${2:-with-deps}"
  shift 2

  local -a compose_args
  local -a run_args=(run --rm)

  case "$compose_scope" in
    current) compose_args=("${COMPOSE_ARGS[@]}") ;;
    base) compose_args=("${BASE_COMPOSE_ARGS[@]}") ;;
    *) fail "Unknown runtime CLI compose scope: $compose_scope" ;;
  esac

  case "$deps_mode" in
    with-deps) ;;
    no-deps) run_args+=(--no-deps) ;;
    *) fail "Unknown runtime CLI deps mode: $deps_mode" ;;
  esac

  docker compose "${compose_args[@]}" "${run_args[@]}" openclaw-cli "$@"
}

contains_disallowed_chars() {
  local value="$1"
  [[ "$value" == *$'\n'* || "$value" == *$'\r'* || "$value" == *$'\t'* ]]
}

is_valid_timezone() {
  local value="$1"
  [[ -e "/usr/share/zoneinfo/$value" && ! -d "/usr/share/zoneinfo/$value" ]]
}

validate_mount_path_value() {
  local label="$1"
  local value="$2"
  if [[ -z "$value" ]]; then
    fail "$label cannot be empty."
  fi
  if contains_disallowed_chars "$value"; then
    fail "$label contains unsupported control characters."
  fi
  if [[ "$value" =~ [[:space:]] ]]; then
    fail "$label cannot contain whitespace."
  fi
}

validate_named_volume() {
  local value="$1"
  if [[ ! "$value" =~ ^[A-Za-z0-9][A-Za-z0-9_.-]*$ ]]; then
    fail "OPENCLAW_HOME_VOLUME must match [A-Za-z0-9][A-Za-z0-9_.-]* when using a named volume."
  fi
}

validate_mount_spec() {
  local mount="$1"
  if contains_disallowed_chars "$mount"; then
    fail "OPENCLAW_EXTRA_MOUNTS entries cannot contain control characters."
  fi
  # Keep mount specs strict to avoid YAML structure injection.
  # Expected format: source:target[:options]
  if [[ ! "$mount" =~ ^[^[:space:],:]+:[^[:space:],:]+(:[^[:space:],:]+)?$ ]]; then
    fail "Invalid mount format '$mount'. Expected source:target[:options] without spaces."
  fi
}
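
# Examples (illustrative): validate_mount_spec accepts
#   /srv/models:/models
#   /srv/models:/models:ro
# and rejects entries containing whitespace, commas, or extra colons, e.g.
#   "/srv/my models:/models"   (whitespace in source)
#   "a:b:c:d"                  (more than source:target:options)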

require_cmd docker
if ! docker compose version >/dev/null 2>&1; then
  echo "Docker Compose not available (try: docker compose version)" >&2
  exit 1
fi

if [[ -z "$DOCKER_SOCKET_PATH" && "${DOCKER_HOST:-}" == unix://* ]]; then
  DOCKER_SOCKET_PATH="${DOCKER_HOST#unix://}"
fi
if [[ -z "$DOCKER_SOCKET_PATH" ]]; then
  DOCKER_SOCKET_PATH="/var/run/docker.sock"
fi
if is_truthy_value "$RAW_SANDBOX_SETTING"; then
  SANDBOX_ENABLED="1"
fi
if is_truthy_value "$RAW_SKIP_ONBOARDING"; then
  SKIP_ONBOARDING="1"
fi

OPENCLAW_CONFIG_DIR="${OPENCLAW_CONFIG_DIR:-$HOME/.openclaw}"
OPENCLAW_WORKSPACE_DIR="${OPENCLAW_WORKSPACE_DIR:-$HOME/.openclaw/workspace}"
OPENCLAW_AUTH_PROFILE_SECRET_DIR="${OPENCLAW_AUTH_PROFILE_SECRET_DIR:-$HOME/.openclaw-auth-profile-secrets}"

validate_mount_path_value "OPENCLAW_CONFIG_DIR" "$OPENCLAW_CONFIG_DIR"
validate_mount_path_value "OPENCLAW_WORKSPACE_DIR" "$OPENCLAW_WORKSPACE_DIR"
validate_mount_path_value "OPENCLAW_AUTH_PROFILE_SECRET_DIR" "$OPENCLAW_AUTH_PROFILE_SECRET_DIR"
if [[ -n "$HOME_VOLUME_NAME" ]]; then
  if [[ "$HOME_VOLUME_NAME" == *"/"* ]]; then
    validate_mount_path_value "OPENCLAW_HOME_VOLUME" "$HOME_VOLUME_NAME"
  else
    validate_named_volume "$HOME_VOLUME_NAME"
  fi
fi
if contains_disallowed_chars "$EXTRA_MOUNTS"; then
  fail "OPENCLAW_EXTRA_MOUNTS cannot contain control characters."
fi
if [[ -n "$SANDBOX_ENABLED" ]]; then
  validate_mount_path_value "OPENCLAW_DOCKER_SOCKET" "$DOCKER_SOCKET_PATH"
fi
if [[ -n "$TIMEZONE" ]]; then
  if contains_disallowed_chars "$TIMEZONE"; then
    fail "OPENCLAW_TZ contains unsupported control characters."
  fi
  if [[ ! "$TIMEZONE" =~ ^[A-Za-z0-9/_+\-]+$ ]]; then
    fail "OPENCLAW_TZ must be a valid IANA timezone string (e.g. Asia/Shanghai)."
  fi
  if ! is_valid_timezone "$TIMEZONE"; then
    fail "OPENCLAW_TZ must match a timezone in /usr/share/zoneinfo (e.g. Asia/Shanghai)."
  fi
fi

mkdir -p "$OPENCLAW_CONFIG_DIR"
mkdir -p "$OPENCLAW_WORKSPACE_DIR"
mkdir -p "$OPENCLAW_AUTH_PROFILE_SECRET_DIR"
# Seed the directory tree eagerly so bind mounts work even on Docker Desktop/Windows,
# where the container (even as root) cannot create new host subdirectories.
mkdir -p "$OPENCLAW_CONFIG_DIR/identity"
mkdir -p "$OPENCLAW_CONFIG_DIR/agents/main/agent"

export OPENCLAW_CONFIG_DIR
export OPENCLAW_WORKSPACE_DIR
export OPENCLAW_AUTH_PROFILE_SECRET_DIR
export OPENCLAW_GATEWAY_PORT="${OPENCLAW_GATEWAY_PORT:-18789}"
export OPENCLAW_BRIDGE_PORT="${OPENCLAW_BRIDGE_PORT:-18790}"
export OPENCLAW_GATEWAY_BIND="${OPENCLAW_GATEWAY_BIND:-lan}"
export OPENCLAW_DISABLE_BONJOUR="${OPENCLAW_DISABLE_BONJOUR:-}"
export OPENCLAW_IMAGE="$IMAGE_NAME"
export OPENCLAW_DOCKER_APT_PACKAGES="${OPENCLAW_DOCKER_APT_PACKAGES:-}"
export OPENCLAW_EXTENSIONS="${OPENCLAW_EXTENSIONS:-}"
export OPENCLAW_INSTALL_BROWSER="${OPENCLAW_INSTALL_BROWSER:-}"
export OPENCLAW_EXTRA_MOUNTS="$EXTRA_MOUNTS"
export OPENCLAW_HOME_VOLUME="$HOME_VOLUME_NAME"
export OPENCLAW_ALLOW_INSECURE_PRIVATE_WS="${OPENCLAW_ALLOW_INSECURE_PRIVATE_WS:-}"
export OPENCLAW_SANDBOX="$SANDBOX_ENABLED"
export OPENCLAW_DOCKER_SOCKET="$DOCKER_SOCKET_PATH"
export OPENCLAW_DOCKER_SETUP=1
export OPENCLAW_TZ="$TIMEZONE"
export OTEL_EXPORTER_OTLP_ENDPOINT="${OTEL_EXPORTER_OTLP_ENDPOINT:-}"
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="${OTEL_EXPORTER_OTLP_TRACES_ENDPOINT:-}"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="${OTEL_EXPORTER_OTLP_METRICS_ENDPOINT:-}"
export OTEL_EXPORTER_OTLP_LOGS_ENDPOINT="${OTEL_EXPORTER_OTLP_LOGS_ENDPOINT:-}"
export OTEL_EXPORTER_OTLP_PROTOCOL="${OTEL_EXPORTER_OTLP_PROTOCOL:-}"
export OTEL_SERVICE_NAME="${OTEL_SERVICE_NAME:-}"
export OTEL_SEMCONV_STABILITY_OPT_IN="${OTEL_SEMCONV_STABILITY_OPT_IN:-}"
export OPENCLAW_OTEL_PRELOADED="${OPENCLAW_OTEL_PRELOADED:-}"
export OPENCLAW_SKIP_ONBOARDING="$SKIP_ONBOARDING"

# Detect the Docker socket GID for sandbox group_add. `stat -c` is the GNU
# form and `stat -f` the BSD/macOS form; trying both keeps this portable.
DOCKER_GID=""
if [[ -n "$SANDBOX_ENABLED" && -S "$DOCKER_SOCKET_PATH" ]]; then
  DOCKER_GID="$(stat -c '%g' "$DOCKER_SOCKET_PATH" 2>/dev/null || stat -f '%g' "$DOCKER_SOCKET_PATH" 2>/dev/null || echo "")"
fi
export DOCKER_GID

if [[ -z "${OPENCLAW_GATEWAY_TOKEN:-}" ]]; then
  EXISTING_CONFIG_TOKEN="$(read_config_gateway_token || true)"
  if [[ -n "$EXISTING_CONFIG_TOKEN" ]]; then
    OPENCLAW_GATEWAY_TOKEN="$EXISTING_CONFIG_TOKEN"
    echo "Reusing gateway token from $OPENCLAW_CONFIG_DIR/openclaw.json"
  else
    DOTENV_GATEWAY_TOKEN="$(read_env_gateway_token "$ROOT_DIR/.env" || true)"
    if [[ -n "$DOTENV_GATEWAY_TOKEN" ]]; then
      OPENCLAW_GATEWAY_TOKEN="$DOTENV_GATEWAY_TOKEN"
      echo "Reusing gateway token from $ROOT_DIR/.env"
    elif command -v openssl >/dev/null 2>&1; then
      OPENCLAW_GATEWAY_TOKEN="$(openssl rand -hex 32)"
    else
      OPENCLAW_GATEWAY_TOKEN="$(python3 - <<'PY'
import secrets
print(secrets.token_hex(32))
PY
)"
    fi
  fi
fi
export OPENCLAW_GATEWAY_TOKEN

COMPOSE_FILES=("$COMPOSE_FILE")
COMPOSE_ARGS=()

write_extra_compose() {
  local home_volume="$1"
  shift
  local mount
  local gateway_home_mount
  local gateway_config_mount
  local gateway_workspace_mount
  local gateway_auth_profile_secret_mount

  cat >"$EXTRA_COMPOSE_FILE" <<'YAML'
services:
  openclaw-gateway:
    volumes:
YAML

  if [[ -n "$home_volume" ]]; then
    gateway_home_mount="${home_volume}:/home/node"
    gateway_config_mount="${OPENCLAW_CONFIG_DIR}:/home/node/.openclaw"
    gateway_workspace_mount="${OPENCLAW_WORKSPACE_DIR}:/home/node/.openclaw/workspace"
    gateway_auth_profile_secret_mount="${OPENCLAW_AUTH_PROFILE_SECRET_DIR}:/home/node/.config/openclaw"
    validate_mount_spec "$gateway_home_mount"
    validate_mount_spec "$gateway_config_mount"
    validate_mount_spec "$gateway_workspace_mount"
    validate_mount_spec "$gateway_auth_profile_secret_mount"
    printf '      - %s\n' "$gateway_home_mount" >>"$EXTRA_COMPOSE_FILE"
    printf '      - %s\n' "$gateway_config_mount" >>"$EXTRA_COMPOSE_FILE"
    printf '      - %s\n' "$gateway_workspace_mount" >>"$EXTRA_COMPOSE_FILE"
    printf '      - %s\n' "$gateway_auth_profile_secret_mount" >>"$EXTRA_COMPOSE_FILE"
  fi

  for mount in "$@"; do
    validate_mount_spec "$mount"
    printf '      - %s\n' "$mount" >>"$EXTRA_COMPOSE_FILE"
  done

  cat >>"$EXTRA_COMPOSE_FILE" <<'YAML'
  openclaw-cli:
    volumes:
YAML

  if [[ -n "$home_volume" ]]; then
    printf '      - %s\n' "$gateway_home_mount" >>"$EXTRA_COMPOSE_FILE"
    printf '      - %s\n' "$gateway_config_mount" >>"$EXTRA_COMPOSE_FILE"
    printf '      - %s\n' "$gateway_workspace_mount" >>"$EXTRA_COMPOSE_FILE"
    printf '      - %s\n' "$gateway_auth_profile_secret_mount" >>"$EXTRA_COMPOSE_FILE"
  fi

  for mount in "$@"; do
    validate_mount_spec "$mount"
    printf '      - %s\n' "$mount" >>"$EXTRA_COMPOSE_FILE"
  done

  if [[ -n "$home_volume" && "$home_volume" != *"/"* ]]; then
    validate_named_volume "$home_volume"
    cat >>"$EXTRA_COMPOSE_FILE" <<YAML
volumes:
  ${home_volume}:
YAML
  fi
}
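
# Example (illustrative): write_extra_compose "openclaw-home" "/srv/models:/models:ro"
# emits an overlay file shaped roughly like:
#   services:
#     openclaw-gateway:
#       volumes:
#         - openclaw-home:/home/node
#         - /srv/models:/models:ro
#     openclaw-cli:
#       volumes:
#         - openclaw-home:/home/node
#         - /srv/models:/models:ro
#   volumes:
#     openclaw-home:
# (the config/workspace/secret mounts are included too when a home volume is set).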

# When sandbox is requested, ensure the Docker CLI build arg is set for local builds.
# The Docker socket mount is deferred until sandbox prerequisites are verified.
if [[ -n "$SANDBOX_ENABLED" ]]; then
  if [[ -z "${OPENCLAW_INSTALL_DOCKER_CLI:-}" ]]; then
    export OPENCLAW_INSTALL_DOCKER_CLI=1
  fi
fi

VALID_MOUNTS=()
if [[ -n "$EXTRA_MOUNTS" ]]; then
  IFS=',' read -r -a mounts <<<"$EXTRA_MOUNTS"
  for mount in "${mounts[@]}"; do
    mount="${mount#"${mount%%[![:space:]]*}"}"
    mount="${mount%"${mount##*[![:space:]]}"}"
    if [[ -n "$mount" ]]; then
      VALID_MOUNTS+=("$mount")
    fi
  done
fi

if [[ -n "$HOME_VOLUME_NAME" || ${#VALID_MOUNTS[@]} -gt 0 ]]; then
  # Bash 3.2 + nounset treats "${array[@]}" on an empty array as unbound.
  if [[ ${#VALID_MOUNTS[@]} -gt 0 ]]; then
    write_extra_compose "$HOME_VOLUME_NAME" "${VALID_MOUNTS[@]}"
  else
    write_extra_compose "$HOME_VOLUME_NAME"
  fi
  COMPOSE_FILES+=("$EXTRA_COMPOSE_FILE")
fi
for compose_file in "${COMPOSE_FILES[@]}"; do
  COMPOSE_ARGS+=("-f" "$compose_file")
done
# Keep a base compose arg set without the sandbox overlay so rollback paths can
# force a known-safe gateway service definition (no docker.sock mount).
BASE_COMPOSE_ARGS=("${COMPOSE_ARGS[@]}")
COMPOSE_HINT="docker compose"
for compose_file in "${COMPOSE_FILES[@]}"; do
  COMPOSE_HINT+=" -f ${compose_file}"
done

ENV_FILE="$ROOT_DIR/.env"
upsert_env() {
  local file="$1"
  shift
  local -a keys=("$@")
  local tmp
  tmp="$(mktemp)"
  # Use a delimited string instead of an associative array so the script
  # works with Bash 3.2 (macOS default), which lacks `declare -A`.
  local seen=" "

  if [[ -f "$file" ]]; then
    while IFS= read -r line || [[ -n "$line" ]]; do
      local key="${line%%=*}"
      local replaced=false
      for k in "${keys[@]}"; do
        if [[ "$key" == "$k" ]]; then
          printf '%s=%s\n' "$k" "${!k-}" >>"$tmp"
          seen="$seen$k "
          replaced=true
          break
        fi
      done
      if [[ "$replaced" == false ]]; then
        printf '%s\n' "$line" >>"$tmp"
      fi
    done <"$file"
  fi

  for k in "${keys[@]}"; do
    if [[ "$seen" != *" $k "* ]]; then
      printf '%s=%s\n' "$k" "${!k-}" >>"$tmp"
    fi
  done

  mv "$tmp" "$file"
}
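
# Example (illustrative): if .env already contains
#   OPENCLAW_GATEWAY_PORT=18789
#   SOME_OTHER_KEY=kept
# then `upsert_env "$ENV_FILE" OPENCLAW_GATEWAY_PORT OPENCLAW_TZ` rewrites the
# OPENCLAW_GATEWAY_PORT line in place from the current environment (via the
# ${!k-} indirect expansion), leaves SOME_OTHER_KEY untouched, and appends an
# OPENCLAW_TZ= line at the end because that key was not yet present.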
|
|
|
|
upsert_env "$ENV_FILE" \
|
|
OPENCLAW_CONFIG_DIR \
|
|
OPENCLAW_WORKSPACE_DIR \
|
|
OPENCLAW_AUTH_PROFILE_SECRET_DIR \
|
|
OPENCLAW_GATEWAY_PORT \
|
|
OPENCLAW_BRIDGE_PORT \
|
|
OPENCLAW_GATEWAY_BIND \
|
|
OPENCLAW_DISABLE_BONJOUR \
|
|
OPENCLAW_GATEWAY_TOKEN \
|
|
OPENCLAW_IMAGE \
|
|
OPENCLAW_EXTRA_MOUNTS \
|
|
OPENCLAW_HOME_VOLUME \
|
|
OPENCLAW_DOCKER_APT_PACKAGES \
|
|
OPENCLAW_EXTENSIONS \
|
|
OPENCLAW_INSTALL_BROWSER \
|
|
OPENCLAW_SANDBOX \
|
|
OPENCLAW_DOCKER_SOCKET \
|
|
DOCKER_GID \
|
|
OPENCLAW_INSTALL_DOCKER_CLI \
|
|
OPENCLAW_ALLOW_INSECURE_PRIVATE_WS \
|
|
OPENCLAW_TZ \
|
|
OTEL_EXPORTER_OTLP_ENDPOINT \
|
|
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT \
|
|
OTEL_EXPORTER_OTLP_METRICS_ENDPOINT \
|
|
OTEL_EXPORTER_OTLP_LOGS_ENDPOINT \
|
|
OTEL_EXPORTER_OTLP_PROTOCOL \
|
|
OTEL_SERVICE_NAME \
|
|
OTEL_SEMCONV_STABILITY_OPT_IN \
|
|
OPENCLAW_OTEL_PRELOADED \
|
|
OPENCLAW_SKIP_ONBOARDING
|
|
|
|
if [[ "$IMAGE_NAME" == "openclaw:local" ]]; then
  echo "==> Building Docker image: $IMAGE_NAME"
  run_docker_build \
    --build-arg "OPENCLAW_DOCKER_APT_PACKAGES=${OPENCLAW_DOCKER_APT_PACKAGES}" \
    --build-arg "OPENCLAW_EXTENSIONS=${OPENCLAW_EXTENSIONS}" \
    --build-arg "OPENCLAW_INSTALL_BROWSER=${OPENCLAW_INSTALL_BROWSER}" \
    --build-arg "OPENCLAW_INSTALL_DOCKER_CLI=${OPENCLAW_INSTALL_DOCKER_CLI:-}" \
    -t "$IMAGE_NAME" \
    -f "$ROOT_DIR/Dockerfile" \
    "$ROOT_DIR"
else
  echo "==> Pulling Docker image: $IMAGE_NAME"
  if ! docker pull "$IMAGE_NAME"; then
    echo "ERROR: Failed to pull image $IMAGE_NAME. Please check the image name and your access permissions." >&2
    exit 1
  fi
fi

# Ensure bind-mounted data directories are writable by the container's `node`
# user (uid 1000). Host-created dirs inherit the host user's uid, which may
# differ, causing EACCES when the container tries to mkdir/write.
# Running a brief root container to chown is the portable Docker idiom:
# it works regardless of the host uid and doesn't require host-side root.
echo ""
echo "==> Fixing data-directory permissions"
# Use -xdev to restrict the chown to the config-dir mount only. Without it,
# the recursive traversal would cross into the workspace bind mount and
# rewrite ownership of all user project files on Linux hosts.
# After fixing the config dir, only the OpenClaw metadata subdirectory
# (.openclaw/) inside the workspace gets chowned, not the user's project files.
run_prestart_gateway --user root --entrypoint sh openclaw-gateway -c \
  'find /home/node/.openclaw -xdev -exec chown node:node {} +; \
   find /home/node/.config/openclaw -xdev -exec chown node:node {} +; \
   [ -d /home/node/.openclaw/workspace/.openclaw ] && chown -R node:node /home/node/.openclaw/workspace/.openclaw || true'
echo ""
if [[ -n "$SKIP_ONBOARDING" ]]; then
  echo "==> Skipping onboarding (OPENCLAW_SKIP_ONBOARDING is set)"
else
  echo "==> Onboarding (interactive)"
  echo "Docker setup pins Gateway mode to local."
  echo "Gateway runtime bind comes from OPENCLAW_GATEWAY_BIND (default: lan)."
  echo "Current runtime bind: $OPENCLAW_GATEWAY_BIND"
  if is_truthy_value "$OPENCLAW_DISABLE_BONJOUR"; then
    echo "Bonjour/mDNS advertising: force-disabled (OPENCLAW_DISABLE_BONJOUR=$OPENCLAW_DISABLE_BONJOUR)."
  elif [[ -z "$OPENCLAW_DISABLE_BONJOUR" ]]; then
    echo "Bonjour/mDNS advertising: auto (disabled inside the Gateway container unless explicitly enabled)."
  else
    echo "Bonjour/mDNS advertising: explicitly enabled (OPENCLAW_DISABLE_BONJOUR=$OPENCLAW_DISABLE_BONJOUR)."
  fi
  echo "Gateway token: $OPENCLAW_GATEWAY_TOKEN"
  echo "Tailscale exposure: Off (use host-level tailnet/Tailscale setup separately)."
  echo "Install Gateway daemon: No (managed by Docker Compose)."
  echo ""
  run_prestart_cli onboard --mode local --no-install-daemon
fi

echo ""
echo "==> Docker gateway defaults"
sync_gateway_config

echo ""
echo "==> Provider setup (optional)"
echo "WhatsApp (QR):"
echo "  ${COMPOSE_HINT} run --rm openclaw-cli channels login"
echo "Telegram (bot token):"
echo "  ${COMPOSE_HINT} run --rm openclaw-cli channels add --channel telegram --token <token>"
echo "Discord (bot token):"
echo "  ${COMPOSE_HINT} run --rm openclaw-cli channels add --channel discord --token <token>"
echo "Docs: https://docs.openclaw.ai/channels"

echo ""
echo "==> Starting gateway"
docker compose "${COMPOSE_ARGS[@]}" up -d openclaw-gateway

# --- Sandbox setup (opt-in via OPENCLAW_SANDBOX=1) ---
if [[ -n "$SANDBOX_ENABLED" ]]; then
  echo ""
  echo "==> Sandbox setup"

  sandbox_dockerfile="$ROOT_DIR/scripts/docker/sandbox/Dockerfile"
  if [[ -f "$sandbox_dockerfile" ]]; then
    echo "Building sandbox image: openclaw-sandbox:bookworm-slim"
    run_docker_build \
      -t "openclaw-sandbox:bookworm-slim" \
      -f "$sandbox_dockerfile" \
      "$ROOT_DIR"
  else
    echo "WARNING: sandbox Dockerfile not found at $sandbox_dockerfile" >&2
    echo "         Sandbox config will be applied but no sandbox image will be built." >&2
    echo "         Agent exec may fail if the configured sandbox image does not exist." >&2
  fi

  # Defense in depth: verify the Docker CLI in the running image before
  # enabling the sandbox. This avoids claiming the sandbox is enabled when
  # the image cannot launch sandbox containers.
  if ! docker compose "${COMPOSE_ARGS[@]}" run --rm --entrypoint docker openclaw-gateway --version >/dev/null 2>&1; then
    echo "WARNING: Docker CLI not found inside the container image." >&2
    echo "         Sandbox requires Docker CLI. Rebuild with --build-arg OPENCLAW_INSTALL_DOCKER_CLI=1" >&2
    echo "         or use a local build (OPENCLAW_IMAGE=openclaw:local). Skipping sandbox setup." >&2
    SANDBOX_ENABLED=""
  fi
fi

# Apply sandbox config only if prerequisites are met.
if [[ -n "$SANDBOX_ENABLED" ]]; then
  # Mount the Docker socket via a dedicated compose overlay. The overlay is
  # created only after the sandbox prerequisites pass, so the socket is never
  # exposed when the sandbox cannot actually run.
  if [[ -S "$DOCKER_SOCKET_PATH" ]]; then
    SANDBOX_COMPOSE_FILE="$ROOT_DIR/docker-compose.sandbox.yml"
    cat >"$SANDBOX_COMPOSE_FILE" <<YAML
services:
  openclaw-gateway:
    volumes:
      - ${DOCKER_SOCKET_PATH}:/var/run/docker.sock
YAML
    if [[ -n "${DOCKER_GID:-}" ]]; then
      cat >>"$SANDBOX_COMPOSE_FILE" <<YAML
    group_add:
      - "${DOCKER_GID}"
YAML
    fi
    COMPOSE_ARGS+=("-f" "$SANDBOX_COMPOSE_FILE")
    echo "==> Sandbox: added Docker socket mount"
  else
    echo "WARNING: OPENCLAW_SANDBOX enabled but Docker socket not found at $DOCKER_SOCKET_PATH." >&2
    echo "         Sandbox requires Docker socket access. Skipping sandbox setup." >&2
    SANDBOX_ENABLED=""
  fi
fi

if [[ -n "$SANDBOX_ENABLED" ]]; then
  # Enable the sandbox in the OpenClaw config.
  sandbox_config_ok=true
  if ! run_runtime_cli current no-deps \
    config set agents.defaults.sandbox.mode "non-main" >/dev/null; then
    echo "WARNING: Failed to set agents.defaults.sandbox.mode" >&2
    sandbox_config_ok=false
  fi
  if ! run_runtime_cli current no-deps \
    config set agents.defaults.sandbox.scope "agent" >/dev/null; then
    echo "WARNING: Failed to set agents.defaults.sandbox.scope" >&2
    sandbox_config_ok=false
  fi
  if ! run_runtime_cli current no-deps \
    config set agents.defaults.sandbox.workspaceAccess "none" >/dev/null; then
    echo "WARNING: Failed to set agents.defaults.sandbox.workspaceAccess" >&2
    sandbox_config_ok=false
  fi

  if [[ "$sandbox_config_ok" == true ]]; then
    echo "Sandbox enabled: mode=non-main, scope=agent, workspaceAccess=none"
    echo "Docs: https://docs.openclaw.ai/gateway/sandboxing"
    # Restart the gateway with the sandbox compose overlay to pick up the
    # socket mount + config.
    docker compose "${COMPOSE_ARGS[@]}" up -d openclaw-gateway
  else
    echo "WARNING: Sandbox config was only partially applied. Check the errors above." >&2
    echo "         Skipping gateway restart to avoid exposing the Docker socket without a full sandbox policy." >&2
    if ! run_runtime_cli base no-deps \
      config set agents.defaults.sandbox.mode "off" >/dev/null; then
      echo "WARNING: Failed to roll back agents.defaults.sandbox.mode to off" >&2
    else
      echo "Sandbox mode rolled back to off due to partial sandbox config failure."
    fi
    if [[ -n "${SANDBOX_COMPOSE_FILE:-}" ]]; then
      rm -f "$SANDBOX_COMPOSE_FILE"
    fi
    # Ensure the gateway service definition is reset without the sandbox
    # overlay mount.
    docker compose "${BASE_COMPOSE_ARGS[@]}" up -d --force-recreate openclaw-gateway
  fi
else
  # Keep reruns deterministic: if the sandbox is not active for this run,
  # reset the persisted sandbox mode so stale config alone cannot leave
  # future execs requiring docker.sock.
  if ! run_runtime_cli current with-deps \
    config set agents.defaults.sandbox.mode "off" >/dev/null; then
    echo "WARNING: Failed to reset agents.defaults.sandbox.mode to off" >&2
  fi
  if [[ -f "$ROOT_DIR/docker-compose.sandbox.yml" ]]; then
    rm -f "$ROOT_DIR/docker-compose.sandbox.yml"
  fi
fi

echo ""
echo "Gateway running with host port mapping."
echo "Access from tailnet devices via the host's tailnet IP."
echo "Config: $OPENCLAW_CONFIG_DIR"
echo "Workspace: $OPENCLAW_WORKSPACE_DIR"
echo "Token: $OPENCLAW_GATEWAY_TOKEN"
echo ""
echo "Commands:"
echo "  ${COMPOSE_HINT} logs -f openclaw-gateway"
echo "  ${COMPOSE_HINT} exec openclaw-gateway node dist/index.js health --token \"$OPENCLAW_GATEWAY_TOKEN\""