- docs/tools/tts.md: alphabetize providers in three places that listed
them: the supported-providers table (Azure Speech ... Xiaomi MiMo),
the configuration Tabs (12 provider presets in A-Z), and the field
reference AccordionGroup. Top-level fields stay first; provider
tabs/accordions follow strict alphabetical order. Wording, schema,
and defaults unchanged.
- docs/docs.json: add tools/tts to the main Tools sidebar group
(slotted between trajectory and video-generation, matching the
alphabetical neighborhood with image-generation, music-generation,
video-generation). Previously tts only appeared under
Nodes > Media capabilities, which was a discoverability gap for
readers looking for TTS alongside the other generation tools.
Logging.md had grown to 487 lines with ~300 lines dedicated to
OpenTelemetry export — wire protocol, full metric/span catalog, env
vars, captureContent shape, sampling, the diagnostic event catalog,
and protocol notes — leaving the genuine logging overview buried
behind exporter reference material.
Move the OTEL surface to a dedicated page and slim logging.md to a
focused logs overview:
- Add docs/gateway/opentelemetry.md (OpenTelemetry export). Same
content reorganized: how it fits together, quick start, signals,
configuration reference + env vars table, privacy/captureContent,
sampling/flushing, full metric and span catalog, diagnostic event
catalog, no-exporter mode, diagnostics flags pointer, disable.
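The configuration reference + env vars table on the new page can be previewed with the standard OTLP exporter variables from the OpenTelemetry specification; the endpoint, header value, and sampler argument below are illustrative placeholders, not values from this repo:

```shell
# Standard OTel SDK env vars (per the OpenTelemetry spec); values are examples only.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"   # OTLP receiver
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"           # wire protocol
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer abc" # placeholder credential
export OTEL_SERVICE_NAME="gateway"                           # service.name resource attribute
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"        # sampling strategy
export OTEL_TRACES_SAMPLER_ARG="0.1"                         # sample 10% of root traces
```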
- docs/logging.md: drop the OTEL section in favor of a short
'Diagnostics and OpenTelemetry' summary that cross-links the new
page and the diagnostics-flags page. Drops 273 lines net. Also
drops the redundant body H1, retitles to 'Logging' (was 'Logging
overview' which mismatched sidebar usage), and refreshes the
Related list.
- docs/docs.json: insert gateway/opentelemetry into the
'Health and diagnostics' sidebar group, reorder pages so the user-
facing health/run pages come before exporter/internals pages, and
put logging next to opentelemetry where readers naturally
associate them.
- docs/gateway/diagnostics.md, docs/gateway/logging.md,
docs/gateway/configuration-reference.md: cross-link the new page
and sentence-case stale Title-Cased Related entries on
diagnostics.md.
Adds the Gradium bundled plugin with TTS and speech-provider registration, docs, label routing, and focused/live coverage.
Also carries the current main lint cleanup needed for the rebased CI lane.
Co-authored-by: laurent <laurent.mazare@gmail.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* feat(tencent): add bundled Tencent Cloud provider plugin (Tokenhub + Token Plan)
* fix(tencent): use provider-specific default model aliases
Both Tencent providers previously defaulted to the same alias "HY3 Preview",
which collides in buildModelAliasIndex (single alias map, keyed by normalized
alias). When both providers are onboarded, alias-based selection routed to
whichever provider was configured last.
Disambiguate the fallback aliases so resolution is deterministic regardless
of onboarding order:
- tencent-tokenhub -> "HY3 Preview (TokenHub)"
- tencent-token-plan -> "HY3 Preview (Token Plan)"
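The collision mechanics can be sketched in a few lines. `buildModelAliasIndex` and the provider/alias shapes below are simplified illustrations of the behavior described above, not the actual implementation:

```typescript
type Provider = { id: string; defaultAlias: string };

// Single alias map keyed by normalized alias: a duplicate alias
// silently overwrites the earlier entry (last writer wins).
function buildModelAliasIndex(providers: Provider[]): Map<string, string> {
  const index = new Map<string, string>();
  for (const p of providers) {
    index.set(p.defaultAlias.toLowerCase(), p.id);
  }
  return index;
}

// Before the fix: both providers share one alias, so resolution
// depends on onboarding order.
const colliding = buildModelAliasIndex([
  { id: "tencent-tokenhub", defaultAlias: "HY3 Preview" },
  { id: "tencent-token-plan", defaultAlias: "HY3 Preview" },
]);

// After the fix: disambiguated aliases resolve deterministically
// regardless of onboarding order.
const fixed = buildModelAliasIndex([
  { id: "tencent-tokenhub", defaultAlias: "HY3 Preview (TokenHub)" },
  { id: "tencent-token-plan", defaultAlias: "HY3 Preview (Token Plan)" },
]);
```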
* docs(tencent): rename model to "Hy3 preview" and drop "HY3" family name
Align with the external-facing product name:
- model display name: "HY3 Preview" -> "Hy3 preview"
- family/umbrella references in docs and auth hints: "HY3" -> "Hy3 preview"
- internal cost constant: HY3_COST -> HY_COST
Model call id (hy3-preview) is unchanged.
* docs(tencent): use "Hy" as the family name in generic references
Keep specific model references as "Hy3 preview" (model catalog names,
onboarding aliases, Available-models docs entries), but switch
family/umbrella references to the plain "Hy" family name so future Hy
versions fit without doc churn:
- auth hints: "Hy via Tencent TokenHub Gateway" / "Hy via Token Plan"
- docs intro + Use-case table: "Tencent Hy models" / "call Hy via ..."
- models.ts pricing comment: "Hy pricing"
* feat(tencent): add tiered pricing for Hy3 preview model
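A graduated tier calculation like the one this commit introduces can be sketched as follows. The tier boundaries and per-million rates below are invented for illustration; they are not the real Hy3 preview prices:

```typescript
// upTo is the cumulative token boundary of the tier; use Infinity for the last tier.
type Tier = { upTo: number; perMillion: number };

// Charge each token at the rate of the tier it falls into (graduated pricing),
// not the rate of the final tier reached.
function tieredCost(tokens: number, tiers: Tier[]): number {
  let cost = 0;
  let prev = 0;
  for (const t of tiers) {
    const inTier = Math.min(tokens, t.upTo) - prev;
    if (inTier <= 0) break;
    cost += (inTier / 1_000_000) * t.perMillion;
    prev = t.upTo;
  }
  return cost;
}

// Hypothetical tiers: $1/M up to 128K tokens, $2/M beyond.
const exampleTiers: Tier[] = [
  { upTo: 128_000, perMillion: 1 },
  { upTo: Infinity, perMillion: 2 },
];
```

For example, 200K tokens under these hypothetical tiers cost 128K at $1/M plus 72K at $2/M.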
---------
Co-authored-by: albertxyu <albertxyu@tencent.com>
* Feat: LM Studio Integration
* Format
* Support usage reporting when streaming is enabled
Fix token count
* Add custom window check
* Drop max tokens fallback
* tweak docs
Update generated
* Avoid error if stale header does not resolve
* Fix test
* Fix test
* Fix rebase issues
Trim code
* Fix tests
Drop keyless
Fixes
* Fix linter issues in tests
* Update generated artifacts
* Do not make header resolution fatal for discovery
* Do the same for the API key
* fix: honor lmstudio preload runtime auth
* fix: clear stale lmstudio header auth
* fix: lazy-load lmstudio runtime facade
* fix: preserve lmstudio shared synthetic auth
* fix: clear stale lmstudio header auth in discovery
* fix: prefer lmstudio header auth for discovery
* fix: honor lmstudio header auth in warmup paths
* fix: clear stale lmstudio profile auth
* fix: ignore lmstudio env auth on header migration
* fix: use local lmstudio setup seam
* fix: resolve lmstudio rebase fallout
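The fixes above converge on one precedence order for LM Studio auth sources: header auth first, then profile auth, with env auth last, and a stale header cleared rather than failing discovery. The `AuthSources` shape and `resolveAuth` below are an illustrative sketch of that order, not the real API:

```typescript
type AuthSources = {
  header?: string;   // runtime/preload header auth
  profile?: string;  // stored profile auth
  env?: string;      // environment-variable auth
};

// Prefer header auth, but treat a known-stale header as absent instead
// of failing fatally; fall back to profile auth, then env auth.
function resolveAuth(sources: AuthSources, headerIsStale: boolean): string | undefined {
  if (sources.header && !headerIsStale) return sources.header;
  if (sources.profile) return sources.profile;
  return sources.env;
}
```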
---------
Co-authored-by: Frank Yang <frank.ekn@gmail.com>
Add a bundled Arcee AI provider plugin with ARCEEAI_API_KEY onboarding,
Trinity model catalog (mini, large-preview, large-thinking), and
OpenAI-compatible API support.
- Trinity Large Thinking: 256K context, reasoning enabled
- Trinity Large Preview: 128K context, general-purpose
- Trinity Mini 26B: 128K context, fast and cost-efficient
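Because the plugin speaks an OpenAI-compatible API, a chat request is just the familiar `/v1/chat/completions` shape with a bearer key. The base URL and `trinity-mini` model id below are illustrative assumptions; only the `ARCEEAI_API_KEY` env var name comes from the description above:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Build (but do not send) an OpenAI-compatible chat completion request.
function buildChatRequest(model: string, messages: ChatMessage[], apiKey: string) {
  return {
    url: "https://api.example.com/v1/chat/completions", // placeholder base URL
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model, messages }),
  };
}

const req = buildChatRequest(
  "trinity-mini", // hypothetical model id for the fast/cost-efficient tier
  [{ role: "user", content: "hello" }],
  process.env.ARCEEAI_API_KEY ?? "sk-placeholder",
);
```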