Unified Config Contract v0.3
Issue: #1505
Before
Before v0.3, the repo had several partially overlapping config contracts:
- router runtime consumed a flat Go config
- Python CLI used its own nested YAML plus merge/default logic
- dashboard and onboarding imported YAML but still assumed legacy top-level signals and decisions
- Helm and operator each translated config differently
- DSL mixed routing semantics with legacy BACKEND and GLOBAL expectations
This caused three persistent problems:
- The same concept had to be edited in multiple schema layers.
- Endpoint, API key, and model semantics were mixed together.
- Runtime defaults depended on external template files such as
router-defaults.yaml, which made defaults harder to reason about and replace.
Problems with the old model
CLI and router drifted
The Python CLI and Go router did not share one schema owner. A user could build config through the CLI, the dashboard, or Kubernetes and still hit structural mismatches.
Model semantics and deployment bindings were entangled
Each logical model carried:
- semantic routing identity
- endpoint binding
- API key
- provider model ID
That made reuse hard. If several logical models pointed at the same backend, config still repeated backend details.
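To make the repetition concrete, here is a hypothetical pre-v0.3 shape (field names invented for illustration, not the actual legacy schema) in which two logical models bound to the same backend each restate the endpoint and API key:

```yaml
# Hypothetical pre-v0.3 shape; field names are illustrative, not the real legacy schema.
# Every logical model restates the deployment details of its backend.
models:
  - name: chat-fast
    endpoint: https://llm.internal:8443/v1   # deployment binding duplicated
    api_key_env: LLM_API_KEY                 # API key duplicated
    provider_model: provider/model-small
  - name: chat-quality
    endpoint: https://llm.internal:8443/v1   # same backend, repeated again
    api_key_env: LLM_API_KEY
    provider_model: provider/model-large
```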
DSL scope was too broad
DSL was useful for routing semantics, but legacy BACKEND and GLOBAL blocks made it look like the right place to author deployment and runtime state too. That was not sustainable across local, dashboard, and Kubernetes workflows.
v0.3 contract
v0.3 defines one canonical config:
version:
listeners:
providers:
routing:
global:
What each section means
- providers: deployment bindings and provider defaults
- routing: semantic routing graph
- global: sparse router-wide runtime overrides
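A minimal sketch of how these sections might fit together. Only the five top-level section names come from the contract above; every nested key and value below is an illustrative assumption, not the actual schema:

```yaml
# Sketch of a v0.3-shaped config. Top-level sections are from the contract;
# nested keys and values are illustrative assumptions.
version: "0.3"
listeners:
  - name: http                  # where the router accepts traffic (assumed shape)
    address: 0.0.0.0:8080
providers:
  - name: primary-llm           # deployment binding: endpoint, key, provider defaults
    endpoint: https://llm.internal:8443/v1
    api_key_env: LLM_API_KEY
routing:
  modelCards: []                # semantic routing graph lives here
  signals: []
  projections: []
  decisions: []
global:
  logLevel: info                # sparse router-wide runtime overrides
```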
DSL boundary
DSL now owns only:
- routing.modelCards
- routing.signals
- routing.projections, for signal coordination and derived routing outputs
- routing.decisions
It no longer owns endpoints, API keys, listeners, or router-global runtime settings.
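For orientation, a hedged sketch of what a DSL-owned routing block could look like under that boundary. Only the four routing.* subsection names come from the list above; the entry shapes are assumptions, not the real DSL grammar:

```yaml
# DSL-owned surface only: the four routing.* subsections named above.
# Entry contents are illustrative assumptions.
routing:
  modelCards:
    - name: chat-fast           # semantic identity; no endpoint or API key here
      provider: primary-llm     # refers to a providers: entry by name (assumed)
  signals:
    - name: code-intent
  projections:
    - name: code-route          # coordinates signals into a derived routing output
      from: [code-intent]
  decisions:
    - when: code-route
      use: chat-fast
```

In this sketch, endpoint and key details stay under providers, and routing entries reference a provider by name rather than restating its deployment binding.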