orcarouter/auto is a named router we create for every account on signup. It routes each request to the cheapest available chat model in your group, chosen fresh per request.

Usage

from openai import OpenAI

# Any OpenAI-compatible client works; point base_url at your OrcaRouter endpoint.
client = OpenAI(base_url="...", api_key="...")
response = client.chat.completions.create(
    model="orcarouter/auto",
    messages=[{"role": "user", "content": "..."}],
)
No other setup required — the router exists the moment your account is created.

Default behavior

The seed configuration:
  • Pattern: matches gpt-*, claude-*, gemini-*
  • Strategy: cheapest — picks the model with the lowest per-token price among live channels in your group
  • Default model: gpt-4o-mini (used when the pattern matches no live channel)
You can see and edit your Auto Router in the dashboard under Routing. You can change the pattern, swap the strategy, add extra_body.models fallback overrides, or delete the router entirely — same as any named router.
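The cheapest strategy described above can be sketched roughly as follows. This is an illustration, not OrcaRouter's actual implementation; the channel shape, field names, and prices are all assumptions:

```python
from fnmatch import fnmatch

def pick_cheapest(channels, patterns, default="gpt-4o-mini"):
    """Return the live model with the lowest per-token price matching
    any pattern, or the default if nothing matches."""
    candidates = [
        c for c in channels
        if c["live"] and any(fnmatch(c["model"], p) for p in patterns)
    ]
    if not candidates:
        return default
    return min(candidates, key=lambda c: c["price_per_token"])["model"]

# Hypothetical channel data for illustration only.
channels = [
    {"model": "gpt-4o", "live": True, "price_per_token": 2.5e-6},
    {"model": "claude-3-5-sonnet", "live": True, "price_per_token": 3.0e-6},
    {"model": "gpt-4o-mini", "live": False, "price_per_token": 1.5e-7},
]
pick_cheapest(channels, ["gpt-*", "claude-*", "gemini-*"])  # "gpt-4o"
```

Note that the dead channel is skipped even though it is the cheapest on paper, and the seed default only kicks in when the pattern matches no live channel at all.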
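A per-request fallback override via extra_body.models might look like the sketch below. The model names in the list are placeholders, and the tried-in-order semantics is an assumption based on the description above:

```python
request_kwargs = {
    "model": "orcarouter/auto",
    "messages": [{"role": "user", "content": "..."}],
    # Fallback override (assumed semantics): candidates for this request only,
    # without editing the router in the dashboard. Names are placeholders.
    "extra_body": {"models": ["gpt-4o-mini", "claude-3-haiku"]},
}
# response = client.chat.completions.create(**request_kwargs)
```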

When to prefer Auto Router over explicit model names

  • You don’t want to pin to a specific model; you want the cheapest live chat model at each request.
  • You’re prototyping and don’t want to care about which provider is up.
  • You want OrcaRouter’s routing to “just work” without thinking about it.

When to prefer explicit model names

  • You need deterministic output — picking different models at different times will change generation style and quality.
  • You’re using features specific to one model (e.g. Claude’s cache_control, or a model’s native image generation).
  • You want predictable per-request cost.

Seeing what Auto Router picked

Check the X-Orca-Resolved-Model response header. See Response Headers.
res = client.chat.completions.with_raw_response.create(
    model="orcarouter/auto", ...
)
actual_model = res.headers.get("X-Orca-Resolved-Model")
# e.g. "gpt-4o-mini"