HiWay2LLM Documentation

Getting started

  • Quickstart
  • Open-source SDK & CLI
  • Drop-in with your existing SDK
  • Authentication

Concepts

  • How smart routing works
  • Pricing model
  • Guardian — anti-loop system
  • Budget Control
  • Provider fallback
  • Semantic cache (Scale+)
  • PII masking
  • A/B Experiments (Scale+)
  • Response envelope
  • Streaming responses
  • Tool and function calls
  • System prompts and routing

Features

  • Editing Guardian rules
  • Setting a Budget Control cap
  • Enabling semantic cache
  • Enabling PII masking
  • Running an A/B experiment

Integrations

  • OpenAI Python SDK
  • OpenAI Node.js SDK
  • LangChain
  • Vercel AI SDK
  • n8n workflows
  • curl and raw HTTP

Migrate

  • From OpenRouter
  • From LiteLLM
  • From Vercel AI Gateway
  • From Portkey
  • From direct provider APIs (OpenAI / Anthropic)

API reference

  • POST /v1/chat/completions
  • GET /v1/me
  • GET /v1/models
  • Error codes

Troubleshooting

  • 401 — Unauthorized
  • 402 — Quota or budget exceeded
  • 429 — Rate limited
  • 502 — Upstream unavailable
  • Frequently Asked Questions
  • Glossary
  • Changelog

Page not found

The page you're looking for doesn't exist or has been moved.

Back to the documentation index