From Portkey
BYOK + smart routing, minus Portkey's guardrails and prompt library.
This guide covers moving a Portkey deployment to HiWay2LLM. Both are OpenAI-compatible routing layers, so the code diff is minimal. The real decisions are around Portkey-specific concepts (virtual keys, guardrails, prompt library) — some map 1-to-1, some don't exist on HiWay. Read the gotchas section before you flip the switch. End-to-end: 10-15 minutes including the provider-keys re-entry.
Prerequisites
A HiWay account and at least one provider key configured in Dashboard → Settings → Providers. Unlike Portkey's virtual keys, you enter the raw provider key directly — HiWay encrypts it AES-256-GCM and calls the upstream on your behalf.
Step 1: Get your HiWay API key
Open Dashboard → Keys and click New key. Your hw_live_ key replaces the Portkey x-portkey-api-key.
Step 2: Swap your configuration
On Portkey you used the https://api.portkey.ai/v1 base URL with an x-portkey-api-key header and usually an x-portkey-virtual-key header pointing to a stored provider credential. On HiWay you use https://app.hiway2llm.com/v1 with a standard Authorization: Bearer hw_live_... header — the upstream provider is resolved from your enabled-provider list in the dashboard, no extra headers.
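The header swap can be sketched as raw request headers. This is a minimal comparison; the key values are placeholders, and the Portkey header names are the ones described above:

```python
# Headers a Portkey request carried -- the virtual key selects which stored
# provider credential Portkey uses upstream (placeholder values):
portkey_headers = {
    "x-portkey-api-key": "pk_YOUR_PORTKEY_KEY",
    "x-portkey-virtual-key": "vk_YOUR_VIRTUAL_KEY",
    "Content-Type": "application/json",
}

# The equivalent HiWay request: one standard Authorization header and no
# provider hint -- the provider is resolved from the dashboard's
# enabled-provider list.
hiway_headers = {
    "Authorization": "Bearer hw_live_YOUR_KEY",
    "Content-Type": "application/json",
}
```

Everything Portkey-specific collapses into the single bearer token; provider selection moves from a per-request header to dashboard configuration.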
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://app.hiway2llm.com/v1",
    api_key="hw_live_YOUR_KEY",
)

response = client.chat.completions.create(
    model="auto",  # or "openai/gpt-4o", "anthropic/claude-sonnet-4-5", ...
    messages=[{"role": "user", "content": "Draft a follow-up email"}],
)
print(response.choices[0].message.content)
```

Step 3: Verify
Send one request, confirm X-HiWay-Routed-Model on the response, and check Dashboard → Usage for the timeline. If you relied on Portkey's traces view, the HiWay equivalent is Dashboard → Analytics plus the _hiway metadata object on each response.
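The smoke test can be scripted without the SDK. A minimal sketch using only the standard library, assuming the endpoint path and the X-HiWay-Routed-Model header described above:

```python
import json
import urllib.request

HIWAY_URL = "https://app.hiway2llm.com/v1/chat/completions"

def build_verify_request(api_key: str) -> urllib.request.Request:
    """Build the one-shot verification request (not yet sent)."""
    body = json.dumps({
        "model": "auto",
        "messages": [{"role": "user", "content": "ping"}],
    }).encode()
    return urllib.request.Request(
        HIWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def verify(api_key: str) -> str:
    """Send the request and return the routing header from the response."""
    with urllib.request.urlopen(build_verify_request(api_key)) as resp:
        return resp.headers.get("X-HiWay-Routed-Model", "<missing>")
```

Call `verify("hw_live_YOUR_KEY")` once; if it returns a model id rather than `<missing>`, routing is live and the request should appear in Dashboard → Usage.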
Gotchas specific to Portkey
- Virtual keys → HiWay provider keys: Portkey's virtual key abstraction becomes your entries in Settings → Providers. Same encryption-at-rest posture, simpler mental model — no separate header to send on each request.
- Guardrails are out of scope: Portkey ships a moderation / content-filter layer that runs inline. HiWay doesn't — use OpenAI's Moderation API, Anthropic's prompt-level safety features, or a dedicated service like Lakera. Bring your own moderation if that was load-bearing.
- Prompt library doesn't exist on HiWay: Portkey lets you version prompts server-side and reference them by id. HiWay doesn't have an equivalent — manage prompts in your own code, or use a library like LangChain / Vercel AI SDK prompt templating.
- Fallback config → automatic same-tier fallback: Portkey's fallback config block is replaced by HiWay's default behavior (retry the next cheapest same-tier model on upstream errors, max 2 retries). If you had ordered fallback chains across specific models, pin with model: "<provider>/<id>" and handle retries client-side.
- Budget and alerting move too: Portkey's budget limits become Dashboard → Budget Control (monthly cap with BLOCK / DOWNGRADE / LIGHT_ONLY verdicts), and alerting becomes Slack / email webhooks on Guardian fires and budget transitions.
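A bring-your-own-moderation sketch for the guardrails gap above: run a check client-side before forwarding to the router. The `moderate` callable here is an assumption — wire it to OpenAI's Moderation API (`client.moderations.create`) or a service like Lakera; only the gating pattern is shown.

```python
from typing import Callable

class ModerationBlocked(Exception):
    """Raised when input is rejected before reaching the router."""

def moderated_completion(
    moderate: Callable[[str], bool],   # returns True if input is flagged
    complete: Callable[[str], str],    # your HiWay completion call
    user_input: str,
) -> str:
    """Gate a completion call behind a moderation verdict."""
    if moderate(user_input):
        raise ModerationBlocked("input rejected by moderation layer")
    return complete(user_input)
```

Because both callables are injected, the same gate works in front of any provider the router picks, and it is trivially unit-testable with stubs.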
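For ordered fallback chains, the client-side handling the last bullets recommend can be sketched as follows. The model ids in the chain are illustrative placeholders; `call` stands in for your pinned HiWay completion call:

```python
from typing import Callable

# Illustrative chain -- replace with the "<provider>/<id>" pins you need.
FALLBACK_CHAIN = ["openai/gpt-4o", "anthropic/claude-sonnet-4-5"]

def complete_with_fallback(
    call: Callable[[str], str],        # takes a pinned model id, returns text
    chain: list[str] = FALLBACK_CHAIN,
) -> str:
    """Try each pinned model in order; re-raise the last error if all fail."""
    last_err: Exception | None = None
    for model in chain:
        try:
            return call(model)
        except Exception as err:       # narrow to the SDK's APIError in practice
            last_err = err
    raise last_err
```

This reproduces the ordered-chain semantics of a Portkey fallback config while leaving HiWay's own same-tier retries to handle transient errors within each attempt.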
You're done
Should take 10-15 minutes, mostly re-entering provider keys in the dashboard. Ping support@hiway2llm.com if you're stuck.