Updated April 2026 · 8 min read

HiWay2LLM vs Portkey

Head-to-head comparison of HiWay2LLM and Portkey. Observability depth vs cost-per-request optimization, pricing models, prompt governance, and when each one wins.

TL;DR

Portkey wins for teams that need deep observability, prompt governance, and virtual keys for team budgets. HiWay wins when cost-per-request optimization is the main KPI. If your #1 problem is "what is every request doing and who ran it", Portkey. If your #1 problem is "this bill keeps growing", HiWay.

Portkey and HiWay2LLM both call themselves LLM gateways, both sit in front of the same upstream providers, and both speak the OpenAI API. If you only read the landing pages, they look interchangeable. Under the hood they optimize for two very different things.

Portkey's center of gravity is observability and governance: deep per-request logs, prompt versioning, guardrails, virtual keys that scope team budgets. It is what you reach for when you need to know, at audit time, what every request to every model did and who ran it.

HiWay's center of gravity is cost per request: a router that reads each incoming prompt and picks the cheapest model that can answer it, with 0% inference markup and BYOK. It is what you reach for when your bill is growing faster than your revenue.

Here is the honest side-by-side.

Quick decision

  • Your main pain is "I cannot see what my LLM traffic is doing"? Portkey. That is the job it was built for.
  • Your main pain is "this bill keeps growing"? HiWay. Complexity-based routing plus zero markup is aimed squarely at that.
  • You need prompt versioning, A/B tests on prompts, a shared prompt library for the team? Portkey has a full prompt management product. HiWay does not.
  • You are in the EU or need GDPR-aligned hosting with a signed DPA? HiWay is EU-hosted on OVH. Check Portkey's current data residency options per their public docs.
  • You want the thinnest, cheapest layer possible in the critical path? HiWay. Portkey ships a broader surface area.

Pricing

Portkey runs a tiered SaaS model: a free tier for small volumes, then paid tiers that scale with features (observability retention, seats, enterprise SSO, on-prem option). Pricing is published per their public docs as of 2026-04-22 — check their site for the exact breakdown. The important framing: you are paying for the observability and governance layer, and that value grows with team size.

HiWay charges a flat monthly fee for the routing layer. Inference is billed by the provider directly on your own card at wholesale (BYOK, 0% markup on the token side):

Plan         Price        Routed requests / mo
Free         $0           2,500
Build        $15/mo       100,000
Scale        $39/mo       500,000
Business     $249/mo      5,000,000
Enterprise   on request   custom quotas, SSO, DPA

The bet: flat-fee infrastructure that scales with requests, not with seats or retention windows. Add smart routing that auto-downgrades simple requests to cheaper models — 40-85% savings on a typical mix — and the routing savings typically cover the $15/mo Build subscription within hours of real use.

The simple heuristic: if your team has many engineers and needs deep prompt governance, Portkey's per-seat/feature pricing earns its keep. If your team is small and your problem is volume, HiWay's per-request flat fee is usually cheaper.
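To make that heuristic concrete, here is the break-even arithmetic as a sketch. The $15/mo flat fee and the 40-85% savings band come from this article; the $500/mo token spend is an assumed example, not a quote from any pricing page.

```python
# Rough break-even sketch for the flat-fee model. The $500/mo token
# spend below is an illustrative assumption; the $15/mo fee and the
# 40-85% savings band are the figures quoted in this article.

def monthly_net_savings(token_spend_usd: float,
                        routing_savings_rate: float = 0.60,
                        flat_fee_usd: float = 15.0) -> float:
    """Savings from complexity-based routing minus the flat routing fee."""
    return token_spend_usd * routing_savings_rate - flat_fee_usd

# A team spending $500/mo on tokens, across the 40-85% savings band:
low = monthly_net_savings(500, 0.40)    # conservative end
high = monthly_net_savings(500, 0.85)   # optimistic end
print(f"net savings: ${low:.0f} to ${high:.0f} per month")
```

The same function also shows where the flat fee does not pay off: at very low token spend, `monthly_net_savings` goes negative and the free tier is the better fit.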

Feature-by-feature

  • Bring your own keys (BYOK): both support BYOK — Portkey via virtual keys, HiWay natively
  • Smart routing by request complexity: HiWay natively; Portkey routes by rules and fallbacks you define, not by scoring the prompt
  • Prompt library + versioning: Portkey ships a full prompt management product; HiWay does not offer one
  • Guardrails (PII, content, schema): Portkey has the deeper guardrails layer
  • Per-request observability + retention: Portkey's observability is the core product; HiWay's is a supporting feature
  • Virtual keys for team budgets: Portkey's virtual keys are more granular
  • OpenAI-compatible API: both speak the OpenAI API
  • Automatic fallback across providers: supported by both gateways
  • EU hosting (GDPR): HiWay is EU-hosted on OVH; check Portkey's current residency options
  • Zero prompt logging by default: HiWay's default; Portkey logs by design — that is the product
  • Pricing model: HiWay is a flat €/mo per request tier with 0% inference markup; Portkey is tiered SaaS, scaled by features and seats
  • Primary job: HiWay is cost optimization; Portkey is observability + governance

When to pick which

Pick HiWay2LLM if

  • Your monthly LLM spend is the number you want to move, and you want a router that actively picks cheaper models
  • You want BYOK with zero inference markup and flat per-request pricing
  • You are in the EU or serve EU customers and need GDPR-aligned hosting + a signed DPA
  • You want the thinnest possible layer in the critical path — not a full observability suite you will half-use
  • Zero prompt logging by default is a compliance requirement, not a nice-to-have
  • You want burn-rate alerts and hard budget caps, not just retrospective dashboards

Pick Portkey if

  • Your team needs a shared prompt library with versioning, A/B tests, and rollbacks
  • Deep per-request observability with long retention is a must-have, not a maybe
  • You need granular guardrails (PII detection, schema validation, content filters) built into the gateway
  • Virtual keys per team member with per-key budgets is how you want to scope access
  • Your pain is audit-readiness and governance, not cost-per-request
  • You are standardizing a large engineering org on one LLM gateway and want the broadest feature surface

Migration — what actually changes in your code

If you are on Portkey today, switching is a drop-in at the SDK level. You keep the same messages structure, the same streaming, the same client library. You replace the Portkey base URL and headers with HiWay's base URL and key.

With Portkey
from openai import OpenAI

client = OpenAI(
  base_url="https://api.portkey.ai/v1",
  api_key="PORTKEY_API_KEY",
  default_headers={
      "x-portkey-virtual-key": "VIRTUAL_KEY_ID",
  },
)

response = client.chat.completions.create(
  model="gpt-4o",
  messages=[{"role": "user", "content": "Hello"}],
)
With HiWay2LLM
from openai import OpenAI

client = OpenAI(
  base_url="https://app.hiway2llm.com/v1",
  api_key="hw_live_...",
)

response = client.chat.completions.create(
  model="auto",  # let the router pick
  messages=[{"role": "user", "content": "Hello"}],
)

Two extra steps before the switch: add your provider keys once in the HiWay dashboard (Settings → Providers), and keep model: "auto" if you want the router to pick — or pin a specific model if you want to force it.

Observability-first vs routing-first — the category split

Both products are gateways, but they grew from different starting problems. That origin still shapes what you get today.

Portkey grew from observability. The core value is: every request that leaves your app is captured, indexed, searchable, linked to the prompt version, scored against guardrails. That is what the product is for. Routing and fallbacks are features around that core. If your pain is "I cannot answer what the LLM just did", Portkey is built for you.

HiWay grew from cost. The core value is: every request is scored in under 1ms and sent to the cheapest model that can handle it. Observability exists — per-request logs, cost breakdowns, audit trails — but it is a supporting feature, not the product. If your pain is "this invoice is bigger every month", HiWay is built for you.
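HiWay's actual scorer is not public, so the following is a purely illustrative sketch of what "score the prompt in under a millisecond, then pick a tier" can mean. The heuristics, thresholds, and model-tier names are all invented for the example.

```python
# Purely illustrative: a toy complexity-based router. The heuristics,
# thresholds, and model tiers below are invented for this sketch and
# are NOT HiWay's actual scoring logic.

CHEAP, MID, FRONTIER = "small-model", "mid-model", "frontier-model"

def score_complexity(prompt: str) -> float:
    """Cheap lexical heuristics only — no model call, so it stays fast."""
    score = 0.0
    score += min(len(prompt) / 2000, 1.0)          # long prompts score higher
    if any(k in prompt.lower() for k in ("prove", "refactor", "analyze")):
        score += 0.5                               # reasoning-flavored verbs
    if "```" in prompt:
        score += 0.3                               # code blocks raise the bar
    return score

def pick_model(prompt: str) -> str:
    s = score_complexity(prompt)
    if s < 0.3:
        return CHEAP
    if s < 0.8:
        return MID
    return FRONTIER

print(pick_model("What is the capital of France?"))   # short lookup: cheap tier
```

The design point this sketch illustrates is the one in the paragraph above: the scoring step must be cheap enough (lexical checks, no extra model call) that it never becomes the latency bottleneck it is trying to pay for.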

This split matters because the two jobs pull the design in opposite directions. Deep observability wants to log everything forever. Cost optimization wants to log as little as possible (cheaper, faster, more private). You can bolt one onto the other, but the primary job shapes the product.

In practice, the cleanest setups we see put a cost-first router like HiWay in the critical path (latency sensitive, low overhead, zero logging by default) and an observability tool alongside it for sampled or audit-mode traffic. The two are not mutually exclusive. But asking one product to do both well usually makes it mediocre at both.
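That "router in the hot path, observability on a sampled slice" setup can be sketched as a pattern. The 1% sampling rate and the in-memory log sink below are placeholder choices for illustration, not either vendor's API.

```python
import random

# Pattern sketch: every request goes through the cost-first router
# (represented by call_llm); a sampled slice is additionally recorded
# for an observability tool. The 1% rate and the list-based sink are
# illustrative placeholders, not vendor APIs.

AUDIT_SAMPLE_RATE = 0.01

def should_audit() -> bool:
    return random.random() < AUDIT_SAMPLE_RATE

def handle(prompt: str, call_llm, audit_log: list) -> str:
    """call_llm is your OpenAI-compatible client call through the router."""
    answer = call_llm(prompt)
    if should_audit():
        # Forward metadata (not necessarily the raw prompt) to the
        # observability side — here just appended to a local list.
        audit_log.append({"prompt_len": len(prompt), "answer_len": len(answer)})
    return answer
```

Sampling keeps the observability cost (and the extra hop) off the latency-critical 99% of traffic while still giving the audit tool a statistically useful slice.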

Data & compliance

Portkey's core product is observability, which means by design it captures and retains prompt/response data. That is a feature, not a bug — the whole point is to inspect what your LLMs did. Retention, region, and encryption are configurable per their public docs. If you have regulated data flowing through prompts (health, finance, legal), plan the deployment carefully.

HiWay is operated from France by Mytm-Group, hosted on OVH servers in the EU. Zero prompt logging is the default — prompts transit in-memory and are never persisted. We sign a DPA on request (even on the free plan) and publish our sub-processors. If you need logging for your own debugging, it is opt-in per workspace with a configurable retention window.

Different defaults, different jobs. Pick the one whose default matches your compliance posture.

FAQ

Can I use HiWay2LLM and Portkey together?

Yes, and some teams do. The common pattern: HiWay in the hot path for cost routing and zero-log inference, Portkey on a sampled slice for deep observability and prompt governance. Since both are OpenAI-compatible, the client code changes are small. The cost is an extra hop in the stack, so most teams end up picking one.

Bottom line

Portkey and HiWay are both real products solving real problems. They are not the same problem. Portkey is the answer when your pain is "we need to see, govern, and audit every LLM call our team makes". HiWay is the answer when your pain is "this bill keeps growing and the cheapest capable model should pick itself". Pick based on which sentence you hear yourself say more often.

If cost is the number you want to move, plug your current spend into the savings calculator and see what complexity-based routing does to it.

Try HiWay free — 2,500 requests/mo

BYOK, EU-hosted, no credit card
