About NodeHub

An API gateway that sits between your apps and LLM providers. It caches responses and routes requests to cheaper models where it can, which cuts costs.

The Problem

LLM API calls are expensive. If you're building with GPT-4, Claude, or similar models, costs add up fast. Many requests are similar or identical, but you pay full price every time.

Switching providers means changing code. Rate limits hit at the worst times. And if you want to track usage across multiple projects or team members, you're stuck building that yourself.

What NodeHub Does

NodeHub is a proxy. Your apps talk to NodeHub; NodeHub talks to the LLM providers. In between, it does a few useful things:

  • Caching: Similar requests return cached responses instantly. No API call, no cost.
  • Routing: Simple queries go to cheaper models. Complex ones go to capable models.
  • Unified API: One OpenAI-compatible endpoint for all providers.
  • Analytics: See where your tokens go, per key, per project.
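
Because the endpoint is OpenAI-compatible, switching to NodeHub is mostly a base-URL change. Here's a minimal sketch using only the Python standard library; the URL and key are hypothetical placeholders, and the real values depend on your deployment:

```python
import json
import urllib.request

# Hypothetical endpoint and key, purely for illustration.
NODEHUB_URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "gpt-4",  # NodeHub forwards this to the matching provider
    "messages": [{"role": "user", "content": "Summarize our Q3 numbers."}],
}

req = urllib.request.Request(
    NODEHUB_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_NODEHUB_KEY",
    },
)
# urllib.request.urlopen(req) would send it. Any OpenAI-compatible
# client library works the same way once its base URL points at NodeHub.
```

Existing code that already speaks the OpenAI chat-completions format shouldn't need any other changes.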

The result: 40-70% lower costs, depending on your usage patterns.
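
To make the caching claim concrete, here is a toy exact-match cache: hash the normalized request, and only call the provider on a miss. This is a sketch of the general idea, not NodeHub's implementation — its "similar request" matching is presumably fuzzier than a straight hash:

```python
import hashlib
import json

cache: dict[str, str] = {}

def cache_key(model: str, messages: list) -> str:
    # Normalize the request so equivalent payloads hash identically.
    body = json.dumps({"model": model, "messages": messages},
                      sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(body.encode()).hexdigest()

def complete(model: str, messages: list, call_provider) -> str:
    key = cache_key(model, messages)
    if key not in cache:                        # miss: pay once
        cache[key] = call_provider(model, messages)
    return cache[key]                           # hit: no API call, no cost
```

The second identical request never reaches the provider, which is where the savings come from.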
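
Routing works on the same principle. A crude version might look at prompt length and turn count; the model names and threshold below are assumptions for illustration, and NodeHub's actual policy isn't shown here:

```python
# Hypothetical model names and threshold, purely for illustration.
CHEAP_MODEL = "gpt-4o-mini"
CAPABLE_MODEL = "gpt-4"

def route(messages: list) -> str:
    """Send short, single-turn prompts to a cheaper model;
    everything else goes to a capable one."""
    total_chars = sum(len(m["content"]) for m in messages)
    if len(messages) == 1 and total_chars < 500:
        return CHEAP_MODEL
    return CAPABLE_MODEL
```

Even a heuristic this simple can shave costs, since a large share of traffic is short, simple queries.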

Business Model

Community Edition is free and open-source (AGPL v3). Self-host it, use it however you want. Basic caching, 5 providers, works fine for most people.

Full Edition costs money. You get better caching algorithms, smart routing, more providers, multiple API keys, webhooks, and longer analytics retention. Available as self-hosted ($29/mo) or SaaS ($49/mo).

Team Edition adds SSO, audit logs, and org management for companies that need that stuff. $99/mo plus $20 per user.

That's it. No usage fees, no per-token charges from us. You pay your LLM providers directly.

Open Source

The Community Edition is on GitHub under AGPL v3. You can read the code, run it yourself, and contribute if you want. We accept PRs.

Why open-source? Because API gateways shouldn't be a black box. You should be able to see what's happening with your data and verify we're not doing anything sketchy.