New API

Developer · Application · Free

One API endpoint for all your AI model providers

Last updated Apr 19, 2026


What is New API?

New API solves a real problem: every AI provider has a different API format, different pricing, and different authentication. Instead of writing integration code for each one, you point your application at New API and it translates everything into a single OpenAI-compatible format. The project is a fork of One API, rebuilt with a React frontend and more features. It supports over 40 AI providers including OpenAI, Anthropic, Google Gemini, Mistral, Cohere, DeepSeek, and local models through Ollama. Every provider gets mapped to the same chat completion, embedding, and image generation endpoints, so your application code stays the same regardless of which model you call.

Key management is the core feature. You add API keys from multiple providers, and New API handles routing, rate limiting, and failover automatically. If one provider goes down, traffic shifts to the next available key; if a key hits its rate limit, the next key picks up the request. You get a single dashboard showing usage across all providers, per-key spend tracking, and token-level billing.

The billing system supports per-user quotas and pay-per-token models. You can resell AI access to your own users with markup, or just track costs internally. Each user gets their own API key that routes through your provider keys, with spend limits and model access controls.

New API runs as a Docker container or a single binary. It uses SQLite by default and supports MySQL and PostgreSQL for production deployments. The admin panel handles everything: adding providers, managing users, viewing logs, and configuring channel priorities.

For teams that use multiple AI models, New API eliminates the integration overhead: one endpoint, one authentication system, one billing pipeline. Your code calls /v1/chat/completions and New API figures out the rest.
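To make the "one endpoint" claim concrete, here is a minimal sketch of what a request to a self-hosted New API instance looks like, using only the Python standard library. The base URL, API key, and model name are placeholders for illustration, not values from this page; sending the request is a plain `urllib.request.urlopen(req)` once your gateway is running.

```python
import json
import urllib.request

def build_chat_request(base_url: str, api_key: str, model: str,
                       messages: list) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request.

    The same request shape works whether the gateway routes the call to
    OpenAI, Anthropic, Gemini, or a local Ollama model -- only `model` changes.
    """
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # A per-user key issued by the gateway, not a provider key.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Hypothetical local deployment and key, for illustration only.
req = build_chat_request(
    "http://localhost:3000",
    "sk-your-new-api-key",
    "gpt-4o",
    [{"role": "user", "content": "Hello"}],
)
```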

New API's Top Features

Key capabilities that make New API stand out.

Unified OpenAI-compatible API for 40+ AI providers including OpenAI, Anthropic, Google, Mistral, and DeepSeek

Automatic key rotation and failover when providers hit rate limits or go down

Per-user API keys with spend quotas and model access controls

Token-level billing with usage dashboards and cost tracking per provider

Docker deployment with SQLite, MySQL, or PostgreSQL backends

React admin panel for provider management, user management, and log viewing

Channel priority system to route requests to preferred providers first

Supports chat completions, embeddings, image generation, and audio endpoints

Ollama integration for routing to local and self-hosted models

Open source under MIT license with active community and regular updates
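The key-rotation and failover behavior listed above can be pictured with a toy router. This is a hypothetical sketch of the idea, not New API's actual implementation: keys are tried in priority order, and any key that is rate-limited or unreachable is skipped in favor of the next one.

```python
class RateLimited(Exception):
    """Raised by a provider call when a key has exhausted its rate limit."""

def route(keys, call):
    """Try each key in priority order until one succeeds.

    `call` takes a key and returns a response; it raises RateLimited or
    ConnectionError when that key cannot serve the request.
    """
    errors = []
    for key in keys:
        try:
            return call(key)
        except (RateLimited, ConnectionError) as exc:
            errors.append((key, exc))  # record the failure, fall through
    raise RuntimeError(f"all {len(keys)} keys failed: {errors}")

# Toy providers: the first key is rate-limited, the second one works.
responses = {"key-a": RateLimited(), "key-b": "ok"}

def fake_call(key):
    result = responses[key]
    if isinstance(result, Exception):
        raise result
    return result

print(route(["key-a", "key-b"], fake_call))  # -> ok
```

The real gateway layers channel priorities and per-key spend tracking on top of this loop, but the core contract is the same: the caller never sees which key served the request.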

Use Cases

Who benefits most from this tool.

Development teams

Unify multiple AI provider APIs behind a single endpoint to simplify application code

SaaS companies

Resell AI model access with per-user billing and quota management

DevOps engineers

Add automatic failover and redundancy for production AI workloads

Finance teams

Track AI spend across teams and providers in one dashboard

Platform teams

Gradually migrate between AI providers without changing application code
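One way to picture the gradual-migration use case above: the gateway resolves a stable model alias used by application code to whichever upstream channel operators currently prefer, so moving between providers is a config edit rather than a code change. The table and weights below are a hypothetical sketch, not New API's configuration format.

```python
import random

# Hypothetical channel table: application code always requests
# "chat-default"; operators shift the weights during a migration.
CHANNELS = {
    "chat-default": [
        {"provider": "openai", "model": "gpt-4o", "weight": 90},
        {"provider": "anthropic", "model": "claude-3-5-sonnet", "weight": 10},
    ],
}

def resolve(alias: str, rng=random.random):
    """Pick an upstream channel for an alias, weighted for gradual rollover."""
    channels = CHANNELS[alias]
    total = sum(c["weight"] for c in channels)
    r = rng() * total
    for c in channels:
        r -= c["weight"]
        if r <= 0:
            return c
    return channels[-1]
```

Shifting traffic from 90/10 to 0/100 over a week lets a platform team watch cost and quality dashboards at each step without touching application code.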

Tags

api-gateway · llm-aggregation · openai-compatible · multi-provider · key-management · billing · ai-infrastructure · rate-limiting · failover · self-hosted
