Quickstart
Make your first AnyInt request with the OpenAI-compatible gateway, then discover models and choose the right API family for production.
AnyInt is designed to remove the friction of using advanced AI models. Instead of spending time wiring up multiple providers, comparing different API styles, and guessing which model to use, you start with one gateway and a single API key.
The fastest path is to begin with the OpenAI-compatible endpoint, confirm which model IDs are available to your account, and then expand into provider-native or media workflows only when you need them.
What you need before you start
- An AnyInt API key
- A model ID available to your account
- An HTTP client or SDK
What AnyInt is helping you avoid
Without a platform layer, teams usually have to:
- register with multiple providers separately
- manage different auth and request formats
- guess which model is good enough for a task
- overuse expensive models because they are unsure what can be downgraded safely
AnyInt reduces that setup cost and provides a path toward smarter model selection through routing and policy.
Pick the right endpoint family
| Family | Base / Path | Auth style | Use it when |
|---|---|---|---|
| OpenAI-compatible | https://gateway.api.anyint.ai/openai/v1 | Authorization: Bearer | You already use the OpenAI SDK or want one simple chat entrypoint |
| Anthropic-compatible | https://gateway.api.anyint.ai/anthropic/v1 | x-api-key + anthropic-version | You need Claude-style message bodies or token counting |
| Gemini-compatible | https://gateway.api.anyint.ai/gemini/v1beta | Authorization: Bearer | You need Gemini-native generateContent, streaming, or function declarations |
| Models API | https://gateway.api.anyint.ai/openai/v1/models | Authorization: Bearer | You want to discover model IDs before hardcoding them |
| AI Music | https://gateway.api.anyint.ai/suno/* | Authorization: Bearer | You want song generation, cover workflows, lyrics, stems, or MV generation |
Use the model ID returned by the models route or shown in your dashboard configuration. Do not assume that a public marketing name is enabled for your account.
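The endpoint-family table above can be sketched as a small helper that pairs each family's base path with its auth header style. This is an illustrative sketch, not an AnyInt SDK: the function and constant names are hypothetical, and the anthropic-version value shown is the standard Anthropic API version string, which your account may override.

```python
# Hypothetical helper mapping endpoint families (from the table above)
# to base URLs and auth headers. Names here are illustrative.
GATEWAY = "https://gateway.api.anyint.ai"

FAMILIES = {
    "openai": {"path": "/openai/v1", "auth": "bearer"},
    "anthropic": {"path": "/anthropic/v1", "auth": "x-api-key"},
    "gemini": {"path": "/gemini/v1beta", "auth": "bearer"},
}

def endpoint_for(family: str, api_key: str) -> tuple[str, dict]:
    """Return (base_url, headers) for a given endpoint family."""
    spec = FAMILIES[family]
    if spec["auth"] == "bearer":
        headers = {"Authorization": f"Bearer {api_key}"}
    else:
        # Anthropic-compatible routes use x-api-key plus a version header.
        headers = {"x-api-key": api_key, "anthropic-version": "2023-06-01"}
    return GATEWAY + spec["path"], headers
```

With a helper like this, switching a request from the OpenAI-compatible route to the Anthropic-compatible route is a one-argument change rather than a rewrite of your HTTP setup.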
Recommended first request flow
- Create an AnyInt API key in the dashboard.
- Call GET /openai/v1/models to discover the model IDs visible to your account.
- Make a streamed request against POST /openai/v1/chat/completions.
- Move to Anthropic, Gemini, media, or AI Music routes only when you need a provider-native request format.
- Run Verify Your Integration before using the integration in production.
Python with the OpenAI SDK
from openai import OpenAI

client = OpenAI(
    base_url="https://gateway.api.anyint.ai/openai/v1",
    api_key="your-anyint-api-key",
)

response = client.chat.completions.create(
    model="claude-sonnet-4-6",
    stream=True,
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Say hello from AnyInt."},
    ],
)

for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")

This is the closest match to the OpenRouter-style "make your first request" flow, and it is the easiest starting point for apps that already use OpenAI-compatible clients.
Node.js with the OpenAI SDK
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://gateway.api.anyint.ai/openai/v1",
  apiKey: process.env.ANYINT_API_KEY,
});

const response = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  stream: true,
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Say hello from AnyInt." },
  ],
});

for await (const chunk of response) {
  const delta = chunk.choices?.[0]?.delta?.content;
  if (delta) process.stdout.write(delta);
}

cURL
curl https://gateway.api.anyint.ai/openai/v1/chat/completions \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "stream": true,
    "messages": [
      {"role": "system", "content": "You are a concise assistant."},
      {"role": "user", "content": "Say hello from AnyInt."}
    ]
  }'

With stream: true, the route returns SSE chunks. Each event contains a chat.completion.chunk payload and the stream ends with [DONE].
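If you are not using an SDK, the SSE stream can be decoded by hand. Below is a minimal sketch, assuming each event line carries a `data: ` prefix and a payload shaped like the chat.completion.chunk events described above; `iter_stream_text` is an illustrative name, not part of any SDK.

```python
import json

def iter_stream_text(lines):
    """Yield content deltas from raw SSE lines of a chat.completion.chunk stream."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # the stream signals completion with [DONE]
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {}).get("content")
        if delta:
            yield delta

# Example with synthetic SSE lines shaped like the gateway's output:
sample = [
    'data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"object": "chat.completion.chunk", "choices": [{"delta": {"content": " world"}}]}',
    "data: [DONE]",
]
print("".join(iter_stream_text(sample)))  # → Hello world
```

In production you would feed this generator from an HTTP response's line iterator instead of a list; the parsing logic stays the same.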
Discover model IDs first
Use the published models route to inspect what is available to your account before hardcoding model names.
curl https://gateway.api.anyint.ai/openai/v1/models \
  -H "Authorization: Bearer $ANYINT_API_KEY"

Anthropic-compatible clients can also call GET /anthropic/v1/models with x-api-key and anthropic-version headers.
Typical response fields:
- data[].id: the model ID you should send in later requests
- data[].display_name: a human-friendly label
- data[].created_at: the model publish timestamp
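Once you have the models payload, a small helper can pick the first ID your account can actually use instead of hardcoding a name that may not be enabled. A sketch assuming the response shape listed above; `pick_model` is an illustrative helper, not an AnyInt API.

```python
def pick_model(models_payload: dict, preferred: list[str]) -> str:
    """Return the first preferred model ID that the account can actually use."""
    # data[].id is the field to send in later requests.
    available = {m["id"] for m in models_payload.get("data", [])}
    for model_id in preferred:
        if model_id in available:
            return model_id
    raise RuntimeError(f"none of {preferred} is enabled; available: {sorted(available)}")

# Example against a response shaped like GET /openai/v1/models:
payload = {"data": [{"id": "claude-sonnet-4-6", "display_name": "Claude Sonnet"}]}
print(pick_model(payload, ["some-other-model", "claude-sonnet-4-6"]))  # → claude-sonnet-4-6
```

Ordering the preferred list from most to least desirable gives you a cheap fallback chain when a model is not enabled for your account.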
When to switch away from the OpenAI-compatible route
| Need | Read next |
|---|---|
| Claude-style message blocks, image input, or token counting | Anthropic Compatible API |
| Gemini-native generateContent, streaming, image generation, or function declarations | Gemini Compatible API |
| Image and video generation through DashScope | Media APIs |
| Song generation, covers, stems, lyrics, or MV workflows | AI Music Overview |
Before production
Use Verify Your Integration to confirm your API key handling, model discovery, streaming behavior, async task polling, callback handling, and retry behavior. AnyInt does not require a payment-style test-card workflow for normal AI API integrations; verify the public API contract directly with your own account and safe sample prompts.
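For the retry-behavior part of that verification, a simple exponential backoff wrapper is often enough to exercise. The sketch below is generic: which exceptions to retry, the attempt count, and the backoff schedule are assumptions you should tune for your client, not AnyInt requirements.

```python
import random
import time

def with_retries(call, attempts=4, base_delay=0.5):
    """Retry a callable on transient errors with exponential backoff and jitter."""
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            # Exponential backoff with jitter to avoid synchronized retries.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Example: a call that fails twice before succeeding.
state = {"n": 0}
def flaky():
    state["n"] += 1
    if state["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(with_retries(flaky, base_delay=0.01))  # → ok
```

In a real integration you would narrow the except clause to transient failures (timeouts, 429s, 5xx responses) so that invalid-request errors fail fast instead of being retried.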