Gemini Compatible API
AnyInt exposes Gemini-compatible routes under the `/gemini/v1beta` prefix. These routes accept Gemini-native `contents[].parts[]` request bodies rather than OpenAI `messages`. Treat them as provider-native routes and validate them in your environment before production use.
Published routes
| Route | Use case |
|---|---|
| `POST /gemini/v1beta/models/{model}:generateContent` | Text generation and native function calling |
| `POST /gemini/v1beta/models/{model}:streamGenerateContent?alt=sse` | Streaming generation |
| `POST /gemini/v1beta/models/{image_model}:generateContent` | Text-plus-image output |
Authentication
```
Authorization: Bearer <ANYINT_API_KEY>
```

When to use Gemini-compatible routes
- You want to keep Gemini-native `contents[].parts[]` payloads
- You need `streamGenerateContent?alt=sse`
- You want Gemini-native `tools.functionDeclarations`
- You want the published Gemini image-generation route instead of a generic chat wrapper
If you want one uniform SDK integration, start with the OpenAI Compatible API instead.
Text generation example
Request fields
| Field | Type | Required | Meaning | Example |
|---|---|---|---|---|
| `contents` | array | Yes | Conversation turns in Gemini-native format. | `[{"role":"user","parts":[...]}]` |
| `contents[].role` | string | Recommended | Message role. Use `user` for user prompts and `model` for prior model turns. | `user` |
| `contents[].parts` | array | Yes | Content parts for the turn. | `[{"text":"Hello"}]` |
| `parts[].text` | string | For text input | Text prompt content. | `Explain AnyInt.` |
| `parts[].inlineData` | object | For inline media | Inline media payload when supported by the model. | `{"mimeType":"image/png","data":"..."}` |
| `tools` | array | No | Gemini-native tool declarations. | `[{"functionDeclarations":[...]}]` |
| `generationConfig` | object | No | Generation controls such as temperature, max output tokens, or image response modalities. | `{"temperature":0.7}` |
| `safetySettings` | array | No | Gemini-compatible safety settings when supported by the selected model. | `[{"category":"...","threshold":"..."}]` |
Generation config fields
| Field | Type | Required | Meaning | Example |
|---|---|---|---|---|
| `temperature` | number | No | Sampling randomness. | `0.7` |
| `topP` | number | No | Nucleus sampling value. | `1` |
| `topK` | integer | No | Limits sampling to the top K tokens when supported. | `40` |
| `maxOutputTokens` | integer | No | Maximum generated tokens. | `512` |
| `stopSequences` | array | No | Stop sequences that end generation. | `["END"]` |
| `responseModalities` | array | Image generation only | Required for Gemini image output; include `TEXT` and `IMAGE`. | `["TEXT","IMAGE"]` |
```shell
curl https://gateway.api.anyint.ai/gemini/v1beta/models/gemini-3-flash-preview:generateContent \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [
          {"text": "How does AnyInt help multi-provider teams?"}
        ]
      }
    ]
  }'
```

The minimum published request body is `contents` with at least one content item and one text part.
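The same minimal call can be made from Python. This is a sketch, not a published SDK: `build_body` and `generate_content` are illustrative helper names, and it assumes the gateway base URL shown above plus an `ANYINT_API_KEY` environment variable.

```python
import json
import os
import urllib.request

BASE = "https://gateway.api.anyint.ai/gemini/v1beta"

def build_body(prompt: str) -> dict:
    """Minimal published request body: contents with one user turn and one text part."""
    return {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}

def generate_content(model: str, prompt: str) -> dict:
    """POST to the Gemini-compatible generateContent route and return the parsed JSON."""
    req = urllib.request.Request(
        f"{BASE}/models/{model}:generateContent",
        data=json.dumps(build_body(prompt)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['ANYINT_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The body builder alone shows the wire format: `build_body("Hi")` returns `{"contents": [{"role": "user", "parts": [{"text": "Hi"}]}]}`.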
Streaming example
```shell
curl "https://gateway.api.anyint.ai/gemini/v1beta/models/gemini-3-flash-preview:streamGenerateContent?alt=sse" \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [
          {"text": "Stream a short answer about API gateways."}
        ]
      }
    ]
  }'
```

The `alt=sse` query parameter is required on the published streaming route.
Streaming response behavior
| Field | Meaning |
|---|---|
| `candidates[]` | Incremental candidate outputs from the model |
| `candidates[].content.parts[]` | Generated text or media parts, depending on the model |
| `usageMetadata` | Token usage metadata when returned by the provider |
| `finishReason` | Why generation stopped for the candidate |
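A minimal way to consume the stream is to read `data:` lines from the SSE response and collect text from `candidates[].content.parts[]`. The sketch below covers only the parsing step; `extract_text` is an illustrative name, and it assumes each SSE `data:` payload is one JSON chunk shaped like the fields above.

```python
import json

def extract_text(sse_line: str) -> str:
    """Pull generated text from one SSE 'data:' line of streamGenerateContent.

    Returns "" for non-data lines (comments, blank keep-alives) and for
    chunks that carry no text parts (e.g. usage-metadata-only chunks).
    """
    if not sse_line.startswith("data:"):
        return ""
    payload = sse_line[len("data:"):].strip()
    if not payload:
        return ""
    chunk = json.loads(payload)
    pieces = []
    for candidate in chunk.get("candidates", []):
        for part in candidate.get("content", {}).get("parts", []):
            if "text" in part:
                pieces.append(part["text"])
    return "".join(pieces)

# Example chunk as it might arrive on the wire:
line = 'data: {"candidates":[{"content":{"parts":[{"text":"API gateways"}]}}]}'
print(extract_text(line))  # -> API gateways
```

In a real client you would apply this per line while iterating over the HTTP response body, concatenating the returned fragments.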
Function-calling example
Function declaration fields
| Field | Type | Required | Meaning | Example |
|---|---|---|---|---|
| `tools[].functionDeclarations` | array | Yes | Functions the model may call. | `[{"name":"get_weather",...}]` |
| `name` | string | Yes | Function name your application recognizes. | `get_current_temperature` |
| `description` | string | Recommended | Tells the model when to use the function. | `Gets the current temperature.` |
| `parameters` | object | Yes | JSON-schema-like parameter description. | `{"type":"object","properties":{...}}` |
| `parameters.properties` | object | Usually | Function argument fields. | `{"location":{"type":"string"}}` |
| `parameters.required` | array | No | Required argument names. | `["location"]` |
```shell
curl https://gateway.api.anyint.ai/gemini/v1beta/models/gemini-3-flash-preview:generateContent \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [
          {"text": "What is the temperature in London?"}
        ]
      }
    ],
    "tools": [
      {
        "functionDeclarations": [
          {
            "name": "get_current_temperature",
            "description": "Gets the current temperature for a given location.",
            "parameters": {
              "type": "object",
              "properties": {
                "location": {
                  "type": "string",
                  "description": "The city name, e.g. San Francisco"
                }
              },
              "required": ["location"]
            }
          }
        ]
      }
    ]
  }'
```

This is the published Gemini-native tool pattern. It is the right choice when you want the model to emit function arguments using `functionDeclarations` rather than OpenAI-style `tools`.
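When the model chooses to call the function, the response carries a `functionCall` part with `name` and `args`. The sketch below shows the client-side round trip: find the call, run your implementation, and build the follow-up turn. Helper names are illustrative, and the `role: "user"` on the `functionResponse` turn follows the common Gemini REST pattern; verify the exact turn shape against your provider.

```python
def find_function_call(response: dict):
    """Return the first functionCall part from a generateContent response, or None."""
    for candidate in response.get("candidates", []):
        for part in candidate.get("content", {}).get("parts", []):
            if "functionCall" in part:
                return part["functionCall"]
    return None

def build_function_response_turn(name: str, result: dict) -> dict:
    """Build the follow-up turn that feeds your function's result back to the model."""
    return {
        "role": "user",
        "parts": [{"functionResponse": {"name": name, "response": result}}],
    }

# A response shaped like the published schema:
response = {
    "candidates": [{
        "content": {
            "role": "model",
            "parts": [{"functionCall": {
                "name": "get_current_temperature",
                "args": {"location": "London"},
            }}],
        }
    }]
}

call = find_function_call(response)
if call and call["name"] == "get_current_temperature":
    # Run your real implementation here; 14.0 is a stand-in result.
    turn = build_function_response_turn(call["name"], {"temperature_c": 14.0})
```

You would then append the model's `functionCall` turn and your `functionResponse` turn to `contents` and call `generateContent` again to get the final natural-language answer.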
Image generation example
The published image-generation schema requires generationConfig.responseModalities to include both TEXT and IMAGE.
```shell
curl https://gateway.api.anyint.ai/gemini/v1beta/models/gemini-3.1-flash-image-preview:generateContent \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": [
          {"text": "Generate a cinematic poster for an AI music launch."}
        ]
      }
    ],
    "generationConfig": {
      "responseModalities": ["TEXT", "IMAGE"]
    }
  }'
```

Common mistakes
- Forgetting `alt=sse` on the streaming route
- Omitting `role: "user"` in `contents[]`
- Calling bare `/v1beta/...` instead of the AnyInt `/gemini/v1beta/...` prefix
- Omitting `responseModalities` on image generation
- Sending OpenAI `messages` instead of Gemini `contents[].parts[]`
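For the image-generation route above, successful responses carry image bytes as base64 in `inlineData` parts. The sketch below assumes the response-side part mirrors the request-side `inlineData` fields (`mimeType`, `data`); `save_first_image` is an illustrative name, and you should verify the actual response shape against your provider's output.

```python
import base64

def save_first_image(response: dict, path: str) -> bool:
    """Write the first inlineData image part from a generateContent response to disk."""
    for candidate in response.get("candidates", []):
        for part in candidate.get("content", {}).get("parts", []):
            data = part.get("inlineData")
            if data and data.get("mimeType", "").startswith("image/"):
                with open(path, "wb") as f:
                    f.write(base64.b64decode(data["data"]))
                return True
    return False

# Stand-in payload; real responses carry full image bytes in the same fields.
demo = {
    "candidates": [{
        "content": {"parts": [
            {"text": "Here is your poster."},
            {"inlineData": {
                "mimeType": "image/png",
                "data": base64.b64encode(b"\x89PNG").decode(),
            }},
        ]}
    }]
}
save_first_image(demo, "poster.png")
```

Text parts and image parts can arrive in the same `parts[]` array, so filter on `mimeType` rather than assuming a fixed position.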
Related pages
Anthropic Compatible API
Use this family when your application already speaks Anthropic's Messages API or when you need message-block semantics that are different from OpenAI chat completions.
Kling video
Use Kling V3 Standard/Pro and Kling: Video v3.0 Omni through AnyInt's Transtreams gateway routes. Create video tasks first, then poll the matching task result endpoint.