
OpenAI Responses API

Use /openai/v1/responses only when a client or agent tool specifically expects OpenAI Responses API wiring.

The Responses API route exists for clients that expect OpenAI's newer responses wire format, especially agent coding tools and SDKs that no longer use Chat Completions directly.

For normal application chat, start with the OpenAI Compatible API. Use this page only when a tool explicitly asks for a Responses API base URL or wire_api = "responses".

Status

Beta. The non-streaming route has been smoke-tested locally. Streaming compatibility depends on the client and model path, so verify it in your target account before production use.
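One way to verify streaming yourself is to capture the raw response body and parse it. A minimal sketch, assuming the streaming path emits standard Server-Sent Events `data:` lines with a `[DONE]` sentinel, as OpenAI's Responses streaming does (confirm this against what your model path actually returns):

```python
import json

def parse_sse_events(raw: str) -> list:
    """Parse a raw SSE body into JSON events, stopping at the [DONE] sentinel."""
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank lines, comments, and event: fields
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        events.append(json.loads(payload))
    return events
```

If this returns an empty list for a `stream: true` request, the selected path is likely not emitting SSE and should be treated as non-streaming.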

Base URL

https://gateway.api.anyint.ai/openai/v1

Routes

| Route | Purpose |
|---|---|
| `POST /responses` | Create a response |
| `GET /responses/{response_id}` | Fetch a response by ID when supported by the upstream path |

Authentication

Authorization: Bearer <ANYINT_API_KEY>
Content-Type: application/json

Minimal request

curl https://gateway.api.anyint.ai/openai/v1/responses \
  -H "Authorization: Bearer $ANYINT_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-6",
    "input": "Reply with exactly: OK"
  }'

Request fields

| Field | Type | Required | Meaning | Example |
|---|---|---|---|---|
| `model` | string | Yes | Model ID available to your account. Call Models API first. | `claude-sonnet-4-6` |
| `input` | string or array | Yes | User input for the response. Use a string for simple prompts or structured input items for compatible clients. | `Reply with exactly: OK` |
| `stream` | boolean | No | Requests a streaming response when the selected model path and client support it. Verify before production. | `false` |
| `instructions` | string | No | System-style instruction for the response. | `Be concise.` |
| `tools` | array | No | Tool definitions for Responses-compatible clients when supported. | `[{"type":"function",...}]` |
| `tool_choice` | string or object | No | Controls tool use when tools are provided. | `auto` |
| `temperature` | number | No | Sampling randomness. | `0.7` |
| `top_p` | number | No | Nucleus sampling control. | `1` |
| `max_output_tokens` | integer | No | Maximum number of output tokens for Responses-compatible clients. | `512` |
| `metadata` | object | No | Customer-defined metadata for tracing. Do not put API keys or private credentials here. | `{"workflow":"codex"}` |
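Beyond the minimal request, the optional fields can be combined into one body. A sketch of assembling that body in Python; `build_payload` is a hypothetical helper for illustration, not part of any SDK:

```python
import json

# Optional fields accepted alongside the required model and input.
OPTIONAL_FIELDS = {"stream", "instructions", "tools", "tool_choice",
                   "temperature", "top_p", "max_output_tokens", "metadata"}

def build_payload(model: str, user_input, **options) -> dict:
    """Assemble a Responses API request body, passing optional fields through only when set."""
    for key in options:
        if key not in OPTIONAL_FIELDS:
            raise ValueError(f"unknown request field: {key}")
    return {"model": model, "input": user_input, **options}

body = build_payload("claude-sonnet-4-6", "Reply with exactly: OK",
                     instructions="Be concise.", max_output_tokens=512)
print(json.dumps(body))
```

POST the serialized body to /responses with the Authorization and Content-Type headers shown above.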

Response fields

| Field | Meaning |
|---|---|
| `id` | Response identifier |
| `object` | Response object type when returned |
| `status` | Response status when returned by the compatible path |
| `model` | Model that handled the request |
| `output` | Structured output items when returned |
| `output_text` | Flattened text output when returned by the compatible path |
| `usage` | Token usage when available |
| `error` | Error object when the compatible path returns a failed response |
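Because `output_text` is only present on some paths, a defensive reader should fall back to walking `output`. A sketch assuming the OpenAI Responses shape, where message items carry `content` parts with an `output_text` type and a `text` field (verify against what your path actually returns):

```python
def extract_text(response: dict) -> str:
    """Prefer the flattened output_text; otherwise join text parts from output message items."""
    if response.get("output_text"):
        return response["output_text"]
    parts = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue  # skip reasoning, tool-call, and other non-message items
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                parts.append(part.get("text", ""))
    return "".join(parts)
```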

Fetch a response

Use this only when the response ID is returned and lookup is supported by the selected path.

curl https://gateway.api.anyint.ai/openai/v1/responses/$RESPONSE_ID \
  -H "Authorization: Bearer $ANYINT_API_KEY"
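When lookup is supported, a client may poll until the response settles. A small sketch; the status values follow OpenAI's Responses API conventions and are assumptions to confirm for your selected path:

```python
# Statuses after which polling should stop (assumed from OpenAI's Responses API).
TERMINAL_STATUSES = {"completed", "failed", "cancelled", "incomplete"}

def is_settled(response: dict) -> bool:
    """True when the response has reached a terminal status, or reports no status at all."""
    status = response.get("status")
    if status is None:
        return True  # some paths omit status entirely; treat the body as final
    return status in TERMINAL_STATUSES
```

Pair this with a bounded retry loop and a short sleep between GETs rather than polling in a tight loop.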

Codex CLI example

Add a profile and provider to your Codex configuration (typically ~/.codex/config.toml):

[profiles.anyint]
model_provider = "anyint"
model = "<MODEL_ID>"

[model_providers.anyint]
name = "AnyInt"
base_url = "https://gateway.api.anyint.ai/openai/v1"
env_key = "ANYINT_API_KEY"
wire_api = "responses"

Then run:

export ANYINT_API_KEY="your-anyint-api-key"
codex exec --profile anyint "Reply exactly OK."

Common mistakes

  • Using this route for a normal OpenAI SDK chat integration when /chat/completions would be simpler
  • Assuming streaming is verified for every agent tool and model
  • Forgetting that the base URL ends at /openai/v1, not /openai/v1/responses
  • Hardcoding a model ID before checking Models API
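For the last point, a sketch of choosing a model from a Models API listing instead of hardcoding one, assuming the OpenAI-compatible `{"data": [{"id": ...}]}` list shape (check the Models API page for the actual response):

```python
def pick_model(models_response: dict, preferred: list) -> str:
    """Return the first preferred model ID that the account actually lists."""
    available = {m["id"] for m in models_response.get("data", [])}
    for model_id in preferred:
        if model_id in available:
            return model_id
    raise RuntimeError(f"none of {preferred} available; account lists {sorted(available)}")
```

Running this once at startup fails fast with a clear message instead of surfacing an opaque upstream error on the first request.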
