Agent Coding Tool Integrations
Use AnyInt with agent coding tools that support OpenAI-compatible, Anthropic-compatible, or custom model providers.
Before You Start
Create an AnyInt API key, then set it as an environment variable:

```bash
export ANYINT_API_KEY="your-anyint-api-key"
```

Use the Models API to find model IDs available to your account. Both the OpenAI-compatible and Anthropic-compatible model list endpoints work:

```bash
curl https://gateway.api.anyint.ai/openai/v1/models \
  -H "Authorization: Bearer $ANYINT_API_KEY"

curl https://gateway.api.anyint.ai/anthropic/v1/models \
  -H "x-api-key: $ANYINT_API_KEY" \
  -H "anthropic-version: 2023-06-01"
```

Use the published compatibility-prefixed model routes; tool configuration should not assume an unprefixed models endpoint.
Common endpoints:
| Compatibility | Base URL | Auth |
|---|---|---|
| OpenAI-compatible | https://gateway.api.anyint.ai/openai/v1 | Authorization: Bearer $ANYINT_API_KEY |
| Anthropic-compatible | https://gateway.api.anyint.ai/anthropic/v1 | x-api-key: $ANYINT_API_KEY + anthropic-version: 2023-06-01 |
| Gemini-compatible | https://gateway.api.anyint.ai/gemini/v1beta | Authorization: Bearer $ANYINT_API_KEY |
Use the OpenAI-compatible endpoint unless the tool specifically expects Anthropic Messages API behavior.
Use the AnyInt Docs MCP Server
The docs repository includes an optional MCP server for AI coding tools that can read public AnyInt documentation directly. This server is for documentation lookup only. It does not call the AnyInt API and does not need your API key.
Use it when you want an agent to answer questions such as:
- Which AnyInt endpoint should I call?
- What auth headers does this API family use?
- Show me the page for OpenAI Responses or Kling video.
For MCP clients that support remote streamable HTTP servers, use the hosted server:
```json
{
  "mcpServers": {
    "anyint-docs": {
      "url": "https://anyint.ai/docs/mcp"
    }
  }
}
```

No authentication is required because the server only exposes public docs lookup tools.
If your client only supports local stdio MCP servers, run it from a local checkout of anyint-docs:
```bash
pnpm install
pnpm mcp
```

Then configure the client with an absolute path to the checkout:
```json
{
  "mcpServers": {
    "anyint-docs": {
      "command": "node",
      "args": ["/absolute/path/to/anyint-docs/scripts/mcp-server.mjs"]
    }
  }
}
```

Available tools:
| Tool | Purpose |
|---|---|
| list_topics | Browse public docs sections and pages |
| search_docs | Search docs by keyword |
| get_page | Fetch one public docs page by id, URL, path, or slug |
| list_endpoints | List public endpoint families from api-endpoints.json |
| get_endpoint | Fetch one endpoint family with status, auth, docs page, and route list |
If you do not want to use MCP, the same public docs data is available through https://anyint.ai/docs/llms.txt, https://anyint.ai/docs/llms-full.txt, https://anyint.ai/docs/api-endpoints.json, and per-page Markdown URLs such as https://anyint.ai/docs/api-reference/openai-compatible.mdx.
Tool Setup Summary
| Tool | Recommended AnyInt Base URL | Setup |
|---|---|---|
| Claude Code | https://gateway.api.anyint.ai/anthropic | Set ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN, and model environment variables. |
| Codex CLI | https://gateway.api.anyint.ai/openai/v1 | Add a custom Responses provider in ~/.codex/config.toml; see Codex CLI notes below. |
| GitHub Copilot | Extension-dependent | Use only if your Copilot extension supports custom OpenAI or Anthropic endpoints. |
| GitHub Copilot CLI | https://gateway.api.anyint.ai/anthropic/v1 | Use only if your Copilot CLI build supports BYOK provider environment variables. |
| Kilo Code | https://gateway.api.anyint.ai/openai/v1 | Add AnyInt as a custom OpenAI-compatible provider. |
| WorkBuddy/CodeBuddy | https://gateway.api.anyint.ai/openai/v1 | Add entries to .codebuddy/models.json. |
| OpenCode | https://gateway.api.anyint.ai/openai/v1 | Add a custom provider in opencode.json. |
| Oh My Pi | https://gateway.api.anyint.ai/openai/v1 | Add AnyInt provider to ~/.omp/agent/models.yml. |
| OpenClaw | https://gateway.api.anyint.ai/openai/v1 | Use onboarding or custom provider settings if available. |
| AstrBot | https://gateway.api.anyint.ai/openai/v1 | Add the provider in the Web UI. |
| Deep Code | https://gateway.api.anyint.ai/openai/v1 | Configure ~/.deepcode/settings.json. |
| Hermes | https://gateway.api.anyint.ai/openai/v1 | Choose a custom OpenAI-compatible provider during setup. |
| nanobot | https://gateway.api.anyint.ai/openai/v1 | Edit ~/.nanobot/config.json. |
| Crush | https://gateway.api.anyint.ai/openai/v1 | Add provider in ~/.config/crush/crush.json. |
| Pi | https://gateway.api.anyint.ai/openai/v1 | Add provider in ~/.pi/agent/models.json. |
| Reasonix | Tool-dependent | Requires custom provider or base URL support. |
| Langcli | Tool-dependent | Requires custom provider or base URL support. |
Claude Code
```bash
export ANTHROPIC_BASE_URL="https://gateway.api.anyint.ai/anthropic"
export ANTHROPIC_AUTH_TOKEN="$ANYINT_API_KEY"
export ANTHROPIC_MODEL="<ANTHROPIC_MODEL_ID>"
export ANTHROPIC_DEFAULT_OPUS_MODEL="<ANTHROPIC_MODEL_ID>"
export ANTHROPIC_DEFAULT_SONNET_MODEL="<ANTHROPIC_MODEL_ID>"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="<FAST_MODEL_ID>"
export CLAUDE_CODE_SUBAGENT_MODEL="<FAST_MODEL_ID>"
claude
```

Use Anthropic-compatible model IDs returned by the Models API or shown in your AnyInt dashboard.
Codex CLI
Codex CLI uses ~/.codex/config.toml. Current Codex CLI releases expect a Responses API provider; OpenAI Chat Completions provider wiring is no longer supported by Codex CLI.
```toml
[profiles.anyint]
model_provider = "anyint"
model = "<MODEL_ID>"

[model_providers.anyint]
name = "AnyInt"
base_url = "https://gateway.api.anyint.ai/openai/v1"
env_key = "ANYINT_API_KEY"
wire_api = "responses"
```

Then run:

```bash
export ANYINT_API_KEY="your-anyint-api-key"
codex exec --profile anyint "Reply exactly OK."
```

Verification note: AnyInt's non-streaming POST /openai/v1/responses returned a valid 200 response in local testing. Codex CLI 0.128.0 itself did not complete the smoke test because its Responses stream closed before a response.completed event. Treat Codex CLI as not verified until streaming Responses compatibility is confirmed for your AnyInt account and Codex version.
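To repeat the non-streaming smoke test referenced in the note, the request body can be as small as this. A sketch assuming the standard OpenAI Responses API field names (`model`, `input`, `stream`); the model ID is a placeholder:

```python
# Sketch: serialize a minimal non-streaming Responses API body. This is the
# request shape that returned 200 in local testing; field names follow the
# OpenAI Responses API and are an assumption for your AnyInt account.
import json


def responses_payload(model_id: str, prompt: str) -> str:
    """Build a minimal non-streaming Responses request body as JSON."""
    return json.dumps({
        "model": model_id,
        "input": prompt,
        "stream": False,  # the non-streaming path is the one verified so far
    })
```

POST this to https://gateway.api.anyint.ai/openai/v1/responses with the OpenAI-compatible auth header; streaming behavior still has to be checked separately before trusting Codex CLI.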
GitHub Copilot CLI
```bash
export COPILOT_PROVIDER_TYPE="anthropic"
export COPILOT_PROVIDER_BASE_URL="https://gateway.api.anyint.ai/anthropic/v1"
export COPILOT_PROVIDER_API_KEY="$ANYINT_API_KEY"
export COPILOT_MODEL="<ANTHROPIC_MODEL_ID>"
copilot
```

If the model is not in Copilot CLI's built-in catalog, set token limits as needed:

```bash
export COPILOT_PROVIDER_MAX_PROMPT_TOKENS=200000
export COPILOT_PROVIDER_MAX_OUTPUT_TOKENS=8192
```

Verification note: GitHub Copilot CLI 0.0.422 did not expose these BYOK provider variables in copilot help environment, and a local smoke test with these variables did not route successfully to AnyInt. Use this only with a Copilot CLI build that explicitly documents custom provider support.
OpenCode
Create or edit opencode.json:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anyint": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "AnyInt",
      "options": {
        "baseURL": "https://gateway.api.anyint.ai/openai/v1",
        "apiKey": "{env:ANYINT_API_KEY}"
      },
      "models": {
        "<MODEL_ID>": {
          "name": "AnyInt model",
          "limit": {
            "context": 200000,
            "output": 8192
          }
        }
      }
    }
  },
  "model": "anyint/<MODEL_ID>"
}
```

Then run opencode and use /models to switch models.
The limit.output value is important for Anthropic-family models because OpenCode may otherwise request more output tokens than the upstream model allows.
WorkBuddy / CodeBuddy
Create either user-level or project-level config:
- ~/.codebuddy/models.json
- <project>/.codebuddy/models.json

Example:
```json
{
  "models": [
    {
      "id": "<MODEL_ID>",
      "name": "AnyInt Model",
      "vendor": "AnyInt",
      "url": "https://gateway.api.anyint.ai/openai/v1/chat/completions",
      "apiKey": "${ANYINT_API_KEY}",
      "maxInputTokens": 200000,
      "maxOutputTokens": 8192,
      "supportsToolCall": true,
      "supportsImages": false
    }
  ],
  "availableModels": ["<MODEL_ID>"]
}
```

Restart WorkBuddy/CodeBuddy after editing the file.
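Hand-edited config files are easy to get subtly wrong. This sketch checks one models.json entry against the keys used in the example above; the required-key list is an assumption drawn from that example, not the tool's published schema:

```python
# Sketch: validate a WorkBuddy/CodeBuddy models.json entry. The required
# keys mirror the example above (an assumption, not an official schema),
# and the URL check reflects that "url" is the full chat completions route.
REQUIRED_KEYS = {"id", "name", "vendor", "url", "apiKey"}


def validate_model_entry(entry: dict) -> list[str]:
    """Return a sorted list of missing keys, plus a URL warning if needed."""
    problems = sorted(REQUIRED_KEYS - entry.keys())
    url = entry.get("url", "")
    if url and not url.endswith("/chat/completions"):
        problems.append("url should end with /chat/completions")
    return problems
```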
Kilo Code, Crush, Pi, Oh My Pi, Deep Code, nanobot
For tools that support custom OpenAI-compatible providers, use this mapping:
```
Provider name: AnyInt
API type: OpenAI-compatible
Base URL: https://gateway.api.anyint.ai/openai/v1
API key: $ANYINT_API_KEY
Model: <MODEL_ID>
Chat endpoint: /chat/completions
```

If the tool asks for the full completion URL, use:

```
https://gateway.api.anyint.ai/openai/v1/chat/completions
```

AstrBot, OpenClaw, Hermes
In the provider setup UI or onboarding wizard, choose a custom OpenAI-compatible provider when available:
```
Provider: AnyInt
Base URL: https://gateway.api.anyint.ai/openai/v1
API key: your AnyInt API key
Model: <MODEL_ID>
```

Then set the configured model as the default chat or coding model.
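The same custom-provider mapping can be exercised outside any tool with a stdlib-only request builder. A sketch; the payload follows the standard Chat Completions shape, and nothing here is AnyInt-specific beyond the base URL:

```python
# Sketch: assemble the custom-provider mapping (base URL, API key, model,
# chat endpoint) into one request. Send it with urllib.request.urlopen
# once ANYINT_API_KEY is exported; the model ID is a placeholder.
import json
import os
import urllib.request

BASE_URL = "https://gateway.api.anyint.ai/openai/v1"


def build_chat_request(model_id: str, user_message: str) -> urllib.request.Request:
    """Build (but do not send) a Chat Completions request."""
    body = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('ANYINT_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```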
GitHub Copilot, Reasonix, Langcli
These tools may not expose a stable generic provider configuration in every release.
Use AnyInt directly only if the tool supports one of:
| Required Support | AnyInt Value |
|---|---|
| Custom OpenAI-compatible base URL | https://gateway.api.anyint.ai/openai/v1 |
| Custom Anthropic-compatible base URL | https://gateway.api.anyint.ai/anthropic/v1 |
| Custom API key | $ANYINT_API_KEY |
| Custom model ID | Any model returned by GET /openai/v1/models or GET /anthropic/v1/models |
If a tool only supports a fixed built-in provider list, AnyInt integration requires a plugin, extension, or upstream provider entry.
Verify Your Integration
Run customer-facing checks before depending on AnyInt in production: model discovery, authentication, first requests, streaming, async tasks, callbacks, and error handling.
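The first two checks, model discovery and authentication, can be scripted directly. A sketch assuming the models endpoint returns the standard OpenAI list shape ({"data": [{"id": ...}]}); the network call is kept out of the pure helpers so they can be tested offline:

```python
# Sketch: a model-discovery + authentication check against the
# OpenAI-compatible models endpoint. The {"data": [{"id": ...}]} response
# shape follows the OpenAI list format and is an assumption here.
import json
import os
import urllib.request


def build_models_request(api_key: str) -> urllib.request.Request:
    """Build the GET /openai/v1/models request with bearer auth."""
    return urllib.request.Request(
        "https://gateway.api.anyint.ai/openai/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )


def parse_model_ids(body: bytes) -> list[str]:
    """Extract model IDs from an OpenAI-style list response."""
    return [m["id"] for m in json.loads(body)["data"]]


if __name__ == "__main__":
    req = build_models_request(os.environ["ANYINT_API_KEY"])
    with urllib.request.urlopen(req, timeout=30) as resp:
        # A non-empty list means both discovery and auth succeeded.
        print(parse_model_ids(resp.read()))
```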
Models
Model selection in AnyInt is a product decision, not only a code decision. You are choosing a request shape, an output modality, and an operational path at the same time.