Agent Coding Tool Integrations

Use AnyInt with agent coding tools that support OpenAI-compatible, Anthropic-compatible, or custom model providers.

Before You Start

Create an AnyInt API key, then set it as an environment variable:

export ANYINT_API_KEY="your-anyint-api-key"

Use the Models API to find model IDs available to your account. Both the OpenAI-compatible and Anthropic-compatible model list endpoints work:

curl https://gateway.api.anyint.ai/openai/v1/models \
  -H "Authorization: Bearer $ANYINT_API_KEY"

curl https://gateway.api.anyint.ai/anthropic/v1/models \
  -H "x-api-key: $ANYINT_API_KEY" \
  -H "anthropic-version: 2023-06-01"

Use the published compatibility-prefixed model routes. Tool configuration should not assume an unprefixed models endpoint.

Common endpoints:

Compatibility | Base URL | Auth
OpenAI-compatible | https://gateway.api.anyint.ai/openai/v1 | Authorization: Bearer $ANYINT_API_KEY
Anthropic-compatible | https://gateway.api.anyint.ai/anthropic/v1 | x-api-key: $ANYINT_API_KEY + anthropic-version: 2023-06-01
Gemini-compatible | https://gateway.api.anyint.ai/gemini/v1beta | Authorization: Bearer $ANYINT_API_KEY

Use the OpenAI-compatible endpoint unless the tool specifically expects Anthropic Messages API behavior.
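The auth rules in the table above can be captured in a small shell helper that prints the curl header flags for each compatibility family. This is an illustrative sketch; `anyint_auth_args` is a hypothetical function name, not part of any AnyInt tooling.

```shell
# Print curl header flags for an AnyInt compatibility family.
# Hypothetical helper; assumes ANYINT_API_KEY is exported separately.
anyint_auth_args() {
  case "$1" in
    openai|gemini)
      printf '%s\n' '-H "Authorization: Bearer $ANYINT_API_KEY"' ;;
    anthropic)
      printf '%s\n' '-H "x-api-key: $ANYINT_API_KEY" -H "anthropic-version: 2023-06-01"' ;;
    *)
      echo "unknown compatibility family: $1" >&2
      return 1 ;;
  esac
}
```

For example, `anyint_auth_args anthropic` prints both headers the Anthropic-compatible route requires.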

Use the AnyInt Docs MCP Server

The docs repository includes an optional MCP server for AI coding tools that can read public AnyInt documentation directly. This server is for documentation lookup only. It does not call the AnyInt API and does not need your API key.

Use it when you want an agent to answer questions such as:

  • Which AnyInt endpoint should I call?
  • What auth headers does this API family use?
  • Show me the page for OpenAI Responses or Kling video.

For MCP clients that support remote streamable HTTP servers, use the hosted server:

{
  "mcpServers": {
    "anyint-docs": {
      "url": "https://anyint.ai/docs/mcp"
    }
  }
}

No authentication is required because the server only exposes public docs lookup tools.

If your client only supports local stdio MCP servers, run it from a local checkout of anyint-docs:

pnpm install
pnpm mcp

Then configure the client with an absolute path to the checkout:

{
  "mcpServers": {
    "anyint-docs": {
      "command": "node",
      "args": ["/absolute/path/to/anyint-docs/scripts/mcp-server.mjs"]
    }
  }
}

Available tools:

Tool | Purpose
list_topics | Browse public docs sections and pages
search_docs | Search docs by keyword
get_page | Fetch one public docs page by id, URL, path, or slug
list_endpoints | List public endpoint families from api-endpoints.json
get_endpoint | Fetch one endpoint family with status, auth, docs page, and route list

If you do not want to use MCP, the same public docs data is available through:

  • https://anyint.ai/docs/llms.txt
  • https://anyint.ai/docs/llms-full.txt
  • https://anyint.ai/docs/api-endpoints.json
  • per-page Markdown URLs such as https://anyint.ai/docs/api-reference/openai-compatible.mdx

Tool Setup Summary

Tool | Recommended AnyInt Base URL | Setup
Claude Code | https://gateway.api.anyint.ai/anthropic | Set ANTHROPIC_BASE_URL, ANTHROPIC_AUTH_TOKEN, and model environment variables.
Codex CLI | https://gateway.api.anyint.ai/openai/v1 | Add a custom Responses provider in ~/.codex/config.toml; see Codex CLI notes below.
GitHub Copilot | Extension-dependent | Use only if your Copilot extension supports custom OpenAI or Anthropic endpoints.
GitHub Copilot CLI | https://gateway.api.anyint.ai/anthropic/v1 | Use only if your Copilot CLI build supports BYOK provider environment variables.
Kilo Code | https://gateway.api.anyint.ai/openai/v1 | Add AnyInt as a custom OpenAI-compatible provider.
WorkBuddy/CodeBuddy | https://gateway.api.anyint.ai/openai/v1 | Add entries to .codebuddy/models.json.
OpenCode | https://gateway.api.anyint.ai/openai/v1 | Add a custom provider in opencode.json.
Oh My Pi | https://gateway.api.anyint.ai/openai/v1 | Add AnyInt provider to ~/.omp/agent/models.yml.
OpenClaw | https://gateway.api.anyint.ai/openai/v1 | Use onboarding or custom provider settings if available.
AstrBot | https://gateway.api.anyint.ai/openai/v1 | Add the provider in the Web UI.
Deep Code | https://gateway.api.anyint.ai/openai/v1 | Configure ~/.deepcode/settings.json.
Hermes | https://gateway.api.anyint.ai/openai/v1 | Choose a custom OpenAI-compatible provider during setup.
nanobot | https://gateway.api.anyint.ai/openai/v1 | Edit ~/.nanobot/config.json.
Crush | https://gateway.api.anyint.ai/openai/v1 | Add provider in ~/.config/crush/crush.json.
Pi | https://gateway.api.anyint.ai/openai/v1 | Add provider in ~/.pi/agent/models.json.
Reasonix | Tool-dependent | Requires custom provider or base URL support.
Langcli | Tool-dependent | Requires custom provider or base URL support.

Claude Code

export ANTHROPIC_BASE_URL="https://gateway.api.anyint.ai/anthropic"
export ANTHROPIC_AUTH_TOKEN="$ANYINT_API_KEY"
export ANTHROPIC_MODEL="<ANTHROPIC_MODEL_ID>"
export ANTHROPIC_DEFAULT_OPUS_MODEL="<ANTHROPIC_MODEL_ID>"
export ANTHROPIC_DEFAULT_SONNET_MODEL="<ANTHROPIC_MODEL_ID>"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="<FAST_MODEL_ID>"
export CLAUDE_CODE_SUBAGENT_MODEL="<FAST_MODEL_ID>"

claude

Use Anthropic-compatible model IDs returned by the Models API or shown in your AnyInt dashboard.
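Before launching, it can help to confirm the required variables are actually set. The check below is a hypothetical preflight sketch, not part of Claude Code itself:

```shell
# Hypothetical preflight check: fail fast if any required Claude Code
# variable from the export block above is missing or empty.
check_claude_env() {
  for var in ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN ANTHROPIC_MODEL; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "missing: $var" >&2
      return 1
    fi
  done
  echo "claude env ok"
}
```

Run `check_claude_env && claude` to launch only when the environment is complete.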

Codex CLI

Codex CLI reads its configuration from ~/.codex/config.toml. Current releases expect a Responses API provider; Chat Completions provider wiring is no longer supported.

[profiles.anyint]
model_provider = "anyint"
model = "<MODEL_ID>"

[model_providers.anyint]
name = "AnyInt"
base_url = "https://gateway.api.anyint.ai/openai/v1"
env_key = "ANYINT_API_KEY"
wire_api = "responses"

Then run:

export ANYINT_API_KEY="your-anyint-api-key"
codex exec --profile anyint "Reply exactly OK."

Verification note: AnyInt's non-streaming POST /openai/v1/responses returned a valid 200 response in local testing. Codex CLI 0.128.0 itself did not complete the smoke test because its Responses stream closed before a response.completed event. Treat Codex CLI as not verified until streaming Responses compatibility is confirmed for your AnyInt account and Codex version.

GitHub Copilot CLI

export COPILOT_PROVIDER_TYPE="anthropic"
export COPILOT_PROVIDER_BASE_URL="https://gateway.api.anyint.ai/anthropic/v1"
export COPILOT_PROVIDER_API_KEY="$ANYINT_API_KEY"
export COPILOT_MODEL="<ANTHROPIC_MODEL_ID>"

copilot

If the model is not in Copilot CLI's built-in catalog, also set explicit token limits:

export COPILOT_PROVIDER_MAX_PROMPT_TOKENS=200000
export COPILOT_PROVIDER_MAX_OUTPUT_TOKENS=8192

Verification note: GitHub Copilot CLI 0.0.422 did not expose these BYOK provider variables in copilot help environment, and a local smoke test with these variables did not route successfully to AnyInt. Use this only with a Copilot CLI build that explicitly documents custom provider support.

OpenCode

Create or edit opencode.json:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "anyint": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "AnyInt",
      "options": {
        "baseURL": "https://gateway.api.anyint.ai/openai/v1",
        "apiKey": "{env:ANYINT_API_KEY}"
      },
      "models": {
        "<MODEL_ID>": {
          "name": "AnyInt model",
          "limit": {
            "context": 200000,
            "output": 8192
          }
        }
      }
    }
  },
  "model": "anyint/<MODEL_ID>"
}

Then run opencode and use /models to switch models.

The limit.output value is important for Anthropic-family models because OpenCode may otherwise request more output tokens than the upstream model allows.

WorkBuddy / CodeBuddy

Create either user-level or project-level config:

~/.codebuddy/models.json
<project>/.codebuddy/models.json

Example:

{
  "models": [
    {
      "id": "<MODEL_ID>",
      "name": "AnyInt Model",
      "vendor": "AnyInt",
      "url": "https://gateway.api.anyint.ai/openai/v1/chat/completions",
      "apiKey": "${ANYINT_API_KEY}",
      "maxInputTokens": 200000,
      "maxOutputTokens": 8192,
      "supportsToolCall": true,
      "supportsImages": false
    }
  ],
  "availableModels": ["<MODEL_ID>"]
}

Restart WorkBuddy/CodeBuddy after editing the file.

Kilo Code, Crush, Pi, Oh My Pi, Deep Code, nanobot

For tools that support custom OpenAI-compatible providers, use this mapping:

Provider name: AnyInt
API type: OpenAI-compatible
Base URL: https://gateway.api.anyint.ai/openai/v1
API key: $ANYINT_API_KEY
Model: <MODEL_ID>
Chat endpoint: /chat/completions

If the tool asks for the full completion URL, use:

https://gateway.api.anyint.ai/openai/v1/chat/completions
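The full URL is just the base URL joined with the chat completions path. A tiny helper (hypothetical, for illustration only) makes the joining rule explicit and tolerates a trailing slash on the base URL:

```shell
# Hypothetical helper: derive the full chat completions URL from an
# OpenAI-compatible base URL, stripping one trailing slash if present.
chat_completions_url() {
  base="${1%/}"
  printf '%s/chat/completions\n' "$base"
}
```

For example, `chat_completions_url https://gateway.api.anyint.ai/openai/v1` prints the full URL shown above.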

AstrBot, OpenClaw, Hermes

In the provider setup UI or onboarding wizard, choose a custom OpenAI-compatible provider when available:

Provider: AnyInt
Base URL: https://gateway.api.anyint.ai/openai/v1
API key: your AnyInt API key
Model: <MODEL_ID>

Then set the configured model as the default chat or coding model.

GitHub Copilot, Reasonix, Langcli

These tools may not expose a stable generic provider configuration in every release.

Use AnyInt directly only if the tool supports one of:

Required Support | AnyInt Value
Custom OpenAI-compatible base URL | https://gateway.api.anyint.ai/openai/v1
Custom Anthropic-compatible base URL | https://gateway.api.anyint.ai/anthropic/v1
Custom API key | $ANYINT_API_KEY
Custom model ID | Any model returned by GET /openai/v1/models or GET /anthropic/v1/models

If a tool only supports a fixed built-in provider list, AnyInt integration requires a plugin, extension, or upstream provider entry.